Ingress: Orchestrating Layer 7 Traffic

In a production Kubernetes environment, Layer 4 Service types such as NodePort and LoadBalancer are insufficient for complex web traffic. Ingress provides a specialized API to manage HTTP/S routing, allowing for host-based branching, path-based routing, and centralized TLS termination.


1. ARCHITECTURAL ABSTRACTION: THE BRAIN VS. THE RULEBOOK

Senior Engineers must distinguish between the Ingress Resource and the Ingress Controller (IC).

1.1 The Ingress Resource (The Rulebook)

A declarative YAML object that defines how traffic should be routed. It is static metadata stored in etcd.

1.2 The Ingress Controller (The Brain)

A specialized daemon that runs in the cluster. It watches the API Server for Ingress resources and synchronizes the state with the underlying Layer 7 proxy configuration (e.g., NGINX .conf files).

The Control Loop: Watch (API Server) -> Translate (YAML to NGINX/HAProxy Config) -> Sync (Hot Reload Proxy)

1.3 Traffic Flow Comparison

Feature       | L4 LoadBalancer Service           | L7 Ingress
OSI Layer     | Layer 4 (TCP/UDP)                 | Layer 7 (HTTP/S)
Routing Logic | IP and port only                  | Hostnames, paths, headers
TLS           | Terminated at Pod or passthrough  | Centralized at the Ingress gateway
Cost          | 1 Cloud LB per Service ($$$)      | 1 Cloud LB for N Services ($)
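To make the "rulebook" concrete, a minimal Ingress resource for host-based routing might look like the following sketch (the hostname and Service name are illustrative, and it assumes an NGINX IC is installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-router            # illustrative name
spec:
  ingressClassName: nginx     # assumes an NGINX Ingress Controller
  rules:
  - host: shop.example.com    # host-based branching
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-svc    # illustrative backend Service
            port:
              number: 80
```

On its own this object does nothing; only when an Ingress Controller (the brain) watches and translates it does traffic actually flow.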

2. INGRESS CONTROLLER MODELS (In-Cluster Focus)

2.1 The In-Cluster Model (NGINX, HAProxy, Traefik)

This model is Cloud-Agnostic and relies on a NodePort or Cloud LoadBalancer to forward traffic to the IC Pod(s).

  • Data Flow: Internet -> Cloud NLB/Firewall -> NodePort -> IC Pod (Nginx/HAProxy) -> App Pod.
  • Pros: Vendor agnostic, high customization via ConfigMaps/Annotations, full control over the data plane proxy.
  • Cons: Double-hop latency (traffic traverses the extra NodePort/kube-proxy layer), and the IC Pods can become a bottleneck under extreme load.

2.2 Cloud-Native Controllers (e.g., AWS LBC, GCP GCLB)

These controllers provision and manage the external cloud provider's Load Balancer.

  • Data Flow: Internet -> Cloud L7 ALB -> Pod IP (Bypasses NodePort/Kube-Proxy).
  • Pros: Lower latency, deep integration with cloud security (WAF).
  • Cons: Cloud-specific annotations, vendor lock-in.
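As a sketch of the cloud-native model, an Ingress handled by the AWS Load Balancer Controller uses alb.* annotations (this assumes the controller is installed and an alb IngressClass exists; host and Service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cloud-native-router
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # provision a public ALB
    alb.ingress.kubernetes.io/target-type: ip          # route straight to Pod IPs, bypassing NodePort
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80
```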

3. NGINX INGRESS CONTROLLER (IC) DEEP DIVE

The NGINX IC is the most popular in-cluster solution, using annotations to manage complex features via NGINX modules.

3.1 Nginx Architecture

The NGINX IC runs as a Deployment or DaemonSet. Each Pod runs a controller process alongside an NGINX instance; the controller:

  1. Watches Ingress resources.
  2. Generates a dynamic NGINX configuration (nginx.conf).
  3. Performs a soft reload to apply the new config without dropping connections.

3.2 Service Exposure (The External Entry Point)

The NGINX IC Pod is typically exposed via one of these Service types:

  • LoadBalancer: (Recommended) Automatically provisions an external Cloud LB, which points traffic directly to the IC Pods.
  • NodePort: Exposes a port (30000-32767) on every node. Requires an external load balancer or firewall configuration to direct traffic to the node ports.
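A sketch of the entry-point Service for the IC Pods (the namespace and labels vary by installation method; the values below match a typical ingress-nginx install but are assumptions here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # or NodePort behind an external LB
  selector:
    app.kubernetes.io/name: ingress-nginx  # must match the IC Pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```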

3.3 Annotations: The Configuration Overload

Since the standard Ingress API doesn't express these features natively, the NGINX IC relies on annotations:

Annotation                                  | Purpose                                    | Example
nginx.ingress.kubernetes.io/rewrite-target  | Changes the path forwarded to the backend  | /$1
nginx.ingress.kubernetes.io/ssl-redirect    | Forces HTTP traffic to HTTPS               | "true"
nginx.ingress.kubernetes.io/limit-rps       | Rate limiting (requests per second)        | "10"
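In context, these annotations sit under metadata.annotations of the Ingress object. A sketch combining all three (values illustrative):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"  # redirect port 80 -> 443
    nginx.ingress.kubernetes.io/limit-rps: "10"       # per-client requests per second
    nginx.ingress.kubernetes.io/rewrite-target: /$1   # requires a capture group in the path
```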

4. BIBLE-GRADE MANIFEST: PATH-BASED ROUTING (NGINX)

This manifest demonstrates path-based routing and centralized TLS termination using a Secret-based certificate (required for NGINX IC).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-path-router
  namespace: prod-apps
  annotations:
    # 1. Enable regex path matching (required for the capture groups below)
    nginx.ingress.kubernetes.io/use-regex: "true"

    # 2. Force SSL redirection globally
    nginx.ingress.kubernetes.io/ssl-redirect: "true"

    # 3. Path rewrite: /api/v1/payments/123 -> /123 ($2 = second capture group)
    nginx.ingress.kubernetes.io/rewrite-target: /$2

spec:
  # Selects the NGINX IC; replaces the deprecated
  # kubernetes.io/ingress.class annotation as of K8s v1.18+
  ingressClassName: nginx

  # 4. Centralized TLS configuration
  tls:
  - hosts:
    - api.company.com
    secretName: api-tls-secret  # Secret type: kubernetes.io/tls

  rules:
  - host: api.company.com
    http:
      paths:
      - path: /api/v1/payments(/|$)(.*)   # capture everything after payments
        pathType: ImplementationSpecific  # required for regex paths
        backend:
          service:
            name: payment-svc
            port:
              number: 8080
      - path: /api/v1/orders(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: order-svc
            port:
              number: 8080

5. PRODUCTION PITFALLS & ARCHITECT WARNINGS

5.1 The rewrite-target Trap

If your application expects the full /api/v1/payments path but the Ingress Controller strips the prefix, the backend will return 404s. Only rewrite when the backend expects a simplified path, and make sure rewrite-target references the correct capture group: with a path like /api/v1/payments(/|$)(.*), $1 matches the separator and $2 the remainder, so the annotation must be rewrite-target: /$2.
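A minimal sketch of a correct rewrite rule (fragment only; Service name illustrative):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2 = everything after the prefix
spec:
  rules:
  - http:
      paths:
      - path: /api/v1/payments(/|$)(.*)
        pathType: ImplementationSpecific
        # Request /api/v1/payments/123 -> backend receives /123
        backend:
          service:
            name: payment-svc
            port:
              number: 8080
```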

5.2 Certificate Mismatch (NGINX)

For NGINX IC, the TLS certificate must be stored in a Kubernetes Secret of type kubernetes.io/tls. It cannot natively access cloud services like AWS ACM.
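The expected Secret shape, as a sketch (the data values are truncated placeholders; in practice the Secret is usually generated with kubectl create secret tls api-tls-secret --cert=tls.crt --key=tls.key -n prod-apps):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-tls-secret
  namespace: prod-apps
type: kubernetes.io/tls        # required type for NGINX IC TLS termination
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64-encoded PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi...   # base64-encoded PEM private key (placeholder)
```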

5.3 Bottleneck Warning

In the in-cluster model, the NGINX IC Pod becomes a single point of congestion. If the Pod dies, all Layer 7 traffic stops. Ensure you run multiple replicas and deploy it with Pod Anti-Affinity across different nodes/AZs.
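A sketch of spreading IC replicas with Pod anti-affinity (the label selector is an assumption and must match your IC Deployment's Pod labels):

```yaml
spec:
  replicas: 3
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two IC Pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/name: ingress-nginx  # assumed IC Pod label
            topologyKey: kubernetes.io/hostname
          # Soft rule: prefer spreading replicas across availability zones
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-nginx
              topologyKey: topology.kubernetes.io/zone
```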


6. TROUBLESHOOTING & NINJA COMMANDS

6.1 Inspecting the Nginx Configuration

This is the ultimate debug step for NGINX: check the generated config file.

# Exec into the NGINX Ingress Controller Pod
kubectl exec -it <nginx-ic-pod> -n ingress-nginx -- cat /etc/nginx/nginx.conf
# Look for the 'server' and 'location' blocks that map your Ingress rules.

6.2 Checking Ingress Status

kubectl get ingress <ingress-name> -n prod-apps
# The ADDRESS column should be populated with the Cloud LB or NodePort IP.

6.3 Checking NGINX Logs

kubectl logs -f <nginx-ic-pod> -n ingress-nginx
# Look for reload messages: "Reloading NGINX configuration" (confirms sync)