Project Lab 10: Scalable State & Configuration Management
Managing stateless apps is easy; managing stateful workloads (databases, caches, queues) requires deterministic naming and sticky storage. This lab demonstrates how to package a Redis cluster as a Helm chart and hydrate it for production using Kustomize.
Reference Material:
- docs/09-kustomize-helm/1-kustomize.md
- docs/09-kustomize-helm/2-helm.md
- docs/09-kustomize-helm/3-statefulsets.md
- Previous Knowledge: StorageClasses (Chapter 05).
1. OBJECTIVE: THE MULTI-ENVIRONMENT DATA STORE
The goal is to deploy a Redis StatefulSet with the following requirements:
- Environment Logic: Use Helm to toggle between "Standalone" (Dev) and "Clustered" (Prod).
- Network Identity: Use a Headless Service to provide stable hostnames (`redis-0`, `redis-1`).
- Persistence: Use `volumeClaimTemplates` to ensure data sticks to specific pod ordinals.
- Security Patching: Use Kustomize to inject an `imagePullSecret` into the production build without modifying the Helm chart.
2. PHASE 1: THE HELM ENGINE (TEMPLATING)
We will build a modular chart that creates both the Service and the StatefulSet.
2.1 The StatefulSet Template (templates/statefulset.yaml)
Note the use of .Values for dynamic replica counts and storage classes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-redis
spec:
  serviceName: {{ .Release.Name }}-headless
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: {{ .Values.storage.className | quote }}
        resources:
          requests:
            storage: {{ .Values.storage.size }}
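The StatefulSet above references `{{ .Release.Name }}-headless` as its `serviceName`, so the chart also needs that Headless Service. A minimal sketch of a `templates/service.yaml` (the `app: redis` selector mirrors the StatefulSet's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-headless
spec:
  clusterIP: None        # Headless: no virtual IP; DNS resolves to individual Pod IPs
  selector:
    app: redis
  ports:
    - name: redis
      port: 6379
```

Setting `clusterIP: None` is what makes the Service "headless" and enables the per-pod DNS records validated later in this lab.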
2.2 The Environment Values (values-prod.yaml)
replicaCount: 3
image:
  repository: redis
  tag: "7.0-alpine"
storage:
  className: "premium-ssd"
  size: "50Gi"
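For the "Standalone" (Dev) mode, a hypothetical `values-dev.yaml` collapses the cluster to a single replica on cheaper storage (the `standard` class name and sizes are illustrative, not from the chart):

```yaml
replicaCount: 1
image:
  repository: redis
  tag: "7.0-alpine"
storage:
  className: "standard"   # assumed default StorageClass in the Dev cluster
  size: "1Gi"
```

Because both environments share the same template, toggling between modes is purely a matter of which values file you pass with `-f`.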
3. PHASE 2: THE KUSTOMIZE LAYER (HYDRATION)
Sometimes you cannot change the Helm chart (e.g., it's a third-party chart). We use Kustomize to "patch" the output of Helm.
3.1 The Production Overlay (kustomization.yaml)
This configuration pulls the Helm chart as a base and adds a security patch.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# In a real setup, we would use 'helm template' and pipe it in
resources:
  - rendered-helm-manifests.yaml

# Patch: Inject a sidecar for log rotation (common production requirement)
patches:
  - target:
      kind: StatefulSet
      name: ".*"
    patch: |-
      - op: add
        path: /spec/template/spec/containers/-
        value:
          name: log-exporter
          image: fluent/fluent-bit:latest
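If you would rather not commit a pre-rendered manifest file, newer Kustomize versions can inflate the chart themselves via the `helmCharts` field (requires running `kustomize build --enable-helm`). A sketch, assuming the chart sits in a local `redis-chart` directory next to the kustomization:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: redis-chart        # local chart directory
    releaseName: my-redis
    valuesFile: values-prod.yaml
```

Either way, the patch layer stays identical: Kustomize only sees rendered Kubernetes objects, never Helm template syntax.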
4. EXECUTION & VALIDATION
4.1 Dry-Run the Helm Rendering
Before deploying, audit the generated YAML to ensure ordinals and services are linked.
helm template my-redis ./redis-chart -f values-prod.yaml
4.2 Deploy the Stack
helm install my-redis ./redis-chart -f values-prod.yaml --namespace database --create-namespace
4.3 Validate Stateful Identity
Verify that Pods have deterministic names and unique PVCs.
kubectl get pods -n database
# Expected: my-redis-0, my-redis-1, my-redis-2
kubectl get pvc -n database
# Expected: data-my-redis-0, data-my-redis-1, data-my-redis-2
5. THE "PET" TEST: VERIFYING PERSISTENCE
Objective: Prove that redis-1 always gets its own specific disk back even after a failure.
- Write Data:
  kubectl exec my-redis-1 -n database -- redis-cli set mykey "BibleData"
- Simulate Failure:
  kubectl delete pod my-redis-1 -n database
- Check Re-attachment: Wait for the pod to restart, then query the data.
  kubectl exec my-redis-1 -n database -- redis-cli get mykey
  # Result: "BibleData"
Architectural Insight: Unlike a Deployment, the StatefulSet controller mapped Pod ordinal 1 to the PVC data-my-redis-1. Even if the Pod is rescheduled onto a different node, the disk follows it.
6. TROUBLESHOOTING & NINJA COMMANDS
6.1 Inspecting the Headless DNS
StatefulSets require a Headless Service for peer discovery. Test the individual DNS records.
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup my-redis-0.my-redis-headless.database.svc.cluster.local
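The record queried above follows the pattern `<pod-name>.<headless-service>.<namespace>.svc.cluster.local`. A quick local sketch of the names a 3-replica release registers (using the release and namespace from this lab; no cluster required):

```shell
release="my-redis"
namespace="database"
service="${release}-headless"

# Print the stable FQDN each ordinal receives via the Headless Service
for ordinal in 0 1 2; do
  echo "${release}-${ordinal}.${service}.${namespace}.svc.cluster.local"
done
```

Peers in a cluster (e.g., Redis replicas pointing at a primary) should always address each other by these FQDNs, never by Pod IP.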
6.2 Rollback a State-Breaking Change
If a Helm upgrade corrupts the configuration, use the Helm history to revert.
helm history my-redis -n database
helm rollback my-redis 1 -n database
6.3 Forced Deletion (The Hazard)
If a node is lost, a stateful Pod can remain stuck in Terminating indefinitely.
The Bible Rule: Do NOT use --force unless the node is confirmed dead. If two pods with the same ID write to one disk, data is corrupted.
7. ARCHITECT'S KEY TAKEAWAYS
- Helm is for Logic, Kustomize is for Environment: Use Helm to manage variables (replicas, CPU) and Kustomize to manage environment-specific "hard-coding" (labels, security contexts).
- OrderedReady vs Parallel: For databases, always use the default `OrderedReady` pod management policy to prevent boot-up race conditions.
- Storage Classes: Always use `volumeBindingMode: WaitForFirstConsumer` with StatefulSets to ensure each disk is created in the same AZ as the pod that claims it.
- Headless is Mandatory: Without a Headless Service, your stateful pods will not have stable DNS names, breaking most clustering logic (e.g., MongoDB replica sets).
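The `WaitForFirstConsumer` takeaway can be sketched as a StorageClass matching the `premium-ssd` name used in values-prod.yaml (the GKE CSI provisioner is just an example; substitute your cloud's driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-ssd
provisioner: pd.csi.storage.gke.io   # example CSI driver
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer   # delay provisioning until a Pod is scheduled
```

With the default `Immediate` mode, a PV can be provisioned in an AZ where the pod later cannot schedule; `WaitForFirstConsumer` defers the choice until the scheduler has placed the pod.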