Deploy FlagFlow on Kubernetes with high availability and scalability
Before deploying FlagFlow on Kubernetes, ensure you have:
- `kubectl` configured to access your cluster

First, create a dedicated namespace for FlagFlow:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: flagflow
  labels:
    name: flagflow
```
```bash
kubectl apply -f namespace.yaml
```
Deploy etcd as the key-value store backend for FlagFlow:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: flagflow
spec:
  serviceName: etcd
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
        - name: etcd
          image: bitnami/etcd:3.6.4-debian-12-r2
          ports:
            - containerPort: 2379
              name: client
            - containerPort: 2380
              name: peer
          env:
            - name: ETCD_ROOT_PASSWORD
              value: "pw_flagflow"
            - name: ETCD_DATA_DIR
              value: /etcd-data
          volumeMounts:
            - name: etcd-storage
              mountPath: /etcd-data
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          # Kubernetes healthchecks for production reliability
          livenessProbe:
            exec:
              command:
                - etcdctl
                - --user=root:pw_flagflow
                - endpoint
                - health
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - etcdctl
                - --user=root:pw_flagflow
                - endpoint
                - health
            initialDelaySeconds: 15
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 2
  volumeClaimTemplates:
    - metadata:
        name: etcd-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: flagflow
spec:
  selector:
    app: etcd
  ports:
    - port: 2379
      targetPort: 2379
      name: client
    - port: 2380
      targetPort: 2380
      name: peer
  clusterIP: None
```
```bash
kubectl apply -f etcd-deployment.yaml
```
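The manifest above embeds the etcd root password as a plain-text environment value to keep the example short. In production you would typically store it in a Secret and reference it via `secretKeyRef`; a minimal sketch (the Secret name `etcd-credentials` is illustrative, and the same Secret can back `ETCD_PASSWORD` in the FlagFlow Deployment below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: etcd-credentials
  namespace: flagflow
type: Opaque
stringData:
  root-password: pw_flagflow # replace with a generated password
```

In the etcd container spec, the plain-text env value would then become:

```yaml
env:
  - name: ETCD_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: etcd-credentials
        key: root-password
```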
Now deploy the FlagFlow application:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagflow
  namespace: flagflow
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flagflow
  template:
    metadata:
      labels:
        app: flagflow
    spec:
      containers:
        - name: flagflow
          image: ghcr.io/flagflow/flagflow:1.7.0
          ports:
            - containerPort: 3000
          env:
            - name: ETCD_SERVER
              value: "etcd.flagflow.svc.cluster.local:2379"
            - name: ETCD_USERNAME
              value: "root"
            - name: ETCD_PASSWORD
              value: "pw_flagflow"
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: flagflow
  namespace: flagflow
spec:
  selector:
    app: flagflow
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  type: ClusterIP
```
```bash
kubectl apply -f flagflow-deployment.yaml
```
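With two replicas, a voluntary disruption such as a node drain or cluster upgrade could still evict both FlagFlow pods at once. A PodDisruptionBudget keeps at least one replica running during such operations; a minimal sketch, applied like the other manifests:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flagflow-pdb
  namespace: flagflow
spec:
  # Always keep at least one FlagFlow pod available
  minAvailable: 1
  selector:
    matchLabels:
      app: flagflow
```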
To expose FlagFlow externally, create an Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flagflow-ingress
  namespace: flagflow
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: flagflow.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flagflow
                port:
                  number: 80
  # Uncomment for TLS
  # tls:
  #   - hosts:
  #       - flagflow.your-domain.com
  #     secretName: flagflow-tls
```
```bash
kubectl apply -f flagflow-ingress.yaml
```
Replace `flagflow.your-domain.com` with your actual domain, and make sure an Ingress controller (such as NGINX) is installed in your cluster.
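If you enable the commented-out `tls` section, the referenced `flagflow-tls` Secret must exist first. One way is to create it from an existing certificate and key (the file paths below are illustrative); a tool like cert-manager can also issue and renew it automatically:

```bash
kubectl create secret tls flagflow-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n flagflow
```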
For advanced configuration, you can use a ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flagflow-config
  namespace: flagflow
data:
  config.yaml: |
    etcd:
      endpoints:
        - etcd.flagflow.svc.cluster.local:2379
      username: root
      password: pw_flagflow
    server:
      port: 3000
      log_level: info
    features:
      metrics_enabled: true
      tracing_enabled: false
```
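On its own, the ConfigMap does nothing; the FlagFlow Deployment has to mount it for the application to see it. A minimal sketch follows; the mount path `/app/config`, and whether FlagFlow actually reads `config.yaml` from a file rather than from environment variables, are assumptions to verify against the FlagFlow documentation:

```yaml
# Fragments to add to the flagflow Deployment (sketch)
spec:
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: flagflow-config
      containers:
        - name: flagflow
          # ...existing fields...
          volumeMounts:
            - name: config
              mountPath: /app/config # assumed path; verify in FlagFlow docs
              readOnly: true
```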
Verify your FlagFlow deployment:
```bash
# Check all resources in the flagflow namespace
kubectl get all -n flagflow

# Check pod logs
kubectl logs -l app=flagflow -n flagflow

# Check etcd logs
kubectl logs -l app=etcd -n flagflow

# Port forward to test locally (optional)
kubectl port-forward service/flagflow 8080:80 -n flagflow
```
If using port forwarding, you can access FlagFlow at http://localhost:8080.
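For a quick smoke test through the port-forward, query the same `/health` endpoint the readiness and liveness probes use:

```bash
curl http://localhost:8080/health
```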
For production environments, consider these scaling options:
```bash
# Scale FlagFlow to 3 replicas
kubectl scale deployment flagflow --replicas=3 -n flagflow

# Scale etcd to 3 replicas for HA (requires cluster configuration)
kubectl scale statefulset etcd --replicas=3 -n flagflow
```
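Instead of scaling manually, you can let Kubernetes adjust the replica count based on load. A minimal HorizontalPodAutoscaler sketch, assuming metrics-server is installed in the cluster (the thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flagflow-hpa
  namespace: flagflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flagflow
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Scale out when average CPU usage exceeds 70% of requests
          averageUtilization: 70
```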
Common troubleshooting commands:
```bash
# Describe problematic pods
kubectl describe pod <pod-name> -n flagflow

# Check events in the namespace
kubectl get events -n flagflow --sort-by='.lastTimestamp'

# Check persistent volumes
kubectl get pv,pvc -n flagflow

# Test etcd connectivity from a pod
kubectl run etcd-test --rm -it --image=alpine --restart=Never -n flagflow -- sh
# Inside the pod:
#   apk add --no-cache curl
#   curl -L http://etcd.flagflow.svc.cluster.local:2379/health
```
Common issues:

- Pods stuck in `Pending`: usually the etcd PersistentVolumeClaim cannot bind; check `kubectl get pvc -n flagflow` and confirm your cluster has a default StorageClass.
- FlagFlow pods in `CrashLoopBackOff`: verify that `ETCD_SERVER`, `ETCD_USERNAME`, and `ETCD_PASSWORD` match the etcd deployment, and inspect the pod logs.
- Ingress unreachable or returning 404: confirm an Ingress controller is installed and that the `host` in the Ingress matches the domain you are using.