Kubernetes

Deploy FlagFlow on Kubernetes with high availability and scalability

Prerequisites

Before deploying FlagFlow on Kubernetes, ensure you have:

  • A running Kubernetes cluster (version 1.20 or higher)
  • kubectl configured to access your cluster
  • Sufficient cluster resources (minimum 2 CPU cores, 4GB RAM)
  • StorageClass configured for persistent volumes

Create Namespace

First, create a dedicated namespace for FlagFlow. Save the following manifest as namespace.yaml:

Create FlagFlow namespace
apiVersion: v1
kind: Namespace
metadata:
  name: flagflow
  labels:
    name: flagflow
Apply namespace
kubectl apply -f namespace.yaml

Deploy etcd

Deploy etcd as the key-value store backend for FlagFlow:

etcd-deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: flagflow
spec:
  serviceName: etcd
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - name: etcd
        image: bitnami/etcd:3.6.4-debian-12-r2
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: ETCD_ROOT_PASSWORD
          value: "pw_flagflow"
        - name: ETCD_DATA_DIR
          value: /etcd-data
        volumeMounts:
        - name: etcd-storage
          mountPath: /etcd-data
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        # Kubernetes healthchecks for production reliability
        livenessProbe:
          exec:
            command:
            - etcdctl
            - --user=root:pw_flagflow
            - endpoint
            - health
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - etcdctl
            - --user=root:pw_flagflow
            - endpoint
            - health
          initialDelaySeconds: 15
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 2
  volumeClaimTemplates:
  - metadata:
      name: etcd-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: flagflow
spec:
  selector:
    app: etcd
  ports:
  - port: 2379
    targetPort: 2379
    name: client
  - port: 2380
    targetPort: 2380
    name: peer
  clusterIP: None
Deploy etcd
kubectl apply -f etcd-deployment.yaml
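
The manifest above embeds the etcd root password in plain text. For anything beyond a quick test, keep credentials in a Kubernetes Secret instead. A minimal sketch, assuming a Secret named flagflow-etcd-auth (the name is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flagflow-etcd-auth
  namespace: flagflow
type: Opaque
stringData:
  ETCD_ROOT_PASSWORD: pw_flagflow
---
# Then reference the Secret from the container env instead of a literal value:
# env:
# - name: ETCD_ROOT_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: flagflow-etcd-auth
#       key: ETCD_ROOT_PASSWORD
```

The same secretKeyRef pattern applies to the ETCD_PASSWORD variable in the FlagFlow Deployment below.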

Deploy FlagFlow

Now deploy the FlagFlow application:

flagflow-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagflow
  namespace: flagflow
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flagflow
  template:
    metadata:
      labels:
        app: flagflow
    spec:
      containers:
      - name: flagflow
        image: ghcr.io/flagflow/flagflow:1.7.0
        ports:
        - containerPort: 3000
        env:
        - name: ETCD_SERVER
          value: "etcd.flagflow.svc.cluster.local:2379"
        - name: ETCD_USERNAME
          value: "root"
        - name: ETCD_PASSWORD
          value: "pw_flagflow"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: flagflow
  namespace: flagflow
spec:
  selector:
    app: flagflow
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: ClusterIP
Deploy FlagFlow
kubectl apply -f flagflow-deployment.yaml

Expose with Ingress

To expose FlagFlow externally, create an Ingress resource:

flagflow-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flagflow-ingress
  namespace: flagflow
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: flagflow.your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flagflow
            port:
              number: 80
  # Uncomment for TLS
  # tls:
  # - hosts:
  #   - flagflow.your-domain.com
  #   secretName: flagflow-tls
Apply Ingress
kubectl apply -f flagflow-ingress.yaml

Replace flagflow.your-domain.com with your actual domain, and make sure an Ingress controller (such as ingress-nginx) is installed in your cluster.

Configuration with ConfigMap

For advanced configuration, you can externalize settings into a ConfigMap (note that credentials such as the etcd password are better kept in a Secret):

flagflow-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flagflow-config
  namespace: flagflow
data:
  config.yaml: |
    etcd:
      endpoints:
        - etcd.flagflow.svc.cluster.local:2379
      username: root
      password: pw_flagflow
    server:
      port: 3000
      log_level: info
    features:
      metrics_enabled: true
      tracing_enabled: false
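
The ConfigMap above is defined but not yet referenced by the Deployment. A minimal sketch of mounting it into the FlagFlow pod, assuming the application reads its configuration from /app/config.yaml (the mount path is an assumption — check the FlagFlow image documentation):

```yaml
# Fragment for the flagflow Deployment pod spec (spec.template.spec)
containers:
- name: flagflow
  # ...existing image, ports, env, probes...
  volumeMounts:
  - name: config
    mountPath: /app/config.yaml   # assumed path; adjust to the image's expectations
    subPath: config.yaml
    readOnly: true
volumes:
- name: config
  configMap:
    name: flagflow-config
```

Using subPath mounts only the single config.yaml key without shadowing the rest of the directory.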

Verification

Verify your FlagFlow deployment:

Check deployment status
# Check all resources in the flagflow namespace
kubectl get all -n flagflow

# Check pod logs
kubectl logs -l app=flagflow -n flagflow

# Check etcd logs
kubectl logs -l app=etcd -n flagflow

# Port forward to test locally (optional)
kubectl port-forward service/flagflow 8080:80 -n flagflow

If using port forwarding, you can access FlagFlow at http://localhost:8080.

Scaling and High Availability

For production environments, consider these scaling options:

Scale FlagFlow deployment
# Scale FlagFlow to 3 replicas
kubectl scale deployment flagflow --replicas=3 -n flagflow

# Scale etcd to 3 replicas for HA (requires cluster configuration)
kubectl scale statefulset etcd --replicas=3 -n flagflow

  • FlagFlow pods: Can be scaled horizontally as they are stateless
  • etcd cluster: Requires proper cluster configuration for multi-node setup
  • Resource requests: Adjust based on your workload requirements
  • Persistent storage: Use appropriate StorageClass for your environment
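
The manual scaling above can also be automated. A sketch of a HorizontalPodAutoscaler for the stateless FlagFlow Deployment, plus a PodDisruptionBudget to keep at least one replica available during node drains (the CPU target and replica bounds are assumptions — tune them for your workload; the HPA requires metrics-server in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flagflow
  namespace: flagflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flagflow
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flagflow
  namespace: flagflow
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: flagflow
```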

Troubleshooting

Common troubleshooting commands:

Debugging commands
# Describe problematic pods
kubectl describe pod <pod-name> -n flagflow

# Check events in the namespace
kubectl get events -n flagflow --sort-by='.lastTimestamp'

# Check persistent volumes and claims (PVs are cluster-scoped; -n filters only the PVCs)
kubectl get pv,pvc -n flagflow

# Test etcd connectivity from a pod
kubectl run etcd-test --rm -it --image=alpine --restart=Never -n flagflow -- sh
# Inside the pod:
# apk add --no-cache curl
# curl -L http://etcd.flagflow.svc.cluster.local:2379/health

Common issues:

  • Pods stuck in Pending: Check resource requests and node capacity
  • etcd connection failed: Verify service names and network policies
  • Persistent volume issues: Ensure StorageClass is available and configured
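
If the cluster enforces NetworkPolicies, the "etcd connection failed" symptom above can come from a default-deny policy. A sketch of a policy allowing the FlagFlow pods to reach etcd on the client port, assuming the labels used in the manifests above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-flagflow-to-etcd
  namespace: flagflow
spec:
  podSelector:
    matchLabels:
      app: etcd
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: flagflow
    ports:
    - protocol: TCP
      port: 2379
```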