Filesystem storage provides a simple, lightweight alternative to etcd for storing FlagFlow data, ideal for small companies and single-instance deployments.
FlagFlow 1.5.0 introduces filesystem storage through the new PersistentService abstraction layer as an alternative to etcd. This storage option is well suited to small teams, development environments, and single-instance deployments.

Choose filesystem storage when you need a simple, reliable storage solution without the complexity of distributed systems. It's perfect for teams that want to get started quickly with FlagFlow or don't need real-time synchronization across multiple instances.
| Feature | Filesystem Storage | etcd Storage |
| --- | --- | --- |
| Setup Complexity | Simple - no additional services | Moderate - requires etcd installation |
| Real-time Sync | Delayed - auto refresh with a few milliseconds of delay across replicas | Excellent - automatic watch-based updates |
| Performance | Fast - direct filesystem access | Fast - optimized for key-value operations |
| Scalability | Limited - works with replicas, but sync has a few milliseconds of delay | Distributed - multiple nodes with clustering |
| Data Persistence | Reliable - direct filesystem storage | Reliable - etcd's durability guarantees |
| Resource Usage | Low - no additional processes | Higher - etcd server resources |
| Ideal For | Small teams, development, single instances | Production, distributed systems, high availability |
To use filesystem storage, simply omit the etcd configuration. FlagFlow's PersistentService abstraction will automatically detect the absence of etcd settings and initialize the filesystem storage engine with full type safety and Zod schema validation.
When these etcd environment variables are not set or empty, FlagFlow uses filesystem storage:
```bash
# These should NOT be set for filesystem storage
# ETCD_SERVER=
# ETCD_USERNAME=
# ETCD_PASSWORD=
# ETCD_NAMESPACE=

# Other FlagFlow configuration remains the same
LOGLEVEL=info
ENVIRONMENT=production
```
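The detection behavior described above can be pictured with a short sketch. This is illustrative TypeScript, not FlagFlow's actual PersistentService code: the engine falls back to filesystem storage whenever no etcd server is configured.

```typescript
// Illustrative sketch of storage-engine selection (hypothetical function
// names; FlagFlow's real implementation differs).
type EngineKind = 'etcd' | 'filesystem';

function selectEngine(env: Record<string, string | undefined>): EngineKind {
	// Filesystem storage is used when ETCD_SERVER is unset or blank.
	return env.ETCD_SERVER && env.ETCD_SERVER.trim() !== '' ? 'etcd' : 'filesystem';
}

console.log(selectEngine({})); // → filesystem
console.log(selectEngine({ ETCD_SERVER: 'etcd:2379' })); // → etcd
```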
⚠️ Critical: When using filesystem storage, you must mount the /data volume to persist data between container restarts. Without this mount, all your feature flags and configuration will be lost when the container stops or a new version is installed.
FlagFlow stores all data in the /data directory inside the container. This includes:
FlagFlow 1.5.0 introduces a new dual-engine persistence architecture that supports both etcd and filesystem storage through the PersistentService abstraction layer:
The service layer now supports pluggable persistence engines, allowing seamless switching between storage types without changing application logic.
The filesystem storage engine provides robust local storage with automatic file watching and change detection for real-time updates.
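The idea behind the filesystem engine can be sketched as follows. This is an illustrative TypeScript sketch under assumed names, not FlagFlow's actual PersistentService code: each key such as `flag/feature_a` maps to a JSON file under a data root, and other replicas detect writes by watching the directory.

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';

// Minimal sketch of a filesystem persistence engine (hypothetical class;
// not FlagFlow's real API). Keys map to JSON files under a data root.
class FilesystemEngine {
	constructor(private root: string) {
		fs.mkdirSync(root, { recursive: true });
	}

	// Keys like "flag/feature_a" become nested paths under the root.
	private fileFor(key: string): string {
		return path.join(this.root, key);
	}

	set(key: string, value: unknown): void {
		const file = this.fileFor(key);
		fs.mkdirSync(path.dirname(file), { recursive: true });
		fs.writeFileSync(file, JSON.stringify(value));
	}

	get<T>(key: string): T | undefined {
		const file = this.fileFor(key);
		if (!fs.existsSync(file)) return undefined;
		return JSON.parse(fs.readFileSync(file, 'utf8')) as T;
	}

	// Change detection: replicas pick up each other's writes by watching
	// the data directory (recursive watching requires a recent Node.js).
	watch(onChange: (key: string) => void): fs.FSWatcher {
		return fs.watch(this.root, { recursive: true }, (_event, filename) => {
			if (filename) onChange(filename.toString());
		});
	}
}

// Usage: store a flag and read it back.
const engine = new FilesystemEngine(fs.mkdtempSync(path.join(os.tmpdir(), 'flagflow-')));
engine.set('flag/feature_a', { enabled: true });
console.log(engine.get('flag/feature_a')); // → { enabled: true }
```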
```yaml
version: '3.8'

services:
  flagflow:
    image: flagflow/flagflow:latest
    container_name: flagflow
    ports:
      - "3000:3000"
    # CRITICAL: Mount /data volume for persistence
    volumes:
      - flagflow-data:/data
    environment:
      - LOGLEVEL=info
      - ENVIRONMENT=production
      # Notice: NO etcd configuration - filesystem storage will be used automatically
      - SESSION_USERS_ENABLED=true
      - SESSION_DEFAULT_USERNAME=admin
      - SESSION_DEFAULT_PASSWORD=change_this_password
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  flagflow-data:
    driver: local
```
```bash
# Create a named volume for data persistence
docker volume create flagflow-data

# Run FlagFlow with filesystem storage
docker run -d \
  --name flagflow \
  -p 3000:3000 \
  -v flagflow-data:/data \
  -e LOGLEVEL=info \
  -e ENVIRONMENT=production \
  -e SESSION_USERS_ENABLED=true \
  -e SESSION_DEFAULT_USERNAME=admin \
  -e SESSION_DEFAULT_PASSWORD=change_this_password \
  --restart unless-stopped \
  flagflow/flagflow:latest
```
For Kubernetes deployments, use a PersistentVolumeClaim to ensure data persistence:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flagflow-data
  namespace: flagflow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagflow
  namespace: flagflow
spec:
  replicas: 1 # Single replica recommended for filesystem storage
  selector:
    matchLabels:
      app: flagflow
  template:
    metadata:
      labels:
        app: flagflow
    spec:
      containers:
        - name: flagflow
          image: flagflow/flagflow:latest
          ports:
            - containerPort: 3000
          # Mount the persistent volume to /data
          volumeMounts:
            - name: data
              mountPath: /data
          env:
            - name: LOGLEVEL
              value: "info"
            - name: ENVIRONMENT
              value: "production"
            - name: SESSION_USERS_ENABLED
              value: "true"
            - name: SESSION_DEFAULT_USERNAME
              value: "admin"
            - name: SESSION_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flagflow-secret
                  key: admin-password
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: flagflow-data
```
FlagFlow stores data in a structured directory format within /data:
```
/data/
├── flag/
│   ├── feature_a
│   ├── feature_b
│   └── group/
│       └── nested_feature
├── user/
│   ├── admin
│   └── developer
└── session/
    ├── session_1
    └── session_2
```
Each file contains JSON data representing the corresponding FlagFlow entity. This structure provides:
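For illustration, a flag file such as /data/flag/feature_a might hold JSON along these lines. The field names here are hypothetical; the actual shape is defined by FlagFlow's Zod schemas:

```json
{
  "type": "BOOLEAN",
  "description": "Enables feature A",
  "defaultValue": false,
  "value": true
}
```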
One advantage of filesystem storage is the simplicity of backup and restore operations:
```bash
# Backup the entire data directory
docker run --rm -v flagflow-data:/data -v $(pwd):/backup alpine:latest \
  tar czf /backup/flagflow-backup-$(date +%Y%m%d-%H%M%S).tar.gz -C /data .

# Restore from backup
docker run --rm -v flagflow-data:/data -v $(pwd):/backup alpine:latest \
  tar xzf /backup/flagflow-backup-20241201-143000.tar.gz -C /data
```
Set up automated backups using cron or Kubernetes CronJobs:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: flagflow-backup
  namespace: flagflow
spec:
  schedule: "0 2 * * *" # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: alpine:latest
              command:
                - /bin/sh
                - -c
                - |
                  apk add --no-cache aws-cli
                  tar czf /tmp/flagflow-backup-$(date +%Y%m%d-%H%M%S).tar.gz -C /data .
                  aws s3 cp /tmp/flagflow-backup-*.tar.gz s3://your-backup-bucket/flagflow/
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: flagflow-data
          restartPolicy: OnFailure
```
When running multiple FlagFlow instances with filesystem storage, changes made in one instance will be visible in others with a few milliseconds delay due to file system watching. The PersistentService abstraction ensures consistent behavior, though this is slower than etcd's instant distributed watch-based updates.
Filesystem storage works best with single-instance deployments. If you need to scale horizontally with real-time synchronization, consider migrating to etcd storage.
For teams with 1-50 developers and simple deployment requirements, filesystem storage provides excellent reliability with minimal operational overhead.
If you start with filesystem storage and later need etcd's distributed capabilities, FlagFlow provides migration tools:
- Use the `/migration/export` endpoint to create a backup

📚 For detailed migration instructions, see the Migration Documentation.