Kubernetes / Helm Deployment
Overview
flow8 does not ship with a built-in Helm chart. However, it is designed to be Kubernetes-native and can be deployed using standard Kubernetes resource types. This guide covers the recommended patterns and resource configurations.
Deployment Architecture
```
┌──────────────────────────────────────┐
│          Ingress Controller          │
│         (nginx, Istio, etc.)         │
│      TLS termination, routing        │
└──────────────────┬───────────────────┘
                   │
        ┌──────────┴──────────┐
        │                     │
   ┌────▼────┐           ┌────▼────┐
   │ flow8-1 │           │ flow8-2 │
   │  :4454  │           │  :4454  │
   └────┬────┘           └────┬────┘
        │ (load balanced by Ingress)
        └──────────┬──────────┘
                   │
     ┌─────────────┴───────────────┐
     │                             │
┌────▼────────┐            ┌───────▼──────┐
│   MongoDB   │            │  S3 Storage  │
│ StatefulSet │            │  (AWS, GCS)  │
│  or Atlas   │            │              │
└─────────────┘            └──────────────┘
```
Kubernetes Resources
1. Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: flow8
  labels:
    app: flow8
```
2. Secret (Encryption Keys & Credentials)
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flow8-secrets
  namespace: flow8
type: Opaque
stringData:
  ENCRYPTION_KEY: "<generate: openssl rand -hex 128>"      # 256-char hex
  ENCRYPTION_KEY_SALT: "<generate: openssl rand -hex 32>"  # 64-char hex
  MONGODB_URI: "mongodb+srv://<user>:<password>@cluster.mongodb.net/flow8"
  OAUTH2_CLIENT_ID: "..."
  OAUTH2_CLIENT_SECRET: "..."
```
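Rather than committing real values into the manifest, the two encryption values can be generated locally and loaded with `kubectl create secret`. A sketch (the `kubectl` invocation is shown as a comment because it assumes a configured cluster):

```shell
# Generate the encryption material at the lengths the manifest above expects
ENCRYPTION_KEY=$(openssl rand -hex 128)       # 128 random bytes -> 256 hex chars
ENCRYPTION_KEY_SALT=$(openssl rand -hex 32)   # 32 random bytes  -> 64 hex chars
echo "key=${#ENCRYPTION_KEY} salt=${#ENCRYPTION_KEY_SALT}"

# Load into the cluster without writing the values to a YAML file:
#   kubectl create secret generic flow8-secrets -n flow8 \
#     --from-literal=ENCRYPTION_KEY="$ENCRYPTION_KEY" \
#     --from-literal=ENCRYPTION_KEY_SALT="$ENCRYPTION_KEY_SALT"
```

Keeping generated keys out of checked-in manifests avoids leaking them through version control.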
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: flow8-tls
  namespace: flow8
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>
```
3. ConfigMap (Configuration)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flow8-config
  namespace: flow8
data:
  config.yml: |
    server:
      port: 4454
      max_request_size_mb: 100
      read_timeout_seconds: 30

    mongodb:
      max_pool_size: 100

    encryption:
      algorithm: argon2id

    session:
      ttl_hours: 1
      cookie_secure: true
      cookie_http_only: true
      cookie_same_site: strict

    retention:
      cleanup_interval: "2m"
      policies:
        audit_logs:
          cadence: "30d"
          enforced_minimum: "14d"
```
4. PersistentVolumeClaim (Data Storage)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow8-data
  namespace: flow8
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd  # Must exist in your cluster
  resources:
    requests:
      storage: 100Gi
```
For multiple replicas (ReadWriteMany):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow8-data-shared
  namespace: flow8
spec:
  accessModes:
    - ReadWriteMany  # Required for multi-pod access
  storageClassName: efs  # or nfs, azurefile
  resources:
    requests:
      storage: 500Gi
```
5. ServiceAccount & RBAC
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flow8
  namespace: flow8
```
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flow8
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
```
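Note that a ClusterRole grants read access to Secrets across the entire cluster. If flow8 only needs objects in its own namespace (an assumption to verify against what the server actually watches), a namespaced Role and RoleBinding are a tighter alternative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flow8
  namespace: flow8
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: flow8
  namespace: flow8
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: flow8
subjects:
  - kind: ServiceAccount
    name: flow8
    namespace: flow8
```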
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flow8
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flow8
subjects:
  - kind: ServiceAccount
    name: flow8
    namespace: flow8
```
6. Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flow8
  namespace: flow8
  labels:
    app: flow8
spec:
  replicas: 2  # High availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: flow8
  template:
    metadata:
      labels:
        app: flow8
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "4454"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: flow8
      securityContext:
        runAsNonRoot: true
        runAsUser: 911
        fsGroup: 911
      containers:
        - name: flow8
          image: ghcr.io/osbits/flow8core:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 4454
              protocol: TCP
            - name: mcp
              containerPort: 4445
              protocol: TCP
            - name: channels
              containerPort: 7701
              protocol: TCP
          env:
            - name: SERVER_PORT
              value: "4454"
            - name: LOG_LEVEL
              value: "info"
            - name: ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: ENCRYPTION_KEY
            - name: ENCRYPTION_KEY_SALT
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: ENCRYPTION_KEY_SALT
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: MONGODB_URI
            - name: OAUTH2_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: OAUTH2_CLIENT_ID
            - name: OAUTH2_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: OAUTH2_CLIENT_SECRET
            - name: GOMAXPROCS
              value: "2"
          volumeMounts:
            - name: data
              mountPath: /app/data
            - name: config
              mountPath: /app/config
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 2000m
              memory: 2Gi
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: flow8-data  # RWO; with replicas > 1 use an RWX claim (flow8-data-shared)
        - name: config
          configMap:
            name: flow8-config
        - name: tmp
          emptyDir: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - flow8
                topologyKey: kubernetes.io/hostname
```
7. Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: flow8
  namespace: flow8
  labels:
    app: flow8
spec:
  type: ClusterIP  # Internal service
  ports:
    - name: http
      port: 80
      targetPort: 4454
      protocol: TCP
    - name: mcp
      port: 4445
      targetPort: 4445
      protocol: TCP
    - name: channels
      port: 7701
      targetPort: 7701
      protocol: TCP
  selector:
    app: flow8
```
8. Ingress (TLS Termination)
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flow8
  namespace: flow8
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rate-limit: "100"
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.flow8.io
      secretName: flow8-tls
  rules:
    - host: app.flow8.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flow8
                port:
                  number: 80
```
MongoDB Deployment Options
Option 1: MongoDB Atlas (Managed Service - Recommended)
```yaml
# No K8s resources needed, just configure the connection string
apiVersion: v1
kind: Secret
metadata:
  name: flow8-mongodb
  namespace: flow8
stringData:
  MONGODB_URI: "mongodb+srv://user:password@cluster.mongodb.net/flow8?retryWrites=true&w=majority"
```
Benefits:
- No operational overhead
- Automatic backups
- Automatic failover
- Built-in mTLS
Option 2: Self-Hosted StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: flow8
spec:
  serviceName: mongodb
  replicas: 3  # Replica set
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-creds
                  key: password
          volumeMounts:
            - name: data
              mountPath: /data/db
          resources:
            requests:
              cpu: 1000m
              memory: 2Gi
            limits:
              cpu: 2000m
              memory: 4Gi
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 200Gi
```
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: flow8
spec:
  clusterIP: None  # Headless service for StatefulSet
  ports:
    - port: 27017
  selector:
    app: mongodb
```
Helm Chart Template
For a repeatable deployment, create a Helm chart:
```
flow8-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── namespace.yaml
    ├── configmap.yaml
    ├── secret.yaml
    ├── pvc.yaml
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── _helpers.tpl
```
Chart.yaml:
```yaml
apiVersion: v2
name: flow8
description: flow8 workflow automation platform
type: application
version: 1.0.0
appVersion: "1.0.0"
maintainers:
  - name: flow8 Team
    email: support@flow8.io
```
values.yaml:
```yaml
replicaCount: 2

image:
  repository: ghcr.io/osbits/flow8core
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: app.flow8.io
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 2000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 512Mi

storage:
  size: 100Gi
  storageClass: fast-ssd

mongodb:
  # Set to false to use MongoDB Atlas
  enabled: false
  # If enabled: false, set external connection string
  uri: "mongodb+srv://..."
```
Install:
```shell
helm install flow8 ./flow8-chart \
  --namespace flow8 \
  --values values-prod.yaml
```
Scaling Considerations
Horizontal Scaling
flow8 is stateless and scales horizontally:
```shell
# Scale to 3 replicas
kubectl scale deployment flow8 -n flow8 --replicas=3

# Or update the deployment directly
kubectl patch deployment flow8 -n flow8 -p '{"spec":{"replicas":5}}'
```
Benefits:
- State lives in the shared MongoDB, so replicas are interchangeable
- No session affinity needed
- Load balanced by Service/Ingress
- Each instance has 100 channel workers (7701-7799)
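Given the per-instance worker count above, total concurrent channel capacity is simply replicas × workers per instance; a quick check:

```shell
# Back-of-envelope capacity for 3 replicas at 100 channel workers each
REPLICAS=3
WORKERS_PER_INSTANCE=100
TOTAL=$(( REPLICAS * WORKERS_PER_INSTANCE ))
echo "$TOTAL concurrent channels"   # 300 concurrent channels
```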
Limitations:
- MongoDB becomes a bottleneck at ~10,000 concurrent plays
- Channel port contention (max 100 concurrent per instance)
- Mitigation: deploy more instances against the shared DB, or use MongoDB sharding
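When running multiple replicas, a PodDisruptionBudget keeps at least one pod serving during voluntary disruptions such as node drains and cluster upgrades. A minimal sketch (the selector matches the Deployment's `app: flow8` label):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flow8
  namespace: flow8
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: flow8
```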
Vertical Scaling
Increase resources for single instance:
```yaml
resources:
  requests:
    cpu: 4000m
    memory: 4Gi
  limits:
    cpu: 8000m
    memory: 8Gi
```
Auto-Scaling
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flow8
  namespace: flow8
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flow8
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
```
Monitoring & Logging
Prometheus Metrics
flow8 exposes metrics at /metrics:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flow8
  namespace: flow8
spec:
  selector:
    matchLabels:
      app: flow8
  endpoints:
    - port: http
      interval: 30s
      path: /metrics
```
ELK/Datadog Logging
Collect logs via stdout (Kubernetes native):
```shell
kubectl logs -n flow8 -l app=flow8 -f
```
Or use a log aggregator:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: flow8
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/lib/docker/containers/*/*.log
    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
```
Backup & Recovery
MongoDB Backup
```shell
# MongoDB Atlas backups are automatic (included in the service).
# For self-hosted, use:
kubectl exec -it mongodb-0 -n flow8 -- \
  mongodump --out=/backup

# Copy to S3
kubectl cp flow8/mongodb-0:/backup /tmp/backup
aws s3 cp /tmp/backup s3://backups/flow8/ --recursive
```
PVC Backup (Velero)
```shell
velero backup create flow8-backup \
  --include-namespaces flow8 \
  --include-cluster-resources=true
```
Troubleshooting
Check Pod Logs
```shell
kubectl logs -n flow8 deployment/flow8 --tail=100 -f
```
Describe Pod
```shell
kubectl describe pod -n flow8 -l app=flow8
```
Port Forwarding (Debugging)
```shell
# Forward local port to service
kubectl port-forward -n flow8 svc/flow8 8888:80

# Access at http://localhost:8888
```
Next Steps
- Create a Git repo for your Helm chart
- Use GitOps (Flux, ArgoCD) to manage deployments
- Set up Prometheus + Grafana for monitoring
- Configure backup strategy (Velero, or manual MongoDB dumps)
- Implement network policies for security
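For the last point, a starting-point NetworkPolicy might admit HTTP traffic only from the ingress controller's namespace. A sketch, assuming the controller runs in a namespace named `ingress-nginx` (adjust the namespace selector and any egress rules to your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: flow8
  namespace: flow8
spec:
  podSelector:
    matchLabels:
      app: flow8
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 4454
```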