
Kubernetes / Helm Deployment

Overview

flow8 does not ship with built-in Helm charts. However, it is designed to be Kubernetes-native and can be deployed using standard K8s resource types. This guide covers the recommended patterns and resource configurations.

Deployment Architecture

        ┌────────────────────────────────┐
        │       Ingress Controller       │
        │      (nginx, Istio, etc.)      │
        │    TLS termination, routing    │
        └───────────────┬────────────────┘
             ┌──────────┴──────────┐   (load balanced by Ingress)
             │                     │
        ┌────▼────┐           ┌────▼────┐
        │ flow8-1 │           │ flow8-2 │
        │  :4454  │           │  :4454  │
        └────┬────┘           └────┬────┘
             └──────────┬──────────┘
          ┌─────────────┴─────────────┐
          │                           │
   ┌──────▼──────┐            ┌───────▼──────┐
   │   MongoDB   │            │  S3 Storage  │
   │ StatefulSet │            │  (AWS, GCS)  │
   │  or Atlas   │            │              │
   └─────────────┘            └──────────────┘

Kubernetes Resources

1. Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: flow8
  labels:
    app: flow8

2. Secret (Encryption Keys & Credentials)

apiVersion: v1
kind: Secret
metadata:
  name: flow8-secrets
  namespace: flow8
type: Opaque
stringData:
  ENCRYPTION_KEY: "<generate: openssl rand -hex 128>"      # 256-char hex
  ENCRYPTION_KEY_SALT: "<generate: openssl rand -hex 32>"  # 64-char hex
  MONGODB_URI: "mongodb+srv://<user>:<password>@cluster.mongodb.net/flow8"
  OAUTH2_CLIENT_ID: "..."
  OAUTH2_CLIENT_SECRET: "..."
---
apiVersion: v1
kind: Secret
metadata:
  name: flow8-tls
  namespace: flow8
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>
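
To keep key material out of version control, you can generate the keys and create the secret imperatively instead of applying the manifest above; a minimal sketch:

# Generate key material (lengths match the comments above)
ENCRYPTION_KEY=$(openssl rand -hex 128)      # 256-char hex
ENCRYPTION_KEY_SALT=$(openssl rand -hex 32)  # 64-char hex

# Create the secret directly so the keys never land in a YAML file
kubectl create secret generic flow8-secrets \
  --namespace flow8 \
  --from-literal=ENCRYPTION_KEY="$ENCRYPTION_KEY" \
  --from-literal=ENCRYPTION_KEY_SALT="$ENCRYPTION_KEY_SALT" \
  --from-literal=MONGODB_URI='mongodb+srv://<user>:<password>@cluster.mongodb.net/flow8'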

3. ConfigMap (Configuration)

apiVersion: v1
kind: ConfigMap
metadata:
  name: flow8-config
  namespace: flow8
data:
  config.yml: |
    server:
      port: 4454
      max_request_size_mb: 100
      read_timeout_seconds: 30
    mongodb:
      max_pool_size: 100
    encryption:
      algorithm: argon2id
    session:
      ttl_hours: 1
      cookie_secure: true
      cookie_http_only: true
      cookie_same_site: strict
    retention:
      cleanup_interval: "2m"
      policies:
        audit_logs:
          cadence: "30d"
          enforced_minimum: "14d"

4. PersistentVolumeClaim (Data Storage)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow8-data
  namespace: flow8
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd # Must exist in your cluster
  resources:
    requests:
      storage: 100Gi

For multiple replicas, use a storage class that supports ReadWriteMany:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow8-data-shared
  namespace: flow8
spec:
  accessModes:
    - ReadWriteMany # Required for multi-pod access
  storageClassName: efs # or nfs, azurefile
  resources:
    requests:
      storage: 500Gi
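
Before applying either claim, confirm the named storage class exists and that the claim binds:

# The referenced storage classes must exist in the cluster
kubectl get storageclass
# After applying, STATUS should read Bound
kubectl get pvc -n flow8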

5. ServiceAccount & RBAC

apiVersion: v1
kind: ServiceAccount
metadata:
  name: flow8
  namespace: flow8
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flow8
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flow8
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flow8
subjects:
  - kind: ServiceAccount
    name: flow8
    namespace: flow8
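
To confirm the binding grants exactly what flow8 needs (and nothing more), impersonate the service account:

kubectl auth can-i list configmaps -n flow8 --as=system:serviceaccount:flow8:flow8   # yes
kubectl auth can-i get secrets     -n flow8 --as=system:serviceaccount:flow8:flow8   # yes
kubectl auth can-i delete pods     -n flow8 --as=system:serviceaccount:flow8:flow8   # no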

6. Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flow8
  namespace: flow8
  labels:
    app: flow8
spec:
  replicas: 2 # High availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: flow8
  template:
    metadata:
      labels:
        app: flow8
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "4454"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: flow8
      securityContext:
        runAsNonRoot: true
        runAsUser: 911
        fsGroup: 911
      containers:
        - name: flow8
          image: ghcr.io/osbits/flow8core:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 4454
              protocol: TCP
            - name: mcp
              containerPort: 4445
              protocol: TCP
            - name: channels
              containerPort: 7701
              protocol: TCP
          env:
            - name: SERVER_PORT
              value: "4454"
            - name: LOG_LEVEL
              value: "info"
            - name: ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: ENCRYPTION_KEY
            - name: ENCRYPTION_KEY_SALT
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: ENCRYPTION_KEY_SALT
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: MONGODB_URI
            - name: OAUTH2_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: OAUTH2_CLIENT_ID
            - name: OAUTH2_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: flow8-secrets
                  key: OAUTH2_CLIENT_SECRET
            - name: GOMAXPROCS
              value: "2" # match the CPU limit below
          volumeMounts:
            - name: data
              mountPath: /app/data
            - name: config
              mountPath: /app/config
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 2000m
              memory: 2Gi
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: flow8-data # use flow8-data-shared (ReadWriteMany) when replicas > 1
        - name: config
          configMap:
            name: flow8-config
        - name: tmp
          emptyDir: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - flow8
                topologyKey: kubernetes.io/hostname
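
Apply the manifest (assuming it is saved as deployment.yaml) and wait for the rollout; with maxUnavailable: 0, new pods must pass their probes before old ones are replaced:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/flow8 -n flow8 --timeout=120s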

7. Service

apiVersion: v1
kind: Service
metadata:
  name: flow8
  namespace: flow8
  labels:
    app: flow8
spec:
  type: ClusterIP # Internal service
  ports:
    - name: http
      port: 80
      targetPort: 4454
      protocol: TCP
    - name: mcp
      port: 4445
      targetPort: 4445
      protocol: TCP
    - name: channels
      port: 7701
      targetPort: 7701
      protocol: TCP
  selector:
    app: flow8

8. Ingress (TLS Termination)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flow8
  namespace: flow8
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.flow8.io
      secretName: flow8-tls
  rules:
    - host: app.flow8.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flow8
                port:
                  number: 80
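
Once DNS for app.flow8.io points at the ingress controller and cert-manager has issued the certificate into flow8-tls, the health endpoint used by the probes should answer over TLS:

# Watch certificate issuance (cert-manager CRD)
kubectl get certificate -n flow8
# Expect a 200 once routing and TLS are in place
curl -sSI https://app.flow8.io/health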

MongoDB Deployment Options

Option 1: Managed MongoDB Atlas

# No K8s resources needed; just configure the connection string
apiVersion: v1
kind: Secret
metadata:
  name: flow8-mongodb
  namespace: flow8
type: Opaque
stringData:
  MONGODB_URI: "mongodb+srv://user:password@cluster.mongodb.net/flow8?retryWrites=true&w=majority"

Benefits:

  • No operational overhead
  • Automatic backups
  • Automatic failover
  • Built-in mTLS

Option 2: Self-Hosted StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: flow8
spec:
  serviceName: mongodb
  replicas: 3 # Replica set
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-creds
                  key: password
          volumeMounts:
            - name: data
              mountPath: /data/db
          resources:
            requests:
              cpu: 1000m
              memory: 2Gi
            limits:
              cpu: 2000m
              memory: 4Gi
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 200Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: flow8
spec:
  clusterIP: None # Headless service for StatefulSet
  ports:
    - port: 27017
  selector:
    app: mongodb
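
Note that the manifest above starts three independent mongod processes; forming an actual replica set additionally requires running mongod with --replSet (and, with root auth enabled, a shared keyFile for internal authentication), then a one-time initiation. A hedged sketch of the initiation step, assuming set name rs0 and the headless service above:

# One-time replica set initiation; hosts follow the
# <pod>.<service>.<namespace>.svc.cluster.local pattern of the headless service
kubectl exec -it mongodb-0 -n flow8 -- mongosh --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongodb-0.mongodb.flow8.svc.cluster.local:27017" },
      { _id: 1, host: "mongodb-1.mongodb.flow8.svc.cluster.local:27017" },
      { _id: 2, host: "mongodb-2.mongodb.flow8.svc.cluster.local:27017" }
    ]
  })'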

Helm Chart Template

For a repeatable deployment, create a Helm chart:

flow8-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── namespace.yaml
    ├── configmap.yaml
    ├── secret.yaml
    ├── pvc.yaml
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── _helpers.tpl

Chart.yaml:

apiVersion: v2
name: flow8
description: flow8 workflow automation platform
type: application
version: 1.0.0
appVersion: "1.0.0"
maintainers:
  - name: flow8 Team
    email: support@flow8.io

values.yaml:

replicaCount: 2

image:
  repository: ghcr.io/osbits/flow8core
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: app.flow8.io
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 2000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 512Mi

storage:
  size: 100Gi
  storageClass: fast-ssd

mongodb:
  # Set to false to use MongoDB Atlas
  enabled: false
  # If enabled is false, set the external connection string
  uri: "mongodb+srv://..."
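
As a sketch of how these values feed the templates (flow8.fullname is a hypothetical helper assumed to be defined in _helpers.tpl):

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "flow8.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: flow8
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}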

Install:

helm install flow8 ./flow8-chart \
  --namespace flow8 \
  --values values-prod.yaml
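
Before installing (and before each upgrade), lint the chart and preview the rendered manifests:

helm lint ./flow8-chart
helm template flow8 ./flow8-chart --values values-prod.yaml
# Later upgrades reuse the same values file
helm upgrade flow8 ./flow8-chart --namespace flow8 --values values-prod.yaml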

Scaling Considerations

Horizontal Scaling

flow8 is stateless and scales horizontally:

# Scale to 3 replicas
kubectl scale deployment flow8 -n flow8 --replicas=3
# Or update deployment
kubectl patch deployment flow8 -n flow8 -p '{"spec":{"replicas":5}}'

Benefits:

  • Shared MongoDB (horizontal scalability)
  • No session affinity needed
  • Load balanced by Service/Ingress
  • Each instance has 100 channel workers (7701-7799)

Limitations:

  • MongoDB becomes a bottleneck at ~10,000 concurrent plays
  • Channel port contention (max 100 concurrent per instance)
  • Solution: Deploy multiple instances with shared DB, or use MongoDB sharding

Vertical Scaling

Increase resources for single instance:

resources:
  requests:
    cpu: 4000m
    memory: 4Gi
  limits:
    cpu: 8000m
    memory: 8Gi

Auto-Scaling

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flow8
  namespace: flow8
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flow8
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
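
Watch the autoscaler's decisions and current utilization:

kubectl get hpa flow8 -n flow8 --watch
kubectl describe hpa flow8 -n flow8   # shows scaling events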

Monitoring & Logging

Prometheus Metrics

flow8 exposes metrics at /metrics:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flow8
  namespace: flow8
spec:
  selector:
    matchLabels:
      app: flow8
  endpoints:
    - port: http
      interval: 30s
      path: /metrics
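
To verify the scrape target manually without Prometheus, port-forward the Service and fetch the metrics path:

kubectl port-forward -n flow8 svc/flow8 8888:80 &
curl -s http://localhost:8888/metrics | head -n 20
kill %1   # stop the port-forward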

ELK/Datadog Logging

Collect logs via stdout (Kubernetes native):

kubectl logs -n flow8 -l app=flow8 -f

Or use a log aggregator:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: flow8
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    output.elasticsearch:
      hosts: ["elasticsearch:9200"]

Backup & Recovery

MongoDB Backup

# MongoDB Atlas includes automatic backups; for self-hosted, dump manually:
kubectl exec -it mongodb-0 -n flow8 -- \
  mongodump --out=/backup
# Copy the dump locally, then upload to S3
kubectl cp flow8/mongodb-0:/backup /tmp/backup
aws s3 cp /tmp/backup s3://backups/flow8/ --recursive
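
Recovery is the reverse path; a sketch assuming a dump at /tmp/backup and the same pod layout:

# Copy the dump into the pod and restore it
kubectl cp /tmp/backup flow8/mongodb-0:/backup
kubectl exec -it mongodb-0 -n flow8 -- mongorestore /backup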

PVC Backup (Velero)

velero backup create flow8-backup \
  --include-namespaces flow8 \
  --include-cluster-resources=true
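
To restore from that backup:

velero restore create --from-backup flow8-backup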

Troubleshooting

Check Pod Logs

kubectl logs -n flow8 deployment/flow8 --tail=100 -f

Describe Pod

kubectl describe pod -n flow8 -l app=flow8

Port Forwarding (Debugging)

# Forward local port to service
kubectl port-forward -n flow8 svc/flow8 8888:80
# Access at http://localhost:8888

Next Steps

  1. Create a Git repo for your Helm chart
  2. Use GitOps (Flux, ArgoCD) to manage deployments
  3. Set up Prometheus + Grafana for monitoring
  4. Configure backup strategy (Velero, or manual MongoDB dumps)
  5. Implement network policies for security