0010: Vibe Kanban POC — Implementation Plan
Reference
- Spec: spec.md
- Upstream docs: https://vibekanban.com/docs/self-hosting/deploy-docker
- Source: https://github.com/BloopAI/vibe-kanban
- Pattern reference: repos/ai-dev/infrastructure/kustomize/components/openclaw/
Current State
- No Vibe Kanban deployment exists
- The `ai-dev` repo has the Kustomize component/overlay structure ready
- The `code-server` namespace already has GitHub OAuth credentials (`gh-oauth` secret)
- The `code-server` namespace already has SSH keys (`ssh-key` secret)
- The ArgoCD `ai-dev` AppProject permits deployment to any namespace
Phase 0: Prerequisites (Manual)
These steps require manual action outside of Git/K8s manifests.
0.1 Add OAuth Callback URL
Add the following callback URL to the existing GitHub OAuth App:
https://vibe-kanban.lab.ctoaas.co/v1/oauth/github/callback
GitHub OAuth Apps support multiple callback URLs — add this alongside the existing one.
0.2 Create Secrets in Central Secret Store
Create a secret named vibe-kanban in the central-secret-store namespace:
```shell
kubectl create secret generic vibe-kanban \
  --namespace central-secret-store \
  --from-literal=jwt-secret="$(openssl rand -base64 48)" \
  --from-literal=db-password="$(openssl rand -base64 32)" \
  --from-literal=electric-role-password="$(openssl rand -base64 32)"
```
Phase 1: Docker Image via Image Factory
Enrol VK in the image-factory pipeline for automated builds and version pinning.
1.1 Add to Image Factory
Add entry to repos/image-factory-state/images.yaml:
```yaml
- name: vibe-kanban-remote
  registry: ghcr.io
  repository: craigedmunds/vibe-kanban-remote
  source:
    provider: github
    repo: BloopAI/vibe-kanban
    branch: main
    dockerfile: crates/remote/Dockerfile
    workflow: docker-build.yml
  rebuildDelay: 7d
  autoRebuild: true
```
1.2 Generate State and Build
```shell
cd repos/image-factory-state
task generate   # Discovers base images, creates state files
task build      # Builds the image locally (or trigger via Kargo)
task push       # Pushes to GHCR
```
1.3 Ensure GitHub Actions Workflow
Create a docker-build.yml workflow in the forked craigedmunds/vibe-kanban repo using the reusable _docker-build.yml pattern. This enables Kargo to trigger rebuilds via workflow_dispatch.
Note: First build takes 10-15 minutes (Rust compilation). Ensure the build runner has at least 4GB RAM.
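A minimal sketch of what that workflow could look like. The `uses:` path and the `with:` input names are assumptions — align them with whatever the real reusable `_docker-build.yml` actually defines before committing:

```yaml
# .github/workflows/docker-build.yml (in the craigedmunds/vibe-kanban fork)
# NOTE: the reusable workflow location and its inputs below are placeholders.
name: docker-build

on:
  workflow_dispatch:   # required so Kargo can trigger rebuilds
  push:
    branches: [main]

jobs:
  build:
    # Placeholder path — point this at the repo that hosts _docker-build.yml
    uses: craigedmunds/shared-workflows/.github/workflows/_docker-build.yml@main
    with:
      dockerfile: crates/remote/Dockerfile
      image: ghcr.io/craigedmunds/vibe-kanban-remote
    secrets: inherit
```

The `workflow_dispatch` trigger is the piece Kargo depends on; the `push` trigger is optional but keeps the image fresh on merges to `main`.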
Phase 2: Kustomize Component
Create repos/ai-dev/infrastructure/kustomize/components/vibe-kanban/ with the following files.
2.1 kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: code-server

resources:
  - externalsecret.yaml
  - pvc.yaml
  - statefulset-postgres.yaml
  - service-postgres.yaml
  - deployment-electric.yaml
  - service-electric.yaml
  - deployment.yaml
  - service.yaml

commonLabels:
  app.kubernetes.io/name: vibe-kanban
  app.kubernetes.io/component: kanban-board
  app.kubernetes.io/part-of: remote-development
```
2.2 externalsecret.yaml
Pulls VK-specific secrets from the central store. GitHub OAuth credentials come from the pre-existing gh-oauth secret (already in code-server ns — no new ESO resource needed for those).
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: vibe-kanban-secrets
  labels:
    app: vibe-kanban
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: central-secret-store
  target:
    name: vibe-kanban-secrets
    creationPolicy: Owner
    template:
      metadata:
        labels:
          managed-by: external-secrets
          source: central-secret-store
  data:
    - secretKey: JWT_SECRET
      remoteRef:
        key: vibe-kanban
        property: jwt-secret
    - secretKey: DB_PASSWORD
      remoteRef:
        key: vibe-kanban
        property: db-password
    - secretKey: ELECTRIC_ROLE_PASSWORD
      remoteRef:
        key: vibe-kanban
        property: electric-role-password
```
2.3 pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vibe-kanban-db
  labels:
    app: vibe-kanban
    component: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vibe-kanban-electric
  labels:
    app: vibe-kanban
    component: electric
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path
```
2.4 statefulset-postgres.yaml
StatefulSet (not Deployment) for stable network identity and ordered rollout.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vibe-kanban-db
  labels:
    app: vibe-kanban
    component: database
spec:
  serviceName: vibe-kanban-db
  replicas: 1
  selector:
    matchLabels:
      app: vibe-kanban
      component: database
  template:
    metadata:
      labels:
        app: vibe-kanban
        component: database
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: postgres
          image: postgres:16-alpine
          command: ["postgres", "-c", "wal_level=logical"]
          ports:
            - name: postgres
              containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "remote"
            - name: POSTGRES_USER
              value: "remote"
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vibe-kanban-secrets
                  key: DB_PASSWORD
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "remote", "-d", "remote"]
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command: ["pg_isready", "-U", "remote", "-d", "remote"]
            initialDelaySeconds: 10
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vibe-kanban-db
```
2.5 service-postgres.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vibe-kanban-db
  labels:
    app: vibe-kanban
    component: database
spec:
  type: ClusterIP
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    app: vibe-kanban
    component: database
```
2.6 deployment.yaml (remote-server)
Uses init container to wait for Postgres. Runs as UID 1000 to match code-server-storage PVC ownership. Mounts SSH key for git operations. PVC mounted read-write for workspace/worktree creation.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vibe-kanban
  labels:
    app: vibe-kanban
    component: server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: vibe-kanban
      component: server
  template:
    metadata:
      labels:
        app: vibe-kanban
        component: server
    spec:
      imagePullSecrets:
        - name: gh-docker-registry-creds
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
        fsGroupChangePolicy: "OnRootMismatch"
      initContainers:
        - name: wait-for-db
          image: postgres:16-alpine
          command: ["sh", "-c"]
          args:
            - |
              until pg_isready -h vibe-kanban-db -U remote -d remote; do
                echo "Waiting for database..."
                sleep 2
              done
              echo "Database is ready"
          securityContext:
            runAsUser: 1000
      containers:
        - name: remote-server
          image: ghcr.io/craigedmunds/vibe-kanban-remote
          ports:
            - name: http
              containerPort: 8081
          env:
            - name: RUST_LOG
              value: "info,remote=info"
            - name: SERVER_LISTEN_ADDR
              value: "0.0.0.0:8081"
            - name: SERVER_PUBLIC_BASE_URL
              value: "https://vibe-kanban.lab.ctoaas.co"
            - name: VK_ALLOWED_ORIGINS
              value: "https://vibe-kanban.lab.ctoaas.co"
            - name: ELECTRIC_URL
              value: "http://vibe-kanban-electric:3000"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vibe-kanban-secrets
                  key: DB_PASSWORD
            - name: SERVER_DATABASE_URL
              value: "postgres://remote:$(DB_PASSWORD)@vibe-kanban-db:5432/remote"
            - name: VIBEKANBAN_REMOTE_JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: vibe-kanban-secrets
                  key: JWT_SECRET
            - name: ELECTRIC_ROLE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vibe-kanban-secrets
                  key: ELECTRIC_ROLE_PASSWORD
            - name: GITHUB_OAUTH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: gh-oauth
                  key: GITHUB_CLIENT_ID
            - name: GITHUB_OAUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: gh-oauth
                  key: GITHUB_CLIENT_SECRET
            - name: GOOGLE_OAUTH_CLIENT_ID
              value: ""
            - name: GOOGLE_OAUTH_CLIENT_SECRET
              value: ""
            - name: LOOPS_EMAIL_API_KEY
              value: ""
          volumeMounts:
            - name: workspace
              mountPath: /home/coder/src
            - name: ssh-key
              mountPath: /home/coder/.ssh
              readOnly: true
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /v1/health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /v1/health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 15
            timeoutSeconds: 5
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: code-server-storage
        - name: ssh-key
          secret:
            secretName: ssh-key
            defaultMode: 0600
```
2.7 service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vibe-kanban
  labels:
    app: vibe-kanban
    component: server
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 8081
      targetPort: http
  selector:
    app: vibe-kanban
    component: server
```
2.8 deployment-electric.yaml
Uses init container to wait for remote-server health endpoint before starting. This ensures migrations have run and the electric_sync DB user exists.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vibe-kanban-electric
  labels:
    app: vibe-kanban
    component: electric
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: vibe-kanban
      component: electric
  template:
    metadata:
      labels:
        app: vibe-kanban
        component: electric
    spec:
      initContainers:
        - name: wait-for-server
          image: busybox:1.36
          command: ["sh", "-c"]
          args:
            - |
              until wget --spider -q http://vibe-kanban:8081/v1/health; do
                echo "Waiting for remote-server to run migrations..."
                sleep 5
              done
              echo "Remote server is healthy"
      containers:
        - name: electric
          image: electricsql/electric:1.3.3
          workingDir: /app
          ports:
            - name: http
              containerPort: 3000
          env:
            - name: ELECTRIC_ROLE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vibe-kanban-secrets
                  key: ELECTRIC_ROLE_PASSWORD
            - name: DATABASE_URL
              value: "postgresql://electric_sync:$(ELECTRIC_ROLE_PASSWORD)@vibe-kanban-db:5432/remote?sslmode=disable"
            - name: PG_PROXY_PORT
              value: "65432"
            - name: LOGICAL_PUBLISHER_HOST
              value: "vibe-kanban-electric"
            - name: AUTH_MODE
              value: "insecure"
            - name: ELECTRIC_INSECURE
              value: "true"
            - name: ELECTRIC_MANUAL_TABLE_PUBLISHING
              value: "true"
            - name: ELECTRIC_USAGE_REPORTING
              value: "false"
            - name: ELECTRIC_FEATURE_FLAGS
              value: "allow_subqueries,tagged_subqueries"
          volumeMounts:
            - name: data
              mountPath: /app/persistent
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "250m"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vibe-kanban-electric
```
2.9 service-electric.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vibe-kanban-electric
  labels:
    app: vibe-kanban
    component: electric
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 3000
      targetPort: http
  selector:
    app: vibe-kanban
    component: electric
```
Phase 3: Overlay Integration
Edit repos/ai-dev/infrastructure/kustomize/overlays/lab/kustomization.yaml.
3.1 Add Component to Resources
```yaml
resources:
  - ../../components/namespace
  - ../../components/gateway
  - ../../components/openclaw
  - ../../components/vibe-kanban  # add this
```
3.2 Add Ingress Entry
Add to helmCharts[].valuesInline.ingresses[]. Keep traefik Google OAuth middleware as the access gate (no skipAuth). VK’s own GitHub OAuth handles identity on top.
```yaml
- domains:
    name: vibe-kanban
  ingress:
    accessPattern: public
    name: vibe-kanban-ingress
    path: /
    pathType: Prefix
    service:
      name: vibe-kanban
      namespace: code-server
      port:
        name: http
        number: 8081
```
3.3 Add Image Tag
Add to `images:`:
```yaml
- name: ghcr.io/craigedmunds/vibe-kanban-remote
  newName: ghcr.io/craigedmunds/vibe-kanban-remote
  newTag: "0.1.0"
```
Phase 4: Deploy & Verify
4.1 Push and Sync
- Commit all changes to a feature branch in the `ai-dev` repo
- Push and open a PR
- After merge, ArgoCD auto-syncs (or trigger manually via `argocd app sync ai-dev`)
4.2 Verify Pods
```shell
# All vibe-kanban pods should be Running
kubectl get pods -n code-server -l app.kubernetes.io/name=vibe-kanban

# Check postgres is healthy and wal_level is set
kubectl exec -n code-server statefulset/vibe-kanban-db -- psql -U remote -d remote -c "SHOW wal_level;"

# Check remote-server ran migrations successfully
kubectl logs -n code-server deployment/vibe-kanban -c remote-server | head -50

# Check electric connected to postgres
kubectl logs -n code-server deployment/vibe-kanban-electric -c electric | head -50

# Test health endpoint
kubectl exec -n code-server deployment/vibe-kanban -- wget -qO- http://localhost:8081/v1/health
```
4.3 Verify Access
- Navigate to https://vibe-kanban.lab.ctoaas.co
- Should pass through Google OAuth (shared cookie — no extra prompt if already authenticated)
- Sign in with GitHub OAuth within VK
- Create first organisation and project
- Test workspace creation — verify git worktree is created on the PVC
Risks & Mitigations
| Risk | Mitigation |
|---|---|
| Rust build OOM during Docker build | Ensure image-factory runner has 4GB+ RAM |
| `SERVER_DATABASE_URL` env var interpolation fails | If `$(DB_PASSWORD)` doesn’t expand, use a shell init script to construct the URL |
| ElectricSQL version incompatibility | Pin to `electricsql/electric:1.3.3` as specified in upstream docker-compose |
| GitHub OAuth callback intercepted by Google OAuth middleware | Should work since the callback happens after the Google OAuth session is established; test during deployment |
| `code-server-storage` PVC scheduling conflict | All pods must land on the same node (RWO). Add nodeSelector/affinity if needed |
| UID mismatch on PVC | Run as `runAsUser: 1000`, `fsGroup: 1000` to match PVC ownership |
| VK container lacks git binary | Verified: upstream Dockerfile installs git |
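The `SERVER_DATABASE_URL` fallback could be sketched as a container command override. This is a hypothetical fragment: the server entrypoint path is an assumption and must be checked against the image's actual ENTRYPOINT/CMD before use.

```yaml
# Fallback only if Kubernetes $(DB_PASSWORD) dependent-variable expansion
# misbehaves: construct the URL in a shell wrapper instead.
command: ["sh", "-c"]
args:
  - |
    export SERVER_DATABASE_URL="postgres://remote:${DB_PASSWORD}@vibe-kanban-db:5432/remote"
    exec /app/remote-server   # placeholder — use the image's real entrypoint
```

Note that Kubernetes does expand `$(VAR)` in `env[].value` when `VAR` is defined earlier in the same `env` list, so the declarative form in deployment.yaml should work; this wrapper is insurance only.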
Future Enhancements
- Investigate Google OAuth for VK identity layer (single sign-in instead of dual)
- Postgres backup CronJob to the PVC
- Resource limit tuning based on actual usage
- Configure VK workspace directory location on the PVC
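The backup CronJob enhancement could start from a sketch like the one below. The schedule and the backup destination are placeholders (dumping onto the database's own PVC is shown only for illustration; a dedicated backup PVC or object storage is preferable):

```yaml
# Nightly pg_dump of the VK database — all names marked as placeholders
# are assumptions, not part of the current plan.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vibe-kanban-db-backup
  namespace: code-server
spec:
  schedule: "0 2 * * *"   # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16-alpine
              command: ["sh", "-c"]
              args:
                - pg_dump -h vibe-kanban-db -U remote -d remote -Fc
                  -f "/backup/remote-$(date +%Y%m%d).dump"
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: vibe-kanban-secrets
                      key: DB_PASSWORD
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: vibe-kanban-backup   # placeholder: PVC does not exist yet
```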