ZFS Storage Integration - Implementation Tasks

1. Proxmox Host Configuration

  • 1.1 ZFS Dataset Setup (Validates: Requirements 1.1, 1.4, 3.1, 3.2)

Create and configure the ZFS dataset on the Proxmox host for Kubernetes storage.

Steps:

  1. SSH to Proxmox host (192.168.32.163)
  2. Create ZFS dataset: zfs create rpool/data/k8s-storage
  3. Set ZFS properties:
    • zfs set compression=lz4 rpool/data/k8s-storage
    • zfs set atime=off rpool/data/k8s-storage
    • zfs set recordsize=128k rpool/data/k8s-storage
  4. Verify dataset creation: zfs list rpool/data/k8s-storage
  5. Check properties: zfs get all rpool/data/k8s-storage
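
The steps above can be collected into a small script. A minimal sketch that emits the commands for review rather than running them directly; the dataset name matches the steps above, and the plan can be piped to `sh` on the Proxmox host once reviewed:

```shell
#!/bin/sh
# Emit the dataset setup commands for review; pipe the output to sh to apply.
dataset_setup_cmds() {
  ds="$1"
  echo "zfs create -p $ds"            # -p also creates any missing parents
  echo "zfs set compression=lz4 $ds"
  echo "zfs set atime=off $ds"
  echo "zfs set recordsize=128k $ds"
}

dataset_setup_cmds "rpool/data/k8s-storage"          # review the plan
# dataset_setup_cmds "rpool/data/k8s-storage" | sh   # apply (on the Proxmox host)
```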

Acceptance:

  • Dataset rpool/data/k8s-storage exists
  • Compression is enabled (lz4)
  • Properties are correctly set

  • 1.2 NFS Server Configuration (Validates: Requirements 1.2, 2.1)

Configure NFS server on Proxmox to export the ZFS dataset to the Kubernetes cluster.

Steps:

  1. Install NFS server: apt update && apt install nfs-kernel-server -y
  2. Add export to /etc/exports:
    /rpool/data/k8s-storage 192.168.32.59(rw,sync,no_subtree_check,no_root_squash)
    
  3. Reload exports: exportfs -ra
  4. Verify export: exportfs -v
  5. Verify the export is reachable: showmount -e 192.168.32.163 (run from another machine on the cluster network; Talos nodes expose no shell)
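
Building the exports line from variables keeps the path and client IP in one place if either changes later. A sketch; the path and IP are the ones used in the steps above:

```shell
#!/bin/sh
# Build the /etc/exports entry from variables so host/path changes stay in one place.
exports_line() {
  path="$1"; client="$2"
  echo "$path $client(rw,sync,no_subtree_check,no_root_squash)"
}

line=$(exports_line /rpool/data/k8s-storage 192.168.32.59)
echo "$line"
# On the Proxmox host, append and reload:
#   echo "$line" >> /etc/exports && exportfs -ra && exportfs -v
```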

Acceptance:

  • NFS server is running
  • Export is configured for k8s node IP
  • Export is visible from the cluster network

2. Kubernetes Storage Infrastructure

  • 2.1 Create democratic-csi Component Directory (Validates: Requirements 4.1, 4.3)

Set up the component structure for democratic-csi in the k8s-lab repository.

Steps:

  1. Create directory: k8s-lab/components/democratic-csi/
  2. Create subdirectories:
    • k8s-lab/components/democratic-csi/charts/
  3. Create initial files:
    • kustomization.yaml
    • namespace.yaml
    • values.yaml
    • README.md
    • Taskfile.yaml
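
The scaffolding can be done in one pass; a sketch, assuming it is run from the repository root:

```shell
#!/bin/sh
# Scaffold the democratic-csi component directory (run from the repo root).
base="k8s-lab/components/democratic-csi"
mkdir -p "$base/charts"
for f in kustomization.yaml namespace.yaml values.yaml README.md Taskfile.yaml; do
  touch "$base/$f"
done
ls -1 "$base"   # confirm the structure
```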

Acceptance:

  • Directory structure exists
  • Initial files are created
  • Follows existing component patterns (code-server, n8n, etc.)

  • 2.2 Configure democratic-csi Namespace (Validates: Requirements 4.1)

Create the namespace for democratic-csi with appropriate security settings.

File: k8s-lab/components/democratic-csi/namespace.yaml

Content:

apiVersion: v1
kind: Namespace
metadata:
  name: democratic-csi
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged

Acceptance:

  • Namespace manifest is created
  • Pod security labels are set to privileged (required for CSI)

  • 2.3 Create democratic-csi Helm Values (Validates: Requirements 1.1, 1.4, 1.5, 2.1, 2.2, 2.3)

Configure Helm values for democratic-csi with NFS driver and ZFS backend.

File: k8s-lab/components/democratic-csi/values.yaml

Content:

csiDriver:
  name: "org.democratic-csi.nfs"
 
storageClasses:
- name: zfs-nfs
  defaultClass: false
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
 
volumeSnapshotClasses:
- name: zfs-nfs-snapshot
  deletionPolicy: Delete
  parameters: {}
 
driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.32.163
      port: 80
      apiKey:
    zfs:
      datasetParentName: rpool/data/k8s-storage
      detachedSnapshotsDatasetParentName: rpool/data/k8s-storage/snapshots
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareHost: 192.168.32.163
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: root
      shareMapallUser: ""
      shareMapallGroup: ""
 
controller:
  enabled: true
  strategy: deployment
  externalAttacher:
    enabled: true
  externalProvisioner:
    enabled: true
  externalResizer:
    enabled: true
  externalSnapshotter:
    enabled: true
 
node:
  enabled: true
  hostPID: true
  driver:
    extraEnv:
    - name: MOUNT_OPTIONS
      value: "nfsvers=4.2,noatime"

Acceptance:

  • Values file is created
  • Storage class configured with Retain policy
  • NFS driver configured for Proxmox host
  • ZFS dataset paths are correct

  • 2.4 Download democratic-csi Helm Chart (Validates: Requirements 4.1)

Download the democratic-csi Helm chart for local deployment.

Steps:

  1. Add Helm repo: helm repo add democratic-csi https://democratic-csi.github.io/charts/
  2. Update repos: helm repo update
  3. Pull chart: helm pull democratic-csi/democratic-csi --untar --untardir k8s-lab/components/democratic-csi/charts/
  4. Verify chart version in Chart.yaml

Acceptance:

  • Chart is downloaded to charts/ directory
  • Chart version is documented

  • 2.5 Create democratic-csi Kustomization (Validates: Requirements 4.1, 4.3)

Create kustomization manifest to deploy democratic-csi via Helm.

File: k8s-lab/components/democratic-csi/kustomization.yaml

Content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
 
namespace: democratic-csi
 
resources:
  - namespace.yaml
 
helmCharts:
  - name: democratic-csi
    repo: https://democratic-csi.github.io/charts/
    version: 0.14.6  # Update to latest stable version
    releaseName: zfs-nfs
    namespace: democratic-csi
    valuesFile: values.yaml
    includeCRDs: true

Acceptance:

  • Kustomization file is created
  • Helm chart is configured
  • Values file is referenced

  • 2.6 Create democratic-csi Component README (Validates: Requirements 4.4)

Document the democratic-csi setup, configuration, and usage.

File: k8s-lab/components/democratic-csi/README.md

Content:

  • Overview of democratic-csi and ZFS storage
  • Architecture diagram reference
  • Configuration details
  • Usage examples
  • Troubleshooting guide
  • Links to design document

Acceptance:

  • README is comprehensive
  • Includes usage examples
  • Documents troubleshooting steps

  • 2.7 Create democratic-csi Taskfile (Validates: Requirements 4.4)

Create task automation for democratic-csi management.

File: k8s-lab/components/democratic-csi/Taskfile.yaml

Tasks to include:

  • status - Show democratic-csi status
  • logs - View controller and node logs
  • restart - Restart democratic-csi pods
  • test:provisioning - Test PVC provisioning
  • test:mount - Test volume mounting
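
A possible shape for the Taskfile (go-task v3 syntax); the label selector and the PVC wait condition are assumptions and should be checked against what the chart actually deploys:

```yaml
# Sketch of Taskfile.yaml; label/resource names are assumptions.
version: "3"

tasks:
  status:
    desc: Show democratic-csi pods and storage class
    cmds:
      - kubectl get pods -n democratic-csi
      - kubectl get storageclass zfs-nfs

  logs:
    desc: View controller and node logs
    cmds:
      - kubectl logs -n democratic-csi -l app.kubernetes.io/name=democratic-csi --all-containers --tail=100

  restart:
    desc: Restart democratic-csi pods
    cmds:
      - kubectl rollout restart deployment -n democratic-csi
      - kubectl rollout restart daemonset -n democratic-csi

  test:provisioning:
    desc: Test PVC provisioning
    cmds:
      - kubectl apply -f test-pvc.yaml
      - kubectl wait -n democratic-csi --for=jsonpath='{.status.phase}'=Bound pvc/test-zfs-pvc --timeout=120s
```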

Acceptance:

  • Taskfile is created
  • Common operations are automated
  • Follows existing task patterns

  • 2.8 Update Main Components Kustomization (Validates: Requirements 4.1, 4.3)

Add democratic-csi to the main components kustomization.

File: k8s-lab/components/kustomization.yaml

Changes:

  • Add - democratic-csi/ to resources list
  • Position after local-storage and before code-server
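
Applied, the resources list would look roughly like this; only the neighbouring entries come from the positioning note above, and the rest of the list is elided:

```yaml
# Excerpt of k8s-lab/components/kustomization.yaml (other entries elided)
resources:
  # ...
  - local-storage/
  - democratic-csi/
  - code-server/
  # ...
```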

Acceptance:

  • democratic-csi is included in main kustomization
  • Build succeeds: task build-application APP=democratic-csi

3. Code-Server Storage Migration

  • 3.1 Update Code-Server Helm Values for ZFS Storage (Validates: Requirements 5.1, 5.2, 5.4)

Update code-server to use ZFS-backed storage instead of local-path.

File: k8s-lab/components/code-server/values.yaml

Changes:

persistentVolumeClaim:
  enabled: true
  create: true
  size: 10Gi
  storageClassName: "zfs-nfs"  # Changed from "local-path"
  accessMode: ReadWriteOnce
  mountPath: "/home/coder"
  projectsPath: "/home/coder/projects"

Acceptance:

  • Storage class changed to zfs-nfs
  • Size remains 10Gi
  • Access mode is ReadWriteOnce

  • 3.2 Add Node Affinity for Shared Workspace (Validates: Requirements 5.2)

Ensure code-server and codev pods are scheduled on the same node for RWO volume sharing.

File: k8s-lab/components/code-server/values.yaml

Add:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - code-server
          - codev
      topologyKey: kubernetes.io/hostname

File: k8s-lab/components/codev/deployment.yaml

Add to spec.template.spec:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - code-server
          - codev
      topologyKey: kubernetes.io/hostname

Acceptance:

  • Both pods have pod affinity configured
  • Affinity ensures same-node scheduling

4. Testing and Validation

  • 4.1 Test democratic-csi Deployment (Validates: Requirements 4.1, 4.5)

Verify democratic-csi is deployed and running correctly.

Steps:

  1. Build configuration: task build-application APP=democratic-csi
  2. Apply configuration: task apply-application APP=democratic-csi
  3. Check pods: kubectl get pods -n democratic-csi
  4. Check logs: kubectl logs -n democratic-csi -l app=democratic-csi-controller
  5. Verify storage class: kubectl get storageclass zfs-nfs

Acceptance:

  • All democratic-csi pods are running
  • No errors in logs
  • Storage class zfs-nfs exists

  • 4.2 Test PVC Provisioning (Validates: Requirements 2.1, 2.2, 2.4)

Create a test PVC and verify dynamic provisioning works.

Test PVC: k8s-lab/components/democratic-csi/test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-zfs-pvc
  namespace: democratic-csi
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfs-nfs
  resources:
    requests:
      storage: 1Gi

Steps:

  1. Create test PVC: kubectl apply -f test-pvc.yaml
  2. Check PVC status: kubectl get pvc -n democratic-csi
  3. Check PV creation: kubectl get pv
  4. Verify ZFS dataset on Proxmox: zfs list | grep k8s-storage
  5. Delete test PVC: kubectl delete -f test-pvc.yaml
  6. Verify PV is retained: kubectl get pv

Acceptance:

  • PVC is created and bound
  • PV is dynamically provisioned
  • ZFS dataset exists on Proxmox
  • PV is retained after PVC deletion

  • 4.3 Test Volume Mount and Data Persistence (Validates: Requirements 3.4, 3.5)

Create a test pod that writes data to a ZFS-backed volume and verify persistence.

Test Pod: k8s-lab/components/democratic-csi/test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-zfs-pod
  namespace: democratic-csi
spec:
  containers:
  - name: test
    image: alpine
    command: ["sh", "-c", "echo 'Hello ZFS' > /data/test.txt && cat /data/test.txt && sleep 3600"]
    volumeMounts:
    - name: test-volume
      mountPath: /data
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-zfs-pvc

Steps:

  1. Create test pod: kubectl apply -f test-pod.yaml
  2. Wait for pod to run: kubectl wait --for=condition=ready pod/test-zfs-pod -n democratic-csi
  3. Check file: kubectl exec -n democratic-csi test-zfs-pod -- cat /data/test.txt
  4. Delete pod: kubectl delete pod test-zfs-pod -n democratic-csi
  5. Recreate pod: kubectl apply -f test-pod.yaml
  6. Verify file persists: kubectl exec -n democratic-csi test-zfs-pod -- cat /data/test.txt

Acceptance:

  • Pod can write to volume
  • File persists across pod restarts
  • No errors in pod logs

  • 4.4 Test Code-Server with ZFS Storage (Validates: Requirements 5.1, 5.3, 5.4)

Deploy code-server with ZFS storage and verify functionality.

Steps:

  1. Delete existing code-server PVC: kubectl delete pvc code-server-storage -n code-server
  2. Apply updated code-server: task apply-application APP=code-server
  3. Wait for pod: kubectl wait --for=condition=ready pod -l app=code-server -n code-server
  4. Check PVC: kubectl get pvc -n code-server
  5. Verify storage class: kubectl get pvc code-server-storage -n code-server -o jsonpath='{.spec.storageClassName}'
  6. Access code-server UI and create test files
  7. Restart pod and verify files persist

Acceptance:

  • code-server uses zfs-nfs storage class
  • PVC is bound to ZFS-backed PV
  • Files persist across restarts
  • UI is accessible and functional

  • 4.5 Test Shared Workspace Between code-server and codev (Validates: Requirements 5.1, 5.2, 5.3)

Verify both code-server and codev can access the same workspace.

Steps:

  1. Ensure code-server is running with ZFS storage
  2. Apply codev: task apply-application APP=codev
  3. Wait for both pods: kubectl get pods -n code-server
  4. Verify same node: kubectl get pods -n code-server -o wide
  5. Create file in code-server: kubectl exec -n code-server -c code-server <pod> -- touch /home/coder/test-shared.txt
  6. Check file in codev: kubectl exec -n code-server <codev-pod> -- ls -la /home/coder/test-shared.txt
  7. Write from codev: kubectl exec -n code-server <codev-pod> -- sh -c "echo 'from codev' > /home/coder/test-shared.txt"
  8. Read from code-server: kubectl exec -n code-server -c code-server <pod> -- cat /home/coder/test-shared.txt

Acceptance:

  • Both pods are on the same node
  • Both pods can read/write to shared workspace
  • Files are immediately visible to both pods
  • No permission errors

  • 4.6 Performance Comparison Test (Validates: Requirements 3.3)

Compare I/O performance between local-path and zfs-nfs storage.

Test Script: k8s-lab/components/democratic-csi/test-performance.sh

#!/bin/bash
# Simple I/O performance test
POD="<pod>"  # code-server pod name

echo "Testing write performance..."
kubectl exec -n code-server "$POD" -- sh -c "dd if=/dev/zero of=/home/coder/testfile bs=1M count=100 oflag=direct"

echo "Testing read performance..."
kubectl exec -n code-server "$POD" -- sh -c "dd if=/home/coder/testfile of=/dev/null bs=1M iflag=direct"

echo "Cleaning up..."
kubectl exec -n code-server "$POD" -- rm /home/coder/testfile

Steps:

  1. Run test with ZFS storage
  2. Document results
  3. Compare with baseline (if available)

Acceptance:

  • Performance is acceptable for development workloads
  • No significant degradation compared to local-path
  • Results are documented

5. Documentation and Cleanup

  • 5.1 Update Main k8s-lab README (Validates: Requirements 4.4, 6.5)

Document ZFS storage integration in the main README.

File: k8s-lab/README.md

Add section:

  • Storage architecture overview
  • Available storage classes (local-path, zfs-nfs)
  • When to use each storage class
  • Link to democratic-csi component README

Acceptance:

  • README includes storage documentation
  • Usage guidance is clear
  • Links to detailed docs

  • 5.2 Create Storage Selection Guide (Validates: Requirements 6.1, 6.2, 6.3, 6.5)

Document when to use local-path vs zfs-nfs storage.

File: k8s-lab/docs/storage-guide.md

Content:

  • Storage class comparison table
  • Use case recommendations
  • Performance characteristics
  • Migration guide
  • Troubleshooting

Acceptance:

  • Guide is comprehensive
  • Clear decision criteria
  • Examples for common scenarios

  • 5.3 Update Taskfile with Storage Tasks (Validates: Requirements 4.4)

Add storage management tasks to main Taskfile.

File: k8s-lab/Taskfile.yaml

Add tasks:

  • storage:status - Show all storage classes and PVCs
  • storage:test - Run storage provisioning tests
  • storage:cleanup - Clean up retained PVs

Acceptance:

  • Tasks are added to main Taskfile
  • Tasks follow existing patterns
  • Documentation is updated

  • 5.4 Clean Up Test Resources (Validates: Requirements 4.5)

Remove test PVCs, pods, and retained PVs.

Steps:

  1. Delete test pods: kubectl delete pod test-zfs-pod -n democratic-csi
  2. Delete test PVCs: kubectl delete pvc test-zfs-pvc -n democratic-csi
  3. List retained PVs: kubectl get pv | grep Released
  4. Delete retained PVs: kubectl delete pv <pv-name>
  5. Clean up ZFS datasets on Proxmox: zfs destroy rpool/data/k8s-storage/<dataset>
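
For step 4, generating the delete commands before running them avoids removing the wrong PV. A sketch; it assumes the default `kubectl get pv` column layout, where STATUS is the fifth column:

```shell
#!/bin/sh
# Generate (not run) delete commands for Released PVs, for review first.
released_pv_cleanup() {
  # Expects `kubectl get pv --no-headers` on stdin; STATUS is the 5th column.
  awk '$5 == "Released" { print "kubectl delete pv " $1 }'
}

# kubectl get pv --no-headers | released_pv_cleanup        # review
# kubectl get pv --no-headers | released_pv_cleanup | sh   # execute
```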

Acceptance:

  • All test resources are removed
  • No orphaned PVs or datasets
  • Cluster is clean

  • 5.5 Create Acceptance Test Suite (Validates: All Requirements)

Create automated acceptance tests for ZFS storage integration.

File: k8s-lab/components/democratic-csi/tests/acceptance_test.sh

Tests:

  1. democratic-csi deployment health
  2. Storage class availability
  3. PVC provisioning
  4. Volume mounting
  5. Data persistence
  6. Shared workspace functionality
  7. Performance baseline
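
The suite can be built on a simple pass/fail harness. A sketch; the concrete kubectl checks are illustrative assumptions and are commented out because they need a live cluster:

```shell
#!/bin/sh
# Minimal harness sketch for acceptance_test.sh.
pass=0; fail=0

check() {
  # check <description> <command ...>: counts pass/fail by exit status
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    pass=$((pass + 1)); echo "PASS: $desc"
  else
    fail=$((fail + 1)); echo "FAIL: $desc"
  fi
}

# Cluster-dependent checks (uncomment on a live cluster; names are assumptions):
# check "storage class exists"  kubectl get storageclass zfs-nfs
# check "csi pods are present"  kubectl get pods -n democratic-csi
# check "test PVC binds"        kubectl wait -n democratic-csi --for=jsonpath='{.status.phase}'=Bound pvc/test-zfs-pvc --timeout=120s

echo "passed=$pass failed=$fail"
[ "$fail" -eq 0 ]   # overall exit status for CI / Taskfile use
```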

Acceptance:

  • All tests pass
  • Tests are automated
  • Tests can be run via Taskfile

6. GitOps Integration

  • 6.1 Verify ArgoCD Sync (Validates: Requirements 4.1, 4.2, 4.5)

Ensure democratic-csi can be managed by ArgoCD.

Steps:

  1. Commit all changes to git
  2. Push to repository
  3. Trigger ArgoCD sync: task seed:app:sync
  4. Monitor sync status: task seed:app
  5. Verify democratic-csi is healthy in ArgoCD

Acceptance:

  • ArgoCD successfully syncs democratic-csi
  • All resources are healthy
  • No sync errors

  • 6.2 Test Cluster Bootstrap with ZFS Storage (Validates: Requirements 4.5)

Verify ZFS storage infrastructure can be recreated from scratch.

Steps:

  1. Document current state
  2. Delete democratic-csi: kubectl delete -k components/democratic-csi
  3. Wait for cleanup
  4. Reapply: task apply-application APP=democratic-csi
  5. Verify all components are healthy
  6. Test PVC provisioning

Acceptance:

  • Infrastructure can be recreated
  • No manual intervention required
  • All tests pass after recreation

Task Summary

Total Tasks: 25

  • Proxmox Configuration: 2 tasks
  • Kubernetes Infrastructure: 8 tasks
  • Code-Server Migration: 2 tasks
  • Testing: 6 tasks
  • Documentation and Cleanup: 5 tasks
  • GitOps Integration: 2 tasks

Estimated Effort: 2-3 days

  • Day 1: Proxmox setup + democratic-csi deployment (Tasks 1.1-2.8)
  • Day 2: Code-server migration + testing (Tasks 3.1-4.6)
  • Day 3: Documentation + GitOps validation (Tasks 5.1-6.2)