# ZFS Storage Integration - Implementation Tasks

## 1. Proxmox Host Configuration
Create and configure the ZFS dataset on the Proxmox host for Kubernetes storage.
Steps:

- SSH to Proxmox host (192.168.32.163)
- Create ZFS dataset: `zfs create rpool/data/k8s-storage`
- Set ZFS properties:

  ```sh
  zfs set compression=lz4 rpool/data/k8s-storage
  zfs set atime=off rpool/data/k8s-storage
  zfs set recordsize=128k rpool/data/k8s-storage
  ```

- Verify dataset creation: `zfs list rpool/data/k8s-storage`
- Check properties: `zfs get all rpool/data/k8s-storage`

Acceptance:

- Dataset `rpool/data/k8s-storage` exists
- Compression is enabled (lz4)
- Properties are correctly set
Configure NFS server on Proxmox to export the ZFS dataset to the Kubernetes cluster.

Steps:

- Install NFS server: `apt update && apt install nfs-kernel-server -y`
- Add export to `/etc/exports`:

  ```
  /rpool/data/k8s-storage 192.168.32.59(rw,sync,no_subtree_check,no_root_squash)
  ```

- Reload exports: `exportfs -ra`
- Verify export: `exportfs -v`
- Test from Talos node: `showmount -e 192.168.32.163`

Acceptance:

- NFS server is running
- Export is configured for k8s node IP
- Export is visible from Talos node
## 2. Kubernetes Storage Infrastructure
Set up the component structure for democratic-csi in the k8s-lab repository.
Steps:

- Create directory: `k8s-lab/components/democratic-csi/`
- Create subdirectory: `k8s-lab/components/democratic-csi/charts/`
- Create initial files: `kustomization.yaml`, `namespace.yaml`, `values.yaml`, `README.md`, `Taskfile.yaml`

Acceptance:

- Directory structure exists
- Initial files are created
- Follows existing component patterns (code-server, n8n, etc.)
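The scaffolding steps above can be scripted; a minimal sketch, run from the repository root, using only the directory and file names listed in this task (the later tasks fill in the actual file contents):

```shell
#!/bin/sh
# Scaffold the democratic-csi component directory and its initial files.
set -eu

COMPONENT=k8s-lab/components/democratic-csi

# Component directory plus the charts/ subdirectory for the vendored Helm chart.
mkdir -p "$COMPONENT/charts"

# Create the empty starter files; content is added in the following tasks.
for f in kustomization.yaml namespace.yaml values.yaml README.md Taskfile.yaml; do
  touch "$COMPONENT/$f"
done

ls "$COMPONENT"
```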
Create the namespace for democratic-csi with appropriate security settings.
File: `k8s-lab/components/democratic-csi/namespace.yaml`

Content:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: democratic-csi
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```

Acceptance:

- Namespace manifest is created
- Pod security labels are set to privileged (required for CSI)
Configure Helm values for democratic-csi with NFS driver and ZFS backend.
File: `k8s-lab/components/democratic-csi/values.yaml`

Content:

```yaml
csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
  - name: zfs-nfs
    defaultClass: false
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs

volumeSnapshotClasses:
  - name: zfs-nfs-snapshot
    deletionPolicy: Delete
    parameters: {}

driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.32.163
      port: 80
      apiKey:
    zfs:
      datasetParentName: rpool/data/k8s-storage
      detachedSnapshotsDatasetParentName: rpool/data/k8s-storage/snapshots
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareHost: 192.168.32.163
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: root
      shareMapallUser: ""
      shareMapallGroup: ""

controller:
  enabled: true
  strategy: deployment
  externalAttacher:
    enabled: true
  externalProvisioner:
    enabled: true
  externalResizer:
    enabled: true
  externalSnapshotter:
    enabled: true

node:
  enabled: true
  hostPID: true
  driver:
    extraEnv:
      - name: MOUNT_OPTIONS
        value: "nfsvers=4.2,noatime"
```

Acceptance:

- Values file is created
- Storage class configured with Retain policy
- NFS driver configured for Proxmox host
- ZFS dataset paths are correct
Download the democratic-csi Helm chart for local deployment.
Steps:

- Add Helm repo: `helm repo add democratic-csi https://democratic-csi.github.io/charts/`
- Update repos: `helm repo update`
- Pull chart: `helm pull democratic-csi/democratic-csi --untar --untardir k8s-lab/components/democratic-csi/charts/`
- Verify chart version in `Chart.yaml`

Acceptance:

- Chart is downloaded to the `charts/` directory
- Chart version is documented
Create kustomization manifest to deploy democratic-csi via Helm.
File: `k8s-lab/components/democratic-csi/kustomization.yaml`

Content:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: democratic-csi

resources:
  - namespace.yaml

helmCharts:
  - name: democratic-csi
    repo: https://democratic-csi.github.io/charts/
    version: 0.14.6 # Update to latest stable version
    releaseName: zfs-nfs
    namespace: democratic-csi
    valuesFile: values.yaml
    includeCRDs: true
```

Acceptance:

- Kustomization file is created
- Helm chart is configured
- Values file is referenced
Document the democratic-csi setup, configuration, and usage.
File: `k8s-lab/components/democratic-csi/README.md`

Content:
- Overview of democratic-csi and ZFS storage
- Architecture diagram reference
- Configuration details
- Usage examples
- Troubleshooting guide
- Links to design document
Acceptance:
- README is comprehensive
- Includes usage examples
- Documents troubleshooting steps
Create task automation for democratic-csi management.
File: `k8s-lab/components/democratic-csi/Taskfile.yaml`

Tasks to include:

- `status` - Show democratic-csi status
- `logs` - View controller and node logs
- `restart` - Restart democratic-csi pods
- `test:provisioning` - Test PVC provisioning
- `test:mount` - Test volume mounting
Acceptance:
- Taskfile is created
- Common operations are automated
- Follows existing task patterns
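A minimal sketch of what this Taskfile could look like, using the task names listed above. The label selectors and resource kinds are assumptions about what the democratic-csi chart deploys and may need adjusting to match the chart's actual labels:

```yaml
version: "3"

tasks:
  status:
    desc: Show democratic-csi status
    cmds:
      - kubectl get pods -n democratic-csi
      - kubectl get storageclass zfs-nfs

  logs:
    desc: View controller and node logs
    cmds:
      # Label selectors are assumptions; adjust to the chart's actual labels.
      - kubectl logs -n democratic-csi -l app=democratic-csi-controller --tail=100
      - kubectl logs -n democratic-csi -l app=democratic-csi-node --tail=100

  restart:
    desc: Restart democratic-csi pods
    cmds:
      - kubectl rollout restart deployment -n democratic-csi
      - kubectl rollout restart daemonset -n democratic-csi
```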
Add democratic-csi to the main components kustomization.
File: `k8s-lab/components/kustomization.yaml`

Changes:

- Add `- democratic-csi/` to resources list
- Position after `local-storage` and before `code-server`
Acceptance:
- democratic-csi is included in main kustomization
- Build succeeds: `task build-application APP=democratic-csi`
## 3. Code-Server Storage Migration
Update code-server to use ZFS-backed storage instead of local-path.
File: `k8s-lab/components/code-server/values.yaml`

Changes:

```yaml
persistentVolumeClaim:
  enabled: true
  create: true
  size: 10Gi
  storageClassName: "zfs-nfs" # Changed from "local-path"
  accessMode: ReadWriteOnce
  mountPath: "/home/coder"
  projectsPath: "/home/coder/projects"
```

Acceptance:

- Storage class changed to `zfs-nfs`
- Size remains 10Gi
- Access mode is ReadWriteOnce
Ensure code-server and codev pods are scheduled on the same node for RWO volume sharing.
File: `k8s-lab/components/code-server/values.yaml`

Add:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - code-server
                - codev
        topologyKey: kubernetes.io/hostname
```

File: `k8s-lab/components/codev/deployment.yaml`

Add to `spec.template.spec`:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - code-server
                - codev
        topologyKey: kubernetes.io/hostname
```

Acceptance:

- Both pods have pod affinity configured
- Affinity ensures same-node scheduling
## 4. Testing and Validation
Verify democratic-csi is deployed and running correctly.
Steps:

- Build configuration: `task build-application APP=democratic-csi`
- Apply configuration: `task apply-application APP=democratic-csi`
- Check pods: `kubectl get pods -n democratic-csi`
- Check logs: `kubectl logs -n democratic-csi -l app=democratic-csi-controller`
- Verify storage class: `kubectl get storageclass zfs-nfs`

Acceptance:

- All democratic-csi pods are running
- No errors in logs
- Storage class `zfs-nfs` exists
Create a test PVC and verify dynamic provisioning works.
Test PVC: `k8s-lab/components/democratic-csi/test-pvc.yaml`

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-zfs-pvc
  namespace: democratic-csi
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfs-nfs
  resources:
    requests:
      storage: 1Gi
```

Steps:

- Create test PVC: `kubectl apply -f test-pvc.yaml`
- Check PVC status: `kubectl get pvc -n democratic-csi`
- Check PV creation: `kubectl get pv`
- Verify ZFS dataset on Proxmox: `zfs list | grep k8s-storage`
- Delete test PVC: `kubectl delete -f test-pvc.yaml`
- Verify PV is retained: `kubectl get pv`

Acceptance:

- PVC is created and bound
- PV is dynamically provisioned
- ZFS dataset exists on Proxmox
- PV is retained after PVC deletion
Create a test pod that writes data to a ZFS-backed volume and verify persistence.
Test Pod: `k8s-lab/components/democratic-csi/test-pod.yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-zfs-pod
  namespace: democratic-csi
spec:
  containers:
    - name: test
      image: alpine
      command: ["sh", "-c", "echo 'Hello ZFS' > /data/test.txt && cat /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: test-volume
          mountPath: /data
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-zfs-pvc
```

Steps:

- Create test pod: `kubectl apply -f test-pod.yaml`
- Wait for pod to run: `kubectl wait --for=condition=ready pod/test-zfs-pod -n democratic-csi`
- Check file: `kubectl exec -n democratic-csi test-zfs-pod -- cat /data/test.txt`
- Delete pod: `kubectl delete pod test-zfs-pod -n democratic-csi`
- Recreate pod: `kubectl apply -f test-pod.yaml`
- Verify file persists: `kubectl exec -n democratic-csi test-zfs-pod -- cat /data/test.txt`

Acceptance:

- Pod can write to volume
- File persists across pod restarts
- No errors in pod logs
Deploy code-server with ZFS storage and verify functionality.
Steps:

- Delete existing code-server PVC: `kubectl delete pvc code-server-storage -n code-server`
- Apply updated code-server: `task apply-application APP=code-server`
- Wait for pod: `kubectl wait --for=condition=ready pod -l app=code-server -n code-server`
- Check PVC: `kubectl get pvc -n code-server`
- Verify storage class: `kubectl get pvc code-server-storage -n code-server -o jsonpath='{.spec.storageClassName}'`
- Access code-server UI and create test files
- Restart pod and verify files persist

Acceptance:

- code-server uses `zfs-nfs` storage class
- PVC is bound to ZFS-backed PV
- Files persist across restarts
- UI is accessible and functional
Verify both code-server and codev can access the same workspace.
Steps:

- Ensure code-server is running with ZFS storage
- Apply codev: `task apply-application APP=codev`
- Wait for both pods: `kubectl get pods -n code-server`
- Verify same node: `kubectl get pods -n code-server -o wide`
- Create file in code-server: `kubectl exec -n code-server -c code-server <pod> -- touch /home/coder/test-shared.txt`
- Check file in codev: `kubectl exec -n code-server <codev-pod> -- ls -la /home/coder/test-shared.txt`
- Write from codev: `kubectl exec -n code-server <codev-pod> -- sh -c "echo 'from codev' > /home/coder/test-shared.txt"`
- Read from code-server: `kubectl exec -n code-server -c code-server <pod> -- cat /home/coder/test-shared.txt`
Acceptance:
- Both pods are on the same node
- Both pods can read/write to shared workspace
- Files are immediately visible to both pods
- No permission errors
Compare I/O performance between local-path and zfs-nfs storage.
Test Script: `k8s-lab/components/democratic-csi/test-performance.sh`

```sh
#!/bin/bash
# Simple I/O performance test
echo "Testing write performance..."
kubectl exec -n code-server <pod> -- sh -c "dd if=/dev/zero of=/home/coder/testfile bs=1M count=100 oflag=direct"
echo "Testing read performance..."
kubectl exec -n code-server <pod> -- sh -c "dd if=/home/coder/testfile of=/dev/null bs=1M iflag=direct"
echo "Cleaning up..."
kubectl exec -n code-server <pod> -- rm /home/coder/testfile
```

Steps:

- Run test with ZFS storage
- Document results
- Compare with baseline (if available)
Acceptance:
- Performance is acceptable for development workloads
- No significant degradation compared to local-path
- Results are documented
## 5. Documentation and Cleanup
Document ZFS storage integration in the main README.
File: `k8s-lab/README.md`
Add section:
- Storage architecture overview
- Available storage classes (local-path, zfs-nfs)
- When to use each storage class
- Link to democratic-csi component README
Acceptance:
- README includes storage documentation
- Usage guidance is clear
- Links to detailed docs
Document when to use local-path vs zfs-nfs storage.
File: `k8s-lab/docs/storage-guide.md`
Content:
- Storage class comparison table
- Use case recommendations
- Performance characteristics
- Migration guide
- Troubleshooting
Acceptance:
- Guide is comprehensive
- Clear decision criteria
- Examples for common scenarios
Add storage management tasks to main Taskfile.
File: `k8s-lab/Taskfile.yaml`

Add tasks:

- `storage:status` - Show all storage classes and PVCs
- `storage:test` - Run storage provisioning tests
- `storage:cleanup` - Clean up retained PVs
Acceptance:
- Tasks are added to main Taskfile
- Tasks follow existing patterns
- Documentation is updated
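A sketch of these tasks in Taskfile syntax, reusing the test manifest and cleanup commands from this document. The manifest path and the deliberately read-only `storage:cleanup` (list for review rather than delete) are design assumptions:

```yaml
version: "3"

tasks:
  storage:status:
    desc: Show all storage classes and PVCs
    cmds:
      - kubectl get storageclass
      - kubectl get pvc --all-namespaces

  storage:test:
    desc: Run storage provisioning tests
    cmds:
      # Assumes the test PVC manifest from the testing section is committed.
      - kubectl apply -f components/democratic-csi/test-pvc.yaml
      - kubectl get pvc test-zfs-pvc -n democratic-csi

  storage:cleanup:
    desc: List retained (Released) PVs for manual review before deletion
    cmds:
      - kubectl get pv | grep Released || true
```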
Remove test PVCs, pods, and retained PVs.
Steps:

- Delete test pods: `kubectl delete pod test-zfs-pod -n democratic-csi`
- Delete test PVCs: `kubectl delete pvc test-zfs-pvc -n democratic-csi`
- List retained PVs: `kubectl get pv | grep Released`
- Delete retained PVs: `kubectl delete pv <pv-name>`
- Clean up ZFS datasets on Proxmox: `zfs destroy rpool/data/k8s-storage/<dataset>`
Acceptance:
- All test resources are removed
- No orphaned PVs or datasets
- Cluster is clean
Create automated acceptance tests for ZFS storage integration.
File: `k8s-lab/components/democratic-csi/tests/acceptance_test.sh`
Tests:
- democratic-csi deployment health
- Storage class availability
- PVC provisioning
- Volume mounting
- Data persistence
- Shared workspace functionality
- Performance baseline
Acceptance:
- All tests pass
- Tests are automated
- Tests can be run via Taskfile
## 6. GitOps Integration
Ensure democratic-csi can be managed by ArgoCD.
Steps:
- Commit all changes to git
- Push to repository
- Trigger ArgoCD sync: `task seed:app:sync`
- Monitor sync status: `task seed:app`
- Verify democratic-csi is healthy in ArgoCD
Acceptance:
- ArgoCD successfully syncs democratic-csi
- All resources are healthy
- No sync errors
Verify ZFS storage infrastructure can be recreated from scratch.
Steps:

- Document current state
- Delete democratic-csi: `kubectl delete -k components/democratic-csi`
- Wait for cleanup
- Reapply: `task apply-application APP=democratic-csi`
- Verify all components are healthy
- Test PVC provisioning
Acceptance:
- Infrastructure can be recreated
- No manual intervention required
- All tests pass after recreation
## Task Summary

Total Tasks: 25

- Proxmox Configuration: 2 tasks
- Kubernetes Infrastructure: 8 tasks
- Code-Server Migration: 2 tasks
- Testing: 6 tasks
- Documentation and Cleanup: 5 tasks
- GitOps Integration: 2 tasks
Estimated Effort: 2-3 days
- Day 1: Proxmox setup + democratic-csi deployment (Tasks 1.1-2.8)
- Day 2: Code-server migration + testing (Tasks 3.1-4.6)
- Day 3: Documentation + GitOps validation (Tasks 5.1-6.2)