# 0005 - ClickHouse Stack: Installation Plan

## Reference

- Specification: `0005-clickhouse-stack.md`
- Repository: `repos/k8s-lab`
- Target Directory: `repos/k8s-lab/components/clickhouse/`
## Overview
This plan implements a single-node ClickHouse installation on k8s-lab using the Bitnami Helm chart. The implementation follows existing k8s-lab component patterns (Kustomize + Helm) and integrates with local-path storage and Traefik ingress.
## Phase 1: Component Directory Structure

Objective: Create the clickhouse component directory with namespace and kustomization configuration.

### Tasks

- Create directory `repos/k8s-lab/components/clickhouse/`
- Create `namespace.yaml` with the `clickhouse` namespace
- Create `kustomization.yaml` with the Bitnami Helm chart reference (pinned version)
- Create an empty `values.yaml` placeholder
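As a sketch, the two Phase 1 manifests might look like the following (the chart version mirrors the Configuration Reference at the end of this plan and should be re-verified at implementation time):

```yaml
# namespace.yaml -- minimal namespace; the secret-selector label is added in Phase 3
apiVersion: v1
kind: Namespace
metadata:
  name: clickhouse
---
# kustomization.yaml -- Kustomize + Helm pattern used by other k8s-lab components
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
helmCharts:
  - name: clickhouse
    repo: https://charts.bitnami.com/bitnami
    version: 6.2.13 # pinned; verify latest stable at implementation time
    releaseName: clickhouse
    namespace: clickhouse
    valuesFile: values.yaml
```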
### Files to Create

| File | Purpose |
|---|---|
| `components/clickhouse/namespace.yaml` | Namespace definition |
| `components/clickhouse/kustomization.yaml` | Kustomize config with Helm chart |
| `components/clickhouse/values.yaml` | Helm values (placeholder) |
### Success Criteria

- Directory structure exists
- `kubectl kustomize repos/k8s-lab/components/clickhouse/` generates valid YAML (namespace only at this stage)
- Helm chart version is explicitly pinned

### Dependencies

None (first phase)
## Phase 2: Helm Values Configuration

Objective: Configure ClickHouse Helm values for single-node deployment with local-path storage.

### Tasks

- Configure persistence with the `local-path` StorageClass (50Gi initial size)
- Set resource limits (2 CPU / 4Gi memory limits, 500m CPU / 1Gi memory requests)
- Configure authentication with a static password (for initial deployment)
- Disable ClickHouse Keeper (not needed for a single node)
- Disable sharding/replication (single-node deployment)
- Set memory and thread limits in the ClickHouse configuration
- Expose the HTTP (8123) and native (9000) ports
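For the memory/thread-limit task, the Bitnami chart exposes a server-configuration override hook; a sketch, assuming the `defaultConfigurationOverrides` value (the exact key name should be confirmed against the pinned chart version):

```yaml
# values.yaml excerpt -- server-level caps sized below the 4Gi pod limit;
# the override key is an assumption to verify in the chart's values reference
defaultConfigurationOverrides: |
  <clickhouse>
    <!-- keep server memory comfortably under the 4Gi container limit (~3Gi) -->
    <max_server_memory_usage>3221225472</max_server_memory_usage>
    <max_concurrent_queries>50</max_concurrent_queries>
    <max_thread_pool_size>1000</max_thread_pool_size>
  </clickhouse>
```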
### Files to Modify

| File | Changes |
|---|---|
| `components/clickhouse/values.yaml` | Full Helm values configuration |
### Success Criteria

- `kubectl kustomize repos/k8s-lab/components/clickhouse/` generates a complete ClickHouse deployment
- Generated YAML includes a PVC with the `local-path` StorageClass
- Resource limits are present in the generated YAML
- Service exposes ports 8123 and 9000

### Dependencies

- Phase 1 (directory structure)
## Phase 3: External Secrets Integration

Objective: Configure ClickHouse authentication via central-secret-store using the External Secrets Operator.

### Tasks

- Create a ClusterExternalSecret for ClickHouse credentials in `central-secret-store/external-secrets/`
- Add the namespace label used by the secret selector (`secrets/clickhouse-credentials: "true"`)
- Update the ClickHouse `values.yaml` to reference the synced secret
- Document manual secret creation in the central-secret-store namespace
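A sketch of the ClusterExternalSecret (ESO `v1beta1` API; the ClusterSecretStore name and remote key path are assumptions to confirm against the existing central-secret-store setup):

```yaml
# clickhouse-credentials.yaml -- syncs the password into any namespace
# carrying the selector label; store name and remote key are placeholders
apiVersion: external-secrets.io/v1beta1
kind: ClusterExternalSecret
metadata:
  name: clickhouse-credentials
spec:
  externalSecretName: clickhouse-credentials
  namespaceSelector:
    matchLabels:
      secrets/clickhouse-credentials: "true"
  refreshTime: 1h
  externalSecretSpec:
    secretStoreRef:
      name: central-secret-store # assumed ClusterSecretStore name
      kind: ClusterSecretStore
    target:
      name: clickhouse-credentials
    data:
      - secretKey: password
        remoteRef:
          key: clickhouse-credentials # assumed remote key
          property: password
```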
### Files to Create/Modify

| File | Changes |
|---|---|
| `components/central-secret-store/external-secrets/clickhouse-credentials.yaml` | ClusterExternalSecret definition |
| `components/central-secret-store/kustomization.yaml` | Add new external secret resource |
| `components/clickhouse/namespace.yaml` | Add secret selector label |
| `components/clickhouse/values.yaml` | Reference existingSecret |
### Success Criteria

- ClusterExternalSecret syncs the password into the clickhouse namespace
- ClickHouse deployment references the external secret
- `kubectl kustomize repos/k8s-lab/components/central-secret-store/` includes the new secret

### Dependencies

- Phase 2 (Helm values)
## Phase 4: Ingress Configuration

Objective: Expose the ClickHouse HTTP interface via a Traefik IngressRoute with TLS.

### Tasks

- Add the ClickHouse ingress entry to `components/ingress/kustomization.yaml`
- Configure hostname: `clickhouse.lab.local.ctoaas.co`
- Enable TLS with the LetsEncrypt issuer
- Set the access pattern to internal (consistent with other lab services)
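If the ingress component renders Traefik CRDs directly, the generated object should resemble this sketch (the `websecure` entry point name is an assumption based on common Traefik setups):

```yaml
# IngressRoute expected from the ingress entry in the Configuration Reference
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: clickhouse
  namespace: clickhouse
spec:
  entryPoints:
    - websecure # assumed HTTPS entry point name
  routes:
    - match: Host(`clickhouse.lab.local.ctoaas.co`)
      kind: Rule
      services:
        - name: clickhouse
          port: 8123
  tls:
    secretName: clickhouse-ctoaas-tls
```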
### Files to Modify

| File | Changes |
|---|---|
| `components/ingress/kustomization.yaml` | Add ClickHouse ingress configuration |
### Success Criteria

- `kubectl kustomize repos/k8s-lab/components/ingress/` includes the ClickHouse IngressRoute
- TLS secret name is configured
- Ingress routes to the ClickHouse HTTP port (8123)

### Dependencies

- Phase 2 (ClickHouse service must exist)
## Phase 5: Component Registration

Objective: Register the clickhouse component in the main components kustomization.

### Tasks

- Add `- clickhouse/` to the `components/kustomization.yaml` resources list
- Position it after the existing database/storage components
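The registration itself is a one-line change in the parent kustomization; the sibling entries below are illustrative placeholders for whatever components already exist:

```yaml
# components/kustomization.yaml excerpt
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # ...existing components...
  - central-secret-store/
  - ingress/
  - clickhouse/ # added in Phase 5
```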
### Files to Modify

| File | Changes |
|---|---|
| `components/kustomization.yaml` | Add clickhouse resource |
### Success Criteria

- `kubectl kustomize repos/k8s-lab/components/` includes all ClickHouse resources
- No kustomize errors when building the full components directory

### Dependencies

- Phases 1-4 (all component files must exist)
## Phase 6: Documentation

Objective: Create a component README with usage instructions.

### Tasks

- Create `README.md` with:
  - Component overview
  - Prerequisites (secret creation in central-secret-store)
  - Access instructions (HTTP interface, native protocol)
  - Connection examples (curl, clickhouse-client)
  - Common queries for verification
- Create `Taskfile.yaml` with common operations:
  - `task verify` - Check pod status and connectivity
  - `task shell` - Open a clickhouse-client shell
  - `task logs` - Tail pod logs
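A sketch of the Taskfile (the pod selector assumes the Bitnami chart's standard `app.kubernetes.io/name` label, which the Phase 7 log command also relies on, and a StatefulSet-based deployment named after the release):

```yaml
# Taskfile.yaml -- common operations; resource names are assumptions based on
# the release name "clickhouse"
version: "3"

tasks:
  verify:
    desc: Check pod status and HTTP connectivity
    cmds:
      - kubectl get pods -n clickhouse
      - kubectl exec -n clickhouse sts/clickhouse -- curl -s http://localhost:8123/ping
  shell:
    desc: Open a clickhouse-client shell
    cmds:
      - kubectl exec -it -n clickhouse sts/clickhouse -- clickhouse-client
  logs:
    desc: Tail pod logs
    cmds:
      - kubectl logs -n clickhouse -l app.kubernetes.io/name=clickhouse -f --tail=100
```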
### Files to Create

| File | Purpose |
|---|---|
| `components/clickhouse/README.md` | Usage documentation |
| `components/clickhouse/Taskfile.yaml` | Common task definitions |
### Success Criteria

- README includes all required sections
- Taskfile commands execute successfully
- Documentation covers the secret creation steps

### Dependencies

- Phase 5 (component fully registered)
## Phase 7: Deployment Verification

Objective: Deploy and verify that ClickHouse is functional.

### Tasks

- Create the ClickHouse password secret in the central-secret-store namespace
- Apply the kustomization: `kubectl apply -k repos/k8s-lab/components/`
- Verify the pod reaches the Running state
- Verify the PVC is bound to a local-path PV
- Test the HTTP interface via port-forward
- Test external access via the ingress URL
- Run verification queries:
  - `SELECT version()`
  - `SELECT * FROM system.databases`
  - Create a test table, insert data, query it, and drop the table
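The query checks can be driven through the HTTP interface once a port-forward is in place; a sketch, assuming the synced secret from Phase 3 (user `default` and secret key `password` match the auth values in the Configuration Reference):

```shell
# Assumes an active session: kubectl port-forward -n clickhouse svc/clickhouse 8123:8123
PASS=$(kubectl get secret -n clickhouse clickhouse-credentials \
  -o jsonpath='{.data.password}' | base64 -d)

# Liveness check; ClickHouse answers "Ok." on /ping
curl -s http://localhost:8123/ping

# Verification queries via the HTTP interface
echo 'SELECT version()' | \
  curl -s --user "default:${PASS}" --data-binary @- http://localhost:8123/
echo 'SELECT name FROM system.databases' | \
  curl -s --user "default:${PASS}" --data-binary @- http://localhost:8123/
```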
### Files to Create/Modify

None (verification only)

### Success Criteria

- Pod in Running state: `kubectl get pod -n clickhouse`
- PVC bound: `kubectl get pvc -n clickhouse`
- HTTP responds: `curl http://localhost:8123/ping` (via port-forward)
- External access works: `curl https://clickhouse.lab.local.ctoaas.co/ping`
- SQL queries execute successfully
- Logs show no errors: `kubectl logs -n clickhouse -l app.kubernetes.io/name=clickhouse`

### Dependencies

- Phase 6 (full component ready)
## File Summary

### New Files

| File | Phase |
|---|---|
| `components/clickhouse/namespace.yaml` | 1, 3 |
| `components/clickhouse/kustomization.yaml` | 1 |
| `components/clickhouse/values.yaml` | 1, 2, 3 |
| `components/clickhouse/README.md` | 6 |
| `components/clickhouse/Taskfile.yaml` | 6 |
| `components/central-secret-store/external-secrets/clickhouse-credentials.yaml` | 3 |

### Modified Files

| File | Phase |
|---|---|
| `components/central-secret-store/kustomization.yaml` | 3 |
| `components/ingress/kustomization.yaml` | 4 |
| `components/kustomization.yaml` | 5 |
## Configuration Reference

### Helm Chart

```yaml
helmCharts:
  - name: clickhouse
    repo: https://charts.bitnami.com/bitnami
    version: 6.2.13 # Pin to a specific version
    releaseName: clickhouse
    namespace: clickhouse
    valuesFile: values.yaml
```

### Key Values
```yaml
# Single-node configuration
shards: 1
replicaCount: 1
keeper:
  enabled: false

# Storage
persistence:
  enabled: true
  storageClass: local-path
  size: 50Gi

# Resources
resources:
  limits:
    cpu: 2000m
    memory: 4Gi
  requests:
    cpu: 500m
    memory: 1Gi

# Authentication
auth:
  username: default
  existingSecret: clickhouse-credentials
  existingSecretKey: password
```

### Ingress Entry
```yaml
- service:
    name: clickhouse
    namespace: clickhouse
    port:
      number: 8123
  ingress:
    name: clickhouse
    accessPattern: internal
    domains:
      - name: clickhouse
    tls:
      secretName: clickhouse-ctoaas-tls
```

## Risk Mitigations
| Risk | Mitigation in Plan |
|---|---|
| Storage performance | Use local-path (not NFS), as specified in the spec |
| Resource contention | Explicit resource limits in Phase 2 |
| Helm chart compatibility | Pin a specific version in Phase 1 |
| Secret management | External Secrets integration in Phase 3 |
## Open Items for Implementation

- Helm chart version: Verify the latest stable Bitnami ClickHouse chart version at implementation time
- Secret key structure: Confirm the central-secret-store secret format during Phase 3
- Storage size: 50Gi as proposed; can be adjusted based on usage patterns