Design Document
Overview
The repository architecture creates a multi-repository platform that cleanly separates workspace-level operations, repository-level utilities, and platform foundation infrastructure. This design addresses the architectural challenge of sharing development tooling across multiple levels while supporting the AI Gateway ecosystem, BMAD methodology integration, and providing a robust platform foundation.
The architecture comprises three repositories: workspace-root (heavy workspace-level tooling), workspace-shared (lightweight repo-level utilities), and k8s-lab (platform foundation infrastructure). This separation enables efficient sharing without duplication while providing a foundation for AI-enabled development workflows and GitOps-based platform operations.
Architecture
High-Level Architecture
graph TB
  subgraph "Workspace Level"
    WorkspaceRoot[workspace-root]
    WorkspaceRoot --> WorkspaceManagement[Workspace Management]
    WorkspaceRoot --> BMADMethodology[BMAD Methodology]
    WorkspaceRoot --> AIGatewayConfig[AI Gateway Config]
    WorkspaceRoot --> Artifacts[Artifacts & Tools]
  end
  subgraph "Repository Level"
    WorkspaceShared[workspace-shared]
    WorkspaceShared --> DocsTaskfile[docs:taskfile]
    WorkspaceShared --> CITasks[CI Tasks]
    WorkspaceShared --> RepoUtils[Repo Utilities]
    WorkspaceShared --> SharedComponents[Shared Components]
  end
  subgraph "Platform Foundation"
    K8sLab[k8s-lab]
    K8sLab --> ArgoCD[ArgoCD Bootstrap]
    K8sLab --> SupportingApps[Supporting Applications]
    K8sLab --> SecretStore[Central Secret Store]
    K8sLab --> EnvironmentOverlays[Environment Overlays]
  end
  subgraph "Individual Repositories"
    Repo1[ai-dev]
    Repo2[argocd-eda]
    Repo3[other-repo]
    Repo1 --> WorkspaceSharedSub1[workspace-shared submodule]
    Repo2 --> WorkspaceSharedSub2[workspace-shared submodule]
    Repo3 --> WorkspaceSharedSub3[workspace-shared submodule]
  end
  subgraph "AI-Enabled Workspaces"
    AIWorkspace1["Workspace 1 (.ai/)"]
    AIWorkspace2["Workspace 2 (.ai/)"]
    AIWorkspaceLarge["AI Agent Workspace (repos/)"]
    AIWorkspace1 --> WorkspaceRoot
    AIWorkspace2 --> WorkspaceRoot
    AIWorkspaceLarge --> WorkspaceRoot
  end
  subgraph "Business Capabilities"
    Repo2 --> K8sLab
    BusinessApps["Backstage, Image Factory, EDA"]
    BusinessApps -.-> K8sLab
  end
  WorkspaceRoot -.-> WorkspacesYAML[workspaces.yaml]
  WorkspaceRoot --> DevEnvironment[Development Environment]
  WorkspaceShared --> K8sLab
Workspace Patterns
The architecture supports two distinct workspace patterns to accommodate different development workflows:
Traditional Multi-Workspace Pattern
Use Case: Human developers working on specific projects with focused context
Structure:
- Each project gets its own workspace
- Projects are cloned at the workspace root level
- VS Code workspace file references project directories directly
- Switching between projects requires changing workspaces
Benefits:
- Clear separation of concerns
- Focused development environment
- Minimal context switching within a workspace
- Optimized for single-project workflows
Example Configuration:
workspaces:
  - name: ai-development
    projects:
      - ai-dev
  - name: infrastructure
    projects:
      - k8s-lab

Single Large Workspace Pattern
Use Case: AI agents working across multiple repositories with unified context
Structure:
- All projects are included in a single workspace
- Projects are cloned into a configurable internal folder (e.g., repos/)
- VS Code workspace file references all project subdirectories
- AI agent can navigate and work across all projects without context switching
Benefits:
- Unified context for cross-repository work
- No workspace switching required
- AI agent can see relationships between projects
- Optimized for multi-project analysis and refactoring
Example Configuration:
workspaces:
  - name: ai-agent
    internalFolder: repos
    projects:
      - ai-dev
      - k8s-lab
      - argocd-eda

Key Design Decision: The internalFolder property is optional and additive. Existing workspace configurations continue to work without modification, ensuring backward compatibility.
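The optional, additive nature of internalFolder can be illustrated with a short path-resolution sketch. The helper name is hypothetical; the real workspace-manager.py logic is not reproduced in this document:

```python
from pathlib import Path

def project_paths(workspace: dict, root: str = "/workspace") -> list:
    """Resolve on-disk paths for a workspace's projects.

    Without internalFolder, projects sit at the workspace root
    (traditional pattern); with it, they nest under the configured
    folder (single large workspace pattern).
    """
    base = Path(root)
    if "internalFolder" in workspace:
        base = base / workspace["internalFolder"]
    return [base / name for name in workspace.get("projects", [])]

# Existing configuration without internalFolder keeps working unchanged.
traditional = {"name": "infrastructure", "projects": ["k8s-lab"]}
# New-style configuration nests everything under repos/.
large = {"name": "ai-agent", "internalFolder": "repos",
         "projects": ["ai-dev", "k8s-lab", "argocd-eda"]}

print(project_paths(traditional)[0])  # /workspace/k8s-lab
print(project_paths(large)[0])        # /workspace/repos/ai-dev
```

Because the absence of the property falls through to the workspace root, older configurations resolve exactly as before.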
Repository Separation Strategy
workspace-root (Heavy Tooling)
- Purpose: Workspace-level operations spanning multiple repositories
- Location: Standalone repository, not a submodule
- Usage: Direct reference from workspace environments
- Contents: Workspace management, BMAD methodology, AI Gateway config, development tools
workspace-shared (Lightweight Utilities)
- Purpose: Repo-level shared utilities for individual repositories
- Location: Git submodule in individual repositories
- Usage: Included via submodule for CI/CD and repo-specific tasks
- Contents: docs:taskfile, CI utilities, lightweight shared tasks, Helm charts, kustomize components
k8s-lab (Platform Foundation)
- Purpose: Base infrastructure components for all platform capabilities
- Location: Standalone repository providing GitOps foundation
- Usage: Direct deployment as platform foundation
- Contents: ArgoCD bootstrap, supporting applications, central secret store, environment overlays
Components and Interfaces
workspace-root Repository Structure
workspace-root/
├── README.md # Workspace-root documentation
├── Taskfile.yaml # Workspace-level tasks
├── workspaces.yaml # Workspace definitions (migrated from k8s-lab)
├── ai-gateway.yaml # Global AI Gateway configuration
├── scripts/ # Workspace management scripts
│ ├── workspace-manager.py # Python workspace manager
│ ├── sync-workspaces.sh # Shell workspace sync
│ ├── install-tools.sh # Development tools installer
│ └── test-tools.sh # Tools testing script
├── _bmad/ # Centralized BMAD methodology
│ ├── _config/ # BMAD configuration
│ │ ├── methodology.yaml # BMAD methodology definitions
│ │ ├── templates.yaml # Artifact templates
│ │ └── roles.yaml # Standard role definitions
│ ├── _memory/ # Cross-workspace learnings
│ │ ├── patterns.yaml # Discovered patterns
│ │ ├── lessons-learned.yaml # Accumulated knowledge
│ │ └── best-practices.yaml # Proven practices
│ ├── bmb/ # Business Model Building
│ │ └── canvas-templates/ # Business model templates
│ ├── bmm/ # Business Motivation Model
│ │ └── goal-templates/ # Goal and strategy templates
│ ├── cis/ # Component Interface Specifications
│ │ └── interface-standards/ # Standard interface patterns
│ ├── core/ # Core BMAD methodology
│ │ ├── principles.yaml # Core principles
│ │ └── process-flows.yaml # Standard process flows
│ └── _bmad-output/ # Generated methodology outputs
│ ├── reports/ # Cross-workspace reports
│ └── metrics/ # Methodology metrics
├── templates/ # Workspace templates
│ ├── workspace-templates/ # VS Code workspace templates
│ └── project-templates/ # Project scaffolding templates
└── .kiro/ # Kiro specifications
├── steering/ # Steering files
└── specs/ # Feature specifications
workspace-shared Repository Structure
workspace-shared/
├── README.md # Usage documentation
├── Taskfile.yaml # Lightweight repo-level tasks
├── tasks/ # Individual task definitions
│ ├── docs.yaml # Documentation tasks
│ ├── ci.yaml # CI/CD tasks
│ └── validation.yaml # Validation tasks
├── scripts/ # Lightweight utility scripts
│ ├── generate-docs.sh # Documentation generation
│ └── validate-config.sh # Configuration validation
├── templates/ # Repo-level templates
│ ├── ci-templates/ # CI configuration templates
│ └── doc-templates/ # Documentation templates
└── .kiro/ # Minimal Kiro configuration
└── steering/ # Repo-level steering files
k8s-lab Repository Structure
k8s-lab/
├── README.md # Platform foundation documentation
├── Taskfile.yaml # Foundation management tasks
├── kustomization.yaml # Root kustomization
├── argocd/ # ArgoCD bootstrap
│ ├── argocd-namespace.yaml
│ ├── argocd-projects.yaml
│ └── kustomization.yaml
├── components/ # Component groupings
│ ├── core-infrastructure/ # Essential platform services
│ │ ├── cert-manager/
│ │ ├── external-secrets/
│ │ ├── central-secret-store/
│ │ ├── ingress/
│ │ ├── local-storage/
│ │ ├── democratic-csi/
│ │ ├── letsencrypt-issuer.yaml
│ │ ├── remove-control-plane-taint-job.yaml
│ │ └── kustomization.yaml
│ ├── platform-services/ # Platform-level services
│ │ ├── argo-rollouts/
│ │ ├── kargo/
│ │ ├── headlamp/
│ │ ├── strimzi/
│ │ ├── rabbitmq-operator/
│ │ ├── n8n/
│ │ └── kustomization.yaml
│ ├── remote-development/ # Development environment tools
│ │ ├── code-server/
│ │ ├── codev/
│ │ └── kustomization.yaml
│ ├── observability/ # Observability stack
│ │ ├── clickhouse/
│ │ ├── prometheus/
│ │ └── kustomization.yaml
│ └── kustomization.yaml # Aggregates all component groups
├── other-seeds/ # Business capability seeds
│ ├── remote-development.yaml # ArgoCD app for remote-development
│ ├── observability.yaml # ArgoCD app for observability
│ ├── argocd-eda.yaml
│ ├── workspace-root.yaml
│ └── kustomization.yaml
└── traefik/ # Separated due to sync issues
└── kustomization.yaml
Component Organization Strategy
Core Infrastructure (components/core-infrastructure/):
- Essential services required for platform operation
- Deployed as part of the main foundation seed
- Includes: cert-manager, external-secrets, central-secret-store, ingress, storage
Platform Services (components/platform-services/):
- Platform-level capabilities that support business applications
- Deployed as part of the main foundation seed
- Includes: argo-rollouts, kargo, headlamp, strimzi, rabbitmq-operator, n8n
Remote Development (components/remote-development/):
- Development environment tools
- Deployed as separate ArgoCD application (other-seeds/remote-development.yaml)
- Includes: code-server, codev
- Can be disabled in production environments
Observability (components/observability/):
- Observability and monitoring stack
- Deployed as separate ArgoCD application (other-seeds/observability.yaml)
- Includes: clickhouse, prometheus
- Scales independently of core infrastructure
ArgoCD Application Structure
Main Foundation Seed (deployed via kubectl apply -k .):
- ArgoCD bootstrap
- Core infrastructure components
- Platform services components
- Creates ArgoCD applications for modular components
Modular ArgoCD Applications (managed by ArgoCD):
- remote-development: Manages the remote-development component group
- observability: Manages the observability component group
- Business capability seeds (argocd-eda, workspace-root, etc.)
Benefits of Modular Organization:
- Maintainability: Easier to understand and manage related components together
- Scalability: Independent scaling and resource management per component group
- Flexibility: Enable/disable entire component groups based on environment needs
- Troubleshooting: Isolated status and logs per component group
- Deployment Control: Independent sync policies and health checks per group
Migration Mapping
From dev-common to workspace-root:
- workspaces.yaml → workspace-root/workspaces.yaml
- scripts/workspace-manager.py → workspace-root/scripts/workspace-manager.py
- scripts/sync-workspaces.sh → workspace-root/scripts/sync-workspaces.sh
- scripts/install-tools.sh → workspace-root/scripts/install-tools.sh
- scripts/test-tools.sh → workspace-root/scripts/test-tools.sh
- Workspace-level tasks → workspace-root/Taskfile.yaml
- BMAD methodology → workspace-root/_bmad/
- AI Gateway config → workspace-root/ai-gateway.yaml
From dev-common to workspace-shared:
- docs:taskfile task → workspace-shared/Taskfile.yaml
- Repo-level utilities → workspace-shared/scripts/
- CI templates → workspace-shared/templates/ci-templates/
Data Models
Workspace Configuration (workspace-root/workspaces.yaml)
version: v1

# Project definitions (repositories)
projects:
  - name: ai-dev
    repo: git@github.com:org/ai-dev.git
    default_branch: main
    desired_state: present
    ai_enabled: true # Supports AI Gateway
    bmad_enabled: true # Uses BMAD methodology
  - name: k8s-lab
    repo: git@github.com:org/k8s-lab.git
    default_branch: main
    desired_state: present
    ai_enabled: false # Traditional development
    bmad_enabled: false

# Workspace definitions (logical groupings)
workspaces:
  # Traditional multi-workspace pattern (one workspace per project)
  - name: ai-development
    projects:
      - ai-dev
    extensions:
      - ms-python.python
      - ms-vscode.vscode-yaml
      - ms-vscode.vscode-typescript-next
    ai_gateway:
      enabled: true
      primary_model: "claude-3-5-sonnet"
    bmad:
      enabled: true
      methodology_version: "2.0"
  - name: infrastructure
    projects:
      - k8s-lab
    extensions:
      - redhat.vscode-kubernetes-tools
      - ms-kubernetes-tools.vscode-kubernetes-tools
      - golang.go

  # Single large workspace pattern (all projects in one workspace)
  - name: ai-agent
    internalFolder: repos # Projects instantiated under repos/ subdirectory
    projects:
      - ai-dev
      - k8s-lab
      - argocd-eda
    extensions:
      - ms-python.python
      - ms-vscode.vscode-yaml
      - ms-vscode.vscode-typescript-next
      - redhat.vscode-kubernetes-tools
      - ms-kubernetes-tools.vscode-kubernetes-tools
      - golang.go
    ai_gateway:
      enabled: true
      primary_model: "claude-3-5-sonnet"
    bmad:
      enabled: true
      methodology_version: "2.0"

# Development tools (shared across all workspaces)
development_tools:
  system_packages: [curl, wget, git, jq, yq, tree, htop, vim, nano]
  kubernetes_tools: [kubectl, kustomize, helm, k9s]
  python_tools: [uv, pip, pipx]
  node_tools: [node, npm, yarn, ts-node]
  go_tools: [go]
  task_runner: [task]

Workspace Structure Examples
Traditional Multi-Workspace Pattern:
/workspace/
├── ai-dev/ # Project root
│ ├── .git/
│ ├── services/
│ └── ...
└── .vscode/
└── workspace.code-workspace
Single Large Workspace Pattern (with internalFolder):
/workspace/
├── repos/ # Internal folder for all projects
│ ├── ai-dev/ # Project as subdirectory
│ │ ├── .git/
│ │ ├── services/
│ │ └── ...
│ ├── k8s-lab/ # Project as subdirectory
│ │ ├── .git/
│ │ ├── components/
│ │ └── ...
│ └── argocd-eda/ # Project as subdirectory
│ ├── .git/
│ └── ...
└── .vscode/
└── workspace.code-workspace
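The two layouts above can be derived mechanically from the workspace definition. A minimal sketch, assuming the .code-workspace generator simply prefixes project paths with internalFolder when present (the real generator in workspace-manager.py is not shown in this document):

```python
import json

def render_workspace_file(workspace: dict) -> str:
    """Emit a minimal VS Code workspace file for one workspace entry.

    When internalFolder is set, every folder path is prefixed with it,
    matching the single-large-workspace layout shown above.
    """
    prefix = workspace.get("internalFolder")
    folders = [{"path": f"{prefix}/{p}" if prefix else p}
               for p in workspace.get("projects", [])]
    return json.dumps({"folders": folders}, indent=2)

print(render_workspace_file(
    {"name": "ai-agent", "internalFolder": "repos",
     "projects": ["ai-dev", "k8s-lab"]}))
```

Paths stay relative to the workspace root, so the generated file is portable across checkouts.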
internalFolder Validation Rules
The internalFolder property enables the single large workspace pattern. Validation rules:
- Must be a non-empty string
- Cannot contain path separators (/, \)
- Cannot contain parent directory references (..)
- Cannot contain null characters
- The folder is created automatically during workspace setup
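The rules above can be sketched as a validator (illustrative only; the shipped validation code is not part of this document):

```python
def validate_internal_folder(value) -> None:
    """Raise ValueError on the first internalFolder rule violated."""
    if not isinstance(value, str) or not value:
        raise ValueError("internalFolder must be a non-empty string")
    if "/" in value or "\\" in value:
        raise ValueError("internalFolder cannot contain path separators")
    if ".." in value:
        raise ValueError("internalFolder cannot contain parent directory references")
    if "\x00" in value:
        raise ValueError("internalFolder cannot contain null characters")

validate_internal_folder("repos")  # passes silently
```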
Workspace Pattern Selection Guide
When to Use Traditional Multi-Workspace:
- Working on a single project at a time
- Need focused development environment
- Minimal context switching within workspace
- Human-driven development workflows
When to Use Single Large Workspace:
- AI-assisted development across multiple repositories
- Platform engineering with infrastructure + applications
- Cross-repository refactoring or analysis
- Need unified context for all projects
Troubleshooting
Projects Not Appearing in Internal Folder:
- Verify the internalFolder property is correctly spelled (case-sensitive)
- Check the folder name doesn't contain invalid characters
- Confirm workspace sync completed successfully
VS Code Workspace Not Loading:
- Verify workspace file was generated correctly
- Check project paths reference the internal folder structure
- Confirm all projects were cloned successfully
Path Issues in Configuration:
- Use relative paths, not absolute paths
- Ensure relative paths account for internal folder structure
- Verify git submodules use correct relative paths
AI Gateway Configuration (workspace-root/ai-gateway.yaml)
ai_gateway:
  version: "1.0"
  description: "Global AI Gateway configuration for all workspaces"

  # AI Model configuration
  ai_models:
    primary: "claude-3-5-sonnet"
    fallback: "gpt-4"
    specialized:
      code_generation: "claude-3-5-sonnet"
      documentation: "gpt-4"
      bmad_artifacts: "claude-3-5-sonnet"
      infrastructure: "claude-3-5-sonnet"

  # Global guardrails
  guardrails:
    max_files_per_diff: 10
    require_tests: true
    protected_paths: ["**/secrets/**", "**/production/**", "**/.git/**"]
    max_file_size_kb: 1024
    allowed_file_types: ["*.py", "*.ts", "*.js", "*.yaml", "*.yml", "*.md", "*.json", "*.toml", "*.sh"]

  # BMAD integration
  bmad:
    enabled: true
    methodology_source: "_bmad/"
    inherit_templates: true
    cross_workspace_learning: true

  # Workspace discovery
  discovery:
    workspace_source: "workspaces.yaml"
    ai_state_directory: ".ai"
    auto_enable_workspaces: false

BMAD Methodology Configuration (workspace-root/_bmad/_config/methodology.yaml)
bmad_methodology:
  name: "Business Method and Design"
  version: "2.0"
  description: "Centralized BMAD methodology for all workspaces"

  structure:
    _config: ["methodology.yaml", "templates.yaml", "roles.yaml"]
    _memory: ["patterns.yaml", "lessons-learned.yaml", "best-practices.yaml"]
    bmb: ["canvas-templates/", "value-proposition-templates/"]
    bmm: ["goal-templates/", "strategy-templates/"]
    cis: ["interface-standards/", "contract-templates/"]
    core: ["principles.yaml", "process-flows.yaml"]
    _bmad-output: ["reports/", "metrics/"]

  phases:
    - name: discovery
      bmad_focus: [bmb, bmm]
      templates: ["business-canvas", "stakeholder-analysis"]
      duration_estimate: "1-2 weeks"
    - name: analysis
      bmad_focus: [bmm, cis]
      templates: ["requirements-analysis", "interface-specs"]
      duration_estimate: "2-3 weeks"
    - name: design
      bmad_focus: [cis, core]
      templates: ["system-design", "component-specs"]
      duration_estimate: "2-4 weeks"
    - name: implementation
      bmad_focus: [core]
      templates: ["implementation-plan", "code-standards"]
      duration_estimate: "4-8 weeks"
    - name: validation
      bmad_focus: [core]
      templates: ["test-plans", "quality-reports"]
      duration_estimate: "1-2 weeks"

workspace-shared Task Configuration
# workspace-shared/Taskfile.yaml
version: '3'

tasks:
  docs:taskfile:
    desc: Generate/update the taskfile steering documentation
    vars:
      FILE: '{{.FILE | default ".kiro/steering/taskfile.md" }}'
    cmds:
      - |
        cat > {{.FILE}} << 'EOF'
        # Taskfile Commands and Development Workflow
        [... existing docs:taskfile implementation ...]
        EOF
      - cmd: NO_COLOR=1 COLUMNS=1000 task -l -j | jq -r '.tasks[] | ("| " + .name + " | " + .desc + " |")' >> {{.FILE}}
      - cmd: echo "Updated {{.FILE}} with current task list"

  validate:config:
    desc: Validate repository configuration files
    cmds:
      - ./scripts/validate-config.sh

  ci:prepare:
    desc: Prepare repository for CI/CD pipeline
    cmds:
      - echo "Preparing CI/CD configuration..."
      - ./scripts/generate-docs.sh

k8s-lab ArgoCD Application Configuration
Remote Development Application (k8s-lab/other-seeds/remote-development.yaml)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: remote-development
  namespace: argocd
  labels:
    component: remote-development
    category: development-tools
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  project: platform
  source:
    repoURL: https://github.com/craigedmunds/k8s-lab.git
    path: components/remote-development
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    automated:
      prune: true
      selfHeal: true
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas

Observability Application (k8s-lab/other-seeds/observability.yaml)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: observability
  namespace: argocd
  labels:
    component: observability
    category: monitoring
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  project: platform
  source:
    repoURL: https://github.com/craigedmunds/k8s-lab.git
    path: components/observability
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    automated:
      prune: true
      selfHeal: true
  ignoreDifferences:
    - group: apps
      kind: StatefulSet
      jsonPointers:
        - /spec/replicas

Component Group Kustomizations
Core Infrastructure (components/core-infrastructure/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cert-manager/
  - external-secrets/
  - central-secret-store/
  - ingress/
  - local-storage/
  - democratic-csi/
  - letsencrypt-issuer.yaml
  - remove-control-plane-taint-job.yaml

Platform Services (components/platform-services/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - argo-rollouts/
  - kargo/
  - headlamp/
  - strimzi/
  - rabbitmq-operator/
  - n8n/

Remote Development (components/remote-development/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - code-server/
  - codev/

Observability (components/observability/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - clickhouse/
  - prometheus/

Root Components (components/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Core infrastructure (deployed with seed)
  - core-infrastructure/
  # Platform services (deployed with seed)
  - platform-services/
  # Modular components (deployed as separate ArgoCD apps)
  # These are NOT included here - managed by ArgoCD applications
  # - remote-development/ # Managed by other-seeds/remote-development.yaml
  # - observability/ # Managed by other-seeds/observability.yaml

Correctness Properties
A property is a characteristic or behavior that should hold true across all valid executions of a system—essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.
Property 1: Repository Separation Consistency
For any tooling component, it should exist in exactly one repository (workspace-root or workspace-shared) and not be duplicated across both.
Validates: Requirements 1.1, 1.3
Property 2: Workspace-Level Tool Accessibility
For any workspace-level operation, all required tooling should be accessible from workspace-root without requiring submodule inclusion.
Validates: Requirements 2.1, 2.4
Property 3: Repo-Level Utility Isolation
For any repository including workspace-shared as a submodule, only lightweight utilities should be accessible, not heavy workspace tooling.
Validates: Requirements 3.3, 4.3
Property 4: Task Migration Preservation
For any task migrated from dev-common, the functionality should be identical in the destination repository (workspace-root or workspace-shared).
Validates: Requirements 4.2, 6.2
Property 5: AI Gateway Configuration Centralization
For any AI-enabled workspace, the AI Gateway configuration should be loaded from workspace-root and not duplicated in individual repositories.
Validates: Requirements 7.1, 7.2
Property 6: BMAD Methodology Consistency
For any BMAD artifact generated across different workspaces, it should follow the same centralized methodology from workspace-root.
Validates: Requirements 8.1, 8.2
Property 7: Workspace Discovery Integration
For any workspace defined in workspaces.yaml, the AI Gateway should be able to discover and configure it using the centralized configuration.
Validates: Requirements 9.1, 9.2
Property 8: Submodule Reference Correctness
For any repository that previously used dev-common as a submodule, it should reference workspace-shared instead and maintain the same functionality.
Validates: Requirements 5.1, 5.2
Property 9: Backward Compatibility Preservation
For any existing workspace configuration or development workflow, it should continue to work after the repository restructure.
Validates: Requirements 6.1, 6.3
Property 10: Documentation Accuracy
For any usage documentation, it should correctly describe when to use workspace-root vs workspace-shared and provide accurate migration instructions.
Validates: Requirements 10.1, 10.2
Property 11: Internal Folder Structure Consistency
For any workspace configured with an internalFolder property, all projects should be instantiated as subdirectories under that internal folder path, and the workspace should function identically to traditional multi-workspace patterns.
Validates: Requirements 18.1, 18.3, 18.5
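Several of these properties are mechanically checkable. As one illustration, a sketch of a check supporting Property 7: every project a workspace references must be defined in the top-level projects section, or discovery cannot configure it. The config shape follows workspaces.yaml above; the function name is an assumption:

```python
def undefined_project_refs(config: dict) -> dict:
    """Map each workspace name to the projects it references that are
    missing from the top-level projects section. An empty result means
    every referenced project is discoverable."""
    defined = {p["name"] for p in config.get("projects", [])}
    missing = {}
    for ws in config.get("workspaces", []):
        bad = [p for p in ws.get("projects", []) if p not in defined]
        if bad:
            missing[ws["name"]] = bad
    return missing

config = {
    "projects": [{"name": "ai-dev"}, {"name": "k8s-lab"}],
    "workspaces": [
        {"name": "infrastructure", "projects": ["k8s-lab"]},
        {"name": "ai-agent", "projects": ["ai-dev", "argocd-eda"]},
    ],
}
print(undefined_project_refs(config))  # {'ai-agent': ['argocd-eda']}
```

A check like this can run in CI before workspace sync so broken references fail fast.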
Error Handling
Migration Errors
- Missing Dependencies: Validate all required tooling exists in target repository before migration
- Broken References: Update all hardcoded paths and references during migration
- Configuration Conflicts: Resolve conflicts between old and new configuration formats
Repository Access Errors
- Submodule Failures: Provide fallback mechanisms when workspace-shared submodule is unavailable
- Permission Issues: Clear error messages when repositories are inaccessible
- Version Mismatches: Handle version conflicts between workspace-root and workspace-shared
AI Gateway Integration Errors
- Configuration Loading: Graceful degradation when AI Gateway config is malformed
- BMAD Methodology Access: Fallback to local templates when centralized methodology is unavailable
- Workspace Discovery: Continue operation when workspace discovery fails
Task Execution Errors
- Missing Scripts: Clear error messages when migrated scripts are not found
- Environment Issues: Validate required tools and dependencies before task execution
- Path Resolution: Handle path differences between old and new repository structures
Python Application Development Patterns
Overview
The platform supports two distinct Python development patterns optimized for different use cases: lightweight local scripting for operational utilities and production-grade Kubernetes services for long-running applications.
Pattern 1: Local Scripting (venv-based)
Use Cases:
- Operational utilities (workflow cleanup, data migration)
- CI/CD helper scripts
- One-off automation tasks
- Development tooling
Structure:
component/
├── scripts/
│ ├── requirements.txt # Minimal dependencies
│ ├── utility_script.py # Standalone executable scripts
│ └── .venv/ # Local virtual environment (gitignored)
└── Taskfile.yaml # Task automation
Dependency Management:
- Use requirements.txt for simple dependency lists
- Virtual environment created with python3 -m venv .venv
- Dependencies installed with pip install -r requirements.txt
Taskfile Integration:
tasks:
  scripts:setup:
    desc: Setup Python virtual environment for scripts
    dir: scripts
    cmds:
      - cmd: |
          if [ ! -d ".venv" ]; then
            echo "Creating Python virtual environment..."
            python3 -m venv .venv
            .venv/bin/pip install --upgrade pip
            .venv/bin/pip install -r requirements.txt
            echo "✅ Virtual environment created and dependencies installed"
          else
            echo "Virtual environment already exists"
          fi

  scripts:run:
    desc: Run utility script
    deps: [scripts:setup]
    dir: scripts
    cmds:
      - cmd: |
          if [ -n "$CI" ]; then
            python3 script_name.py {{.CLI_ARGS}}
          else
            .venv/bin/python script_name.py {{.CLI_ARGS}}
          fi

CI/CD Considerations:
- Check the $CI environment variable to determine execution context
- Use system Python in CI environments (dependencies pre-installed)
- Use venv Python for local development
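The same CI-detection rule the Taskfile applies can be expressed in Python when a wrapper script needs to pick an interpreter (the helper name is illustrative, not an existing utility):

```python
import os
from pathlib import Path

def python_executable(script_dir: Path) -> str:
    """Pick the interpreter per the rule above: system Python when the
    CI environment variable is set, otherwise the local venv's Python."""
    if os.environ.get("CI"):
        return "python3"
    return str(script_dir / ".venv" / "bin" / "python")
```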
Script Structure:
#!/usr/bin/env python3
"""
Script Description

Usage:
    python3 script_name.py --arg value
"""
import argparse
import sys
from typing import Optional


def main() -> int:
    """Main entry point."""
    parser = argparse.ArgumentParser(description="Script description")
    parser.add_argument("--arg", required=True, help="Argument description")
    args = parser.parse_args()

    try:
        # Script logic here
        return 0
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())

Pattern 2: Kubernetes Services (uv-based)
Use Cases:
- Long-running services (APIs, workers, daemons)
- Production applications
- Services requiring complex dependency management
- Applications with multiple components
Structure:
repository/
├── pyproject.toml # Project metadata and dependencies
├── uv.lock # Locked dependency versions
├── services/
│ ├── gateway/
│ │ ├── __init__.py
│ │ ├── __main__.py # Entry point
│ │ ├── main.py # Application logic
│ │ └── Dockerfile # Multi-stage build
│ └── worker/
│ ├── __init__.py
│ └── main.py
└── tests/
├── unit/
├── integration/
└── acceptance/
Dependency Management:
- Use pyproject.toml for project metadata and dependencies
- Use uv for fast, reliable dependency resolution
- Lock file (uv.lock) ensures reproducible builds
- Support for optional dependencies ([project.optional-dependencies])
Docker Build Pattern:
# Multi-stage build for production services
FROM python:3.11-slim as builder
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app
# Copy dependency files
COPY pyproject.toml uv.lock README.md ./
# Install dependencies
RUN uv sync --frozen --no-cache
# Production stage
FROM python:3.11-slim
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
# Create non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
WORKDIR /app
# Copy virtual environment from builder
COPY --from=builder /app/.venv /app/.venv
# Copy application code
COPY services/ ./services/
COPY pyproject.toml README.md ./
# Set ownership
RUN chown -R appuser:appuser /app
# Switch to non-root user
USER appuser
# Set environment variables
ENV PATH="/app/.venv/bin:$PATH"
ENV PYTHONPATH="/app"
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD python -c "import httpx; httpx.get('http://localhost:8000/health')"
EXPOSE 8000
CMD ["python", "-m", "services.gateway"]

pyproject.toml Structure:
[project]
name = "service-name"
version = "0.1.0"
description = "Service description"
requires-python = ">=3.11"
dependencies = [
"fastapi>=0.104.0",
"uvicorn[standard]>=0.24.0",
"pydantic>=2.5.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.4.0",
"pytest-asyncio>=0.21.0",
"hypothesis>=6.88.0",
"ruff>=0.1.0",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = ["--strict-markers", "--cov=services"]

Pattern Selection Guidelines
Use Local Scripting Pattern When:
- Script is < 500 lines of code
- Minimal dependencies (< 5 packages)
- One-off or infrequent execution
- No need for packaging or distribution
- Primarily operational/administrative tasks
Use Kubernetes Service Pattern When:
- Long-running application
- Complex dependency tree
- Multiple components or services
- Requires testing infrastructure
- Production deployment needed
- API or service interface
Migration Path
From Script to Service:
- Create pyproject.toml with project metadata
- Restructure code into proper Python package
- Add comprehensive test suite
- Create Dockerfile with multi-stage build
- Set up Kubernetes manifests
- Migrate Taskfile tasks to use uv
Indicators for Migration:
- Script exceeds 500 lines
- Dependencies grow beyond 5 packages
- Need for proper testing infrastructure
- Multiple people maintaining the code
- Script runs continuously or on schedule
Shared Utilities
workspace-shared Location:
- Common script templates
- Reusable Taskfile patterns
- Dockerfile templates for uv-based services
- CI/CD integration examples
workspace-root Location:
- Workspace-level Python utilities
- Cross-repository automation scripts
- Development environment setup scripts
Best Practices
Local Scripts:
- Always include shebang (#!/usr/bin/env python3)
- Use argparse for CLI arguments
- Return proper exit codes (0 for success, non-zero for failure)
- Write to stderr for errors
- Include usage documentation in docstring
Kubernetes Services:
- Use multi-stage Docker builds
- Run as non-root user
- Include health checks
- Set proper PYTHONPATH
- Use uv for dependency management
- Lock dependencies with uv.lock
- Separate dev and production dependencies
Both Patterns:
- Type hints for all functions
- Comprehensive docstrings
- Error handling with proper logging
- Unit tests for business logic
- Integration tests for external dependencies
Project Management
Unified Project Structure
The .ai/projects/ directory provides a single source of truth for all AI-assisted development projects, regardless of which tool (Kiro or Codev) is being used. This eliminates duplication and enables seamless switching between tools.
Directory Structure
workspace-root/
├── .ai/
│ └── projects/ # Unified project storage
│ ├── simple-xml-response/ # Example project
│ │ ├── metadata.json # Project metadata
│ │ ├── spec.md # Codev spec (WHAT)
│ │ ├── requirements.md # Kiro requirements (WHAT)
│ │ ├── design.md # Kiro design (HOW)
│ │ ├── plan.md # Codev plan (HOW)
│ │ ├── tasks.md # Kiro tasks (TODO)
│ │ └── review.md # Codev review (LEARNED)
│ └── n8n-platform/
│ ├── metadata.json
│ ├── requirements.md
│ ├── design.md
│ └── tasks.md
│
├── .kiro/
│ └── specs/ # Symlinks to projects
│ ├── simple-xml-response -> ../../.ai/projects/simple-xml-response/
│ └── n8n-platform -> ../../.ai/projects/n8n-platform/
│
├── codev/
│ ├── specs/ # Pointer files
│ │ └── 0001-simple-xml-response.link.md
│ ├── plans/
│ │ └── 0001-simple-xml-response.link.md
│ └── reviews/
│ └── 0001-simple-xml-response.link.md
│
└── .gitignore # Excludes .ai/projects/
Project Metadata Format
{
"name": "simple-xml-response",
"displayName": "Simple XML Response Adapter",
"created": "2026-01-26T10:30:00Z",
"updated": "2026-01-26T15:45:00Z",
"format": "codev",
"artifacts": ["spec.md", "plan.md", "review.md"],
"status": "complete",
"codevNumber": "0001",
"tags": ["gateway", "adapter", "xml"],
"repository": "domain-apis",
"description": "Adapter to transform XML backend responses to JSON"
}
Format Field Values:
- "kiro": Project uses the Kiro format (requirements.md, design.md, tasks.md)
- "codev": Project uses the Codev format (spec.md, plan.md, review.md)
Note: A project uses exactly one format, not both. The format determines which artifacts are present and which tool integration method is used.
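A small helper can dispatch on the format field. This sketch assumes only the metadata.json layout shown above; the function name is illustrative:

```python
import json
from pathlib import Path


def load_project_format(project_dir):
    """Return (format, expected artifact names) for a project directory.

    Hypothetical helper; assumes the metadata.json layout documented above,
    where a project uses exactly one format.
    """
    meta = json.loads((Path(project_dir) / "metadata.json").read_text())
    fmt = meta["format"]
    artifacts = {
        "kiro": ["requirements.md", "design.md", "tasks.md"],
        "codev": ["spec.md", "plan.md", "review.md"],
    }
    if fmt not in artifacts:
        raise ValueError(f"unknown format: {fmt}")
    return fmt, artifacts[fmt]
```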
Pointer File Format (Codev)
Codev uses numbered files, so we use lightweight pointer files:
<!-- codev/specs/0001-simple-xml-response.link.md -->
# 0001: Simple XML Response Adapter Implementation
> **Note**: This is a pointer file. The actual project artifacts are located at:
> `.ai/projects/simple-xml-response/`
## Quick Links
- [Spec](../../.ai/projects/simple-xml-response/spec.md)
- [Plan](../../.ai/projects/simple-xml-response/plan.md)
- [Review](../../.ai/projects/simple-xml-response/review.md)
## Metadata
- **Project**: simple-xml-response
- **Format**: Codev
- **Codev Number**: 0001
- **Status**: Complete
- **Repository**: domain-apis
---
*This file is auto-generated. Do not edit manually.*
Symlink Strategy (Kiro)
Kiro expects folder structures, so we use symlinks:
# Create symlink for Kiro
ln -s ../../.ai/projects/simple-xml-response .kiro/specs/simple-xml-response
# Kiro can now access:
# .kiro/specs/simple-xml-response/requirements.md
# .kiro/specs/simple-xml-response/design.md
# .kiro/specs/simple-xml-response/tasks.md
Tool Integration Patterns
Kiro Integration:
- Reads specs via symlinks in .kiro/specs/
- Writes directly to .ai/projects/{project-name}/
- Updates metadata.json on artifact changes
- No changes needed to Kiro’s spec workflow
Codev Integration:
- Reads specs via pointer files in codev/specs/
- Writes directly to .ai/projects/{project-name}/
- Updates metadata.json and regenerates pointer files
- Maintains sequential numbering in pointer files
Migration Strategy
Phase 1: Create Unified Structure
- Create the .ai/projects/ directory
- Add .ai/projects/ to .gitignore
- Create the migration script
Phase 2: Migrate Existing Projects
#!/bin/bash
# migrate-projects.sh
# Migrate Kiro projects
for spec_dir in .kiro/specs/*/; do
project_name=$(basename "$spec_dir")
# Create project directory
mkdir -p ".ai/projects/$project_name"
# Move artifacts
mv "$spec_dir"/*.md ".ai/projects/$project_name/"
# Create symlink
rm -rf "$spec_dir"
ln -s "../../.ai/projects/$project_name" ".kiro/specs/$project_name"
# Generate metadata
generate_metadata "$project_name" "kiro"
done
# Migrate Codev projects
for spec_file in codev/specs/*.md; do
# Extract project info from filename (e.g., 0001-simple-xml-response.md)
filename=$(basename "$spec_file" .md)
number=$(echo "$filename" | cut -d'-' -f1)
project_name=$(echo "$filename" | cut -d'-' -f2-)
# Create project directory
mkdir -p ".ai/projects/$project_name"
# Move artifacts
mv "codev/specs/$filename.md" ".ai/projects/$project_name/spec.md"
mv "codev/plans/$filename.md" ".ai/projects/$project_name/plan.md" 2>/dev/null || true
mv "codev/reviews/$filename.md" ".ai/projects/$project_name/review.md" 2>/dev/null || true
# Create pointer file
create_pointer_file "$number" "$project_name" "spec"
create_pointer_file "$number" "$project_name" "plan"
create_pointer_file "$number" "$project_name" "review"
# Generate metadata
generate_metadata "$project_name" "codev" "$number"
done
Phase 3: Update Tool Configurations
- Update Kiro to recognize symlinked specs
- Update Codev to follow pointer files
- Test both tools with migrated projects
Phase 4: Validation
- Verify all projects accessible from both tools
- Confirm metadata is accurate
- Test creating new projects with each tool
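The migration and helper scripts in this section call a create_pointer_file function that is never defined. One possible shape, sketched in Python rather than shell for clarity; names, paths, and layout are assumptions that follow the pointer-file format shown earlier:

```python
from pathlib import Path


def create_pointer_file(number, project_name, kind, codev_root="codev"):
    """Write codev/<kind>s/<number>-<project>.link.md pointing at .ai/projects/.

    Sketch only: the real helper is unspecified; the body mirrors the
    documented pointer-file format.
    """
    target = f".ai/projects/{project_name}"
    out_dir = Path(codev_root) / f"{kind}s"
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{number}-{project_name}.link.md"
    path.write_text("\n".join([
        f"# {number}: {project_name}",
        "",
        "> **Note**: This is a pointer file. The actual project artifacts are located at:",
        f"> `{target}/`",
        "",
        f"- [{kind.capitalize()}](../../{target}/{kind}.md)",
        "",
        "*This file is auto-generated. Do not edit manually.*",
        "",
    ]))
    return path
```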
Helper Scripts
Create New Project:
#!/bin/bash
# create-project.sh <project-name> <tool>
PROJECT_NAME=$1
TOOL=$2
# Create project directory
mkdir -p ".ai/projects/$PROJECT_NAME"
# Generate metadata
cat > ".ai/projects/$PROJECT_NAME/metadata.json" << EOF
{
"name": "$PROJECT_NAME",
"created": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"tools": {
"$TOOL": {
"enabled": true,
"artifacts": [],
"status": "new"
}
}
}
EOF
# Create tool-specific integration
if [ "$TOOL" = "kiro" ]; then
ln -s "../../.ai/projects/$PROJECT_NAME" ".kiro/specs/$PROJECT_NAME"
elif [ "$TOOL" = "codev" ]; then
# Get next number
NEXT_NUM=$(printf "%04d" $(( $(find codev/specs -name "*.link.md" | wc -l) + 1 )))
create_pointer_file "$NEXT_NUM" "$PROJECT_NAME" "spec"
fi
echo "Created project: $PROJECT_NAME for $TOOL"List All Projects:
#!/bin/bash
# list-projects.sh
echo "All Projects:"
echo "============="
for project_dir in .ai/projects/*/; do
project_name=$(basename "$project_dir")
# Read metadata
if [ -f "$project_dir/metadata.json" ]; then
format=$(jq -r '.format' "$project_dir/metadata.json")
status=$(jq -r '.status' "$project_dir/metadata.json")
echo "📁 $project_name"
echo " Format: $format"
echo " Status: $status"
echo ""
fi
done
Benefits
- Single Source of Truth: All project artifacts in one location
- Tool Agnostic: Work with any AI tool without duplication
- Easy Migration: Simple scripts to move between tools
- Metadata Tracking: Know which tools use which projects
- Local State: .ai/projects/ stays local (gitignored)
- Backward Compatible: Existing tool workflows continue to work
Considerations
Git Ignore:
# .gitignore
.ai/projects/
Symlink Compatibility:
- Works on macOS, Linux, WSL
- Windows requires Developer Mode or admin privileges
- Alternative: Use junction points on Windows
Tool Updates:
- Kiro: No changes needed (follows symlinks naturally)
- Codev: Minor update to follow pointer files
- Both tools can write directly to .ai/projects/
Testing Strategy
Dual Testing Approach
Unit Tests:
- Repository structure validation (correct files in correct locations)
- Task migration verification (identical functionality before/after)
- Configuration parsing and validation
- Script execution in isolated environments
Property-Based Tests:
- Repository separation consistency across all tooling components
- Workspace discovery with randomized workspace configurations
- AI Gateway integration with various workspace combinations
- BMAD methodology consistency across multiple workspace scenarios
Property-Based Testing Configuration
- Framework: Use Hypothesis (Python) for property-based testing
- Iterations: Minimum 100 iterations per property test
- Test Tagging: Each property test must reference its design document property
- Tag Format:
# Feature: repository-restructure, Property {number}: {property_text}
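The real suite would use Hypothesis's @given decorator (with at least 100 examples). The following dependency-free sketch, using plain random and a stand-in discover_workspaces function, shows the property-test shape and the tag comment; the property text and function are illustrative assumptions:

```python
# Feature: repository-restructure, Property 1: workspace discovery yields each
# configured workspace exactly once (illustrative property text).
import random
import string


def discover_workspaces(config):
    """Stand-in for real discovery: de-duplicate while preserving order."""
    seen = []
    for name in config:
        if name not in seen:
            seen.append(name)
    return seen


def check_property(iterations=100, seed=0):
    """Run the property over randomized configs, Hypothesis-style."""
    rng = random.Random(seed)
    for _ in range(iterations):
        config = ["".join(rng.choices(string.ascii_lowercase, k=5))
                  for _ in range(rng.randint(0, 10))]
        result = discover_workspaces(config)
        # Every configured workspace is discovered, and none is duplicated.
        assert set(result) == set(config)
        assert len(result) == len(set(config))
    return True
```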
Migration Testing Strategy
Pre-Migration Validation:
- Inventory all existing functionality in dev-common
- Document all current usage patterns and dependencies
- Validate all repositories that currently use dev-common as submodule
Migration Execution Testing:
- Automated migration scripts with rollback capability
- Validation of file placement and content preservation
- Reference update verification across all affected repositories
Post-Migration Validation:
- Functional testing of all migrated components
- Integration testing with existing workflows
- Performance comparison before/after migration
- User acceptance testing with development teams
Test Data Management
- Repository Fixtures: Test repositories representing different usage patterns
- Configuration Samples: Various workspace and AI Gateway configurations
- Migration Scenarios: Different starting states and migration paths
- Integration Environments: Isolated environments for testing complete workflows
CI/CD Infrastructure
The workspace uses a unified merge request (MR) validation system that automatically discovers and runs tests across all components.
GitHub Actions Workflow
Location: .github/workflows/mr-checks.yaml
Runs on all PRs to main and:
- Discovers all testable targets automatically
- Runs tests in parallel via matrix strategy
- Reports per-component pass/fail status
Discovery Mechanism
Location: scripts/discover-mr-targets.py
The discovery script finds testable targets using two strategies:
- Taskfile-based targets: components with a Taskfile.yaml containing a test:mr task
- Kustomization fallback: kustomizations not covered by a Taskfile get kustomize build validation
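The two strategies could be sketched as follows. The directory layout and matching rules here are simplified assumptions, not the actual discover-mr-targets.py implementation:

```python
import json
from pathlib import Path


def discover_mr_targets(root="."):
    """Find testable targets using the two strategies described above.

    Simplified sketch: the real script's matching rules may differ.
    """
    include = []
    covered = set()
    # Strategy 1: directories whose Taskfile.yaml defines a test:mr task.
    for taskfile in sorted(Path(root).rglob("Taskfile.yaml")):
        if "test:mr" in taskfile.read_text():
            component = taskfile.parent
            covered.add(component)
            include.append({"name": component.name,
                            "path": str(component), "type": "taskfile"})
    # Strategy 2: kustomizations not already covered by a Taskfile.
    for kustomization in sorted(Path(root).rglob("kustomization.yaml")):
        component = kustomization.parent
        if component not in covered:
            include.append({"name": component.name,
                            "path": str(component), "type": "kustomization"})
    return {"include": include}


def matrix_json(root="."):
    """Render the result as JSON for GitHub Actions matrix consumption."""
    return json.dumps(discover_mr_targets(root), indent=2)
```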
Output is JSON formatted for GitHub Actions matrix consumption:
{
"include": [
{"name": "clickhouse", "path": "repos/k8s-lab/components/clickhouse", "type": "taskfile"},
{"name": "traefik", "path": "repos/k8s-lab/components/traefik", "type": "kustomization"}
]
}
Local Validation
Run all MR checks locally before pushing:
# Via task (recommended)
task test:mr:all
# Via script directly
./tests/validate-kustomizations.sh
Adding Tests to a Component
Option 1: Taskfile with custom tests (preferred for complex components)
Create Taskfile.yaml in the component directory:
version: '3'
tasks:
test:mr:
desc: MR validation tests
cmds:
- kustomize build . > /dev/null
- pytest tests/ -v
# Add additional validation as needed
Option 2: Kustomization-only (automatic for simple components)
If a component has only a kustomization.yaml and no Taskfile.yaml, the CI will automatically validate it with kustomize build.
Convention Summary
| Component Type | Test Definition | What Runs |
|---|---|---|
| Has Taskfile.yaml with test:mr | Custom | Whatever test:mr defines |
| Has kustomization.yaml only | Automatic | kustomize build validation |
| Neither | Skipped | Not tested |
Related Files
- .github/workflows/mr-checks.yaml: GitHub Actions workflow
- scripts/discover-mr-targets.py: target discovery script
- tests/validate-kustomizations.sh: local validation script
- Taskfile.yaml (root): workspace-level tasks, including test:mr:all