Secure Images MVP — Technical Approach
What We’re Building
A small set of public, hardened container base images maintained by CascadeGuard. Each image is continuously scanned, automatically rebuilt when vulnerabilities are found, and published with full transparency (SBOMs, scan results, vulnerability status).
This is not a multi-tenant SaaS or a subscription product. It’s CascadeGuard’s own curated image catalog — the foundation that demonstrates our capability and builds trust before we offer it as a service to others.
MVP Scope
Images we publish (initial set)
A handful of hardened base images covering the most common use cases:
- cascadeguard/node — Node.js base
- cascadeguard/python — Python base
- cascadeguard/go — Go base
- cascadeguard/nginx — Nginx base
- (others as needed — small set, not exhaustive)
Each image is:
- Built from minimal base (Alpine/distroless where possible)
- Scanned daily with Grype + Trivy
- SBOM generated with Syft on every build
- Automatically rebuilt when critical/high CVEs are detected
- Signed and published to GHCR
Public dashboard
A public-facing website where anyone can see:
- Image catalog — all published images with current vulnerability status (green/yellow/red)
- Per-image detail — CVE list by severity, affected packages, fix availability, scan history
- SBOM viewer — full package inventory per image, diff between versions
- Rebuild history — when and why each image was rebuilt
- Status badges — embeddable badges for READMEs
No login required. This is a public trust signal.
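The version diff in the SBOM viewer boils down to a set comparison over two package inventories. A minimal sketch (the function name and the {name: version} input shape are illustrative, not the actual implementation):

```python
def diff_sboms(old_pkgs: dict, new_pkgs: dict) -> dict:
    """Compare two SBOM package inventories ({name: version} maps).

    Returns packages added, removed, and version-changed between builds.
    """
    added = {n: v for n, v in new_pkgs.items() if n not in old_pkgs}
    removed = {n: v for n, v in old_pkgs.items() if n not in new_pkgs}
    changed = {
        n: (old_pkgs[n], v)
        for n, v in new_pkgs.items()
        if n in old_pkgs and old_pkgs[n] != v
    }
    return {"added": added, "removed": removed, "changed": changed}
```

Rendering the three buckets directly gives the dashboard its "what changed in this rebuild" view.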
“Try me” — lead capture
A lightweight way for visitors to try CascadeGuard on their own image and enter our lead pipeline:
- Try it form on the dashboard — visitor enters an image reference (e.g. myorg/myapp:latest)
- We run a one-off scan via GitHub Actions (our runner, rate-limited)
- Results gated — visitor sees a summary (critical/high/medium counts) but must enter their email to get the full vulnerability report
- Email + image ref captured → sent to CRM (HubSpot) → enters drip campaign
- Drip sequence: full scan report → “here’s what we’d fix” → invite to enroll when SaaS launches
This is the lead generation engine. The public dashboard shows what we do; “try me” lets prospects experience it on their own image.
Rate limiting: Max 5 try-me scans per IP per day. Scans run in a dedicated GitHub Actions workflow with a queue.
API
A lightweight API that:
- Receives scan results from our CI pipelines (Grype + Trivy JSON)
- Receives SBOMs from our CI pipelines (Syft output)
- Tracks vulnerability state — which CVEs affect which images, first seen, SLA status
- Triggers rebuilds when SLA thresholds are breached
- Serves the dashboard — image list, vulnerability data, SBOMs, rebuild history
No auth for public read endpoints. CI pipeline auth via API key (single org — us).
CI pipeline (GitHub Actions)
Our own repos use reusable workflows:
- scan.yaml — Grype + Trivy scan → POST results to API
- sbom.yaml — Syft SBOM generation → POST to API
- rebuild.yaml — rebuild image from source, push to GHCR, re-scan
These run on GitHub Actions runners in our repos; the API receives the results and orchestrates rebuilds.
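The last step of scan.yaml, summarizing a Grype report before POSTing it to the API, might look like the function below. The payload fields mirror the scans table; the matches[].vulnerability.severity path follows Grype's JSON output but should be verified against a real report:

```python
from collections import Counter


def summarize_grype(report: dict) -> dict:
    """Count findings per severity from a Grype JSON report.

    Grype nests each finding under matches[].vulnerability; the exact
    payload shape POSTed to /api/v1/images/{id}/scans is illustrative.
    """
    sev = Counter(
        m["vulnerability"]["severity"].lower() for m in report.get("matches", [])
    )
    return {
        "scanner": "grype",
        "critical_count": sev.get("critical", 0),
        "high_count": sev.get("high", 0),
        "medium_count": sev.get("medium", 0),
        "low_count": sev.get("low", 0),
    }
```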
Architecture
┌─────────────────────────────────────────────────────────┐
│ CascadeGuard GitHub Repos │
│ │
│ cascadeguard/images (Dockerfiles + workflows) │
│ ┌────────────────────────────────────────────┐ │
│ │ On push / daily schedule: │ │
│ │ 1. Build image │ │
│ │ 2. Scan (Grype + Trivy) → POST to API │ │
│ │ 3. Generate SBOM (Syft) → POST to API │ │
│ │ 4. Sign + push to GHCR │ │
│ └────────────────────────────────────────────┘ │
└──────────────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ CascadeGuard Platform (Cloudflare) │
│ │
│ Workers API (Python) Pages (React dashboard) │
│ ┌──────────────────┐ ┌──────────────────────┐ │
│ │ Ingest scans │ │ Image catalog │ │
│ │ Ingest SBOMs │ │ Vuln status per image │ │
│ │ Evaluate SLA │ │ CVE detail view │ │
│ │ Trigger rebuilds │ │ SBOM viewer + diff │ │
│ │ Serve public API │ │ Rebuild history │ │
│ └──────────────────┘ │ Status badges │ │
│ └──────────────────────┘ │
│ Turso (DB) R2 (artifacts) KV (cache) │
└─────────────────────────────────────────────────────────┘
Tech Stack (finalized)
| Component | Choice | Why |
|---|---|---|
| API | Cloudflare Workers (Python) | Single language, code reuse with OSS tooling (ADR-001) |
| Dashboard | Cloudflare Pages (React + Vite) | Free CDN, instant deploys |
| Database | Turso (edge SQLite) | Edge-compatible, free tier covers MVP |
| Object storage | Cloudflare R2 | S3-compatible, zero egress fees |
| Cache | Cloudflare KV | Dashboard data caching |
| CI | GitHub Actions (our repos) | Already in use for open-source images |
| Scanning | Grype + Trivy | Dual-scanner coverage, industry standard |
| SBOM | Syft | SPDX + CycloneDX support |
| Image registry | GHCR | Free for public images, integrated with GitHub |
| CRM / leads | HubSpot Free CRM | 1M contacts, email marketing, forms, drip campaigns — zero cost at MVP scale |
No auth provider needed for MVP (public dashboard, single-org API key for CI).
Data Model
-- Our published images
CREATE TABLE images (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE, -- e.g. cascadeguard/node
registry TEXT NOT NULL DEFAULT 'ghcr.io',
repository TEXT NOT NULL,
description TEXT,
dockerfile_path TEXT,
source_repo TEXT, -- GitHub repo URL
auto_rebuild INTEGER DEFAULT 1,
sla_critical_hours INTEGER DEFAULT 24,
sla_high_hours INTEGER DEFAULT 168,
status TEXT DEFAULT 'pending', -- pending|green|yellow|red
last_scan_at TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
-- Scan results (one row per CI scan)
CREATE TABLE scans (
id TEXT PRIMARY KEY,
image_id TEXT NOT NULL REFERENCES images(id),
scanner TEXT NOT NULL, -- grype|trivy
image_digest TEXT,
image_tag TEXT,
critical_count INTEGER DEFAULT 0,
high_count INTEGER DEFAULT 0,
medium_count INTEGER DEFAULT 0,
low_count INTEGER DEFAULT 0,
results_r2_key TEXT, -- full JSON in R2
created_at TEXT DEFAULT (datetime('now'))
);
-- Individual CVEs (for querying and dashboard display)
CREATE TABLE vulnerabilities (
id TEXT PRIMARY KEY,
image_id TEXT NOT NULL REFERENCES images(id),
scan_id TEXT NOT NULL REFERENCES scans(id),
cve_id TEXT NOT NULL,
severity TEXT NOT NULL,
package_name TEXT NOT NULL,
installed_version TEXT,
fixed_version TEXT,
first_seen_at TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
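Preserving first_seen_at across daily scans is what keeps the SLA clock honest, so the ingest upsert must not overwrite it. A sketch using SQLite upsert semantics (assumes a unique index on (image_id, cve_id, package_name), which the schema above would need to add):

```python
import sqlite3


def upsert_vuln(db, image_id, scan_id, cve_id, severity, pkg, now):
    """Insert a finding, preserving first_seen_at if the CVE is already known."""
    db.execute(
        """
        INSERT INTO vulnerabilities
            (id, image_id, scan_id, cve_id, severity, package_name, first_seen_at)
        VALUES (?, ?, ?, ?, ?, ?, ?)
        ON CONFLICT (image_id, cve_id, package_name) DO UPDATE SET
            scan_id = excluded.scan_id,
            severity = excluded.severity
            -- first_seen_at deliberately untouched: it anchors the SLA clock
        """,
        (f"{scan_id}:{cve_id}:{pkg}", image_id, scan_id, cve_id, severity, pkg, now),
    )
```

Severity is updated on conflict because scanners do reclassify CVEs; only the first-seen timestamp is immutable.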
-- SBOMs
CREATE TABLE sboms (
id TEXT PRIMARY KEY,
image_id TEXT NOT NULL REFERENCES images(id),
format TEXT NOT NULL, -- spdx|cyclonedx
image_digest TEXT,
package_count INTEGER DEFAULT 0,
r2_key TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
-- Rebuilds
CREATE TABLE rebuilds (
id TEXT PRIMARY KEY,
image_id TEXT NOT NULL REFERENCES images(id),
trigger TEXT NOT NULL, -- sla_breach|manual|schedule
trigger_scan_id TEXT REFERENCES scans(id),
status TEXT DEFAULT 'pending', -- pending|dispatched|building|success|failed
github_run_id TEXT,
started_at TEXT,
completed_at TEXT,
created_at TEXT DEFAULT (datetime('now'))
);

No organizations, api_keys, or multi-tenant tables. Single org — us.
-- "Try me" scan requests (lead capture)
CREATE TABLE try_me_scans (
id TEXT PRIMARY KEY,
image_ref TEXT NOT NULL, -- e.g. myorg/myapp:latest
email TEXT, -- captured after scan completes
ip_address TEXT,
status TEXT DEFAULT 'queued', -- queued|scanning|done|failed
critical_count INTEGER,
high_count INTEGER,
medium_count INTEGER,
low_count INTEGER,
results_r2_key TEXT, -- full results (gated behind email)
hubspot_contact_id TEXT, -- synced to HubSpot
created_at TEXT DEFAULT (datetime('now'))
);

API Endpoints
# Public (no auth)
GET /api/v1/images # List all published images + status
GET /api/v1/images/{id} # Image detail + latest scan summary
GET /api/v1/images/{id}/vulnerabilities # CVE list for image
GET /api/v1/images/{id}/scans # Scan history
GET /api/v1/images/{id}/sboms # SBOM list
GET /api/v1/images/{id}/sboms/diff # SBOM diff between versions
GET /api/v1/images/{id}/rebuilds # Rebuild history
GET /api/v1/badges/{image-name} # SVG status badge
# Try me (no auth, rate-limited by IP)
POST /api/v1/try-me # Submit image ref for one-off scan
GET /api/v1/try-me/{id}/summary # Get severity counts (free)
POST /api/v1/try-me/{id}/unlock # Submit email → get full results + sync to HubSpot
# CI pipeline (API key auth)
POST /api/v1/images/{id}/scans # Submit scan results
POST /api/v1/images/{id}/sboms # Upload SBOM
POST /api/v1/images/{id}/rebuilds # Trigger rebuild
POST /api/v1/webhooks/github # GitHub webhook events
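The badge endpoint can render a small shields-style SVG inline, with color keyed to the image status. A rough sketch (layout numbers and colors are illustrative):

```python
# Flat-badge palette, loosely following shields.io conventions.
COLORS = {"green": "#4c1", "yellow": "#dfb317", "red": "#e05d44", "pending": "#9f9f9f"}


def render_badge(image_name: str, status: str) -> str:
    """Render a minimal two-panel status badge as an SVG string."""
    color = COLORS.get(status, COLORS["pending"])
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="160" height="20">'
        '<rect width="110" height="20" fill="#555"/>'
        f'<rect x="110" width="50" height="20" fill="{color}"/>'
        '<g fill="#fff" font-family="Verdana" font-size="11">'
        f'<text x="6" y="14">{image_name}</text>'
        f'<text x="116" y="14">{status}</text>'
        "</g></svg>"
    )
```

Serving this with Content-Type image/svg+xml and a short cache TTL is enough for README embedding.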
SLA Policy Engine
When a scan result arrives:
- Parse results (normalize Grype/Trivy JSON to a common format)
- Upsert vulnerabilities (track first_seen_at for the SLA clock)
- Evaluate thresholds per image:
  - Critical CVEs older than sla_critical_hours → red (SLA breach)
  - Critical/high CVEs within the SLA window → yellow
  - No critical/high CVEs → green
- If the SLA is breached and auto_rebuild is enabled → dispatch the rebuild workflow
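The threshold evaluation can be sketched as a pure function over open critical/high findings and their ages (the function name and (severity, first_seen_at) tuple shape are illustrative):

```python
from datetime import datetime, timedelta


def evaluate_status(vulns, now, sla_critical_hours=24, sla_high_hours=168):
    """Derive green/yellow/red from open CVEs and their first_seen_at ages.

    `vulns` is a list of (severity, first_seen_at) tuples; timestamps
    are ISO-8601 strings as stored in the vulnerabilities table.
    """
    status = "green"
    for severity, first_seen in vulns:
        if severity not in ("critical", "high"):
            continue  # medium/low never change the traffic light
        age = now - datetime.fromisoformat(first_seen)
        limit = timedelta(
            hours=sla_critical_hours if severity == "critical" else sla_high_hours
        )
        if age > limit:
            return "red"  # SLA breached: candidate for auto-rebuild
        status = "yellow"  # open critical/high, still inside the window
    return status
```

A "red" return is what would gate the rebuild dispatch when auto_rebuild is set.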
Implementation Plan
Phase 1: API + Data (Weeks 1-2)
- Python Workers project scaffold (routing, Turso connection, R2 bindings)
- Database schema migrations
- Scan ingestion endpoint — accept Grype/Trivy JSON, normalize, store, update image status
- SBOM upload endpoint — store in R2, extract package count
- Image CRUD (admin — for us to manage the catalog)
Phase 2: CI + Rebuild (Weeks 2-3)
- Reusable GitHub Actions workflows (scan.yaml, sbom.yaml, rebuild.yaml)
- SLA policy engine — evaluate scan results, trigger rebuild dispatch
- Rebuild dispatch — call GitHub Actions workflow_dispatch API
- GitHub webhook receiver — push events, workflow completion callbacks
Phase 3: Dashboard (Weeks 3-5)
- React dashboard — image catalog, vulnerability status, CVE detail
- SBOM viewer with version diff
- Rebuild history view
- Status badges (SVG generation)
Phase 4: Try Me + Lead Capture (Weeks 5-6)
- “Try me” API — accept image ref, queue one-off scan, return summary
- Try-me GitHub Actions workflow — scan arbitrary image, POST results
- Email gate + HubSpot integration — capture email, create contact, trigger drip
- Rate limiting (IP-based, Cloudflare KV)
- End-to-end test: full flow from dashboard → try me → lead capture → drip email
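The HubSpot step could create the contact through the CRM v3 contacts API; a sketch of the request body builder (cg_scanned_image is a hypothetical custom property we would define in HubSpot to carry the scanned image reference):

```python
def hubspot_contact_payload(email: str, image_ref: str) -> dict:
    """Build the body for a HubSpot CRM v3 contact-creation request.

    lifecyclestage is a standard HubSpot property; cg_scanned_image is
    a custom property this sketch assumes we create in HubSpot first.
    """
    return {
        "properties": {
            "email": email,
            "lifecyclestage": "lead",
            "cg_scanned_image": image_ref,
        }
    }
```

The try-me unlock handler would POST this body, store the returned contact id in hubspot_contact_id, and let the drip workflow key off the custom property.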
MVP Exit Criteria
- Small set of hardened base images building and publishing to GHCR
- Daily scans running via GitHub Actions, results ingested by API
- Public dashboard shows per-image vulnerability status
- CVE detail view with severity, package, fix availability
- SBOM viewer with package inventory
- Auto-rebuild triggers on SLA breach
- Status badges embeddable in READMEs
- “Try me” scan works: visitor submits image → sees severity summary → enters email → gets full report
- Leads captured in HubSpot with drip campaign active
Cost Estimate
| Service | Monthly Cost |
|---|---|
| Cloudflare Workers/Pages/R2/KV (free tiers) | $0 |
| Turso (free tier) | $0 |
| GitHub Actions (free for public repos) | $0 |
| HubSpot Free CRM | $0 |
| Domain | ~$1/mo |
| Total | ~$1/mo |
Open Questions
- Python Workers framework — FastAPI isn’t directly supported on Workers. Options: raw ASGI, Starlette, or a Workers-native Python framework. Needs prototyping.
- Scan result normalization — Grype and Trivy have different JSON schemas. Should we adopt OSV format or define our own common format?
- Which base images to start with — Node, Python, Go, Nginx are obvious candidates. What else for the initial set?
- Try-me scan security — scanning arbitrary user-provided images has risk (pulling malicious images). Mitigations: scan public images only, use ephemeral runners, timeout aggressively.
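One possible answer to the normalization question above: keep a small common record and map each scanner's fields into it. A sketch (field paths follow Grype's matches[] entries and Trivy's Vulnerabilities[] entries, but treat them as assumptions to verify against real reports):

```python
def normalize_finding(scanner: str, raw: dict) -> dict:
    """Map one Grype or Trivy finding into a common vulnerability record."""
    if scanner == "grype":
        # Grype: matches[] items with nested vulnerability/artifact objects.
        return {
            "cve_id": raw["vulnerability"]["id"],
            "severity": raw["vulnerability"]["severity"].lower(),
            "package_name": raw["artifact"]["name"],
            "installed_version": raw["artifact"]["version"],
        }
    if scanner == "trivy":
        # Trivy: Results[].Vulnerabilities[] items with flat PascalCase keys.
        return {
            "cve_id": raw["VulnerabilityID"],
            "severity": raw["Severity"].lower(),
            "package_name": raw["PkgName"],
            "installed_version": raw["InstalledVersion"],
        }
    raise ValueError(f"unknown scanner: {scanner}")
```

Adopting OSV instead would trade this hand-rolled mapping for a published schema, at the cost of converting both scanners' output anyway.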