# Implementation Plan: Market Maker Dashboard

## Overview
This plan describes a static dashboard that visualizes market maker metrics over the last 7 days. The implementation uses a build-time pre-aggregation approach: Python processes YAML metric files into optimized JSON, then generates a static HTML/CSS/JavaScript site with interactive charts.
## Tasks

- [ ] 1. Set up project structure and dependencies
- Create dashboard directory structure (dashboard/, dashboard/data/, dashboard/js/, dashboard/css/)
- Set up Python virtual environment
- Install dependencies: PyYAML, Jinja2
- Create package.json for JavaScript dependencies (if using npm for charting library)
- Requirements: 1.1, 1.3
- [ ] 2. Implement YAML metrics scanner and parser
- 2.1 Create MetricsScanner class to discover YAML files
- Implement recursive directory scanning for metrics/
- Group files by market (symbol)
- Extract timestamp from filename
- Requirements: 2.1
- 2.2 Create YAMLParser class to extract data
- Implement parse_metric_file() to load YAML
- Implement extract_price_data() for minute_prices and market_summary
- Implement extract_regime_data() for verdict, confidence, competing verdicts
- Implement extract_grid_data() for grid status and configuration
- Implement extract_account_data() for balance information
- Requirements: 2.2
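The parser in 2.2 might look like the following sketch; the YAML field names (`minute_prices`, `market_summary`, `regime`) are assumptions about the metric schema:

```python
import yaml  # PyYAML, already a project dependency

class YAMLParser:
    def parse_metric_file(self, text: str) -> dict:
        """Load one metric file; an empty document becomes an empty dict."""
        return yaml.safe_load(text) or {}

    def extract_price_data(self, doc: dict) -> dict:
        return {
            "minute_prices": doc.get("minute_prices", []),
            "summary": doc.get("market_summary", {}),
        }

    def extract_regime_data(self, doc: dict) -> dict:
        regime = doc.get("regime", {})
        return {
            "verdict": regime.get("verdict"),
            "confidence": regime.get("confidence"),
            "competing_verdicts": regime.get("competing_verdicts", []),
        }
```

Using `.get()` with defaults handles the missing-optional-fields case that 2.3 tests for.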
- [ ]* 2.3 Write unit tests for YAML parser
- Test parsing valid YAML files
- Test handling missing optional fields
- Test error handling for malformed YAML
- Requirements: 2.2, 2.3
- [-] 3. Implement data aggregation and optimization
- 3.1 Create DataAggregator class
- Implement aggregate_market_data() to combine all metrics for a market
- Implement create_price_series() with minute-level granularity
- Implement create_regime_series() with confidence and competing verdicts
- Implement create_grid_series() for performance metrics
- Implement create_account_series() for balance over time
- Sort all time-series chronologically
- Requirements: 2.5, 3.1
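The chronological-sort requirement in 3.1 can be illustrated with a sketch of `create_price_series`; the point structure (`t`, `price`) is an assumption:

```python
class DataAggregator:
    def create_price_series(self, hourly_metrics: list[dict]) -> list[dict]:
        """Flatten minute-level prices from hourly files into one sorted series."""
        points = []
        for metric in hourly_metrics:
            for point in metric.get("minute_prices", []):
                points.append({"t": point["t"], "price": point["price"]})
        points.sort(key=lambda p: p["t"])  # Requirement 2.5: chronological order
        return points
```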
- [ ]* 3.2 Write property test for chronological ordering
- Property 2: Chronological Ordering
- Validates: Requirements 2.5
- Generate random metric data with shuffled timestamps
- Verify all time-series are sorted chronologically
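A hand-rolled version of this property test could look like the sketch below, where `make_series` is a stand-in for the real aggregator output (a framework like Hypothesis would replace the manual random loop):

```python
import random

def make_series(timestamps: list[int]) -> list[int]:
    """Stand-in for the aggregator: returns a chronologically sorted series."""
    return sorted(timestamps)

def test_chronological_ordering(trials: int = 100) -> None:
    """Property 2: shuffled inputs must always come out sorted."""
    for _ in range(trials):
        shuffled = random.sample(range(10_000), k=random.randint(1, 50))
        series = make_series(shuffled)
        assert all(a <= b for a, b in zip(series, series[1:]))
```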
- [ ]* 3.3 Write property test for minute-level granularity
- Property 10: Minute-Level Price Granularity
- Validates: Requirements 3.1
- Generate random hourly metrics with 60 minute prices
- Verify all 60 prices are preserved in output
- [ ] 4. Implement static site generator
- 4.1 Create StaticSiteGenerator class
- Implement generate_site() to orchestrate generation
- Implement write_data_files() to output optimized JSON
- Create markets.json with market list and metadata
- Create per-market JSON files (e.g., eth-usdt.json)
- Requirements: 1.4, 1.5, 2.6
- 4.2 Create HTML template with Jinja2
- Design responsive layout (desktop and mobile)
- Add market selector dropdown
- Add chart containers for all visualizations
- Add current status panel
- Include export buttons on charts
- Requirements: 1.1, 1.2, 11.1, 11.2
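Rendering with Jinja2 could follow this pattern; the template fragment and context variables are placeholders, not the final design:

```python
from jinja2 import Environment

# Illustrative fragment: just the market selector portion of the page.
TEMPLATE = """<!doctype html>
<title>{{ title }}</title>
<select id="market-selector">
{% for m in markets %}  <option value="{{ m }}">{{ m | upper }}</option>
{% endfor %}</select>"""

def render_dashboard(markets: list[str]) -> str:
    env = Environment(autoescape=True)  # escape any user-derived strings
    return env.from_string(TEMPLATE).render(
        title="Market Maker Dashboard", markets=markets)
```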
- 4.3 Create CSS for responsive design
- Implement mobile-first responsive layout
- Style charts and panels
- Implement color scheme for regimes and risk levels
- Add loading and refresh indicators
- Requirements: 11.1, 11.2, 11.3
- [ ]* 4.4 Write property test for data completeness
- Property 1: Data Completeness Preservation
- Validates: Requirements 2.1, 2.3
- Generate random sets of valid YAML files
- Build dashboard
- Verify all files are processed or logged as errors
- [ ] 5. Checkpoint - Verify build pipeline
- Run build script on sample data
- Verify JSON files are generated correctly
- Verify HTML is generated with correct structure
- Check file sizes are reasonable (~500KB uncompressed)
- Ensure all tests pass, ask the user if questions arise
- [ ] 6. Implement client-side data loader
- 6.1 Create DataLoader class in JavaScript
- Implement loadMarketList() to fetch markets.json
- Implement loadMarketData() to fetch market-specific JSON
- Implement caching to avoid redundant loads
- Implement checkForUpdates() for auto-refresh
- Requirements: 2.6, 8.1, 8.2
- [ ]* 6.2 Write unit tests for data loader
- Test loading market list
- Test loading market data
- Test caching behavior
- Test error handling
- [ ] 7. Implement chart rendering with charting library
- 7.1 Create ChartManager class
- Initialize charting library (Chart.js or TradingView Lightweight Charts)
- Implement renderPriceChart() with regime overlay
- Implement renderRegimeChart() showing stacked bars with verdict and competing verdicts
- Implement renderGridChart() for P&L trends
- Implement renderAccountChart() for balance trends
- Requirements: 3.1, 3.5, 4.1, 4.7, 5.6, 5.7
- 7.2 Implement regime visualization on price chart
- Overlay regime classifications as colored background bands
- Display grid bounds and levels as horizontal lines
- Show legend with regime colors
- Requirements: 3.3, 3.4, 3.5, 4.2
- 7.2b Implement stacked regime chart with competing verdicts
- Create stacked bar chart where each bar represents an hour
- Stack primary verdict and competing verdicts vertically in each bar
- Size each segment proportionally to its weight/confidence
- Color each segment by regime type (RANGE_OK, RANGE_WEAK, TRANSITION, TREND)
- Add tooltip showing all verdicts and weights for each time period
- Tall single-color bars indicate high confidence, multi-color bars indicate uncertainty
- Requirements: 4.1, 4.2, 4.3, 4.7
- 7.3 Implement chart interactivity
- Add zoom and pan functionality
- Synchronize zoom/pan across all charts
- Add tooltips showing detailed values on hover
- Requirements: 3.6, 3.7
- [ ]* 7.4 Write property test for grid bounds accuracy
- Property 4: Grid Bounds Overlay Accuracy
- Validates: Requirements 3.3, 3.4
- Generate random grid configurations
- Render price chart
- Verify displayed bounds match configuration
- [ ]* 7.5 Write property test for regime color consistency
- Property 5: Regime Color Consistency
- Validates: Requirements 4.2
- Generate random regime data
- Render charts
- Verify same verdict uses same color everywhere
- [ ] 8. Implement dashboard application logic
- 8.1 Create DashboardApp class
- Implement init() to load initial data and render charts
- Implement switchMarket() to change displayed market
- Implement setupAutoRefresh() for periodic updates
- Handle loading states and errors gracefully
- Requirements: 8.1, 8.2, 8.3, 8.4, 8.5, 9.1, 9.2
- 8.2 Implement current status panel
- Display current risk level with color coding
- Show current regime, confidence, and strength
- Display grid status and metrics
- Show account balances
- List recommendations
- Show last updated time and next review time
- Requirements: 6.1, 6.2, 6.3, 6.4, 6.5
- [ ]* 8.3 Write property test for multi-market isolation
- Property 7: Multi-Market Data Isolation
- Validates: Requirements 9.2
- Generate data for multiple random markets
- Switch between markets
- Verify no data leakage between markets
- [ ]* 8.4 Write property test for auto-refresh state preservation
- Property 9: Auto-Refresh State Preservation
- Validates: Requirements 8.3
- Set dashboard state (market, zoom level)
- Trigger refresh
- Verify state is restored after refresh
- [ ] 9. Implement spot check snapshot creation
- 9.1 Create SpotCheckSnapshot class
- Implement collect_raw_exchange_data() to gather OHLCV data for all timeframes (1m, 15m, 1h, 4h)
- Implement collect_analysis_results() to capture regime scores, gate evaluations, discovered ranges, competing verdicts
- Implement collect_configuration() to snapshot grid config, thresholds, system settings
- Implement create_snapshot() to build complete JSON structure
- Requirements: 12.1, 12.3, 12.4, 12.5, 12.6, 12.7
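One possible shape for the `create_snapshot` output; the key names are assumptions about the structure Requirements 12.3-12.7 describe:

```python
import json
from datetime import datetime, timezone

def create_snapshot(symbol: str, ohlcv_by_timeframe: dict,
                    analysis: dict, config: dict) -> str:
    """Assemble the complete snapshot as a JSON string."""
    snapshot = {
        "meta": {
            "symbol": symbol,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
        "raw_exchange_data": ohlcv_by_timeframe,  # keys: "1m", "15m", "1h", "4h"
        "analysis_results": analysis,             # regime scores, gates, ranges
        "configuration": config,                  # grid config, thresholds
    }
    return json.dumps(snapshot, indent=2)
```

Keeping the raw exchange data verbatim in the snapshot is what makes the re-analysis capability in task 13 possible.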
- 9.2 Integrate snapshot creation with recommendation generation
- Hook into regime management system’s recommendation generation
- Trigger snapshot creation when recommendation is generated
- Write JSON file to spotchecks/ directory with naming format: spotcheck_{timestamp}_{symbol}_{recommendation_id}.json
- Include decision record path in snapshot metadata
- Requirements: 12.1, 12.2, 12.8, 12.11
- [ ]* 9.3 Write unit tests for snapshot creation
- Test OHLCV data collection for all timeframes
- Test analysis results capture
- Test configuration snapshot
- Test JSON file writing
- [ ] 10. Implement spot check page generation
- 10.1 Create SpotCheckScanner class
- Implement scan_spotcheck_directory() to discover JSON snapshots
- Index snapshots by timestamp, symbol, recommendation ID
- Match snapshots to decision records
- Requirements: 12.2, 12.9
- 10.2 Create SpotCheckParser class
- Implement parse_snapshot() to load JSON
- Validate snapshot structure
- Extract raw exchange data, analysis results, configuration
- Requirements: 12.3, 12.4, 12.5, 12.6
- 10.3 Create spot check HTML template
- Header section with metadata and verdict
- Multi-timeframe price charts section
- Regime classification factors section (mean reversion, directional persistence, range quality, volatility)
- Gate evaluation section (if applicable)
- Grid comparison section (if grid exists)
- Configuration section
- Export section
- Requirements: 12.12-12.29
- 10.4 Create SpotCheckGenerator class
- Implement generate_spotcheck_page() for each snapshot
- Apply data to spot check template
- Generate individual HTML pages
- Use shared CSS/JS libraries from main dashboard
- Requirements: 12.9, 12.10
- 10.5 Create spot check index page generator
- Aggregate all spot check metadata
- Generate index.html listing all spot checks
- Include sortable/filterable table
- Add links to individual spot check pages
- Requirements: 12.27
- [ ]* 10.6 Write property test for snapshot data preservation
- Property 11: Spot Check Data Preservation
- Validates: Requirements 12.3, 12.4, 12.5
- Generate random OHLCV data for all timeframes
- Create snapshot
- Parse snapshot
- Verify all data is preserved exactly
- [ ] 11. Implement spot check visualizations
- 11.1 Create SpotCheckChartManager class
- Implement renderMultiTimeframeCharts() for 1m, 15m, 1h, 4h price data
- Implement renderRegimeFactorCharts() for mean reversion, directional persistence, range quality, volatility
- Implement renderGateEvaluations() with pass/fail indicators
- Implement renderGridComparison() showing discovered range vs configured grid
- Implement renderTechnicalIndicators() overlays (Bollinger Bands, ATR, moving averages)
- Requirements: 12.13-12.26
- 11.2 Implement discovered range visualization
- Overlay discovered support/resistance levels on price chart
- Show rejection frequency indicators
- Display range stability duration
- Requirements: 12.15
- 11.3 Implement gate evaluation display
- Show gate status for all 3 gates (Directional Energy Decay, Mean Reversion Return, Tradable Volatility)
- Display specific metrics and thresholds for each gate component
- Show time-series of gate status over evaluation period
- Use visual indicators for pass/fail status
- Requirements: 12.18, 12.19, 12.20
- 11.4 Implement competing verdicts display
- Show alternative regime classifications with weights
- Display supporting data for each competing verdict
- Use consistent color coding with main dashboard
- Requirements: 12.19, 12.24
- [ ]* 11.5 Write property test for multi-timeframe consistency
- Property 12: Multi-Timeframe Data Consistency
- Validates: Requirements 12.3, 12.21
- Generate random OHLCV data for all timeframes
- Render spot check charts
- Verify timeframe labels match data
- [ ] 12. Implement recommendations list on main dashboard
- 12.1 Create RecommendationsScanner class
- Scan decision records directory
- Match decision records to spot check snapshots
- Extract recommendation metadata (timestamp, symbol, verdict, confidence, action, status)
- Requirements: 13.9, 13.10
- 12.2 Add recommendations list section to main dashboard template
- Create table/list layout for recent 20-30 recommendations
- Display timestamp, symbol, verdict, confidence, action, status
- Add link to spot check page for each recommendation
- Use color coding for regime verdicts
- Requirements: 13.1, 13.2, 13.3, 13.4
- 12.3 Implement recommendations list JavaScript module
- Implement renderRecommendationsList() to display table
- Implement sorting by timestamp, symbol, verdict, confidence
- Implement filtering by symbol, verdict type, action type
- Handle row clicks to navigate to spot check pages
- Display tooltips on hover with key metrics preview
- Requirements: 13.5, 13.6, 13.7, 13.8
- [ ]* 12.4 Write property test for recommendations list completeness
- Property 13: Recommendations List Completeness
- Validates: Requirements 13.1, 13.9
- Generate random decision records and spot check snapshots
- Build recommendations list
- Verify all recommendations are included
- [ ] 13. Implement re-analysis capability
- 13.1 Create ReanalysisEngine class
- Implement load_snapshot() to read JSON
- Implement extract_raw_data() to get original OHLCV data
- Implement apply_new_algorithm() to run updated regime analysis
- Implement compare_results() to show old vs new analysis
- Requirements: 12.30
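`compare_results` could be a simple per-key diff; the flat-dict structure of the analysis results is an assumption:

```python
def compare_results(old: dict, new: dict) -> dict:
    """Return per-key changes between the original and re-run analysis."""
    changes = {}
    for key in sorted(set(old) | set(new)):
        before, after = old.get(key), new.get(key)
        if before != after:
            changes[key] = {"old": before, "new": after}
    return changes
```

An empty return value means the algorithm change did not alter this snapshot's classification, which is itself useful signal in the comparison view.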
- 13.2 Create re-analysis comparison view
- Display old and new analysis side-by-side
- Highlight differences in regime classification
- Show which metrics changed and by how much
- Regenerate spot check page with comparison data
- Requirements: 12.30
- [ ]* 13.3 Write integration test for re-analysis
- Create snapshot with known data
- Modify algorithm
- Run re-analysis
- Verify new results are calculated correctly
- Verify comparison view shows differences
- [ ] 14. Checkpoint - Test spot check functionality
- Create test snapshots with sample data
- Generate spot check pages
- Verify all sections display correctly
- Test links to decision records
- Test multi-timeframe charts
- Test gate evaluations display
- Test grid comparison visualization
- Test recommendations list on main dashboard
- Ensure all tests pass, ask the user if questions arise
- [ ] 15. Implement export functionality
- 15.1 Add export buttons to charts
- Implement exportChart() for PNG format
- Implement exportChart() for SVG format
- Implement exportData() for CSV format
- Implement exportData() for JSON format
- Include metadata in exported files
- Requirements: 10.1, 10.2, 10.3, 10.4, 10.5, 12.25
- [ ]* 15.2 Write property test for export completeness
- Property 6: Export Data Completeness
- Validates: Requirements 10.4, 10.5
- Generate random time-series data
- Export data
- Verify all visible data points are included
- Verify metadata is accurate
- [ ] 16. Checkpoint - Test full dashboard functionality
- Load dashboard in browser
- Verify all charts render correctly
- Test market switching
- Test zoom/pan interactions
- Test export functionality
- Test recommendations list sorting and filtering
- Test links to spot check pages
- Test on mobile device or responsive mode
- Ensure all tests pass, ask the user if questions arise
- [ ] 17. Implement build automation and integration
- 17.1 Create build script (build_dashboard.py)
- Accept metrics directory path as argument
- Accept spotchecks directory path as argument
- Accept output directory path as argument
- Run full build pipeline (scan → parse → aggregate → generate)
- Generate main dashboard
- Generate all spot check pages
- Generate spot check index
- Log build progress and errors
- Report build statistics (files processed, data size, etc.)
- Requirements: 1.6, 12.9
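A skeleton for `build_dashboard.py`; the pipeline stages are placeholders for the scanner, aggregator, and generator classes from earlier tasks:

```python
import argparse
import time

def build(metrics_dir: str, spotchecks_dir: str, output_dir: str) -> dict:
    """Run the full pipeline and return build statistics."""
    start = time.monotonic()
    stats = {"files_processed": 0, "errors": 0}
    # scan -> parse -> aggregate -> generate would run here,
    # updating stats and logging progress as each stage completes
    stats["build_seconds"] = round(time.monotonic() - start, 2)
    return stats

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Build the market maker dashboard")
    parser.add_argument("--metrics-dir", required=True)
    parser.add_argument("--spotchecks-dir", required=True)
    parser.add_argument("--output-dir", default="dashboard")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(build(args.metrics_dir, args.spotchecks_dir, args.output_dir))
```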
- 17.2 Create Taskfile entry for dashboard build
- Add task to run build script
- Add task to serve dashboard locally (python -m http.server)
- Add task to clean build output
- Requirements: 1.6, 1.7
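Possible Taskfile entries (go-task v3 syntax; the task names, paths, and flags are assumptions to be aligned with the real build script):

```yaml
version: "3"

tasks:
  dashboard:build:
    cmds:
      - python build_dashboard.py --metrics-dir metrics --spotchecks-dir spotchecks --output-dir dashboard
  dashboard:serve:
    dir: dashboard
    cmds:
      - python -m http.server 8000
  dashboard:clean:
    cmds:
      - rm -rf dashboard/data dashboard/*.html
```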
- 17.3 Document dashboard usage
- Add README.md in dashboard directory
- Document build process
- Document how to view dashboard
- Document how to integrate with metrics collection
- Document spot check snapshot creation
- Document re-analysis capability
- Requirements: 1.6, 1.7, 12.30
- [ ] 18. Integration testing
- [ ]* 18.1 Write integration test for full build pipeline
- Start with sample YAML files and spot check snapshots
- Run complete build process
- Verify generated static site structure
- Load in headless browser and verify charts render
- [ ]* 18.2 Write integration test for multi-market dashboard
- Build dashboard with multiple markets
- Verify market selector works
- Verify switching between markets updates all charts
- [ ]* 18.3 Write integration test for auto-refresh
- Build initial dashboard
- Add new metric files
- Trigger refresh
- Verify new data appears without losing state
- [ ]* 18.4 Write integration test for spot check generation
- Create test snapshots
- Generate spot check pages
- Verify all sections render correctly
- Verify links work
- [ ]* 18.5 Write integration test for recommendations list
- Build dashboard with decision records and snapshots
- Verify recommendations list displays
- Test sorting and filtering
- Test links to spot checks
- [ ] 19. Final checkpoint - Complete system validation
- Run all unit tests
- Run all property-based tests
- Run all integration tests
- Build dashboard with real metrics data and spot check snapshots
- Verify performance targets (build time < 15s, page load < 2s)
- Test on multiple browsers (Chrome, Firefox, Safari)
- Test on mobile devices
- Verify spot check pages load correctly
- Verify recommendations list works
- Verify re-analysis capability
- Ensure all tests pass, ask the user if questions arise
## Notes

- Tasks marked with `*` are optional test tasks and can be skipped for a faster MVP
- Each task references specific requirements for traceability
- Checkpoints ensure incremental validation
- Property tests validate universal correctness properties
- Unit tests validate specific examples and edge cases
- Integration tests validate end-to-end workflows
- The dashboard will be regenerated automatically after each metrics collection cycle