Design Document
Overview
The Regime Management System is a decision support and controlled automation platform for grid trading strategies. The system monitors cryptocurrency market conditions, classifies market regimes using technical analysis, and provides actionable recommendations while maintaining strict capital preservation principles. The architecture prioritizes explicit decision-making, auditability, and learning over profit optimization.
The system operates as a decision engine rather than a traditional trading bot, emphasizing risk governance and a durable memory of intent and outcomes. It initially integrates with the KuCoin exchange, but is designed around an abstract exchange interface for future extensibility.
Architecture
The system follows a layered architecture with clear separation of concerns:
```mermaid
graph TB
    subgraph "Decision Layer"
        RE[Regime Engine]
        RG[Recommendation Generator]
        DR[Decision Records]
    end
    subgraph "Data Layer"
        MD[Market Data Service]
        CD[Configuration Data]
        EI[Exchange Interface]
    end
    subgraph "Execution Layer"
        GM[Grid Manager]
        NM[Notification Manager]
        EM[Evaluation Manager]
    end
    subgraph "Metrics Layer"
        DC[Data Collector]
        MA[Metrics Analyzer]
        MH[Metrics History]
        SD[Static Dashboard]
    end
    subgraph "External Systems"
        KC[KuCoin API]
        N8N[n8n Automation Platform]
        GIT[Git Repository - market-maker-data]
        PO[Pushover via n8n]
        EMAIL[Email via n8n]
        SLACK[Slack via n8n]
        WEB[Web Browser]
    end
    RE --> RG
    RG --> DR
    MD --> RE
    CD --> RE
    EI --> MD
    GM --> EI
    NM --> N8N
    N8N --> PO
    N8N --> EMAIL
    N8N --> SLACK
    DR --> GIT
    EM --> DR
    KC --> EI
    N8N --> DC
    DC --> EI
    DC --> MH
    MA --> MH
    MA --> DC
    MH --> GIT
    MH --> SD
    SD --> WEB
    DC --> N8N
```
Core Principles Implementation
- Regime-first, PnL-second: The Regime Engine drives all decisions based on market classification, not profit expectations
- Asymmetric automation: Risk reduction can be automated, capital deployment requires explicit permission
- Explicit decisions, explicit memory: All recommendations are recorded in immutable Decision Records
- Confidence may veto risk, never authorize it: Confidence scores can only escalate to conservative actions
Components and Interfaces
Regime Engine
The Regime Engine is responsible for market regime classification using technical analysis of OHLCV data.
Core Functionality:
- Analyzes 1-hour decision timeframe with 5m/15m early-warning context
- Classifies markets into four regimes: RANGE_OK, RANGE_WEAK, TRANSITION, TREND
- Calculates confidence scores based on regime stability metrics
Technical Analysis Components:
- Range Detection: Identifies support/resistance levels using rolling high/low analysis to discover natural market ranges
- Mean Reversion Analysis: Measures speed of reversion from extremes and oscillation frequency
- Volatility Assessment: Calculates ATR-based metrics and volatility expansion/contraction
- Time Consistency: Evaluates agreement across recent candles and regime stability
Range Discovery (Not Grid Validation):
The range_analysis component discovers natural support/resistance levels from market price action, independent of any configured grid. This is a critical distinction:
- Market-Driven: Range bounds are identified from rolling highs/lows, price clusters, and rejection points
- Not Grid-Aware: The discovered range has no knowledge of configured grid bounds
- Discovery Methods:
- Rolling high/low over window (e.g., 120-240 bars for 1h)
- Support/resistance strength based on price rejection frequency
- Bollinger Band boundaries as dynamic range estimates
- Volume-weighted price levels
Example: If the market has been oscillating between 2900-3200 over the past week, the range_analysis will discover these bounds regardless of whether a grid is configured at 2850-3150 or 3000-3300.
The discovered range is then used to:
- Calculate RangeQualityScore (amplitude vs drift)
- Assess mean reversion behavior within the discovered range
- Propose new grid bounds that align with discovered market structure
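The rolling high/low method above can be sketched minimally. The helper below is a hypothetical stand-in (not the `discover_support_resistance` API used later in this document): it derives bounds from the trimmed extremes of a rolling window of closes, so a handful of outlier prints do not define the range, and reports how well price respects those bounds. Rejection-frequency, Bollinger, and volume-cluster methods are omitted for brevity.

```python
def discover_range_rolling(closes, window=120, trim=0.05):
    """Hypothetical ROLLING_HIGHLOW-style range discovery.

    Bounds are the trimmed extremes (e.g., ~5th/95th percentile) of the
    last `window` closes. Returns (low, high, integrity_pct), where
    integrity_pct is the share of recent closes inside the bounds.
    """
    recent = sorted(closes[-window:])
    n = len(recent)
    lo_idx = int(n * trim)
    hi_idx = max(lo_idx, int(n * (1 - trim)) - 1)
    low, high = recent[lo_idx], recent[hi_idx]
    inside = sum(1 for c in closes[-window:] if low <= c <= high)
    return low, high, 100.0 * inside / n
```

Applied to the example above, a market oscillating between 2900 and 3200 yields bounds near those levels, regardless of where any grid is configured.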
Regime Classification Logic
The regime classifier answers three fundamental questions every bar to determine what makes a grid profitable versus dangerous:
- Is price mean-reverting enough? (good for grid)
- Is directionality/persistence strong? (bad for symmetric grid)
- Is volatility in a “tradable” band? (too low = no fills, too high = inventory blowups)
The system maps combinations of these signals into the four regime classifications.
Rolling Window Configuration:
The analysis uses rolling windows appropriate for the timeframe:
- 1h timeframe: 120-240 bars (5-10 days)
- 4h timeframe: 60-120 bars (10-20 days)
- 15m timeframe: 300-600 bars (3-6 days)
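These defaults can be captured in a small lookup table; the `ROLLING_WINDOWS` constant and `rolling_window_for` helper below are illustrative names, not part of the system's API.

```python
# Illustrative encoding of the rolling-window defaults above.
ROLLING_WINDOWS = {
    '15m': (300, 600),  # 3-6 days of history
    '1h':  (120, 240),  # 5-10 days
    '4h':  (60, 120),   # 10-20 days
}

def rolling_window_for(timeframe, prefer='min'):
    """Pick the shorter (more reactive) or longer (more stable) window."""
    lo, hi = ROLLING_WINDOWS[timeframe]
    return lo if prefer == 'min' else hi
```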
Core Feature Calculations:
A) Trend / Directionality Indicators
Measures directional persistence and trend strength:
- ADX(14): Average Directional Index for trend strength measurement
- Normalized Slope: `|slope(EMA(close, n))| / ATR(14)`, the EMA slope normalized by volatility
- Directional Persistence: Fraction of up bars vs down bars, or a Hurst exponent proxy
Interpretation: High values indicate trend risk for symmetric grids.
B) Mean Reversion Strength Indicators
Measures how strongly price reverts to mean:
- Lag-1 Autocorrelation: Negative autocorrelation indicates mean reversion; positive indicates trend
- OU Half-Life: Ornstein-Uhlenbeck half-life estimate on price deviation from mean (short half-life = strong mean reversion)
- Z-Score Reversion Rate: How quickly the z-score `(close - MA)/stdev` decays after excursions
Interpretation: Strong mean reversion is grid-friendly.
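The first two indicators follow from standard formulas. A self-contained sketch (pure Python, no pandas): `lag1_autocorr` is the sample lag-1 autocorrelation, and `ou_half_life` fits the AR(1) coefficient of deviation changes on lagged deviations, from which the half-life is `-ln(2) / ln(1 + b)`.

```python
import math
import statistics

def lag1_autocorr(returns):
    """Lag-1 autocorrelation: negative values suggest mean reversion."""
    mu = statistics.mean(returns)
    dev = [r - mu for r in returns]
    num = sum(a * b for a, b in zip(dev[:-1], dev[1:]))
    den = sum(d * d for d in dev)
    return num / den if den else 0.0

def ou_half_life(deviations):
    """Ornstein-Uhlenbeck half-life of a deviation-from-mean series.

    Fits d[t] - d[t-1] = b * d[t-1] + e; the AR(1) factor is (1 + b),
    so half-life = -ln(2) / ln(1 + b). Short half-life = strong reversion.
    """
    x = deviations[:-1]
    y = [b - a for a, b in zip(deviations[:-1], deviations[1:])]
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    if b >= 0:
        return math.inf  # no reversion detected
    return -math.log(2) / math.log(1 + b) if b > -1 else 0.0
```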
C) Range Quality (Amplitude vs Drift)
Grids need oscillation around a mean, not just low ADX. Range quality is calculated from the discovered market range, not configured grid bounds.
Range Discovery Process:
1. Identify Support/Resistance:
   - Rolling high/low over window W
   - Price levels with high rejection frequency
   - Bollinger Band boundaries
   - Volume-weighted price clusters
2. Calculate Range Metrics:
   - Range Ratio: `(discovered_high - discovered_low) / ATR`
   - Efficiency Ratio (Kaufman ER): `abs(close - close[-W]) / sum(abs(diff(close)))`; ER near 0 = choppy/range behavior, ER near 1 = straight-line trend
3. Assess Range Quality:
   - How well-defined are the bounds? (rejection frequency)
   - How long has the range been stable?
   - What % of closes fall within the range?
Interpretation: Need “wide enough chop” but not directional movement.
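Kaufman's Efficiency Ratio from the formula above, as a small stand-alone function (illustrative; the production `efficiency_ratio(window)` signature may differ):

```python
def efficiency_ratio(closes, window):
    """Kaufman Efficiency Ratio over the last `window` bars.

    Net displacement divided by total path length:
    near 0 = choppy/range behavior, near 1 = straight-line trend.
    """
    segment = closes[-(window + 1):]
    net_move = abs(segment[-1] - segment[0])
    path = sum(abs(b - a) for a, b in zip(segment[:-1], segment[1:]))
    return net_move / path if path else 0.0
```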
Relationship to Grid Configuration:
The discovered range informs grid proposals but is independent of any existing grid:
- If no grid exists: Propose grid bounds aligned with discovered support/resistance
- If grid exists: Compare discovered range to configured grid bounds
- If price has moved outside configured grid → recommend repositioning
- If discovered range has shifted → recommend new grid bounds
- If discovered range aligns with grid → continue operation
Example Scenarios:
- Discovered Range: 2900-3200, No Grid: Propose grid with bounds 2900-3200
- Discovered Range: 2900-3200, Grid: 2850-3150: Grid bounds reasonable, continue
- Discovered Range: 3100-3400, Grid: 2850-3150: Price has moved up, recommend new grid 3100-3400
- Discovered Range: 2950-3050 (tight), Grid: 2850-3150: Range too tight, classify as RANGE_WEAK
D) Volatility State (Fills vs Blowups)
Measures volatility level and expansion:
- ATR%: `ATR / close`, volatility as a percentage of price
- BB Bandwidth: `(BB_upper - BB_lower) / BB_mid`
- Vol Percentile: Current ATR% compared to the last N bars
Interpretation:
- Too low → RANGE_WEAK (no action, insufficient fills)
- Too high / expanding → TRANSITION (breakout risk)
E) Breakout / Transition Triggers
Grid killers are regime changes. These triggers detect early warning signs:
- Vol Expansion After Compression: Bandwidth percentile jumps upward
- Close Outside Bands + Follow-Through: 2 consecutive closes outside BB/Keltner
- Change-Point on Slope: Slope crosses threshold AND volatility rises
Interpretation: When these fire, classify as TRANSITION even if trend isn’t fully established.
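A sketch of how the three triggers might be combined into a single check. All input names and thresholds (`pctile_jump`, `slope_threshold`) are illustrative assumptions, and the Keltner-channel variant of the follow-through check is omitted for brevity.

```python
def breakout_triggers_fired(closes, bb_upper, bb_lower,
                            bandwidth_pctile, prev_bandwidth_pctile,
                            slope_norm, vol_expanding,
                            slope_threshold=0.5, pctile_jump=30):
    """Return True if any early-warning transition trigger fires."""
    # 1) Vol expansion after compression: bandwidth percentile jumps upward
    vol_after_compression = (bandwidth_pctile - prev_bandwidth_pctile) >= pctile_jump
    # 2) Two consecutive closes outside the bands, on the same side
    outside_up = closes[-1] > bb_upper and closes[-2] > bb_upper
    outside_dn = closes[-1] < bb_lower and closes[-2] < bb_lower
    follow_through = outside_up or outside_dn
    # 3) Change-point on slope: normalized slope crosses threshold AND vol rises
    slope_changepoint = abs(slope_norm) > slope_threshold and vol_expanding
    return vol_after_compression or follow_through or slope_changepoint
```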
Regime Definitions and Classification Rules:
RANGE_OK - Grid Expected to Harvest Mean Reversion
Grid is expected to harvest mean reversion with manageable inventory risk.
Driving Factors:
- Range quality SUFFICIENT (gate condition)
- Trend strength LOW
- Mean reversion HIGH
- Volatility level MODERATE (tradable band)
- Volatility change STABLE (not expanding)
- Grid capacity SAFE (expected move fits within grid)
Classification Logic:
```
RangeQualityScore >= MIN_RANGE_QUALITY AND
TrendScore <= 35 AND
MeanRevScore >= 60 AND
30 <= VolLevelScore <= 70 AND
VolChangeScore < 70 AND
GridCapacity <= MAX_LEVELS_SAFE
```
RANGE_WEAK - Range-y But Edges Are Weak
Looks range-y but edges are weak: insufficient range amplitude, low volatility (no fills), or weak mean reversion.
Common Causes:
- Range quality below threshold (tight range, fee-churning)
- Volatility too low (no fills) or too high (risk)
- Mean reversion signal weak/neutral
- Grid capacity marginal
Classification Logic:
```
(RangeQualityScore < MIN_RANGE_QUALITY) OR
(TrendScore <= 50 AND MeanRevScore < 60) OR
(VolLevelScore < 30 OR VolLevelScore > 70) OR
(GridCapacity > MAX_LEVELS_SAFE AND regime was RANGE_OK)
```
TRANSITION - Unstable / Breakout Risk
Unstable: volatility expanding, breakout risk, or trend forming. Symmetric grid most likely to get run over.
Driving Factors:
- Volatility change HIGH (expansion detected)
- Breakout triggers fired
- Trend score in mid-zone (40-70) and rising
- Grid capacity exceeded
Classification Logic:
```
(VolChangeScore > 70) OR
(breakout_triggers_fired()) OR
(40 <= TrendScore < 70 AND VolChangeScore > 50) OR
(GridCapacity > MAX_LEVELS_SAFE AND regime was RANGE_WEAK)
```
TREND - Directionality Dominates
Directional persistence dominates; mean reversion is not the primary effect.
Driving Factors:
- Trend score HIGH (≥70) with confirmation (3 of 5 bars)
- Mean reversion score LOW
- Efficiency ratio HIGH (straight-line movement)
Classification Logic:
```
TrendScore >= 70 AND
confirmed_for_3_of_5_bars() AND
MeanRevScore < 40
```
Scoring Approach:
Instead of brittle thresholds, the system builds normalized scores (0-100) using rank/percentile transforms for scale invariance and cross-asset portability.
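The rank transform underpinning every score can be a plain empirical percentile. The document's `percentile_rank(value, window)` is sketched below with the rolling history passed explicitly; the midpoint convention for ties and the neutral fallback of 50 are assumptions.

```python
def percentile_rank(value, history):
    """Percentile rank (0-100) of `value` within a rolling history.

    Rank transforms make raw indicators scale-invariant, so the same
    thresholds can be reused across assets and timeframes.
    """
    if not history:
        return 50.0  # neutral when no history is available
    below = sum(1 for h in history if h < value)
    equal = sum(1 for h in history if h == value)
    # Midpoint convention for ties
    return 100.0 * (below + 0.5 * equal) / len(history)
```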
Feature Aggregation Rules:
To handle correlated indicators, the system uses explicit aggregation:
TrendScore Calculation:
```python
# Use weighted rank to combine correlated trend indicators
trend_indicators = {
    'ADX': ADX(14),
    'ER': efficiency_ratio(window),
    'slope_norm': abs(slope(EMA(close, n))) / ATR(14)
}

# Convert to percentile ranks (0-100) over rolling window
trend_ranks = {k: percentile_rank(v, window) for k, v in trend_indicators.items()}

# Weighted combination (weights sum to 1.0)
TrendScore = (
    0.40 * trend_ranks['ADX'] +
    0.35 * trend_ranks['ER'] +
    0.25 * trend_ranks['slope_norm']
)
```

MeanRevScore Calculation:
```python
# Use median to reduce noise from correlated mean reversion indicators
meanrev_indicators = {
    'neg_autocorr': -autocorr(returns, lag=1),  # Negative autocorr = mean reversion
    'inv_half_life': 1.0 / ou_half_life(price_deviation),
    'z_decay': z_score_reversion_rate(close, window)
}

# Convert to percentile ranks
meanrev_ranks = {k: percentile_rank(v, window) for k, v in meanrev_indicators.items()}

# Use median to reduce impact of outliers
MeanRevScore = median(meanrev_ranks.values())
```

VolatilityLevel and VolatilityChange (Split Volatility Roles):
Volatility serves two distinct purposes and must be separated:
```python
# VolLevelScore: Measures absolute volatility level (gates tradability)
vol_level_indicators = {
    'atr_pct': ATR(14) / close,
    'bb_bandwidth': (BB_upper - BB_lower) / BB_mid
}
vol_level_ranks = {k: percentile_rank(v, window) for k, v in vol_level_indicators.items()}
VolLevelScore = mean(vol_level_ranks.values())

# VolChangeScore: Measures volatility expansion/compression (detects regime change)
vol_change_indicators = {
    'vol_expansion': current_ATR / baseline_ATR,  # baseline = median(ATR, long_window)
    'bandwidth_delta': (current_bandwidth - prev_bandwidth) / prev_bandwidth,
    'vol_percentile_change': percentile_rank(ATR, window) - percentile_rank(ATR[-5], window)
}
vol_change_ranks = {k: percentile_rank(v, window) for k, v in vol_change_indicators.items()}
VolChangeScore = mean(vol_change_ranks.values())
```

RangeQuality (Gate, Not Just Signal):
Range quality acts as a gate to prevent tight, fee-churning grids. It’s calculated from discovered market ranges, not configured grid bounds.
```python
# Step 1: Discover range from market data
discovered_range = discover_support_resistance(
    prices=close_prices,
    window=window,
    method='ROLLING_HIGHLOW'  # or BB_BANDS, VOLUME_PROFILE
)

# Step 2: Calculate range quality metrics
range_indicators = {
    'range_ratio': (discovered_range.high - discovered_range.low) / ATR,
    'oscillation_width': (BB_upper - BB_lower) / close,
    'range_integrity': calculate_integrity(prices, discovered_range),  # % closes inside
    'range_duration': discovered_range.duration_hours
}

# Step 3: Convert to percentile ranks
range_ranks = {k: percentile_rank(v, window) for k, v in range_indicators.items()}

# Step 4: Aggregate into RangeQualityScore
RangeQualityScore = mean(range_ranks.values())

# Minimum range threshold (calibrated per asset/timeframe)
MIN_RANGE_QUALITY = 30  # percentile threshold

# Gate logic
if RangeQualityScore < MIN_RANGE_QUALITY:
    # Discovered range is too tight or poorly defined
    # This prevents grids in flat, fee-churning markets
    return RANGE_WEAK
```

Key Point: The discovered range is market-driven and independent of any configured grid. A tight discovered range (e.g., 2950-3050) will trigger RANGE_WEAK classification even if a wider grid (2850-3150) is configured.
GridCapacity (Grid-Aware Safety Check):
Links market movement to grid geometry for safety:
```python
# Expected market movement
ExpectedMove = max(
    2.5 * ATR,                       # ATR-based estimate
    0.9 * (BB_upper - BB_lower) / 2  # Bollinger band half-width
)

# Grid spacing from proposal logic
GridSpacing = k_s * ATR  # k_s depends on regime

# How many grid levels would be hit by expected move?
GridCapacity = ExpectedMove / GridSpacing

# Safety threshold (e.g., 8-10 levels)
MAX_LEVELS_SAFE = 10

# If capacity exceeds safe threshold, downgrade regime
if GridCapacity > MAX_LEVELS_SAFE:
    # Downgrade by one notch: RANGE_OK → RANGE_WEAK, RANGE_WEAK → TRANSITION
    regime = downgrade_regime(regime)
```

Regime Mapping Rules (Refined):
The classification uses a hierarchical decision tree with gates:
```python
# Step 1: Check range quality gate
if RangeQualityScore < MIN_RANGE_QUALITY:
    return RANGE_WEAK  # Insufficient range amplitude

# Step 2: Check volatility change for regime transitions
if VolChangeScore > 70 or breakout_triggers_fired():
    return TRANSITION  # Volatility expanding or breakout detected

# Step 3: Check trend dominance
if TrendScore >= 70 and confirmed_for_3_of_5_bars():
    return TREND  # Strong directional persistence

# Step 4: Evaluate range conditions
if TrendScore <= 35 and MeanRevScore >= 60:
    # Check volatility level for tradability
    if 30 <= VolLevelScore <= 70:
        regime = RANGE_OK
    else:
        regime = RANGE_WEAK  # Vol too low (no fills) or too high (risk)
else:
    # Trend not low enough or mean reversion not strong enough
    if TrendScore <= 50:
        regime = RANGE_WEAK  # Weak conditions
    else:
        regime = TRANSITION  # Trend forming but not confirmed

# Step 5: Apply grid capacity safety check
if GridCapacity > MAX_LEVELS_SAFE:
    regime = downgrade_regime(regime)

return regime
```

Key Improvements:
- Scale Invariance: Percentile ranks work across assets and timeframes
- Explicit Aggregation: Weighted combinations and medians handle correlated indicators
- Separated Volatility Roles: VolLevel gates tradability, VolChange detects transitions
- Range Quality Gate: Prevents tight-range false positives
- Grid-Aware Safety: GridCapacity links market movement to grid geometry
Hysteresis Rules (Prevent Regime Flapping):
Grid logic requires stability. The system implements:
- Confirmation Requirements: TREND needs 3/5 bars meeting condition before classification
- Exit Thresholds: To leave TREND, TrendScore must fall below 50 (not 70) - asymmetric thresholds
- Minimum Dwell Time: Once entering TREND, stay at least N bars unless emergency trigger fires
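The three rules above can be sketched as a small state machine for the TREND regime. Parameter names and the `min_dwell` default below are illustrative assumptions; the `emergency` flag stands in for the emergency triggers mentioned above.

```python
from collections import deque

class TrendHysteresis:
    """Illustrative hysteresis for TREND: 3-of-5 bar entry confirmation,
    asymmetric exit threshold (50 vs 70 entry), and minimum dwell time."""

    def __init__(self, enter=70, exit_level=50, confirm=3, lookback=5, min_dwell=6):
        self.enter = enter
        self.exit_level = exit_level
        self.confirm = confirm
        self.min_dwell = min_dwell
        self.recent = deque(maxlen=lookback)  # flags: bar met entry condition
        self.in_trend = False
        self.bars_in_trend = 0

    def update(self, trend_score, emergency=False):
        self.recent.append(trend_score >= self.enter)
        if not self.in_trend:
            if sum(self.recent) >= self.confirm:  # e.g. 3 of the last 5 bars
                self.in_trend = True
                self.bars_in_trend = 0
        else:
            self.bars_in_trend += 1
            dwell_met = self.bars_in_trend >= self.min_dwell
            # Exit uses the lower threshold; emergency triggers bypass dwell
            if emergency or (dwell_met and trend_score < self.exit_level):
                self.in_trend = False
        return self.in_trend
```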
Regime Downgrade Mechanism:
When GridCapacity exceeds safety thresholds, the system downgrades the regime by one notch:
```python
def downgrade_regime(current_regime):
    """Downgrade regime when grid capacity exceeds safe levels"""
    downgrade_map = {
        'RANGE_OK': 'RANGE_WEAK',
        'RANGE_WEAK': 'TRANSITION',
        'TRANSITION': 'TREND',  # Already most conservative
        'TREND': 'TREND'        # Cannot downgrade further
    }
    return downgrade_map[current_regime]
```

This ensures that even if market conditions appear favorable, excessive expected movement relative to grid spacing triggers more conservative behavior.
Performance-Grounded Tuning:
Thresholds are tuned against outcome metrics:
- RANGE_OK: Maximize fills + profit per unit time, minimize inventory drawdown
- RANGE_WEAK: Minimize churn and fees, minimize low fill rate
- TRANSITION: Minimize “caught trend” inventory losses
- TREND: Minimize being in grid at all (or force one-sided/wide if must trade)
Conservative Classification Philosophy:
The detector is conservative about calling RANGE_OK. The expensive error is calling RANGE_OK during TRANSITION/TREND, which leads to inventory losses. False negatives (missing RANGE_OK opportunities) are less costly than false positives (running grids in trends).
Classification Summary:
The regime classifier uses a multi-stage decision process:
- Feature Calculation: Compute raw indicators (ADX, ER, autocorr, ATR, etc.)
- Rank Transformation: Convert to percentile ranks for scale invariance
- Score Aggregation: Combine correlated indicators using weighted averages or medians
- Gate Checks: Apply RangeQuality gate to filter out tight ranges
- Volatility Split: Separate VolLevel (tradability) from VolChange (regime transition)
- Hierarchical Classification: Apply decision tree with explicit logic
- Grid Safety Check: Compute GridCapacity and downgrade if unsafe
- Hysteresis: Apply confirmation and dwell time rules
This approach provides:
- Robustness: Rank transforms handle outliers and scale differences
- Clarity: Explicit aggregation rules prevent unstable scores
- Safety: Grid-aware checks prevent dangerous configurations
- Tunability: Percentile thresholds transfer across assets/timeframes
Competing Verdicts Calculation:
The Regime Engine calculates competing verdicts to provide transparency into alternative regime classifications that were considered but not selected as the primary verdict.
Requirements:
- Exclusion Rule: The primary verdict MUST NOT appear in the competing verdicts list (Requirements 15.10)
- Dynamic Calculation: Competing verdicts MUST be calculated dynamically based on current market analysis, not hardcoded (Requirements 15.12)
- Supporting Data: Each competing verdict MUST include supporting_data with numerical evidence (Requirements 15.9)
- Range Bounds: Range-based verdicts (RANGE_OK, RANGE_WEAK) MUST include range bounds in supporting_data (Requirements 15.11)
Calculation Process:
1. Evaluate All Four Regimes: For each market analysis, calculate scores for all four possible verdicts (RANGE_OK, RANGE_WEAK, TRANSITION, TREND)
2. Select Primary Verdict: The verdict with the highest score becomes the primary verdict
3. Identify Competing Verdicts: All other verdicts with scores above a threshold (e.g., 0.15) become competing verdicts
4. Calculate Weights: Normalize competing verdict scores to weights that sum to less than 1.0 (primary verdict has implicit weight)
5. Generate Reasons: For each competing verdict, identify the key factors that support that classification
6. Include Supporting Data: Attach relevant numerical metrics that support each competing verdict
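The selection and exclusion rules can be sketched as follows. The threshold is a configurable parameter; note that the worked example that follows treats a 0.10 score as competing, which implies an effective threshold at or below 0.10 rather than the 0.15 mentioned above.

```python
def select_verdicts(scores, threshold=0.15):
    """Split regime scores into a primary verdict and competing verdicts.

    `scores` maps each of the four regimes to a score. The primary
    verdict is the maximum and must never reappear in the competing
    list (Requirements 15.10).
    """
    primary = max(scores, key=scores.get)
    competing = {
        v: s for v, s in scores.items()
        if v != primary and s >= threshold
    }
    return primary, competing
```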
Example Calculation:
Given market analysis with:
- Range integrity: 0.65 (moderate)
- Volatility: 0.03 (slightly elevated)
- Price: 3.5% outside upper grid bound
- Trend strength: 0.25 (weak trend)
Verdict scores:
- TRANSITION: 0.55 (highest - selected as primary)
- RANGE_WEAK: 0.30 (competing)
- TREND: 0.10 (competing)
- RANGE_OK: 0.05 (below threshold, excluded)
Resulting output:
```yaml
verdict: TRANSITION
confidence: 0.55
competing_verdicts:
  - verdict: RANGE_WEAK
    weight: 0.30
    reasons: ["Moderate range integrity", "Price near grid bounds"]
    supporting_data:
      range_integrity: 0.65
      upper_bound: 315000
      lower_bound: 285000
      distance_from_bounds_pct: 3.5
      within_grid_bounds: false
  - verdict: TREND
    weight: 0.10
    reasons: ["Price outside grid bounds", "Weak upward momentum"]
    supporting_data:
      trend_strength: 0.25
      price_distance_from_upper: 3.5
      bound_breach: "upper"
```

Supporting Data Requirements by Verdict Type:
- RANGE_OK / RANGE_WEAK: MUST include `upper_bound`, `lower_bound`, `range_integrity`, `within_grid_bounds`
- TRANSITION: MUST include `bound_breach` (if applicable), `distance_from_bounds_pct`, volatility metrics
- TREND: MUST include `trend_strength`, `trend_direction`, distance from bounds
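These requirements lend themselves to a light validation helper of the sort a test suite might use. The `REQUIRED_SUPPORTING_DATA` table below is a hypothetical encoding of the bullets above; TREND's "distance from bounds" and TRANSITION's volatility metrics are left out of the required sets because their exact key names are not specified here.

```python
# Required supporting_data keys per verdict type (illustrative encoding)
REQUIRED_SUPPORTING_DATA = {
    'RANGE_OK':   {'upper_bound', 'lower_bound', 'range_integrity', 'within_grid_bounds'},
    'RANGE_WEAK': {'upper_bound', 'lower_bound', 'range_integrity', 'within_grid_bounds'},
    'TRANSITION': {'distance_from_bounds_pct'},  # plus bound_breach if applicable
    'TREND':      {'trend_strength', 'trend_direction'},
}

def missing_supporting_data(verdict, supporting_data):
    """Return the required keys absent from a verdict's supporting_data."""
    required = REQUIRED_SUPPORTING_DATA.get(verdict, set())
    return required - set(supporting_data)
```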
Confidence Score Calculation:
The confidence score measures the reliability of the regime classification based on regime stability and consistency, with conservative calibration that reflects the inherent uncertainty of early range detection, NOT profit predictions or fee estimates (Requirements 3.7).
Requirements:
- Regime-Based: Confidence MUST be based on regime consistency, range integrity, volatility stability, and time persistence (Requirements 3.1)
- Range Integrity: MUST consider percentage of closes inside discovered range, boundary violations, and range duration (Requirements 3.2)
- Mean Reversion: MUST measure speed of reversion from extremes and oscillation frequency (Requirements 3.3)
- Volatility Regime: MUST evaluate stability vs expansion and volatility symmetry (Requirements 3.4)
- Time Consistency: MUST assess agreement across recent candles and lack of rapid regime flipping (Requirements 3.5)
- Risk Reduction Only: Confidence MAY reduce risk exposure but MUST NEVER increase exposure or override safety constraints (Requirements 3.6)
- No Profit Optimization: Confidence MUST NEVER incorporate profit estimates, fee estimates, or historical PnL optimization (Requirements 3.7)
- Conservative Calibration: MUST apply penalties for early ranges, position risk, maturity, and narrow spacing (Requirements 3.8-3.11)
- Realistic Bounds: MUST ensure typical RANGE_OK scores fall between 0.60-0.85, capped at 0.95 (Requirements 3.12-3.13)
Calculation Components:
The confidence score is calculated as a weighted combination of four stability factors, then adjusted by conservative penalty factors:
1. Range Integrity Component (0-1.0):
   - Percentage of closes inside discovered range (target: >85%)
   - Number of boundary violations (fewer is better)
   - Range duration (longer is more stable)
   - Support/resistance strength at discovered levels
2. Mean Reversion Component (0-1.0):
   - Reversion strength from extremes
   - Average reversion time (faster is more predictable)
   - Oscillation frequency (consistent oscillation is grid-friendly)
   - Extreme rejection rate (how often price bounces from bounds)
3. Volatility Stability Component (0-1.0):
   - Volatility expansion ratio (closer to 1.0 is more stable)
   - Volatility symmetry (balanced up/down moves)
   - Distance from baseline ATR
   - Volatility state (STABLE > CONTRACTING > EXPANDING)
4. Time Consistency Component (0-1.0):
   - Regime agreement percentage across recent candles
   - Number of regime flips in 1h and 4h windows (fewer is better)
   - Stability score (how long current regime has persisted)
Conservative Penalty Factors:
After calculating the base confidence, apply penalty factors to ensure realistic calibration:
- Time-Based Penalty: For range durations < 36 hours, apply penalty of 0.85-0.95 multiplier
- Position-Based Penalty: When price is in upper/lower half of range, apply penalty of 0.90-0.95 multiplier
- Maturity Penalty: For first range after trend, apply additional penalty of 0.85-0.90 multiplier
- Spacing-Based Penalty: For narrow grids (< 2% of range width), apply penalty of 0.80-0.90 multiplier
Calculation Formula:
```python
def calculate_confidence_score(verdict, analysis, grid_config=None):
    """
    Calculate confidence score with conservative calibration
    Returns: float between 0.0 and 1.0, typically 0.60-0.85 for RANGE_OK
    """
    # Extract components from analysis
    range_integrity = calculate_range_integrity_score(analysis)
    mean_reversion = calculate_mean_reversion_score(analysis)
    volatility_stability = calculate_volatility_stability_score(analysis)
    time_consistency = calculate_time_consistency_score(analysis)

    # Verdict-specific weighting
    if verdict == "RANGE_OK":
        weights = {
            'range': 0.35,
            'mean_rev': 0.30,
            'vol_stability': 0.20,
            'time': 0.15
        }
        base_confidence = (
            weights['range'] * range_integrity +
            weights['mean_rev'] * mean_reversion +
            weights['vol_stability'] * volatility_stability +
            weights['time'] * time_consistency
        )
    elif verdict == "RANGE_WEAK":
        # Similar calculation with adjusted weights
        base_confidence = calculate_range_weak_confidence(...)
    elif verdict == "TREND":
        # Similar calculation with trend-specific logic
        base_confidence = calculate_trend_confidence(...)
    elif verdict == "TRANSITION":
        # Similar calculation with transition-specific logic
        base_confidence = calculate_transition_confidence(...)

    # Apply conservative penalty factors
    penalty_multiplier = 1.0

    # Time-based penalty for early ranges
    range_duration_hours = analysis.get('range_duration_hours', 0)
    if range_duration_hours < 36:
        time_penalty = 0.85 + (range_duration_hours / 36) * 0.10  # 0.85-0.95
        penalty_multiplier *= time_penalty

    # Position-based penalty for range extremes
    range_position = analysis.get('range_position', 0.5)  # 0.0=bottom, 1.0=top
    if range_position < 0.3 or range_position > 0.7:
        position_penalty = 0.90 + abs(0.5 - range_position) * 0.10  # 0.90-0.95
        penalty_multiplier *= position_penalty

    # Maturity penalty for first range after trend
    if analysis.get('first_range_after_trend', False):
        maturity_penalty = 0.87  # Fixed penalty for new ranges
        penalty_multiplier *= maturity_penalty

    # Spacing-based penalty for narrow grids
    if grid_config:
        range_width = analysis.get('range_width', 0)
        grid_spacing = grid_config.get('spacing_percent', 0)
        if range_width > 0 and grid_spacing / range_width < 0.02:  # < 2% of range
            spacing_penalty = 0.80 + (grid_spacing / range_width) * 5.0  # 0.80-0.90
            penalty_multiplier *= min(0.90, spacing_penalty)

    # Apply penalties and bounds
    adjusted_confidence = base_confidence * penalty_multiplier

    # Hard cap at 0.95 to prevent unrealistic certainty
    confidence = max(0.0, min(0.95, adjusted_confidence))

    return confidence
```

Realistic Confidence Ranges (After Penalties):
- RANGE_OK: Typically 0.60-0.85 (conservative calibration, rarely exceeds 0.85)
- RANGE_WEAK: Typically 0.40-0.70 (moderate confidence with penalties applied)
- TRANSITION: Typically 0.30-0.60 (lower confidence, inherently uncertain)
- TREND: Typically 0.50-0.80 (high confidence when trend is established, but capped)
Confidence Score Bounds:
The confidence score MUST be bounded to prevent unrealistic values and ensure conservative calibration:
- Hard Bounds: Always between 0.0 and 1.0
- Realistic Upper Limit: Hard capped at 0.95 (near-perfect certainty is unrealistic)
- Conservative Calibration: Typical RANGE_OK scores 0.60-0.85 after penalty application
- Floating Point Precision: Avoid values like 0.9999999999999999 that indicate calculation errors
Common Issues to Avoid:
- Approaching 1.0: If confidence approaches 1.0 (e.g., 0.9999999999999999), this indicates a calculation bug
- Hardcoded Values: Confidence MUST be calculated dynamically from market data, never hardcoded
- Profit Conflation: Confidence MUST NOT incorporate profit expectations or fee calculations
- Over-Optimistic Calibration: Early ranges should not produce confidence > 0.85 without extensive validation
Interface:
```yaml
# Enhanced Regime Classification Example
regime_classification:
  verdict: RANGE_OK
  confidence: 0.78
  strength: STRONG

  # Detailed breakdown of analysis components
  range_analysis:
    # DISCOVERED range from market data (not configured grid bounds)
    upper_bound: 320000  # Price * 100 - discovered resistance level
    lower_bound: 290000  # Price * 100 - discovered support level

    # Discovery method and confidence
    discovery_method: ROLLING_HIGHLOW  # or BB_BANDS, VOLUME_PROFILE, PRICE_CLUSTERS
    range_confidence: 0.85             # How well-defined the range is

    # Range quality metrics
    range_integrity: 0.85          # How well price respects the discovered bounds
    closes_inside_range_pct: 88.3  # % of recent closes within discovered range
    boundary_violations: 2         # Number of closes outside discovered range
    range_duration_hours: 18       # How long this range has been stable

    # Support/resistance strength at discovered levels
    support_strength: 0.82     # Strength of discovered support (rejection frequency)
    resistance_strength: 0.79  # Strength of discovered resistance (rejection frequency)

    # Current price relative to discovered range
    current_price: 305000     # Price * 100
    price_position_pct: 50.0  # Position within discovered range (0=support, 100=resistance)

  mean_reversion:
    reversion_strength: 0.80
    avg_reversion_time_minutes: 45
    oscillation_frequency: 0.65
    extreme_rejection_rate: 0.75

  volatility_metrics:
    atr_1h: 1525  # ATR * 100
    volatility_state: STABLE
    volatility_expansion_ratio: 0.95
    volatility_symmetry: 0.88
    baseline_atr: 1600

  time_consistency:
    regime_agreement_pct: 85.7
    regime_flips_1h: 0
    regime_flips_4h: 1
    stability_score: 0.82

  competing_verdicts:
    - verdict: RANGE_WEAK
      weight: 0.22
      reasons: ["Slightly elevated volatility", "One boundary test"]
      supporting_data:
        volatility_expansion_ratio: 1.05
        boundary_test_count: 1
        resistance_strength: 0.79
        volatility_state: "SLIGHTLY_ELEVATED"
    - verdict: TRANSITION
      weight: 0.05
      reasons: ["Minor volume spike at resistance"]
      supporting_data:
        volume_spike_ratio: 1.15
        resistance_touches_1h: 1
        price_distance_to_resistance: 50  # Price * 100
        volume_24h_percentile: 0.72

  decision_factors:
    primary_drivers:
      - factor: "Strong range integrity with clear support/resistance"
        supporting_data:
          range_integrity: 0.85
          support_strength: 0.82
          resistance_strength: 0.79
          closes_inside_range_pct: 88.3
      - factor: "Consistent mean reversion behavior"
        supporting_data:
          reversion_strength: 0.80
          avg_reversion_time_minutes: 45
          extreme_rejection_rate: 0.75
      - factor: "Stable volatility regime within baseline parameters"
        supporting_data:
          volatility_expansion_ratio: 0.95
          volatility_symmetry: 0.88
          volatility_state: "STABLE"
    supporting_factors:
      - factor: "Extended range duration provides confidence"
        supporting_data:
          range_duration_hours: 18
          boundary_violations: 2
      - factor: "Majority of closes remain within established bounds"
        supporting_data:
          closes_inside_range_pct: 88.3
          total_candles_analyzed: 24
      - factor: "Low regime flip frequency indicates stability"
        supporting_data:
          regime_flips_1h: 0
          regime_flips_4h: 1
          stability_score: 0.82
          regime_agreement_pct: 85.7
    risk_factors:
      - factor: "Slight volatility elevation above baseline"
        supporting_data:
          current_atr: 1525
          baseline_atr: 1600
          volatility_expansion_ratio: 0.95
      - factor: "Recent boundary test at resistance level"
        supporting_data:
          resistance_touches_1h: 1
          resistance_strength: 0.79
          last_touch_minutes_ago: 23

  verdict_rationale: "Range structure remains intact (85% integrity) with strong mean reversion (80% strength) and stable volatility (95% of baseline). Minor elevation in volatility and single boundary test insufficient to override strong structural indicators supporting RANGE_OK verdict."
```

Grid State Management
The system determines grid state (running/stopped) from the history array in the grid configuration, providing a single source of truth for grid status.
History-Based State Determination:
def is_grid_running(grid_config):
    """
    Determine if the grid is running based on the history array
    Returns: bool - True if the grid is running, False if stopped
    """
    history = grid_config.get('history', [])
    if not history:
        return False  # No history = stopped
    # Get the most recent history entry
    latest_entry = history[-1]
    # Check if the latest entry has an enabled timestamp without a disabled timestamp
    has_enabled = 'enabled' in latest_entry
    has_disabled = 'disabled' in latest_entry
    if has_enabled and not has_disabled:
        return True  # Grid is running
    else:
        return False  # Grid is stopped

Grid State Examples:
# Running Grid (enabled without disabled)
grid_running_example:
  id: "eth-v3"
  symbol: "ETH/USDT"
  history:
    - enabled: 2026-01-08T22:56:54Z
      capital_allocated: 750
      exchange_id: 1767788577226065
# is_grid_running() returns True

# Stopped Grid (enabled with disabled)
grid_stopped_example:
  id: "eth-v2"
  symbol: "ETH/USDT"
  history:
    - enabled: 2026-01-08T18:48:16Z
      disabled: 2026-01-08T22:53:28Z
      capital_allocated: 299
      exchange_id: 1767788577225743
# is_grid_running() returns False

# No History Grid
grid_no_history_example:
  id: "btc-v1"
  symbol: "BTC/USDT"
  history: []
# is_grid_running() returns False

Recommendation Logic Integration:
The recommendation generator uses is_grid_running() to determine appropriate actions:
- Running Grid: Generate GRID_MANAGE recommendations (RUN_GRID, STOP_GRID, etc.)
- Stopped Grid: Generate GRID_SETUP recommendations (CREATE_GRID) when conditions are favorable
- No Action: When stopped grid encounters unfavorable conditions (already safe)
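The mapping above can be expressed as a small helper. This is a hypothetical sketch (`select_action` is an illustrative name); the real generator additionally layers on confidence overrides, timeouts, and repositioning checks:

```python
def select_action(verdict, grid_running):
    """Map a regime verdict and grid state to a recommendation action.

    Sketch of the rules above; returns None when a stopped grid meets
    unfavorable conditions (already safe, no action needed).
    """
    if grid_running:
        return {
            'RANGE_OK': 'RUN_GRID',
            'RANGE_WEAK': 'RUN_GRID_NO_SCALE',
            'TRANSITION': 'STOP_GRID',
            'TREND': 'STOP_GRID',
        }[verdict]
    # Stopped grid: only favorable conditions yield an actionable setup
    if verdict in ('RANGE_OK', 'RANGE_WEAK'):
        return 'CREATE_GRID'
    return None  # already safe - informational analysis only
```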
Metrics Collection Integration:
The metrics collector uses history-based state for accurate reporting:
def collect_grid_metrics(grid_config, analyzed_period):
    """
    Collect grid metrics using history-based state determination
    """
    # Determine if the grid was running during the analyzed period
    was_running = grid_was_running_during_period(
        grid_config['history'],
        analyzed_period
    )
    return {
        'grid_id': grid_config['id'],
        'status': 'ACTIVE' if was_running else 'STOPPED',
        'grid_running': was_running,
        # ... other metrics
    }

This approach eliminates the need for a separate enabled field and ensures consistent grid state determination across all system components.
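The `grid_was_running_during_period` helper is referenced but not defined in this section. One plausible sketch, assuming history entries carry datetime values and `analyzed_period` is a `(start, end)` tuple:

```python
from datetime import datetime

def grid_was_running_during_period(history, period):
    """Return True if any enabled interval in history overlaps the period.

    Sketch only: assumes 'enabled'/'disabled' are datetimes and that a
    missing 'disabled' means the grid is still running (open-ended).
    """
    start, end = period
    for entry in history:
        if 'enabled' not in entry:
            continue
        run_start = entry['enabled']
        run_end = entry.get('disabled', end)  # still running -> treat as open-ended
        # Standard interval-overlap test
        if run_start <= end and run_end >= start:
            return True
    return False
```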
Exchange Interface
Abstract interface for cryptocurrency exchange integration, with KuCoin as the initial implementation.
Interface:
# Account Balance Example
account_balance:
  total_usdt: 1000.00
  available_usdt: 950.00
  locked_usdt: 50.00
  reserve_required: 30.00      # 3% of total
  deployable_capital: 920.00

# Market Data Example
market_data:
  symbol: ETH/USDT
  timeframe: 1h
  source: KUCOIN
  timestamp: 2025-01-18T12:00:00Z
  candles:
    - timestamp: 2025-01-18T11:00:00Z
      open: 3100.50
      high: 3120.75
      low: 3095.25
      close: 3115.00
      volume: 1250.75

# Trade History Example
trade:
  id: trade_12345
  grid_id: eth-primary
  symbol: ETH/USDT
  side: BUY
  quantity: 0.1
  price: 3100.00
  fee: 0.31                    # Actual fee from exchange
  timestamp: 2025-01-18T12:05:00Z

Recommendation Generator
Generates GRID_MANAGE and GRID_SETUP recommendations based on regime classifications and system state.
Decision Logic:
- RANGE_OK → RUN_GRID (default)
- RANGE_WEAK → RUN_GRID_NO_SCALE (default)
- TRANSITION → STOP_GRID (default, 30min timeout)
- TREND → STOP_GRID (default, 15min timeout)
Confidence Override:
- If confidence < safety_threshold → escalate to STOP_GRID
- Confidence never escalates exposure
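The override rule can be expressed directly. A minimal sketch, where `safety_threshold` stands in for the configured value:

```python
def apply_confidence_override(action, confidence, safety_threshold=0.55):
    """Escalate to STOP_GRID when confidence falls below the safety threshold.

    Confidence never escalates exposure, so the only possible change
    is from an exposure-bearing action toward the safer STOP_GRID.
    """
    if confidence < safety_threshold and action in ('RUN_GRID', 'RUN_GRID_NO_SCALE'):
        return 'STOP_GRID'
    return action
```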
Grid State Awareness: The recommendation generator considers the grid's state, determined from the history array, when generating actionable recommendations:

- Grid Stopped + Favorable Conditions (RANGE_OK/RANGE_WEAK):
  - Recommendation: "Create new grid" or "Consider creating grid"
  - Rationale: Market conditions are suitable for grid trading
  - Action: User can create a new grid with current market conditions
- Grid Stopped + Unfavorable Conditions (TRANSITION/TREND):
  - Recommendation: None (informational analysis only)
  - Rationale: Grid is already in a safe state (stopped)
  - Action: No action needed
- Grid Running + Price Outside Bounds:
  - Recommendation: "Stop current grid and create new grid with updated bounds"
  - Rationale: Grid is no longer effective at current price levels
  - Action: Stop the existing grid, propose a new grid with bounds aligned to the current market
- Grid Running + Favorable Conditions:
  - Recommendation: "Continue running grid" or "Run grid without scaling"
  - Rationale: Grid is operating in suitable market conditions
  - Action: Maintain current grid operation
- Grid Running + Unfavorable Conditions:
  - Recommendation: "Stop grid"
  - Rationale: Market conditions unsuitable for grid trading
  - Action: Stop grid operation
Interface:
# Grid Manage Recommendation Example
recommendation:
  id: rec_2025-01-18T12-00_ETH_1h
  type: GRID_MANAGE
  verdict: RANGE_OK
  action: RUN_GRID
  confidence: 0.78
  strength: STRONG   # UI affordance: STRONG (>0.75), MODERATE (0.55-0.75), WEAK (<0.55)
  timeout: null      # No timeout for RANGE_OK
  justification:
    - "Price remains within established range"
    - "Mean reversion intact with 0.80 strength"
    - "Volatility stable at baseline levels"
  why_now: "Range integrity confirmed at 0.85 with stable volatility and strong mean reversion signals."

# Grid Setup Recommendation Example
grid_setup_recommendation:
  id: rec_2025-01-18T14-00_ETH_setup
  type: GRID_SETUP
  verdict: RANGE_OK
  action: CREATE_GRID
  confidence: 0.82
  strength: STRONG
  validity_window: 2025-01-18T16:00:00Z
  grid_specification:
    grid_id: eth-primary
    symbol: ETH/USDT
    levels: 7
    spacing_method: EVEN
    upper_bound: 3200.00
    lower_bound: 2900.00
  stop_condition:
    trigger_logic: "2x 1h closes below 2900"
    execution_mode: KEEP_ASSETS
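The `strength` field shown in these examples follows the thresholds noted in the comment above (STRONG >0.75, MODERATE 0.55-0.75, WEAK <0.55) and could be derived mechanically, as in this sketch:

```python
def strength_label(confidence):
    """Map a confidence score to the UI strength affordance."""
    if confidence > 0.75:
        return 'STRONG'
    if confidence >= 0.55:
        return 'MODERATE'
    return 'WEAK'
```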
### Probationary Grid Management
The system implements conservative probationary grid behavior when confidence is moderate or ranges are newly detected, allowing validation of mean reversion behavior without excessive risk exposure.
**Probationary Grid Triggers:**
Probationary grids are recommended when:
- Confidence scores fall between 0.60-0.80 (moderate confidence)
- Range duration is less than 36-48 hours (early detection)
- First range detected after a trend period (maturity concerns)
- Grid spacing is narrow relative to range width (< 2% of range)
**Probationary Grid Parameters:**
When probationary behavior is triggered, the system applies conservative parameters:
```yaml
# Probationary Grid Configuration
probationary_grid:
  capital_allocation: 0.6      # 50-60% of normal allocation (restart_risk_scaling)
  grid_levels: 5               # Reduced from the normal 8-10 levels
  spacing_multiplier: 0.50     # Wider spacing: 0.45-0.55 × ATR vs normal
  grid_type: "PROBE"           # Marked as a probe grid, not a commitment grid

  # Validation criteria for scaling up
  validation_requirements:
    minimum_duration_hours: 36
    boundary_rejections_required: 2   # Both upper and lower bounds
    inventory_oscillation: true       # No persistent bias
    confidence_stability: true        # Confidence remains stable, not rising sharply

  # Quick stop criteria (a feature, not a failure)
  stop_triggers:
    one_sided_inventory: 0.7   # > 70% inventory on one side
    volatility_expansion: 1.2  # > 20% volatility increase
    range_breach: true         # Clean break of discovered range bounds
```

**Probationary to Normal Grid Transition:**
The system transitions from probationary to normal grid parameters when:
- Time Validation: Grid operates successfully for 36-48+ hours
- Range Validation: Upper and lower bounds receive multiple clean rejections
- Inventory Validation: Inventory oscillates without persistent bias
- Confidence Validation: Confidence remains stable without sharp increases
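The transition check can be sketched as a single predicate, assuming a `grid_stats` dict carrying the four validation signals (field names here are illustrative, not a defined API):

```python
def probationary_validation_passed(grid_stats):
    """Check the four probation-exit criteria described above."""
    return bool(
        grid_stats['runtime_hours'] >= 36            # time validation (36-48+ hours)
        and grid_stats['upper_rejections'] >= 2      # range validation: both bounds
        and grid_stats['lower_rejections'] >= 2      # rejected multiple times
        and abs(grid_stats['inventory_bias']) <= 0.3 # oscillation without persistent bias
        and grid_stats['confidence_stable']          # no sharp confidence rise
    )
```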
Probationary Grid Recommendations:
# Example Probationary Grid Recommendation
probationary_recommendation:
  id: rec_2025-01-18T17-00_ETH_probationary
  type: GRID_SETUP
  verdict: RANGE_OK
  action: CREATE_GRID
  confidence: 0.72             # Moderate confidence triggers probationary mode
  grid_mode: PROBATIONARY
  justification:
    - "Range detected but only 18 hours duration"
    - "Price in upper half of range (position risk)"
    - "First range after trend period (maturity penalty)"
    - "Confidence penalties applied: 0.87 × 0.92 × 0.87 = 0.70"
  probationary_rationale:
    - "Conservative allocation (60%) to validate mean reversion"
    - "Wider spacing (0.50 × ATR) to reduce breakout sensitivity"
    - "Fewer levels (5) to limit exposure during validation"
    - "Quick stop enabled for one-sided inventory or volatility expansion"
  grid_specification:
    grid_id: eth-probationary
    symbol: ETH/USDT
    levels: 5                    # Reduced from the normal 8
    spacing_method: EVEN
    spacing_atr_multiplier: 0.50 # Wider than the normal 0.35
    upper_bound: 3200.00
    lower_bound: 2950.00         # Slightly tighter range for probationary mode
    capital_allocation_percent: 60   # Reduced allocation
  validation_criteria:
    duration_required_hours: 36
    boundary_rejections_required: 2
    max_inventory_bias: 0.3      # 30% max bias to one side
  stop_condition:
    trigger_logic: "1x 1h close below 2950 OR inventory bias > 70%"
    execution_mode: KEEP_ASSETS
  transition_plan:
    normal_grid_levels: 8
    normal_spacing_multiplier: 0.35
    normal_allocation_percent: 100
    transition_trigger: "All validation criteria met for 12+ hours"

Interface Integration:
The probationary grid system integrates with existing components:
- Regime Engine: Calculates penalty factors that trigger probationary mode
- Recommendation Generator: Generates probationary vs normal grid recommendations
- Grid Manager: Tracks probationary status and validation progress
- Decision Records: Records probationary rationale and transition criteria
- Notification Manager: Clearly communicates probationary status to user
capital_allocation: 920.00
fee_estimates:                 # For user information only, never used for decisions
  estimated_pct_per_month: 4.2
  estimated_usd_per_month: 12.50
  basis: "Based on 8 trades/day at 0.1% fee rate"
  used_for: GRID_SETUP_ONLY
justification:
  - "Strong range established with 85% integrity"
  - "High confidence regime classification"
  - "Optimal volatility for grid spacing"
why_now: "Range breakout probability low with strong support at 2900 and resistance at 3200."
Regime Change Event Example
regime_change_event:
  from: RANGE_OK
  to: TRANSITION
  confidence_delta: -0.25
  timestamp: 2025-01-18T15:30:00Z
  trigger_factors:
    - "Volatility expansion exceeded 1.6x baseline"
    - "Price closed outside range for 2 consecutive candles"
    - "Mean reversion strength dropped to 0.45"
Stopped Grid with Favorable Conditions Example
stopped_grid_favorable_recommendation:
  id: rec_2025-01-18T16-00_ETH_1h
  type: GRID_SETUP
  verdict: RANGE_OK
  action: CREATE_GRID
  grid_running: false
  confidence: 0.82
  strength: STRONG
  justification:
    - "Grid is currently stopped"
    - "Market conditions are favorable for grid trading"
    - "Strong range established with 85% integrity"
  why_now: "Consider creating a new grid - market conditions are suitable for grid trading with strong range integrity and stable volatility."
  actionable: true
Stopped Grid with Unfavorable Conditions Example
stopped_grid_unfavorable_recommendation:
  id: rec_2025-01-18T17-00_ETH_1h
  type: GRID_MANAGE
  verdict: TRANSITION
  action: NO_ACTION
  grid_running: false
  confidence: 0.35
  strength: WEAK
  justification:
    - "Grid is currently stopped"
    - "Market in transition - unsuitable for grid trading"
    - "Grid already in safe state"
  why_now: "No action needed - grid is stopped and market conditions are unfavorable for grid trading."
  actionable: false
Grid Repositioning Recommendation Example
grid_repositioning_recommendation:
  id: rec_2025-01-18T18-00_ETH_1h
  type: GRID_REPOSITION
  verdict: RANGE_OK
  action: STOP_AND_RECREATE
  grid_running: true
  confidence: 0.75
  strength: STRONG
  current_grid:
    upper_bound: 3150.00
    lower_bound: 2850.00
    current_price: 3238.00
    price_outside_bounds: true
    distance_from_bounds_pct: 2.8
  proposed_grid:
    upper_bound: 3400.00
    lower_bound: 3100.00
    rationale: "Price has moved significantly above current grid range"
  justification:
    - "Current grid range no longer effective"
    - "Price 2.8% above upper bound"
    - "New range conditions are favorable"
  why_now: "Stop current grid and create new grid with updated bounds (3100-3400) to align with current market price of 3238."
  actionable: true
#### Grid Proposal Generation
The Recommendation Generator doesn't just output a regime label—it outputs a complete grid "proposal" (range, spacing, number of orders, and per-order size) that's mechanically tied to volatility, fees, and regime risk.
**Discovered Range vs Configured Grid Comparison:**
When a grid is already configured, the system compares the **discovered market range** (from range_analysis) to the **configured grid bounds** to determine if repositioning is needed:
```python
def compare_discovered_vs_configured(discovered_range, configured_grid, current_price):
    """
    Compare the discovered market range to the configured grid bounds
    Returns: recommendation for grid adjustment
    """
    # Extract bounds
    discovered_upper = discovered_range.upper_bound
    discovered_lower = discovered_range.lower_bound
    configured_upper = configured_grid.upper_bound
    configured_lower = configured_grid.lower_bound

    # Calculate alignment metrics
    price_outside_grid = (
        current_price > configured_upper or
        current_price < configured_lower
    )
    discovered_center = (discovered_upper + discovered_lower) / 2
    configured_center = (configured_upper + configured_lower) / 2
    center_shift_pct = abs(discovered_center - configured_center) / configured_center
    range_width_change = (
        (discovered_upper - discovered_lower) /
        (configured_upper - configured_lower)
    )

    # Decision logic
    if price_outside_grid:
        # Price has moved outside the configured grid
        return {
            'action': 'REPOSITION',
            'reason': 'PRICE_OUTSIDE_BOUNDS',
            'proposed_bounds': (discovered_lower, discovered_upper),
            'urgency': 'HIGH'
        }
    elif center_shift_pct > 0.10:  # 10% center shift
        # The discovered range has shifted significantly
        return {
            'action': 'REPOSITION',
            'reason': 'RANGE_SHIFTED',
            'proposed_bounds': (discovered_lower, discovered_upper),
            'urgency': 'MEDIUM'
        }
    elif range_width_change < 0.5 or range_width_change > 2.0:
        # The discovered range is much tighter or wider than configured
        return {
            'action': 'REPOSITION',
            'reason': 'RANGE_WIDTH_MISMATCH',
            'proposed_bounds': (discovered_lower, discovered_upper),
            'urgency': 'MEDIUM'
        }
    else:
        # The configured grid still aligns with the discovered range
        return {
            'action': 'CONTINUE',
            'reason': 'ALIGNMENT_OK',
            'alignment_score': calculate_alignment_score(...)
        }
```
Comparison Output in Recommendations:
When a grid exists, recommendations include explicit comparison data:
grid_comparison:
  discovered_range:
    upper_bound: 320000        # 3200.00 * 100
    lower_bound: 290000        # 2900.00 * 100
    center: 305000
    width: 30000
    confidence: 0.85
  configured_grid:
    upper_bound: 315000        # 3150.00 * 100
    lower_bound: 285000        # 2850.00 * 100
    center: 300000
    width: 30000
  alignment_analysis:
    price_outside_grid: false
    current_price: 305000
    center_shift_pct: 1.67     # (305000 - 300000) / 300000, as a percentage
    width_ratio: 1.0           # Discovered width / configured width
    alignment_score: 0.92      # High = good alignment
  recommendation:
    action: CONTINUE           # or REPOSITION
    reason: ALIGNMENT_OK       # or PRICE_OUTSIDE_BOUNDS, RANGE_SHIFTED, etc.
    urgency: null              # or LOW, MEDIUM, HIGH

Repositioning Triggers:
The system recommends repositioning when:
- Price Outside Bounds (HIGH urgency):
  - Current price > configured upper bound, or current price < configured lower bound
  - Action: Immediate repositioning to the discovered range
- Range Shifted (MEDIUM urgency):
  - Discovered range center shifted >10% from the configured center
  - Market structure has moved, but price is still within the old grid
  - Action: Reposition to align with the new market structure
- Width Mismatch (MEDIUM urgency):
  - Discovered range <50% or >200% of the configured width
  - Market volatility has changed significantly
  - Action: Adjust grid width to match current market conditions
- Alignment OK (no action):
  - Price within configured bounds
  - Center shift <10%
  - Width ratio between 0.5-2.0
  - Action: Continue with the current grid
Input Calculations (Computed Each Hour for 1h Bars):
Volatility + Range Metrics:
ATR = ATR(14) on 1h bars
BBmid, BBup, BBlow = Bollinger(20, 2)
BB_half = (BBup - BBlow) / 2
RV = stdev(log returns, 24) (optional)
Trend / Drift Metrics:
EMA_fast = EMA(20), EMA_slow = EMA(100)
slope = |EMA_fast - EMA_fast[-12]| / (12 * ATR) (normalized 12h slope)
ADX = ADX(14) (optional but helpful)
Market Frictions:
fee_rt = taker/maker fee rate (as decimal)
slip_rt = slippage estimate (or fixed bps)
Regime → Grid Shape Mapping:
RANGE_OK (Mean Reversion Strong, Trend Weak)
- Style: Symmetric grid around center
- Center: BBmid (or EMA20)
- Spacing: Relatively tight (more fills)
- Width: Moderate
RANGE_WEAK (Range-y But Low Vol / Weak Oscillation)
- Style: Symmetric but fewer orders + wider spacing
- Goal: Avoid fee-churn
- Width: Narrower than RANGE_OK
TRANSITION (Breakout Risk / Vol Expansion / Drift Forming)
- Style: Wider + fewer levels, often skewed
- Skew Logic:
- If drift up: more buy levels below, fewer sells above
- If drift down: more sell levels above, fewer buys below
- Goal: Reduce getting “run over” while still participating if it reverts
TREND (Directional Persistence Dominates)
- Default: Don’t run a classic grid
- If Must Trade: One-sided “re-entry ladder” with small size, wide spacing
- Uptrend: place buys below (catch pullbacks), avoid stacking sells above
- Downtrend: place sells above, avoid stacking buys below
Grid Width and Spacing Calculations:
3.1 Spacing Must Clear Fees + Noise
Minimum spacing rule:
spacing_min_pct = 2*fee_rt + slip_rt + buffer_rt
Where buffer_rt typically 2-5 bps (0.0002-0.0005) or calibrated value.
Convert to price:
spacing_min = spacing_min_pct * price
Choose spacing as the larger of:
- Volatility-based spacing
- Fee-based minimum
Vol-based spacing (default):
spacing_vol = k_s * ATR
Typical k_s by regime:
- RANGE_OK: 0.35–0.60
- RANGE_WEAK: 0.60–0.90
- TRANSITION: 0.90–1.30
- TREND: 1.30–2.00 (if you ladder at all)
Final spacing:
spacing = max(spacing_vol, spacing_min)
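Putting 3.1 together, a minimal sketch (the `k_s=0.45` default is the RANGE_OK value and `buffer_rt=0.0003` is the suggested 3 bps buffer):

```python
def compute_spacing(atr, price, fee_rt, slip_rt, buffer_rt=0.0003, k_s=0.45):
    """Grid spacing = max(volatility-based spacing, fee-based minimum)."""
    spacing_min = (2 * fee_rt + slip_rt + buffer_rt) * price  # must clear fees + noise
    spacing_vol = k_s * atr                                   # volatility-based spacing
    return max(spacing_vol, spacing_min)
```

With the ETH numbers used later (ATR=40, price=3200, 0.1% fee, 5 bps slippage), the volatility leg (18.0) dominates the fee floor.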
3.2 Width Should Reflect Where You Expect Reversion
Pick a half-width from bands/ATR:
half_width = max(k_w * ATR, m_w * BB_half)
Typical values:
- RANGE_OK: k_w = 2.5–4.0, m_w = 0.9–1.2
- RANGE_WEAK: k_w = 2.0–3.0, m_w = 0.7–1.0
- TRANSITION: k_w = 4.0–6.0, m_w = 1.2–1.8 (wider but fewer orders)
- TREND: if active, use k_w = 3.0–5.0 but one-sided
Grid bounds:
lower = center - half_width
upper = center + half_width
3.3 Number of Levels
levels_each_side = floor(half_width / spacing)
Clamp to sane ranges:
- RANGE_OK: 6–14 each side (depends on liquidity + fees)
- RANGE_WEAK: 3–8 each side
- TRANSITION: 2–6 each side
- TREND: 0–6 on one side only (often 0 is correct)
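Sections 3.2-3.3 combine into one geometry helper. A sketch, with `k_w`/`m_w` defaults taken from the RANGE_OK rows above:

```python
import math

def grid_geometry(center, atr, bb_half, spacing, k_w=3.2, m_w=0.9):
    """Derive grid bounds and per-side level count from width/spacing rules.

    Returns (lower, upper, levels_each_side); callers should still clamp
    levels to the per-regime ranges listed above.
    """
    half_width = max(k_w * atr, m_w * bb_half)
    levels_each_side = math.floor(half_width / spacing)
    return center - half_width, center + half_width, levels_each_side
```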
Total Grid Size and Per-Order Sizing:
4.1 Risk Budget Approach (Simple + Effective)
Let:
E = equity allocated to this strategy (USDT)
Choose a max adverse move you’re willing to tolerate:
shock = 2 * ATR(for 1h)
Choose allowed drawdown on allocated equity for that shock:
dd = 0.5% to 2%depending on risk appetite
Then maximum inventory value (approx):
inv_max_usdt = (E * dd) / (shock / price)
Because an adverse move of size shock corresponds to shock/price in fractional terms.
Example logic:
- If ETH ~ 3200, ATR(1h)=40, shock=80 → shock% = 2.5%
- If dd=1%, inv_max ≈ 0.01/0.025 = 0.40 of E (40% of allocated equity notional)
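The risk-budget formula and the worked example above can be checked directly. A sketch (parameter names are illustrative):

```python
def max_inventory_usdt(equity, atr, price, dd=0.01, shock_mult=2.0):
    """Risk-budget cap on inventory value: inv_max = (E * dd) / (shock / price)."""
    shock_pct = (shock_mult * atr) / price  # adverse move as a fraction of price
    return (equity * dd) / shock_pct
```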
Now allocate inventory across levels.
4.2 Per-Order Size
Uniform sizing (simple):
order_notional = inv_max_usdt / levels_each_side
Cap at exchange min size, and cap total orders.
Better (reduces risk): Size increases away from center
Use geometric progression so far orders are bigger:
w_i = r^i with r=1.05–1.20
Normalize so sum(w)=1:
size_i = inv_max_usdt * w_i
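The geometric sizing rule can be sketched as follows, with weights normalized so the sizes sum to `inv_max_usdt`:

```python
def geometric_order_sizes(inv_max_usdt, levels, r=1.10):
    """Split inv_max across levels with geometrically growing weights.

    Sizes grow by ratio r away from center so far orders are larger;
    r=1.10 is a mid-point of the suggested 1.05-1.20 range.
    """
    weights = [r ** i for i in range(levels)]
    total = sum(weights)
    return [inv_max_usdt * w / total for w in weights]
```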
Regime scaling:
- RANGE_OK: use full inv_max_usdt
- RANGE_WEAK: use 0.4–0.7x
- TRANSITION: use 0.2–0.5x + skew
- TREND: 0–0.3x and one-sided only
Skew Rules (TRANSITION/TREND):
Compute a drift direction score:
dir = sign(EMA20 - EMA100) # or sign of EMA20 slope
trend_strength = clamp01((slope - s0) / (s1 - s0)) # e.g., s0=0.15, s1=0.40
In TRANSITION:
- If up-drift: allocate 70–90% of notional to buy levels below, 10–30% to sells above
- If down-drift: inverse
In TREND:
- Only place orders with the trend as pullback entries (one-sided ladder), or place none
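The skew rules might be sketched as follows; interpolating the 70-90% split linearly with trend_strength is an assumption (the document gives only the range):

```python
def skew_allocation(ema_fast, ema_slow, slope, s0=0.15, s1=0.40):
    """Split notional between buy/sell sides using the drift score above.

    Returns (buy_fraction, sell_fraction): with up-drift the buy side
    gets 70-90% depending on trend_strength; down-drift is the inverse.
    """
    direction = 1 if ema_fast > ema_slow else -1
    trend_strength = min(1.0, max(0.0, (slope - s0) / (s1 - s0)))  # clamp01
    heavy = 0.70 + 0.20 * trend_strength  # 70-90% to the favored side
    if direction > 0:
        return heavy, 1.0 - heavy         # more buys below
    return 1.0 - heavy, heavy             # more sells above
```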
Grid Proposal Template:
The system outputs a complete proposal each hour:
grid_proposal:
  regime: RANGE_OK
  center: 3235.6
  lower: 3180.0
  upper: 3290.0
  spacing: 18.0
  levels_each_side: 6
  style: symmetric
  inv_max_usdt: 12000
  order_sizes_usdt: [1200, 1300, 1400, 1500, 1600, 1800]
  notes: "ATR=40, spacing=max(0.45*ATR, fees), k_s=0.45, k_w=3.2"

# Detailed calculation breakdown
calculations:
  atr_1h: 40.0
  bb_mid: 3235.6
  bb_half: 55.0
  spacing_vol: 18.0        # 0.45 * 40
  spacing_min: 8.96        # (2*0.001 + 0.0005 + 0.0003) = 0.0028; 0.0028 * 3200
  spacing_final: 18.0      # max(18.0, 8.96)
  half_width_atr: 128.0    # 3.2 * 40
  half_width_bb: 49.5      # 0.9 * 55
  half_width_final: 128.0  # max(128.0, 49.5)
risk_parameters:
  equity_allocated: 50000  # USDT
  shock_atr: 80.0          # 2 * ATR
  shock_pct: 0.025         # 80 / 3200
  drawdown_tolerance: 0.01 # 1%
  inv_max_usdt: 20000      # (50000 * 0.01) / 0.025
  regime_scaling: 0.6      # RANGE_OK uses 0.6x for this example
  effective_inv_max: 12000 # 20000 * 0.6

Default Parameter Set (Good Starting Point for ETH 1h):
k_s (spacing multiplier):
- RANGE_OK: 0.45
- RANGE_WEAK: 0.75
- TRANSITION: 1.1
- TREND: 1.6
k_w (ATR half-width):
- RANGE_OK: 3.2
- RANGE_WEAK: 2.6
- TRANSITION: 5.0
- TREND: 4.0 (one-sided)
Risk scaling on inv_max_usdt:
- RANGE_OK: 1.0x
- RANGE_WEAK: 0.6x
- TRANSITION: 0.35x
- TREND: 0–0.25x
Min spacing floor:
2*fees + slippage + 3bps
Grid Restart Gates
After a grid is stopped (typically with verdict TRANSITION or TREND), the system enforces a three-gate evaluation process before allowing new grid creation. This prevents premature grid restart during trend continuations or unstable market conditions.
Design Philosophy:
The restart gates implement a conservative approach to grid recreation:
- Wait for trend to end (Gate 1: Directional Energy Decay)
- Confirm range behavior has returned (Gate 2: Mean Reversion Return)
- Verify volatility is tradable (Gate 3: Tradable Volatility)
Only after all three gates pass sequentially does the system transition from RANGE_WEAK to RANGE_OK and allow CREATE_GRID recommendations.
Gate Evaluation State Machine:
stateDiagram-v2
    [*] --> GridStopped: Grid stopped (TRANSITION/TREND)
    GridStopped --> EvaluatingGate1: Begin gate evaluation
    EvaluatingGate1 --> Gate1Failed: Gate 1 fails
    EvaluatingGate1 --> EvaluatingGate2: Gate 1 passes
    Gate1Failed --> TRANSITION_or_TREND: Classify as TRANSITION/TREND
    TRANSITION_or_TREND --> EvaluatingGate1: Next bar
    EvaluatingGate2 --> Gate2Failed: Gate 2 fails
    EvaluatingGate2 --> EvaluatingGate3: Gate 2 passes
    Gate2Failed --> RANGE_WEAK: Classify as RANGE_WEAK
    RANGE_WEAK --> EvaluatingGate1: Next bar (restart from Gate 1)
    EvaluatingGate3 --> Gate3Failed: Gate 3 fails
    EvaluatingGate3 --> AllGatesPassed: Gate 3 passes
    Gate3Failed --> RANGE_WEAK: Classify as RANGE_WEAK
    AllGatesPassed --> RANGE_OK: Transition to RANGE_OK
    RANGE_OK --> [*]: Allow CREATE_GRID
Gate 1: Directional Energy Decay (MANDATORY)
Purpose: Ensure trend energy has dissipated before considering grid restart.
Evaluation Logic:
def evaluate_gate_1(market_data, config):
    """
    Gate 1: Directional Energy Decay
    Returns: (passed: bool, metrics: dict, reason: str)
    """
    # Calculate trend indicators
    trend_score = calculate_trend_score(market_data)  # From regime engine
    adx = calculate_adx(market_data, period=14)
    adx_slope = calculate_slope(adx, lookback=5)
    normalized_slope = abs(calculate_ema_slope(market_data)) / calculate_atr(market_data)
    efficiency_ratio = calculate_efficiency_ratio(market_data, window=config.er_window)
    has_hh_ll = detect_higher_highs_lower_lows(
        market_data,
        lookback=config.hh_ll_lookback,
        method=config.swing_detection_method  # e.g., 'FRACTAL', 'SWING_HIGH_LOW'
    )

    # Define conditions
    conditions = {
        'trend_score_low': trend_score < config.trend_exit_threshold,   # e.g., < 50
        'adx_falling_low': adx < config.adx_midzone and adx_slope < 0,  # e.g., < 25 and falling
        'slope_low': normalized_slope < config.slope_threshold,         # e.g., < 0.15
        'er_low': efficiency_ratio < config.er_threshold,               # e.g., < 0.45
        'no_hh_ll': not has_hh_ll
    }

    # Apply passage logic (configurable: consecutive or majority)
    if config.gate1_passage_logic == 'CONSECUTIVE':
        # Require N consecutive bars meeting conditions
        passed = check_consecutive_bars(conditions, n=config.gate1_consecutive_bars)
    else:  # 'MAJORITY'
        # Require N of M bars meeting conditions
        passed = check_majority_bars(
            conditions,
            n=config.gate1_majority_n,
            m=config.gate1_majority_m
        )

    metrics = {
        'trend_score': trend_score,
        'adx': adx,
        'adx_slope': adx_slope,
        'normalized_slope': normalized_slope,
        'efficiency_ratio': efficiency_ratio,
        'has_higher_highs_lower_lows': has_hh_ll,
        'conditions_met': sum(conditions.values()),
        'total_conditions': len(conditions)
    }

    if passed:
        reason = "Directional energy has decayed - trend no longer pushing with conviction"
    else:
        failing_conditions = [k for k, v in conditions.items() if not v]
        reason = f"Gate 1 failed - directional energy still present: {', '.join(failing_conditions)}"
    return passed, metrics, reason

Configuration Parameters:
gate_1_config:
  # Thresholds (per-symbol, per-timeframe customizable)
  trend_exit_threshold: 50       # TrendScore must fall below this
  adx_midzone: 25                # ADX must be below this and falling
  slope_threshold: 0.15          # Normalized slope threshold
  er_threshold: 0.45             # Efficiency Ratio threshold
  hh_ll_lookback: 20             # Bars to check for higher highs/lower lows
  swing_detection_method: 'FRACTAL'  # or 'SWING_HIGH_LOW', 'PRICE_CLUSTERS'
  # Passage logic
  passage_logic: 'MAJORITY'      # or 'CONSECUTIVE'
  consecutive_bars: 3            # If using CONSECUTIVE
  majority_n: 3                  # If using MAJORITY: N of M bars
  majority_m: 5

Gate 2: Mean Reversion Return (MANDATORY)
Purpose: Confirm that mean-reverting behavior has returned and price is oscillating around a local mean.
Evaluation Logic:
def evaluate_gate_2(market_data, config):
    """
    Gate 2: Mean Reversion Return
    Returns: (passed: bool, metrics: dict, reason: str)
    """
    # Calculate mean reversion indicators
    mean_rev_score = calculate_mean_rev_score(market_data)  # From regime engine
    lag1_autocorr = calculate_autocorrelation(market_data.returns, lag=1)
    ou_half_life = calculate_ou_half_life(market_data)
    z_score_reversions = count_z_score_reversions(market_data, threshold=config.z_threshold)

    # Calculate oscillations across the candidate grid spacing
    candidate_spacing = config.grid_spacing_multiplier * calculate_atr(market_data)
    oscillations = count_price_oscillations(
        market_data,
        spacing_multiple=config.oscillation_spacing_multiple,
        candidate_spacing=candidate_spacing,
        revert_within_bars=config.oscillation_revert_bars
    )

    # Define conditions
    conditions = {
        'mean_rev_high': mean_rev_score >= config.mean_rev_threshold,  # e.g., >= 60
        'autocorr_negative': lag1_autocorr <= 0,
        'ou_half_life_short': ou_half_life is not None and ou_half_life < config.ou_max_half_life,  # e.g., < 48 bars (2 days on 1h)
        'z_reversions': z_score_reversions >= config.min_z_reversions,  # e.g., >= 3
        'oscillations': oscillations >= config.min_oscillations         # e.g., >= 2
    }
    passed = all(conditions.values())

    metrics = {
        'mean_rev_score': mean_rev_score,
        'lag1_autocorr': lag1_autocorr,
        'ou_half_life_bars': ou_half_life,
        'z_score_reversions': z_score_reversions,
        'oscillations_count': oscillations,
        'candidate_spacing': candidate_spacing,
        'conditions_met': sum(conditions.values()),
        'total_conditions': len(conditions)
    }

    if passed:
        reason = "Mean reversion has returned - price bouncing around local mean"
    else:
        failing_conditions = [k for k, v in conditions.items() if not v]
        reason = f"Gate 2 failed - mean reversion not established: {', '.join(failing_conditions)}"
    return passed, metrics, reason

Configuration Parameters:
gate_2_config:
  # Thresholds
  mean_rev_threshold: 60         # MeanRevScore must be >= this
  ou_max_half_life: 48           # OU half-life must be < this (in bars)
  z_threshold: 1.5               # Z-score threshold for reversion detection
  min_z_reversions: 3            # Minimum number of z-score excursions that revert
  min_oscillations: 2            # Minimum oscillations across grid spacing
  # Oscillation detection
  grid_spacing_multiplier: 0.45  # Multiplier for ATR to get candidate spacing
  oscillation_spacing_multiple: 1.0  # Price must cross ±1.0x spacing
  oscillation_revert_bars: 12    # Must revert within this many bars

Gate 3: Tradable Volatility (MANDATORY)
Purpose: Ensure volatility is in a tradable band - not too low (no fills) and not too high (inventory blowups).
Evaluation Logic:
def evaluate_gate_3(market_data, config):
    """
    Gate 3: Tradable Volatility
    Returns: (passed: bool, metrics: dict, reason: str)
    """
    # Calculate volatility indicators
    atr = calculate_atr(market_data, period=14)
    atr_pct = atr / market_data.close[-1]
    bb_bandwidth = calculate_bb_bandwidth(market_data)
    vol_percentile = calculate_percentile(atr, market_data.atr_history, window=config.vol_window)
    vol_expansion_ratio = atr / calculate_baseline_atr(market_data, window=config.baseline_window)
    bb_slope = calculate_slope(bb_bandwidth, lookback=config.bb_slope_lookback)
    vol_percentile_slope = calculate_slope(vol_percentile, lookback=config.vol_slope_lookback)

    # Define conditions
    conditions = {
        'atr_above_min': atr_pct >= config.atr_min_pct,  # e.g., >= 0.008 (0.8%)
        'atr_below_max': atr_pct <= config.atr_max_pct,  # e.g., <= 0.04 (4%)
        'vol_not_expanding': vol_expansion_ratio <= config.vol_expansion_max,  # e.g., <= 1.3
        'bb_stable_or_shrinking': bb_slope <= 0,
        'vol_percentile_flat_or_declining': vol_percentile_slope <= config.vol_percentile_slope_max  # e.g., <= 0
    }
    passed = all(conditions.values())

    metrics = {
        'atr': atr,
        'atr_pct': atr_pct,
        'bb_bandwidth': bb_bandwidth,
        'vol_percentile': vol_percentile,
        'vol_expansion_ratio': vol_expansion_ratio,
        'bb_slope': bb_slope,
        'vol_percentile_slope': vol_percentile_slope,
        'conditions_met': sum(conditions.values()),
        'total_conditions': len(conditions)
    }

    if passed:
        reason = "Volatility is in tradable band - will get fills without blowups"
    else:
        failing_conditions = [k for k, v in conditions.items() if not v]
        reason = f"Gate 3 failed - volatility not tradable: {', '.join(failing_conditions)}"
    return passed, metrics, reason

Configuration Parameters:
gate_3_config:
  # Volatility thresholds
  atr_min_pct: 0.008             # 0.8% - minimum for fills
  atr_max_pct: 0.04              # 4% - maximum before blowup risk
  vol_expansion_max: 1.3         # Max expansion ratio vs baseline
  vol_percentile_slope_max: 0    # Must be flat or declining
  # Calculation windows
  vol_window: 100                # Window for volatility percentile
  baseline_window: 200           # Window for baseline ATR
  bb_slope_lookback: 5           # Bars for BB bandwidth slope
  vol_slope_lookback: 5          # Bars for vol percentile slope

Sequential Gate Evaluation:
def evaluate_restart_gates(market_data, gate_state, config):
    """
    Evaluate the restart gates sequentially
    Returns: (verdict: str, gate_results: dict)
    """
    gate_results = {
        'gate_1': {'evaluated': False, 'passed': False, 'metrics': {}, 'reason': ''},
        'gate_2': {'evaluated': False, 'passed': False, 'metrics': {}, 'reason': ''},
        'gate_3': {'evaluated': False, 'passed': False, 'metrics': {}, 'reason': ''},
        'all_passed': False
    }

    # Gate 1: Directional Energy Decay
    gate_1_passed, gate_1_metrics, gate_1_reason = evaluate_gate_1(market_data, config.gate_1)
    gate_results['gate_1'] = {
        'evaluated': True,
        'passed': gate_1_passed,
        'metrics': gate_1_metrics,
        'reason': gate_1_reason
    }
    if not gate_1_passed:
        # Gate 1 failed - classify as TRANSITION or TREND, block subsequent gates
        verdict = classify_trend_or_transition(market_data, gate_1_metrics)
        return verdict, gate_results

    # Gate 1 passed - proceed to Gate 2
    gate_2_passed, gate_2_metrics, gate_2_reason = evaluate_gate_2(market_data, config.gate_2)
    gate_results['gate_2'] = {
        'evaluated': True,
        'passed': gate_2_passed,
        'metrics': gate_2_metrics,
        'reason': gate_2_reason
    }
    if not gate_2_passed:
        # Gate 2 failed - classify as RANGE_WEAK, block Gate 3
        return 'RANGE_WEAK', gate_results

    # Gate 2 passed - proceed to Gate 3
    gate_3_passed, gate_3_metrics, gate_3_reason = evaluate_gate_3(market_data, config.gate_3)
    gate_results['gate_3'] = {
        'evaluated': True,
        'passed': gate_3_passed,
        'metrics': gate_3_metrics,
        'reason': gate_3_reason
    }
    if not gate_3_passed:
        # Gate 3 failed - classify as RANGE_WEAK
        return 'RANGE_WEAK', gate_results

    # All gates passed - transition to RANGE_OK
    gate_results['all_passed'] = True
    return 'RANGE_OK', gate_results

Gate State Tracking:
class GateState:
    """Track gate evaluation state across bars"""

    def __init__(self, grid_id):
        self.grid_id = grid_id
        self.stopped_at = None
        self.stopping_verdict = None
        self.gate_history = []  # Bar-by-bar gate results
        self.all_gates_passed = False

    def record_stop(self, timestamp, verdict):
        """Initialize gate tracking when grid stops"""
        self.stopped_at = timestamp
        self.stopping_verdict = verdict
        self.gate_history = []
        self.all_gates_passed = False

    def record_evaluation(self, timestamp, gate_results):
        """Record gate evaluation for this bar"""
        self.gate_history.append({
            'timestamp': timestamp,
            'results': gate_results
        })
        if gate_results['all_passed']:
            self.all_gates_passed = True

    def get_status(self):
        """Get current gate status"""
        if not self.gate_history:
            return 'NOT_STARTED'
        latest = self.gate_history[-1]['results']
        if latest['all_passed']:
            return 'ALL_PASSED'
        elif latest['gate_1']['evaluated'] and not latest['gate_1']['passed']:
            return 'GATE_1_BLOCKING'
        elif latest['gate_2']['evaluated'] and not latest['gate_2']['passed']:
            return 'GATE_2_BLOCKING'
        elif latest['gate_3']['evaluated'] and not latest['gate_3']['passed']:
            return 'GATE_3_BLOCKING'
        else:
            return 'IN_PROGRESS'
Gate Evaluation Output in Regime Analysis:
regime_analysis:
  verdict: RANGE_WEAK        # or TRANSITION, TREND, RANGE_OK
  confidence: 0.65
  strength: MODERATE
  # Gate evaluation results (when applicable)
  restart_gates:
    status: GATE_2_BLOCKING  # or ALL_PASSED, GATE_1_BLOCKING, etc.
    stopped_at: '2026-01-07T18:30:00Z'
    stopping_verdict: TRANSITION
    bars_since_stop: 8
    gate_1:
      evaluated: true
      passed: true
      reason: "Directional energy has decayed - trend no longer pushing with conviction"
      metrics:
        trend_score: 42
        adx: 18
        adx_slope: -2.3
        normalized_slope: 0.12
        efficiency_ratio: 0.38
        has_higher_highs_lower_lows: false
        conditions_met: 5
        total_conditions: 5
    gate_2:
      evaluated: true
      passed: false
      reason: "Gate 2 failed - mean reversion not established: oscillations"
      metrics:
        mean_rev_score: 58
        lag1_autocorr: -0.15
        ou_half_life_bars: 36
        z_score_reversions: 4
        oscillations_count: 1   # Need 2, only have 1
        candidate_spacing: 18.0
        conditions_met: 4
        total_conditions: 5
    gate_3:
      evaluated: false          # Blocked by Gate 2 failure
      passed: false
      reason: "Not evaluated - Gate 2 must pass first"
      metrics: {}
Grid Parameter Calculation After Gates Pass:
When all gates pass and verdict transitions to RANGE_OK, the system computes grid parameters from current market structure and does NOT reuse stopped grid parameters:
def compute_new_grid_after_gates_pass(market_data, config):
    """
    Compute fresh grid parameters after restart gates pass
    CRITICAL: Do NOT reuse stopped grid's center, spacing, or bounds
    """
    # Discover current market range (independent of old grid)
    discovered_range = discover_support_resistance(market_data, config)

    # Calculate spacing from current ATR
    current_atr = calculate_atr(market_data)
    spacing = config.k_s_range_ok * current_atr  # e.g., 0.45 * ATR

    # Calculate width from current volatility
    half_width = max(
        config.k_w_range_ok * current_atr,  # e.g., 3.2 * ATR
        config.m_w * calculate_bb_half_width(market_data)
    )

    # Use discovered range center (not old grid center)
    center = discovered_range.center

    # Conservative initial parameters
    grid_params = {
        'center': center,
        'lower': center - half_width,
        'upper': center + half_width,
        'spacing': spacing,
        'levels_each_side': min(
            int(half_width / spacing),
            config.max_levels_conservative  # e.g., 6 for conservative restart
        ),
        'style': 'symmetric',
        'sizing_mode': 'conservative',  # Start with reduced size
        'risk_scaling': config.restart_risk_scaling  # e.g., 0.6x normal
    }
    return grid_params
Configuration Example:
restart_gates_config:
  enabled: true

  # Minimum time floor after stop (prevents churn)
  min_bars_after_stop: 3      # Wait at least 3 bars before evaluating gates

  # Confidence monotonicity guard (UI/operator sanity)
  confidence_max_delta: 0.15  # Max confidence change per bar when regime unchanged

  # Gate 1: Directional Energy Decay
  gate_1:
    trend_exit_threshold: 50
    adx_midzone: 25
    slope_threshold: 0.15
    er_threshold: 0.45
    hh_ll_lookback: 20
    swing_detection_method: 'FRACTAL'
    passage_logic: 'MAJORITY'
    majority_n: 3
    majority_m: 5

  # Gate 2: Mean Reversion Return
  gate_2:
    mean_rev_threshold: 60
    ou_max_half_life: 48
    z_threshold: 1.5
    min_z_reversions: 3
    min_oscillations: 2
    grid_spacing_multiplier: 0.45
    oscillation_spacing_multiple: 1.0
    oscillation_revert_bars: 12

  # Gate 3: Tradable Volatility
  gate_3:
    atr_min_pct: 0.008
    atr_max_pct: 0.04
    vol_expansion_max: 1.3
    vol_percentile_slope_max: 0
    vol_window: 100
    baseline_window: 200
    bb_slope_lookback: 5
    vol_slope_lookback: 5

  # Grid parameters after gates pass
  restart_grid_params:
    k_s_range_ok: 0.45           # Conservative spacing
    k_w_range_ok: 3.2            # Conservative width
    m_w: 0.9
    max_levels_conservative: 6   # Fewer levels initially
    restart_risk_scaling: 0.6    # 60% of normal risk

# CRITICAL NOTE: Confidence Ignored for Restart Grid Creation
#
# When restart gates pass and CREATE_GRID is permitted, confidence scores
# are IGNORED. Gate passage is the sole authority for grid creation permission.
#
# This is intentional:
# - Gates provide explicit, measurable criteria for restart safety
# - Confidence measures regime stability, not restart readiness
# - Mixing confidence with gates would create ambiguous decision logic
#
# Confidence is used elsewhere (e.g., STOP_GRID override), but NOT for
# CREATE_GRID permission after restart gates pass.
Minimum Time Floor After Stop:
To prevent stop-restart churn in fast markets, the system enforces a minimum time floor before evaluating gates:
def should_evaluate_gates(gate_state, current_bar, config):
    """
    Check if enough time has passed to begin gate evaluation
    Returns: bool
    """
    if gate_state.stopped_at is None:
        return False
    bars_since_stop = current_bar - gate_state.stopped_at
    if bars_since_stop < config.min_bars_after_stop:
        # Too soon - wait for minimum time floor
        return False
    return True
This prevents:
- Stop → immediate restart → stop churn
- Indicator noise immediately after impulse moves
- Premature gate evaluation before market settles
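The floor's behaviour can be exercised with a minimal standalone sketch (the bar indices, function name, and 3-bar floor below are illustrative, not the system's actual API):

```python
def bars_elapsed_ok(stopped_at_bar, current_bar, min_bars_after_stop=3):
    """Return True once the minimum time floor has elapsed since the stop."""
    if stopped_at_bar is None:
        return False  # No stop recorded - nothing to gate
    return (current_bar - stopped_at_bar) >= min_bars_after_stop

# A grid stops at bar 100; gate evaluation stays blocked until bar 103
results = [bars_elapsed_ok(100, bar) for bar in range(101, 105)]
# → [False, False, True, True]
```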
Confidence Monotonicity Guard:
To prevent wild bar-to-bar confidence jumps when regime remains unchanged:
def apply_confidence_monotonicity(new_confidence, prev_confidence, regime_changed, max_delta):
    """
    Apply maximum delta constraint to confidence changes
    (max_delta comes from config.confidence_max_delta, e.g. 0.15)
    This is purely for UI/operator sanity, not trading logic
    """
    if regime_changed:
        # Allow full confidence change when regime changes
        return new_confidence
    # Limit confidence change when regime unchanged
    if new_confidence > prev_confidence:
        return min(new_confidence, prev_confidence + max_delta)
    else:
        return max(new_confidence, prev_confidence - max_delta)
This improves:
- UI stability (confidence doesn’t jump wildly)
- Operator confidence in the system
- Smoother confidence trends in dashboards
Note: This is NOT used for trading decisions, only for display/reporting.
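As a standalone illustration of the clamp (function name and values are illustrative), a raw jump is limited to the configured delta when the regime is unchanged, but passes through untouched on a regime flip:

```python
def clamp_confidence(new_conf, prev_conf, regime_changed, max_delta=0.15):
    """Limit per-bar confidence movement while the regime is unchanged."""
    if regime_changed:
        return new_conf  # Regime flip: accept the full jump
    lo, hi = prev_conf - max_delta, prev_conf + max_delta
    return max(lo, min(new_conf, hi))

# Raw jump 0.50 -> 0.90 is smoothed to 0.65 when the regime is unchanged
smoothed = clamp_confidence(0.90, 0.50, regime_changed=False)  # → 0.65
allowed = clamp_confidence(0.90, 0.50, regime_changed=True)    # → 0.9
```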
Confidence Ignored for Restart Grid Creation:
CRITICAL: When restart gates pass and CREATE_GRID is permitted, confidence scores are IGNORED. Gate passage is the sole authority for grid creation permission.
This is intentional design:
- Gates provide explicit criteria: Measurable conditions for restart safety
- Confidence measures different thing: Regime stability, not restart readiness
- Avoids ambiguous logic: Mixing confidence with gates creates unclear decision rules
- Clear authority: Gates = restart permission, Confidence = risk adjustment elsewhere
Confidence is used for:
- ✅ STOP_GRID override when confidence falls below safety threshold
- ✅ Risk scaling for active grids
- ✅ UI display and operator information
Confidence is NOT used for:
- ❌ CREATE_GRID permission after restart gates pass
- ❌ Overriding gate evaluation results
- ❌ Bypassing gate requirements
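The "gates are the sole authority" rule reduces to a deliberately trivial check. A minimal sketch (the function name is illustrative) makes the separation of concerns explicit — confidence is carried alongside the gate results but never consulted for this decision:

```python
def create_grid_permitted(gate_results: dict, confidence: float) -> bool:
    """CREATE_GRID permission after a stop depends only on gate passage.

    `confidence` is accepted (it travels with the regime analysis) but is
    deliberately unused: gate passage is the sole authority for restart
    permission.
    """
    _ = confidence  # intentionally ignored for this decision
    return bool(gate_results.get('all_passed', False))

# Low confidence cannot block a gate-approved restart...
assert create_grid_permitted({'all_passed': True}, confidence=0.05) is True
# ...and high confidence cannot bypass failed gates
assert create_grid_permitted({'all_passed': False}, confidence=0.99) is False
```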
Decision Records
Immutable YAML records stored in the market-maker-data Git repository, providing complete audit trail of all recommendations and outcomes.
Data Repository:
- Repository: git@github.com:craigedmunds/market-maker-data.git
- Purpose: Centralized storage for all decision records, metrics history, and system state data
- Structure: Organized by data type with timestamped files for traceability
- Version Control: Full Git history provides immutable audit trail and rollback capability
Schema:
meta:
  id: string
  asset: string
  timeframe: string
  created_at: ISO8601
  engine_version: string
  config_version_hash: string
recommendation:  # IMMUTABLE
  verdict: RegimeVerdict
  action: string
  confidence: number
  context_snapshot: object
  reasoning: string[]
actions:  # APPEND-ONLY
  - timestamp: ISO8601
    action_taken: string
    source: 'USER' | 'SYSTEM' | 'SYSTEM_DEFAULT' | 'SYSTEM_AUTOMATED'
evaluations:  # APPEND-ONLY
  - horizon: string
    evaluated_at: ISO8601
    market_outcome: object
    recommendation_assessment: object
    actual_action_assessment: object
Grid Manager
Handles grid creation, monitoring, and lifecycle management through the Exchange Interface.
Responsibilities:
- Execute Grid_Setup recommendations after user approval
- Monitor active grids and execute Grid_Manage actions
- Enforce capital allocation constraints and reserves
- Handle stop conditions and cooldown periods
Grid Specification:
# Grid Specification Example
grid_specification:
  grid_id: eth-primary
  symbol: ETH/USDT
  levels: 7
  spacing_method: EVEN   # or PERCENT
  upper_bound: 3200.00
  lower_bound: 2900.00
  stop_condition:
    trigger_logic: "2x 1h closes below 2900"
    execution_mode: KEEP_ASSETS   # KEEP_ASSETS | SELL_ALL | CANCEL_ONLY
  capital_allocation: 920.00

# Stop Execution Modes Examples
stop_modes:
  keep_assets:
    mode: KEEP_ASSETS
    description: "Cancel orders, maintain current inventory (default, safest)"
    use_case: "Standard grid stop, preserve positions"
  sell_all:
    mode: SELL_ALL
    description: "Cancel orders and sell all inventory to base currency"
    use_case: "Emergency exit, manual override only"
    restriction: "Requires explicit user approval, never system default"
  cancel_only:
    mode: CANCEL_ONLY
    description: "Cancel orders only, leave inventory untouched"
    use_case: "Price exits range upward with minimal inventory, avoid forced conversion"
Stop Execution Modes:
- KEEP_ASSETS: Cancel orders, maintain current inventory (default, safest)
- SELL_ALL: Cancel orders and sell all inventory to base currency (manual override only)
- CANCEL_ONLY: Cancel orders only, leave inventory untouched (useful when price exits range upward with minimal inventory)
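A sketch of how a stop-mode dispatcher might enforce the SELL_ALL approval restriction (the `stop_actions` function and action names are illustrative; the exchange calls themselves would go through the Exchange Interface). In this simplification KEEP_ASSETS and CANCEL_ONLY issue the same order-cancel action and differ only in post-stop bookkeeping:

```python
from enum import Enum

class StopMode(Enum):
    KEEP_ASSETS = "KEEP_ASSETS"
    SELL_ALL = "SELL_ALL"
    CANCEL_ONLY = "CANCEL_ONLY"

def stop_actions(mode, user_approved=False):
    """Return the ordered actions a stop in the given mode would issue."""
    if mode is StopMode.SELL_ALL and not user_approved:
        # SELL_ALL is never a system default - it must be user-initiated
        raise PermissionError("SELL_ALL requires explicit user approval")
    actions = ["cancel_open_orders"]
    if mode is StopMode.SELL_ALL:
        actions.append("market_sell_inventory")
    return actions

assert stop_actions(StopMode.KEEP_ASSETS) == ["cancel_open_orders"]
```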
Grid Configuration Manager
Manages detailed grid configuration parameters that cannot be controlled via KuCoin API, providing comprehensive tracking and versioning of manual grid setups.
Core Functionality:
- Stores detailed grid parameters from manual KuCoin configuration
- Maintains version history of configuration changes with timestamps
- Validates grid behavior against stored configuration parameters
- Integrates configuration data into metrics collection and performance calculations
Enhanced Grid Configuration Parameters:
# Enhanced Grid Configuration Example
enhanced_grid_config:
  grid_id: eth-primary
  version: 3
  created_at: 2025-01-18T14:30:00Z
  updated_at: 2025-01-18T15:45:00Z
  basic_parameters:
    symbol: ETH/USDT
    exchange: kucoin
    enabled: true
    role: PRIMARY
  detailed_parameters:
    price_range:
      upper_bound: 3150.00   # USDT
      lower_bound: 2850.00   # USDT
      entry_price: 2931.64   # USDT - actual entry price from KuCoin
    grid_structure:
      number_of_grids: 7
      spacing_method: EVEN
      amount_per_grid: 0.0116194   # ETH - actual amount per grid from KuCoin
    profit_configuration:
      profit_per_grid_min: 1.21    # % - minimum profit per grid cycle
      profit_per_grid_max: 1.34    # % - maximum profit per grid cycle
      profit_per_grid_avg: 1.275   # % - average expected profit per grid
    risk_management:
      stop_loss_price: 2750.00     # USDT - configured stop-loss price
      take_profit_price: null      # USDT - null if not configured
      stop_loss_enabled: true
      take_profit_enabled: false
    reinvestment_settings:
      grid_profit_reinvestment: true   # Whether profits are reinvested
      reinvestment_mode: COMPOUND      # COMPOUND, FIXED, MANUAL
  capital_allocation:
    total_allocated: 920.00     # USDT - total capital allocated to this grid
    reserved_capital: 30.00     # USDT - reserved capital (3% of total)
    effective_capital: 890.00   # USDT - capital actually deployed
  configuration_source:
    source_type: MANUAL_KUCOIN   # MANUAL_KUCOIN, API_CREATED, SYSTEM_GENERATED
    created_by: USER
    kucoin_grid_id: "kucoin_grid_12345"   # Reference to actual KuCoin grid ID
  version_history:
    - version: 1
      timestamp: 2025-01-18T14:30:00Z
      changes: ["Initial configuration"]
      changed_fields: []
    - version: 2
      timestamp: 2025-01-18T15:15:00Z
      changes: ["Updated stop-loss price from 2700 to 2750"]
      changed_fields: ["risk_management.stop_loss_price"]
    - version: 3
      timestamp: 2025-01-18T15:45:00Z
      changes: ["Enabled profit reinvestment"]
      changed_fields: ["reinvestment_settings.grid_profit_reinvestment"]
# Grid Configuration Validation Example
grid_validation:
  grid_id: eth-primary
  validation_timestamp: 2025-01-18T16:00:00Z
  parameter_validation:
    price_bounds_check:
      configured_upper: 3150.00
      configured_lower: 2850.00
      current_price: 2931.64
      within_bounds: true
    grid_levels_check:
      configured_levels: 7
      active_orders: 6   # One level filled
      expected_orders: 6
      validation_status: VALID
    capital_allocation_check:
      configured_capital: 920.00
      deployed_capital: 890.00
      locked_capital: 50.00
      available_capital: 950.00
      allocation_status: VALID
  behavioral_validation:
    profit_tracking:
      expected_profit_per_grid: 1.275
      actual_profit_last_cycle: 1.28
      profit_variance: 0.005   # Within expected range
    stop_loss_monitoring:
      stop_loss_price: 2750.00
      current_price: 2931.64
      distance_to_stop: 181.64   # USDT
      stop_loss_risk: LOW
  validation_result:
    overall_status: VALID
    discrepancies: []
    recommendations: []
Configuration Management Operations:
- Create Configuration: Initialize new grid configuration with detailed parameters
- Update Configuration: Modify existing configuration while maintaining version history
- Validate Configuration: Check configuration against actual grid behavior
- Archive Configuration: Mark configuration as inactive when grid is stopped
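The Update operation's version bookkeeping can be sketched as a pure function over a flat, dot-keyed parameter view (the `bump_version` name and flat-key representation are illustrative assumptions, not the system's actual storage format):

```python
def bump_version(config, updates, timestamp):
    """Apply updates to a flat dot-keyed parameter view, recording history.

    Returns a new config dict; the input is left untouched, mirroring the
    append-only version_history convention.
    """
    changed = [k for k, v in updates.items()
               if config['parameters'].get(k) != v]
    if not changed:
        return config  # No-op updates do not create a new version
    return {
        'version': config['version'] + 1,
        'parameters': {**config['parameters'], **updates},
        'version_history': config['version_history'] + [{
            'version': config['version'] + 1,
            'timestamp': timestamp,
            'changed_fields': changed,
        }],
    }

cfg = {'version': 1,
       'parameters': {'risk_management.stop_loss_price': 2700.00},
       'version_history': []}
cfg2 = bump_version(cfg, {'risk_management.stop_loss_price': 2750.00},
                    '2025-01-18T15:15:00Z')
# cfg2['version'] → 2; changed_fields → ['risk_management.stop_loss_price']
```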
Integration Points:
- Metrics Collection: Include current configuration in hourly snapshots
- Performance Calculations: Use actual grid parameters instead of estimates
- Risk Monitoring: Monitor market conditions relative to configured stop-loss
- Decision Records: Reference specific configuration version in recommendations
Notification Manager
Handles notifications through n8n webhook integration, allowing flexible routing to multiple channels including Pushover.
Notification Content:
- Grid_Manage: verdict, action, confidence, strength, timeout remaining, decision link with approve/override buttons, whyNow summary
- Grid_Setup: grid_id, symbol, range summary, confidence, strength, decision link with approve/decline buttons, whyNow summary, “silence = no action”
- Trade Execution: trade details, grid status, actual fees
- Grid Stop: reason, outcome, next steps
- Regime Change: from/to verdicts, confidence delta, trigger factors
Decision Interface: The system provides a web UI accessible via notification links that allows users to:
- View complete recommendation details and reasoning
- Approve recommended actions with single click
- Override with alternative actions (SELL_ALL, different timeouts, etc.)
- Decline Grid_Setup recommendations
- All decisions are automatically written to the Decision_Record and executed
Architecture:
- Primary Channel: Webhooks to n8n automation platform
- n8n Routing: n8n handles delivery to Pushover, email, Slack, or other channels
- Fallback: Direct webhook endpoints for critical alerts if n8n is unavailable
- Throttling: Prevents duplicate notifications for same recommendation ID
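The throttling behaviour can be sketched as a small in-memory guard keyed by recommendation ID (class name and one-hour window are illustrative; the real implementation might persist this state):

```python
import time

class NotificationThrottle:
    """Suppress repeat notifications for the same recommendation ID."""

    def __init__(self, window_seconds=3600.0):
        self.window = window_seconds
        self._last_sent = {}  # recommendation_id -> last send time (epoch s)

    def should_send(self, recommendation_id, now=None):
        now = time.time() if now is None else now
        last = self._last_sent.get(recommendation_id)
        if last is not None and (now - last) < self.window:
            return False  # Duplicate within the throttle window
        self._last_sent[recommendation_id] = now
        return True

t = NotificationThrottle(window_seconds=3600)
assert t.should_send("rec-001", now=0.0) is True
assert t.should_send("rec-001", now=100.0) is False   # throttled duplicate
assert t.should_send("rec-001", now=4000.0) is True   # window elapsed
```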
Data Collector
Collects raw market data as comprehensive snapshots, supporting both scheduled hourly collection and on-demand historical backfilling.
Core Responsibility:
- Raw data collection only - no analysis or processing
- Captures comprehensive market data snapshots for any specified time period
- Stores raw exchange data with complete metadata and timestamps
- Handles data collection failures gracefully without affecting trading operations
Collection Modes:
1. Scheduled Hourly Collection
- Webhook endpoint triggered by n8n hourly schedule
- Captures data for the previous complete hour (HH:00:00 to HH:59:59)
- Automatic collection for real-time system operation
- Maintains collection continuity through n8n workflow persistence
2. Historical Backfill Collection
- On-demand collection for specified date/time ranges
- Supports collecting data for any historical period within exchange retention limits
- Enables building the required historical dataset (up to 540 days for 4h data)
- Can be triggered manually or via batch processing scripts
- Supports parallel collection of multiple time periods for efficiency
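The "previous complete hour" window used by scheduled collection can be computed deterministically from the trigger time; a minimal sketch (function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def previous_hour_window(trigger_time):
    """Return the previous complete hour [HH:00:00, HH:59:59] as (start, end)."""
    top_of_hour = trigger_time.replace(minute=0, second=0, microsecond=0)
    start = top_of_hour - timedelta(hours=1)
    end = top_of_hour - timedelta(seconds=1)
    return start, end

# An n8n trigger at 18:00:15Z collects the 17:00:00-17:59:59 window
trigger = datetime(2025, 1, 18, 18, 0, 15, tzinfo=timezone.utc)
start, end = previous_hour_window(trigger)
# start → 2025-01-18 17:00:00+00:00, end → 2025-01-18 17:59:59+00:00
```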
Functionality:
- Captures complete OHLCV data for all required timeframes (1m, 15m, 1h, 4h)
- Collects account balance (for recent periods only) and grid configuration state
- Stores comprehensive snapshots in time-based directory structure
- Validates data completeness and integrity before storage
- Handles exchange API rate limits and retry logic
- Supports resumable collection for large historical ranges
Interface:
# Scheduled Collection (triggered by n8n)
POST /data/collect
{
"trigger_time": "2025-01-18T18:00:00Z",
"collection_id": "n8n_hourly_2025-01-18T18-00",
"source": "n8n_schedule"
}
# Historical Backfill Collection
POST /data/collect/historical
{
"start_time": "2025-01-01T00:00:00Z",
"end_time": "2025-01-01T23:59:59Z",
"symbols": ["ETH/USDT", "BTC/USDT"],
"collection_id": "backfill_2025-01-01",
"source": "manual_backfill"
}
# Batch Historical Collection
POST /data/collect/batch
{
"date_ranges": [
{"start": "2024-01-01T00:00:00Z", "end": "2024-01-31T23:59:59Z"},
{"start": "2024-02-01T00:00:00Z", "end": "2024-02-29T23:59:59Z"}
],
"symbols": ["ETH/USDT", "BTC/USDT"],
"collection_id": "backfill_2024_q1",
"source": "batch_backfill",
"parallel_workers": 4
}Historical Data Requirements:
- 1m data: Up to 21 days (for recent detailed analysis)
- 15m data: Up to 120 days (for medium-term patterns)
- 1h data: Up to 270 days (for long-term trends)
- 4h data: Up to 540 days (for strategic analysis)
Backfill Strategy:
- Start with most recent data and work backwards
- Prioritize timeframes based on analysis needs (1h and 4h first for regime analysis)
- Handle exchange API limitations gracefully (rate limits, data retention periods)
- Resume interrupted collections automatically
- Validate data continuity and completeness
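The "most recent data first, working backwards" strategy amounts to chunking the backfill range in reverse; a minimal sketch (the generator name and one-day chunk size are illustrative):

```python
from datetime import datetime, timedelta

def backfill_chunks(newest, oldest, chunk=timedelta(days=1)):
    """Yield (start, end) collection ranges from the most recent backwards.

    Prioritising recent data means an interrupted backfill still leaves the
    freshest (most analytically useful) periods collected.
    """
    end = newest
    while end > oldest:
        start = max(oldest, end - chunk)
        yield start, end
        end = start

ranges = list(backfill_chunks(datetime(2025, 1, 4), datetime(2025, 1, 1)))
# Most recent day first: (Jan 3-Jan 4), then (Jan 2-Jan 3), then (Jan 1-Jan 2)
```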
Metrics Analyzer
Derives analytical metrics from stored snapshots, implementing the second phase of the snapshot-first architecture.
Core Responsibility:
- Analysis and processing only - uses snapshots as single source of truth
- Performs regime analysis using snapshot data as input
- Generates analytical metrics files with references to underlying snapshots
- No direct exchange API calls - all data comes from snapshots
Functionality:
- Reads raw data from stored snapshots
- Performs regime classification and confidence scoring
- Calculates grid performance and risk assessments
- Generates market summaries and analytical insights
- Creates metrics files with clear data lineage references
- Supports reprocessing of historical snapshots with updated algorithms
Architectural Principles:
- Snapshot-First: Raw data collection precedes all analysis
- Single Source of Truth: All analysis derives from stored snapshots
- Data Separation: Raw snapshots stored separately from processed metrics
- Avoid Duplication: Minute-level prices stored once in snapshots, referenced by metrics
Hourly Pipeline:
Each hour, the Data Collector and Metrics Analyzer run in sequence:
- n8n triggers the pipeline via webhook on its hourly schedule
- The Data Collector captures comprehensive raw data snapshots for the previous complete hour
- The Metrics Analyzer performs regime analysis using the stored snapshot data as input
- The Metrics Analyzer creates separate analytical metrics files with references to the underlying snapshots
- Collection failures are handled gracefully without affecting trading operations
- Collection continuity is maintained through n8n workflow persistence
Two-Component Architecture:
Component 1: Data Collector
- Timing: At time T, capture raw data for the previous complete hour (HH:00:00 to HH:59:59)
- Data Sources: Complete OHLCV data for all timeframes (1m, 15m, 1h, 4h), account balance, grid configuration state
- Storage: Comprehensive snapshots stored in time-based directory structure
- Format: JSON files with complete exchange metadata and collection timestamps
- Retention: Supports historical lookback periods (21 days of 1m, 120 days of 15m, 270 days of 1h, 540 days of 4h)
Component 2: Metrics Analyzer
- Input: Uses stored snapshot data as single source of truth
- Processing: Regime analysis, grid performance calculation, risk assessment
- Output: Analytical metrics files with references to underlying snapshots
- Storage: Separate YAML files per market containing derived analysis only
Snapshot Storage Organization:
The system uses separate files per data type within hourly folders to optimize for different access patterns and reduce duplication of configuration and account data:
snapshots/
├── 2025/01/18/17/
│ ├── configuration.json # System configuration snapshot (shared across all symbols)
│ ├── account.json # Account balance snapshot (shared across all symbols)
│ ├── ETH-USDT_1m.json # 1-minute data snapshot for ETH/USDT
│ ├── BTC-USDT_1m.json # 1-minute data snapshot for BTC/USDT
│ ├── ETH-USDT_15m.json # 15-minute data snapshot for ETH/USDT
│ ├── BTC-USDT_15m.json # 15-minute data snapshot for BTC/USDT
│ ├── ETH-USDT_1h.json # 1-hour data snapshot for ETH/USDT
│ ├── BTC-USDT_1h.json # 1-hour data snapshot for BTC/USDT
│ └── ...
├── 2025/01/18/16/
│ ├── configuration.json # System configuration snapshot
│ ├── account.json # Account balance snapshot
│ ├── ETH-USDT_4h.json # 4-hour data snapshot (covers 16:00-19:59)
│ ├── BTC-USDT_4h.json # 4-hour data snapshot
│ └── ...
└── 2025/01/18/15/
├── configuration.json # System configuration snapshot
├── account.json # Account balance snapshot
└── ...
metrics/
├── 2025/01/18/
│ ├── 17_ETH-USDT.yaml # Analytical metrics (references all timeframes)
│ ├── 17_BTC-USDT.yaml # Analytical metrics (references all timeframes)
│ └── 17_summary.yaml # Cross-market analysis
└── 2025/01/19/
├── 08_ETH-USDT.yaml
└── ...
Benefits of Hourly-Based Organization:
- Reduced Duplication: Configuration and account data stored once per hour, not per symbol/timeframe
- Efficient Access: All data for a specific hour is co-located in one directory
- Simplified Structure: No nested timeframe folders, cleaner organization
- Atomic Collection: All data for an hour can be collected and stored atomically
- Easy Cleanup: Entire hour directories can be removed for retention management
- Cross-Symbol Analysis: Configuration and account data easily accessible for multi-symbol analysis
File Type Descriptions:
- configuration.json: System configuration including grid configs, regime settings, tracked symbols, and other system parameters (shared across all symbols for the hour)
- account.json: Account balance data including total USDT, available USDT, locked USDT, and deployable capital (shared across all symbols for the hour)
- {SYMBOL}_{TIMEFRAME}.json: Market-specific data including minute-level prices, historical OHLCV data, and technical analysis results for the specific symbol and timeframe
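The layout above maps to a simple path convention; a sketch of a path builder (function name is illustrative) that mirrors the `snapshots/YYYY/MM/DD/HH/` structure and the `ETH/USDT` → `ETH-USDT` filename normalization:

```python
from datetime import datetime

def snapshot_path(ts, kind, symbol=None, timeframe=None):
    """Build the hourly snapshot path for a given file type."""
    hour_dir = ts.strftime("snapshots/%Y/%m/%d/%H")
    if kind in ("configuration", "account"):
        return f"{hour_dir}/{kind}.json"  # shared across all symbols
    if not (symbol and timeframe):
        raise ValueError("market snapshots need symbol and timeframe")
    # Symbols use '-' in filenames since '/' is a path separator
    return f"{hour_dir}/{symbol.replace('/', '-')}_{timeframe}.json"

ts = datetime(2025, 1, 18, 17)
assert snapshot_path(ts, "account") == "snapshots/2025/01/18/17/account.json"
assert snapshot_path(ts, "market", "ETH/USDT", "1h") == \
    "snapshots/2025/01/18/17/ETH-USDT_1h.json"
```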
Snapshot Data Structure:
Market Data Snapshot (ETH-USDT_1h.json):
{
  "metadata": {
    "timestamp": "2025-01-18T17:00:00Z",
    "symbol": "ETH/USDT",
    "timeframe": "1h",
    "collection_mode": "real_time",
    "analysis_period": {
      "start": "2025-01-18T17:00:00Z",
      "end": "2025-01-18T17:59:59.999999Z"
    }
  },
  "raw_exchange_data": {
    "candles": [
      /* 48 OHLCV candles for 1h timeframe */
    ],
    "source": "KuCoin",
    "api_endpoint": "/api/v1/market/candles",
    "collection_timestamp": "2025-01-18T18:00:15Z"
  }
}
Configuration Snapshot Data Structure (configuration.json):
{
  "metadata": {
    "timestamp": "2025-01-18T17:00:00Z",
    "collection_mode": "real_time",
    "analysis_period": {
      "start": "2025-01-18T17:00:00Z",
      "end": "2025-01-18T17:59:59.999999Z"
    }
  },
  "grid_config": {
    /* Complete grid configuration for all symbols */
  },
  "system_settings": {
    /* System configuration including regime thresholds */
  },
  "thresholds": {
    /* Analysis thresholds and parameters */
  }
}
Account Snapshot Data Structure (account.json):
{
  "metadata": {
    "timestamp": "2025-01-18T17:00:00Z",
    "collection_mode": "real_time",
    "analysis_period": {
      "start": "2025-01-18T17:00:00Z",
      "end": "2025-01-18T17:59:59.999999Z"
    }
  },
  "account_data": {
    "total_usdt": 1000.0,
    "available_usdt": 750.0,
    "locked_usdt": 250.0,
    "deployable_capital": 700.0
  }
}
Metrics File Structure (References Snapshots):
# 17_ETH-USDT.yaml
config:
  symbol: ETH/USDT
  analyzed_period:
    start: '2025-01-18T17:00:00Z'
    end: '2025-01-18T17:59:59.999999Z'
  data_sources:
    configuration_snapshot: "snapshots/2025/01/18/17/configuration.json"
    account_snapshot: "snapshots/2025/01/18/17/account.json"
    market_snapshots:
      - "snapshots/2025/01/18/17/ETH-USDT_1m.json"    # 1-minute data
      - "snapshots/2025/01/18/17/ETH-USDT_15m.json"   # 15-minute data
      - "snapshots/2025/01/18/17/ETH-USDT_1h.json"    # 1-hour data
      - "snapshots/2025/01/18/16/ETH-USDT_4h.json"    # 4-hour data from previous period
    snapshot_timestamp: "2025-01-18T18:00:15Z"
analysis:
  regime_analysis:
    verdict: RANGE_OK
    confidence: 0.75
    # Derived from snapshot data, no raw prices duplicated
  grid_analysis:
    performance_summary:
      # Calculated from snapshot + configuration
    risk_assessment:
      # Derived analysis only
Data Lineage and Traceability:
- Clear References: Each metrics file references its source snapshot
- Regeneration Capability: All analysis can be regenerated from snapshots
- Historical Analysis: Can reprocess old snapshots with updated algorithms
- Audit Trail: Complete chain from raw data to final analysis
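The regeneration capability follows directly from the lineage references: an updated algorithm can be re-run against exactly the snapshots a metrics file names. A minimal sketch (the `reanalyze` name and the callable-analyzer shape are illustrative assumptions):

```python
import json
from pathlib import Path

def reanalyze(metrics_sources, analyze_fn):
    """Re-run analysis from the snapshots a metrics file references.

    `metrics_sources` is the `data_sources` mapping read from a metrics YAML
    file; `analyze_fn` is any analysis routine (e.g. an updated regime
    classifier) taking {path: snapshot_dict}.
    """
    snapshots = {}
    for path in metrics_sources.get("market_snapshots", []):
        snapshots[path] = json.loads(Path(path).read_text())
    # Lineage preserved: the output records exactly which snapshots were used
    return {"data_sources": metrics_sources, "analysis": analyze_fn(snapshots)}
```

Because the output carries the same `data_sources` mapping forward, a regenerated metrics file remains fully traceable to its raw inputs.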
Historical Data Management:
- Backfill Support: Can collect historical snapshots for missing periods
- Retention Policies: Automated cleanup based on timeframe-specific retention periods
- Re-aggregation: Monthly processes to optimize storage and access patterns
- Efficient Queries: Snapshot organization supports both current analysis and historical lookbacks
Data Collector Interface:
# Scheduled Collection Webhook (from n8n)
POST /data/collect
{
  "trigger_time": "2025-01-18T18:00:00Z",
  "collection_id": "n8n_hourly_2025-01-18T18-00",
  "source": "n8n_schedule"
}

# Historical Backfill Task Invocation
POST /data/collect/historical
{
  "start_time": "2025-01-01T00:00:00Z",
  "end_time": "2025-01-01T23:59:59Z",
  "symbols": ["ETH/USDT", "BTC/USDT"],
  "collection_id": "backfill_2025-01-01",
  "source": "task_invocation"
}

# Data Collector Response
{
  "status": "success",
  "collection_id": "snap_2025-01-18T17-00",
  "analyzed_period": {
    "start": "2025-01-18T17:00:00Z",
    "end": "2025-01-18T17:59:59Z"
  },
  "timestamp": "2025-01-18T18:00:15Z",
  "markets_processed": ["ETH/USDT", "BTC/USDT"],
  "snapshots_created": {
    "configuration": "snapshots/2025/01/18/17/configuration.json",
    "account": "snapshots/2025/01/18/17/account.json",
    "market_data": [
      "snapshots/2025/01/18/17/ETH-USDT_1m.json",
      "snapshots/2025/01/18/17/ETH-USDT_15m.json",
      "snapshots/2025/01/18/17/ETH-USDT_1h.json",
      "snapshots/2025/01/18/17/BTC-USDT_1m.json",
      "snapshots/2025/01/18/17/BTC-USDT_15m.json",
      "snapshots/2025/01/18/17/BTC-USDT_1h.json",
      "snapshots/2025/01/18/16/ETH-USDT_4h.json",
      "snapshots/2025/01/18/16/BTC-USDT_4h.json"
    ]
  },
  "storage_status": "committed_to_git"
}

# Metrics Analyzer Interface
POST /metrics/analyze
{
  "snapshot_directory": "snapshots/2025/01/18/17/",
  "analysis_id": "analysis_2025-01-18T17-00",
  "source": "scheduled_analysis",
  "markets": ["ETH/USDT", "BTC/USDT"]
}

# Metrics Analyzer Response
{
  "status": "success",
  "analysis_id": "analysis_2025-01-18T17-00",
  "analyzed_period": {
    "start": "2025-01-18T17:00:00Z",
    "end": "2025-01-18T17:59:59Z"
  },
  "timestamp": "2025-01-18T18:00:25Z",
  "markets_analyzed": ["ETH/USDT", "BTC/USDT"],
  "metrics_created": ["17_ETH-USDT.yaml", "17_BTC-USDT.yaml"],
  "dashboard_updated": true
}
# Metrics File Example (17_ETH-USDT.yaml) - No Raw Data Duplication
config:
  symbol: ETH/USDT
  analyzed_period:
    start: '2025-01-18T17:00:00Z'
    end: '2025-01-18T17:59:59.999999Z'
  data_sources:
    configuration_snapshot: "snapshots/2025/01/18/17/configuration.json"
    account_snapshot: "snapshots/2025/01/18/17/account.json"
    market_snapshots:
      - "snapshots/2025/01/18/17/ETH-USDT_1m.json"
      - "snapshots/2025/01/18/17/ETH-USDT_15m.json"
      - "snapshots/2025/01/18/17/ETH-USDT_1h.json"
      - "snapshots/2025/01/18/16/ETH-USDT_4h.json"
    snapshot_timestamp: "2025-01-18T18:00:15Z"
  collection_metadata:
    collection_id: snap_2025-01-18T17-00
    collected_at: 2025-01-18T18:00:15Z
    trigger_source: n8n_schedule
analysis:
  regime_analysis:
    verdict: RANGE_OK
    confidence: 0.78
    strength: STRONG
    # Analysis derived from snapshot data
    range_analysis:
      upper_bound: 320000   # Discovered from snapshot OHLCV data
      lower_bound: 290000   # Discovered from snapshot OHLCV data
      range_integrity: 0.85
      closes_inside_range_pct: 88.3
      boundary_violations: 2
      range_duration_hours: 18
      support_strength: 0.82
      resistance_strength: 0.79
    mean_reversion:
      reversion_strength: 0.80
      avg_reversion_time_minutes: 45
      oscillation_frequency: 0.65
      extreme_rejection_rate: 0.75
    volatility_metrics:
      atr_1h: 1525   # Calculated from snapshot 1m data
      volatility_state: STABLE
      volatility_expansion_ratio: 0.95
      volatility_symmetry: 0.88
      baseline_atr: 1600
    time_consistency:
      regime_agreement_pct: 85.7
      regime_flips_1h: 0
      regime_flips_4h: 1
      stability_score: 0.82
    competing_verdicts:
      - verdict: RANGE_WEAK
        weight: 0.18
        reasons: ["Slightly elevated volatility", "One boundary test"]
        supporting_data:
          volatility_expansion_ratio: 1.02
          boundary_test_count: 1
          resistance_strength: 0.79
  market_summary:
    # Derived from snapshot minute data, not duplicated
    opening_price: 311500
    closing_price: 311875
    high_price: 312550
    low_price: 311025
    volume_1h: 2150.75
    change_1h_pct: 0.12
    price_range_pct: 0.48
    volatility_indicator: LOW
  grid_analysis:
    # Grid performance calculated from snapshot + configuration
    performance_summary:
      status: ACTIVE
      estimated_fills: 4
      grid_efficiency: "4/6 levels active"
    risk_metrics:
      stop_loss_risk: MEDIUM
      position_health: WITHIN_GRID
      configuration_drift: false
      reasons: ["Minor volume spike at resistance"]
      supporting_data:
        volume_spike_ratio: 1.12
        resistance_touches_1h: 1
        price_distance_to_resistance: 75   # Price * 100
        volume_24h_percentile: 0.68
  decision_factors:
    primary_drivers:
      - factor: "Strong range integrity with clear support/resistance"
        supporting_data:
          range_integrity: 0.85
          support_strength: 0.82
          resistance_strength: 0.79
          closes_inside_range_pct: 88.3
      - factor: "Consistent mean reversion behavior"
        supporting_data:
          reversion_strength: 0.80
          avg_reversion_time_minutes: 45
          extreme_rejection_rate: 0.75
      - factor: "Stable volatility regime within baseline parameters"
        supporting_data:
          volatility_expansion_ratio: 0.95
          volatility_symmetry: 0.88
          volatility_state: "STABLE"
    supporting_factors:
      - factor: "Extended range duration provides confidence"
        supporting_data:
          range_duration_hours: 18
          boundary_violations: 2
      - factor: "Majority of closes remain within established bounds"
        supporting_data:
          closes_inside_range_pct: 88.3
          total_candles_analyzed: 24
      - factor: "Low regime flip frequency indicates stability"
        supporting_data:
          regime_flips_1h: 0
          regime_flips_4h: 1
          stability_score: 0.82
          regime_agreement_pct: 85.7
    risk_factors:
      - factor: "Slight volatility elevation above baseline"
        supporting_data:
          current_atr: 1525
          baseline_atr: 1600
          volatility_expansion_ratio: 0.95
      - factor: "Recent boundary test at resistance level"
        supporting_data:
          resistance_touches_1h: 1
          resistance_strength: 0.79
          last_touch_minutes_ago: 23
    verdict_rationale: "Range structure remains intact (85% integrity) with strong mean reversion (80% strength) and stable volatility (95% of baseline). Minor elevation in volatility and single boundary test insufficient to override strong structural indicators supporting RANGE_OK verdict."
grid_context:
active_grids:
- grid_id: eth-primary
status: ACTIVE
capital_allocated: 920.00
unrealized_pnl: 15.50
trades_in_hour: 3
# Enhanced Grid Configuration Snapshot
configuration_snapshot:
version: 3
last_updated: 2025-01-18T15:45:00Z
detailed_parameters:
price_range:
upper_bound: 315000 # 3150.00 * 100 (stored as integer)
lower_bound: 285000 # 2850.00 * 100 (stored as integer)
entry_price: 293164 # 2931.64 * 100 (stored as integer)
grid_structure:
number_of_grids: 7
spacing_method: EVEN
amount_per_grid: 116194 # 0.0116194 ETH * 10000000 (stored as integer for precision)
profit_configuration:
profit_per_grid_min: 121 # 1.21% * 100 (stored as integer)
profit_per_grid_max: 134 # 1.34% * 100 (stored as integer)
profit_per_grid_avg: 127 # 1.27% * 100 (stored as integer)
risk_management:
stop_loss_price: 275000 # 2750.00 * 100 (stored as integer)
take_profit_price: null
stop_loss_enabled: true
take_profit_enabled: false
reinvestment_settings:
grid_profit_reinvestment: true
reinvestment_mode: COMPOUND
configuration_source:
source_type: MANUAL_KUCOIN
kucoin_grid_id: "kucoin_grid_12345"
# Real-time validation against current market conditions
validation_status:
price_bounds_valid: true # Current price within configured bounds
stop_loss_distance: 18164 # Distance to stop-loss (181.64 * 100)
stop_loss_risk: LOW
grid_levels_active: 6 # Number of active grid levels
expected_levels: 6 # Expected active levels (one filled)
configuration_drift: false # Whether config differs from actual behavior
account_snapshot:
total_usdt: 1000.00
available_usdt: 950.00
locked_usdt: 50.00
deployable_capital: 920.00
reserve_required: 30.00
# Cross-Market Summary File (17_summary.yaml) - Optional
cross_market_summary:
analyzed_period:
start: 2025-01-18T17:00:00Z
end: 2025-01-18T17:59:59Z
collection_metadata:
collection_id: snap_2025-01-18T17-00
collected_at: 2025-01-18T18:00:15Z
markets_analyzed: ["ETH/USDT", "BTC/USDT"]
portfolio_summary:
total_capital: 1000.00
deployed_capital: 920.00
total_unrealized_pnl: 15.50
active_grids: 1
market_correlation:
eth_btc_correlation: 0.85
regime_agreement: 0.75 # Percentage of markets in same regime
system_health:
all_markets_healthy: true
regime_stability: HIGH
collection_success_rate: 0.98
Error Handling:
- Collection Failures: Return error status to n8n, n8n handles retry logic
- Partial Data: Collect available data, mark missing fields as null, report partial success
- Storage Failures: Return error to n8n, n8n can retry or alert on persistent failures
- Market Data Gaps: Handle missing minute data gracefully, interpolate where appropriate
- File Conflicts: Use atomic writes and file locking to prevent corruption during re-runs
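The atomic-write pattern referenced above can be sketched as follows. This is a minimal illustration, not the documented interface; `write_snapshot_atomically` is a hypothetical helper name. The idea is to write to a temporary file in the same directory and rename it into place, so a crash or concurrent re-run never leaves a half-written YAML file.

```python
import os
import tempfile

def write_snapshot_atomically(path: str, content: str) -> None:
    """Write `content` to `path` atomically: write to a temp file in the
    same directory, flush to disk, then rename over the target. On POSIX,
    os.replace() is atomic, so readers never observe a partial file."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # atomic rename over the final location
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Re-running a collection for the same hour simply replaces the file in one step, which is what makes the re-run support safe.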
Metrics History
Manages persistent storage of system snapshots in YAML format with Git version control in the market-maker-data repository, using enhanced per-market file organization and minute-level data granularity.
Enhanced Storage Strategy:
- Repository: git@github.com:craigedmunds/market-maker-data.git
- Per-Market Files: Individual files for each market (e.g., 17_ETH-USDT.yaml, 17_BTC-USDT.yaml)
- Minute-Level Data: Each file contains 60 minute-level price points for the analyzed hour
- File Naming: metrics/YYYY/MM/DD/HH_SYMBOL.yaml (e.g., metrics/2025/01/18/17_ETH-USDT.yaml)
- Git Integration: Automatic commits with descriptive messages to the market-maker-data repository
- Re-run Support: Updates existing files when collection is re-run for the same time period
- Retention: Configurable retention policy (default: keep all data)
- Compression: Optional compression for older data
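The `metrics/YYYY/MM/DD/HH_SYMBOL.yaml` convention can be sketched as a small path builder (`metrics_file_path` is an illustrative name, not from the codebase); the symbol's `/` is flattened to `-` for filesystem safety:

```python
from datetime import datetime, timezone

def metrics_file_path(collected_at: datetime, symbol: str) -> str:
    """Build the per-market metrics path, e.g.
    metrics/2025/01/18/17_ETH-USDT.yaml for ETH/USDT at 17:00 UTC."""
    ts = collected_at.astimezone(timezone.utc)
    return (
        f"metrics/{ts:%Y}/{ts:%m}/{ts:%d}/"
        f"{ts:%H}_{symbol.replace('/', '-')}.yaml"
    )
```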
File Organization Benefits:
- Performance: Faster loading of specific market data for dashboard generation
- Scalability: Easy addition of new markets without affecting existing file structure
- Granularity: Minute-level price data enables detailed trend analysis
- Maintenance: Easier to manage and backup individual market histories
- Parallel Processing: Multiple markets can be processed and stored concurrently
Deployment Modes:
- Local Development: Direct file system access to co-located ../market-maker-data/ directory
- Production: Git pull/push operations to remote repository with conflict resolution
- Hybrid: Local writes with periodic sync to remote repository for backup and sharing
Data Storage and Git Integration
The system uses a dual-mode approach for data storage to support both local development and production deployment scenarios.
Storage Modes
Local Development Mode:
- Path: Relative path to ../market-maker-data/ directory
- Git Branch: Automatic feature branch creation (e.g., dev-<username>-<timestamp>)
- Operations: Direct file system read/write operations on feature branch
- Git: Commits to feature branch, never merges to main
- Benefits: Fast iteration, isolated from production data, no pollution of live data
- Use Case: Development, debugging, testing
- Branch Cleanup: Optional periodic cleanup of old development branches
Production Mode:
- Path: Configurable absolute path or relative path to cloned repository
- Git Branch: Always operates on main branch
- Operations: File write followed by git add/commit/push operations
- Git: Automatic pull before write, push after commit, conflict resolution
- Benefits: Centralized data, backup, multi-instance coordination
- Use Case: Live trading, production deployment
Staging Mode:
- Path: Configurable path to cloned repository
- Git Branch: Dedicated staging branch (e.g., staging)
- Operations: Similar to production but on staging branch
- Git: Can be merged to main after validation
- Benefits: Production-like testing without affecting live data
- Use Case: Pre-production validation, integration testing
Configuration
# Storage Configuration Example
storage:
mode: PRODUCTION # LOCAL | PRODUCTION | STAGING
local:
data_path: "../market-maker-data"
branch_strategy: FEATURE_BRANCH # FEATURE_BRANCH | NO_BRANCH
branch_prefix: "dev"
auto_commit: true
auto_push: false # Never push development branches
cleanup_old_branches: true
cleanup_after_days: 7
production:
data_path: "/opt/regime-management/data"
repository_url: "git@github.com:craigedmunds/market-maker-data.git"
branch: "main"
auto_pull: true
auto_push: true
conflict_resolution: ABORT_ON_CONFLICT # ABORT_ON_CONFLICT | FORCE_PUSH | MERGE_STRATEGY
ssh_key_path: "/opt/regime-management/.ssh/id_rsa"
staging:
data_path: "/opt/regime-management/staging-data"
repository_url: "git@github.com:craigedmunds/market-maker-data.git"
branch: "staging"
auto_pull: true
auto_push: true
merge_to_main: false # Manual merge after validation
Git Operations Workflow
Local Development Sequence:
- Branch Check: Verify current branch or create new feature branch
- Branch Creation: git checkout -b dev-<username>-<timestamp> if needed
- Write Data: Write YAML file to local repository on feature branch
- Git Add: git add <file_path>
- Git Commit: git commit -m "Add <record_type> <timestamp> [DEV]"
- No Push: Development branches are never pushed to remote
- Branch Isolation: Each development session uses an isolated branch
Production Write Sequence:
- Branch Check: Ensure on main branch
- Pre-write Pull: git pull origin main to get latest changes
- Conflict Check: Verify no conflicts in target file path
- Write Data: Write YAML file to local repository
- Git Add: git add <file_path>
- Git Commit: git commit -m "Add <record_type> <timestamp> [PROD]"
- Git Push: git push origin main
- Error Handling: Retry on network failures, abort on conflicts
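As a dry-run illustration of the production sequence above, the helper below (hypothetical, not the actual implementation) assembles the ordered git commands for a single record; an executor would run each via subprocess, retrying on network failures and aborting on conflicts:

```python
def production_git_commands(file_path: str, record_type: str, timestamp: str) -> list[list[str]]:
    """Return the ordered git commands for a production write: branch check,
    pre-write pull, stage the new file, commit with a [PROD] tag, then push."""
    return [
        ["git", "checkout", "main"],            # branch check
        ["git", "pull", "origin", "main"],      # pre-write pull
        ["git", "add", file_path],              # stage the YAML file
        ["git", "commit", "-m", f"Add {record_type} {timestamp} [PROD]"],
        ["git", "push", "origin", "main"],      # publish to remote
    ]
```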
Staging Write Sequence:
- Branch Check: Ensure on staging branch
- Pre-write Pull: git pull origin staging to get latest changes
- Write Data: Write YAML file to local repository
- Git Add: git add <file_path>
- Git Commit: git commit -m "Add <record_type> <timestamp> [STAGING]"
- Git Push: git push origin staging
- Manual Merge: Staging branch can be merged to main after validation
Branch Management Strategies:
- Feature Branch Naming: dev-<username>-<timestamp> or dev-<username>-<session-id>
- Branch Lifecycle: Created per development session, never merged, cleaned up periodically
- Data Isolation: Development data completely separate from production
- Testing: Can test full git workflows without affecting production data
- Cleanup: Automatic cleanup of branches older than configured days
Conflict Resolution Strategies:
- ABORT_ON_CONFLICT: Stop operation, alert, require manual intervention
- FORCE_PUSH: Overwrite remote (dangerous, use only in single-instance deployments)
- MERGE_STRATEGY: Use timestamp-based file naming to avoid conflicts
Network Failure Handling:
- Queue Locally: Store records in local queue during network outages
- Retry Logic: Exponential backoff for git operations
- Batch Sync: Push multiple queued records in single operation when connectivity restored
- Alert System: Notify on persistent sync failures
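The retry logic can be sketched as a generic exponential-backoff wrapper. This is illustrative: the attempt count and delay values are assumptions, not documented settings, and the `sleep` parameter exists only to make the sketch testable.

```python
import time

def with_backoff(operation, attempts=5, base_delay=1.0, max_delay=60.0, sleep=time.sleep):
    """Retry `operation` with exponential backoff (1s, 2s, 4s, ... capped at
    `max_delay`). Re-raises the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # persistent failure: caller can alert
            sleep(min(base_delay * (2 ** attempt), max_delay))
```

Queued records would be pushed through such a wrapper in a single batch once connectivity returns.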
File Locking and Concurrency
Local Development:
- Branch Isolation: Each developer works on separate feature branch
- No Conflicts: Development branches never merge, eliminating conflicts
- Git History: Full version control within development branch
Production:
- File Locking: Prevent concurrent writes to same file
- Atomic Operations: Write to temp file, then move to final location
- Timestamp Precision: Microsecond precision to avoid filename conflicts
- Instance Coordination: Use instance ID in commit messages for multi-instance deployments
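A conflict-avoiding record name combining microsecond-precision timestamps with an instance ID might look like the sketch below; the exact format is an assumption for illustration, not the documented naming scheme.

```python
from datetime import datetime, timezone

def unique_record_name(record_type: str, instance_id: str, now=None) -> str:
    """Name records with a microsecond-precision UTC timestamp plus an
    instance ID so concurrent writers never collide on a filename."""
    ts = (now or datetime.now(timezone.utc)).strftime("%Y-%m-%dT%H-%M-%S-%f")
    return f"{record_type}_{ts}_{instance_id}.yaml"
```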
Branch Cleanup:
- Automatic Cleanup: Remove development branches older than configured threshold
- Manual Cleanup: git branch --list "dev-*" | xargs git branch -D to remove all development branches (git branch -D does not accept glob patterns directly)
- Selective Cleanup: Keep branches with specific tags or recent activity
Backup and Recovery
Local Development:
- Branch History: Full version control within development branch
- No Remote Backup: Development data stays local (by design)
- Manual Export: Can export specific records for testing in other environments
Production:
- Remote Repository: Automatic backup through git push to main branch
- Local Retention: Configurable local file retention policy
- Disaster Recovery: Clone repository to restore full history
- Point-in-Time Recovery: Git checkout to specific commit for data recovery
Data Migration:
- Dev to Staging: Manual copy of specific records for testing
- Staging to Production: Merge staging branch to main after validation
- Production to Dev: Pull production data to development branch for debugging
Deployment and Configuration Management
Container Deployment Architecture
The system supports flexible deployment configurations through environment variable overrides and standardized task execution methods.
Task-Based Execution:
All deployment environments use task-based execution methods, with different tasks for different deployment scenarios:
# Local Development (no Git operations)
$ task collect
# Container Deployment with Git workflow
$ task collect:git
Task Hierarchy:
# Taskfile.yml structure
tasks:
collect:
desc: "Run metrics collection (local development)"
cmds:
- python collect_metrics.py {{.CLI_ARGS}}
collect:git:
desc: "Run metrics collection with Git workflow (for deployments)"
cmds:
- task: collect:git:setup
- task: collect
- task: collect:git:commit
collect:git:setup:
desc: "Clone and setup Git repository"
cmds:
- git clone ${REPO_URL} /tmp/market-maker-data
- cd /tmp/market-maker-data && git config user.name "Hourly Collection" && git config user.email "collector@example.com"
- mkdir -p /app/data/metrics && cp -r /tmp/market-maker-data/metrics/* /app/data/metrics/ || true
collect:git:commit:
desc: "Commit and push results to Git"
cmds:
- cp -r /app/data/metrics/* /tmp/market-maker-data/metrics/
- cd /tmp/market-maker-data && git add .
- cd /tmp/market-maker-data && git commit -m "Hourly collection: $(date -u +%Y-%m-%d_%H:%M:%S)" || true
- cd /tmp/market-maker-data && git push origin main
Benefits of Task Standardization:
- Consistency: Same core collection logic (task collect) in all environments
- Modularity: Git operations separated into reusable tasks
- Testability: Can run task collect:git locally to simulate production
- Maintainability: Single source of truth for collection logic, modular Git operations
- Debugging: Easier troubleshooting with standardized task structure
Environment Variable Configuration System
The system implements a convention-based environment variable override system that allows any configuration value in environment.yaml to be overridden via environment variables.
Convention-Based Naming:
Environment variables follow the pattern: MARKET_MAKER_<CONFIG_PATH>
# environment.yaml structure
data:
repository:
base_path: "/Users/craig/src/ctoaas/market-maker-data"
api:
kucoin:
sandbox: false
regime:
confidence:
min_threshold: 0.6
# Environment variable overrides
MARKET_MAKER_DATA_REPOSITORY_BASE_PATH="/app/data"
MARKET_MAKER_API_KUCOIN_SANDBOX="true"
MARKET_MAKER_REGIME_CONFIDENCE_MIN_THRESHOLD="0.7"
Configuration Loading Priority:
- Default Values: Built-in system defaults
- environment.yaml: File-based configuration
- Environment Variables: Runtime overrides (highest priority)
Implementation Example:
def load_configuration():
    """Load configuration with environment variable overrides"""
    # Load base configuration from environment.yaml
    config = load_yaml_config("config/environment.yaml")
    # Apply environment variable overrides
    prefix = "MARKET_MAKER_"
    for env_var, value in os.environ.items():
        if env_var.startswith(prefix):
            # MARKET_MAKER_DATA_REPOSITORY_BASE_PATH -> ["data", "repository", "base", "path"]
            tokens = env_var[len(prefix):].lower().split("_")
            set_nested_config_value(config, tokens, value)
    return config
def set_nested_config_value(config, tokens, value):
    """Set a nested configuration value from environment-variable tokens.
    Tokens are matched greedily against existing keys, so underscores inside
    a key (e.g. base_path) are distinguished from path separators."""
    current = config
    while len(tokens) > 1:
        for j in range(len(tokens), 0, -1):
            key = "_".join(tokens[:j])
            if j < len(tokens) and isinstance(current.get(key), dict):
                current, tokens = current[key], tokens[j:]
                break
            if j == len(tokens) and key in current:
                tokens = [key]
                break
        else:
            # Unknown key: fall back to one token per nesting level
            current = current.setdefault(tokens[0], {})
            tokens = tokens[1:]
    key = tokens[0]
    # Type conversion based on existing value type
    existing = current.get(key)
    if isinstance(existing, bool):
        current[key] = value.lower() in ("true", "1", "yes")
    elif isinstance(existing, (int, float)):
        current[key] = type(existing)(value)
    else:
        current[key] = value
Container Configuration
Dockerfile Enhancements:
FROM python:3.11-slim
# Install system dependencies including Task runner
RUN apt-get update && apt-get install -y \
git \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install Task runner
RUN sh -c "$(curl -sL https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
# Set working directory
WORKDIR /app
# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy source code and Taskfile
COPY . .
COPY Taskfile.yml .
# Create non-root user
RUN useradd --create-home --shell /bin/bash app && chown -R app:app /app
USER app
# Expose port
EXPOSE 8000
# Default command uses task system
CMD ["task", "run"]
Kubernetes CronJob Configuration
Simplified CronJob with Task-Based Execution:
apiVersion: batch/v1
kind: CronJob
metadata:
name: hourly-regime-detection
labels:
app: metrics-service
component: hourly-collection
spec:
schedule: "1 * * * *"
timeZone: "UTC"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
template:
metadata:
labels:
app: metrics-service
component: hourly-collection
job-type: scheduled-collection
spec:
restartPolicy: OnFailure
imagePullSecrets:
- name: gh-docker-registry-creds
containers:
- name: hourly-collector
image: metrics-service:latest
command: ["task", "collect:git"]
env:
# Environment variable overrides for container deployment
- name: MARKET_MAKER_DATA_REPOSITORY_BASE_PATH
value: "/app/data"
# GITHUB_TOKEN is defined before REPO_URL so the $(GITHUB_TOKEN) reference can expand
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
name: gh-oauth
key: GITHUB_TOKEN
- name: REPO_URL
value: "https://$(GITHUB_TOKEN)@github.com/craigedmunds/market-maker-data.git"
# API credentials (unchanged)
- name: KUCOIN_API_KEY
valueFrom:
secretKeyRef:
name: metrics-service-credentials
key: kucoin-api-key
# ... other credentials
volumeMounts:
- name: data-volume
mountPath: /app/data
- name: config-volume
mountPath: /app/config
readOnly: true
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: data-volume
emptyDir: {}
- name: config-volume
configMap:
name: metrics-service-config
Alternative: Local Storage Only Mode
For deployments that don’t need Git integration, use the existing task collect:
# Use existing task collect for local storage only
command: ["task", "collect"]
args: ["--", "--source=cronjob_hourly"] # "--" passes the flag through to {{.CLI_ARGS}}
env:
- name: MARKET_MAKER_DATA_REPOSITORY_BASE_PATH
value: "/app/data"
Key Improvements:
- Modular Tasks: Uses task collect:git, which encapsulates the Git workflow
- Reusable Core: Core collection logic (task collect) remains unchanged
- Local Testability: Can run task collect:git locally to test the full workflow
- Flexible Deployment: Can use either the Git workflow or local-only mode
Configuration Validation
Startup Validation:
def validate_deployment_configuration():
"""Validate configuration for deployment environment"""
config = load_configuration()
# Validate required paths exist
data_path = config.data.repository.base_path
if not os.path.exists(data_path):
os.makedirs(data_path, exist_ok=True)
logger.info(f"Created data directory: {data_path}")
# Validate API credentials
if not all([
config.api.kucoin.api_key,
config.api.kucoin.api_secret,
config.api.kucoin.api_passphrase
]):
raise ConfigurationError("Missing required KuCoin API credentials")
# Validate task system availability
if not shutil.which("task"):
raise ConfigurationError("Task runner not found in PATH")
logger.info("Deployment configuration validated successfully")
return config
Environment-Specific Configurations:
# Development Override Example
MARKET_MAKER_DATA_REPOSITORY_BASE_PATH="/tmp/dev-data"
MARKET_MAKER_API_KUCOIN_SANDBOX="true"
MARKET_MAKER_REGIME_CONFIDENCE_MIN_THRESHOLD="0.5" # Lower threshold for testing
# Production Override Example
MARKET_MAKER_DATA_REPOSITORY_BASE_PATH="/app/data"
MARKET_MAKER_API_KUCOIN_SANDBOX="false"
MARKET_MAKER_REGIME_CONFIDENCE_MIN_THRESHOLD="0.7" # Higher threshold for production
# Container Override Example
MARKET_MAKER_GIT_AUTO_PUSH="false" # Disable Git operations
MARKET_MAKER_STORAGE_MODE="local" # Use local storage only
MARKET_MAKER_LOG_LEVEL="INFO" # Container-appropriate logging
This deployment architecture provides:
- Flexibility: Any configuration can be overridden via environment variables
- Consistency: Same task-based execution across all environments
- Simplicity: Removes complex deployment-specific logic
- Testability: Local simulation of production behavior
- Maintainability: Single source of truth for execution methods
Data Organization
# Enhanced Directory Structure in market-maker-data Repository
market-maker-data/
├── decisions/
│ ├── 2025/
│ │ ├── 01/
│ │ │ ├── 17/
│ │ │ │ ├── rec_2025-01-17T14-00_ETH_1h.yaml
│ │ │ │ ├── rec_2025-01-17T15-30_ETH_1h.yaml
│ │ │ │ └── ...
│ │ │ └── 18/
│ │ │ ├── rec_2025-01-18T09-15_ETH_1h.yaml
│ │ │ └── ...
│ │ └── ...
├── metrics/
│ ├── 2025/
│ │ ├── 01/
│ │ │ ├── 17/
│ │ │ │ ├── 14_ETH-USDT.yaml # ETH data for 2025-01-17 14:00-14:59
│ │ │ │ ├── 14_BTC-USDT.yaml # BTC data for 2025-01-17 14:00-14:59
│ │ │ │ ├── 14_summary.yaml # Cross-market summary (optional)
│ │ │ │ ├── 15_ETH-USDT.yaml # ETH data for 2025-01-17 15:00-15:59
│ │ │ │ ├── 15_BTC-USDT.yaml # BTC data for 2025-01-17 15:00-15:59
│ │ │ │ ├── 15_summary.yaml # Cross-market summary (optional)
│ │ │ │ └── ...
│ │ │ └── 18/
│ │ │ ├── 00_ETH-USDT.yaml # ETH data for 2025-01-18 00:00-00:59
│ │ │ ├── 00_BTC-USDT.yaml # BTC data for 2025-01-18 00:00-00:59
│ │ │ ├── 00_summary.yaml # Cross-market summary (optional)
│ │ │ ├── 01_ETH-USDT.yaml # ETH data for 2025-01-18 01:00-01:59
│ │ │ ├── 01_BTC-USDT.yaml # BTC data for 2025-01-18 01:00-01:59
│ │ │ ├── 01_summary.yaml # Cross-market summary (optional)
│ │ │ └── ...
│ │ └── ...
├── config/
│ ├── system_config_v1.yaml
│ ├── grid_definitions_v2.yaml
│ └── ...
└── index.yaml # Repository metadata and collection status
Enhanced Index File:
# Repository Index Example (market-maker-data/index.yaml)
repository_index:
repository: "git@github.com:craigedmunds/market-maker-data.git"
version: "2.0" # Updated for per-market file structure
created_at: 2025-01-17T14:00:00Z
last_updated: 2025-01-18T18:00:00Z
data_types:
decisions:
total_records: 156
last_record: 2025-01-18T17:00:00Z
path_pattern: "decisions/YYYY/MM/DD/rec_*.yaml"
metrics:
total_snapshots: 94 # 47 hours × 2 markets
markets_tracked: ["ETH/USDT", "BTC/USDT"]
last_collection: 2025-01-18T17:00:00Z
path_pattern: "metrics/YYYY/MM/DD/HH_SYMBOL.yaml"
minute_data_points: 5640 # 94 snapshots × 60 minutes
config:
current_version: "v2"
last_updated: 2025-01-15T10:30:00Z
path_pattern: "config/*_v*.yaml"
retention_policy:
keep_all: true
compress_after_days: 90
collection_stats:
successful_collections: 46
failed_collections: 1
last_failure: 2025-01-17T18:00:00Z
failure_reason: "Exchange API timeout"
market_stats:
ETH/USDT:
files_count: 47
data_points: 2820 # 47 hours × 60 minutes
last_collection: 2025-01-18T17:00:00Z
file_size_avg_kb: 15.2
BTC/USDT:
files_count: 47
data_points: 2820 # 47 hours × 60 minutes
last_collection: 2025-01-18T17:00:00Z
file_size_avg_kb: 14.8
file_structure:
per_market_files: true
minute_level_data: true
cross_market_summaries: true
rerun_support: true
Static Dashboard
Generates a self-contained HTML dashboard with embedded JavaScript charts for visualizing metrics history with enhanced support for minute-level data and multi-market aggregation.
Enhanced Dashboard Features:
- Responsive Design: Works on desktop and mobile devices
- Interactive Charts: Zoom, pan, hover tooltips using Chart.js or similar
- Minute-Level Granularity: Detailed price movement visualization within each hour
- Multi-Market Aggregation: Loads and combines data from individual market files
- Time Range Controls: 24h, 7d, 30d, all time views with minute-level detail
- Real-time Updates: Regenerated after each metrics collection
- Offline Capable: No external dependencies, works without internet
- Per-Market Views: Individual market analysis alongside portfolio overview
Enhanced Chart Types:
- Account Balance Timeline: Total, available, deployed capital over time
- Grid Performance: Individual grid P&L, trade frequency, capital utilization
- Market Price Charts: Minute-level price movements with regime classification overlays
- Regime Analysis: Confidence trends, verdict distribution, regime stability per market
- Cross-Market Correlation: Market correlation analysis and regime agreement
- Portfolio Overview: Combined metrics across all tracked markets
Data Loading Strategy:
- File Aggregation: Automatically loads and combines per-market YAML files
- Efficient Loading: Loads only required time ranges to optimize performance
- Caching: Caches processed data to improve dashboard generation speed
- Error Handling: Gracefully handles missing files or incomplete data
Interface:
# Enhanced Dashboard Configuration Example
dashboard_config:
title: "Regime Management System Metrics"
refresh_interval: 3600 # Regenerate every hour
default_time_range: "7d"
minute_level_detail: true
data_sources:
markets: ["ETH/USDT", "BTC/USDT"]
file_pattern: "metrics/YYYY/MM/DD/HH_SYMBOL.yaml"
aggregation_strategy: "load_and_merge"
charts:
- id: account_balance
title: "Account Balance Over Time"
type: line
data_sources: [account.total_usdt, account.available_usdt, account.deployable_capital]
y_axis: "USDT"
granularity: "hourly"
- id: market_prices_eth
title: "ETH/USDT Price Movement (Minute-Level)"
type: candlestick_overlay
data_sources: [ETH-USDT.minute_prices, ETH-USDT.regime_analysis.verdict]
y_axis: "Price"
overlay: regime_classification
granularity: "minute"
time_aggregation: "1h_windows"
- id: market_prices_btc
title: "BTC/USDT Price Movement (Minute-Level)"
type: candlestick_overlay
data_sources: [BTC-USDT.minute_prices, BTC-USDT.regime_analysis.verdict]
y_axis: "Price"
overlay: regime_classification
granularity: "minute"
time_aggregation: "1h_windows"
- id: grid_performance
title: "Grid Performance by Market"
type: multi_line
data_sources: ["*.grid_context.active_grids.*.unrealized_pnl"] # Quoted: "*" is reserved in YAML
y_axis: "USDT"
group_by: [market, grid_id]
- id: regime_confidence_comparison
title: "Regime Confidence by Market"
type: multi_line
data_sources: ["*.regime_analysis.confidence"]
y_axis: "Confidence (0-1)"
group_by: market
thresholds: [0.55, 0.75] # Safety and strong thresholds
- id: cross_market_correlation
title: "Market Correlation Analysis"
type: correlation_matrix
data_sources: ["*.minute_prices"]
calculation: "rolling_correlation_1h"
update_frequency: "hourly"
Enhanced Generated HTML Structure:
<!DOCTYPE html>
<html>
<head>
<title>Regime Management System Metrics</title>
<!-- For the documented offline capability, Chart.js should be embedded inline rather than loaded from a CDN -->
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-date-fns"></script>
<style>/* Embedded responsive CSS with minute-level chart support */</style>
</head>
<body>
<div class="dashboard">
<header>
<h1>Regime Management System Metrics</h1>
<div class="time-controls">
<button data-range="24h">24H</button>
<button data-range="7d" class="active">7D</button>
<button data-range="30d">30D</button>
<button data-range="all">All</button>
</div>
<div class="granularity-controls">
<button data-granularity="minute">Minute</button>
<button data-granularity="hour" class="active">Hour</button>
<button data-granularity="day">Day</button>
</div>
</header>
<div class="market-tabs">
<button data-market="all" class="active">Portfolio</button>
<button data-market="ETH-USDT">ETH/USDT</button>
<button data-market="BTC-USDT">BTC/USDT</button>
</div>
<div class="charts-grid">
<div class="chart-container">
<canvas id="account_balance"></canvas>
</div>
<div class="chart-container">
<canvas id="market_prices_eth"></canvas>
</div>
<div class="chart-container">
<canvas id="market_prices_btc"></canvas>
</div>
<div class="chart-container">
<canvas id="regime_confidence_comparison"></canvas>
</div>
<!-- More charts... -->
</div>
</div>
<script>
// Embedded aggregated metrics data from multiple market files
const aggregatedMetricsData = {
"ETH-USDT": /* Processed data from ETH-USDT_*.yaml files */,
"BTC-USDT": /* Processed data from BTC-USDT_*.yaml files */,
"portfolio": /* Cross-market aggregated data */
};
// Enhanced Chart.js initialization with minute-level support
// Market-specific chart rendering
// Time range and granularity controls
// Regime overlay rendering
</script>
</body>
</html>
Data Aggregation Process:
- File Discovery: Scan metrics directory for market-specific files in requested time range
- Data Loading: Load YAML files for each market and time period
- Time Alignment: Align minute-level data across markets for correlation analysis
- Aggregation: Combine market data for portfolio-level metrics
- Chart Data Preparation: Format data for Chart.js consumption with appropriate granularity
- HTML Generation: Embed processed data and generate self-contained HTML file
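Step 1 (file discovery) can be sketched as enumerating the expected per-market paths for an hour range; `expected_metric_files` is an illustrative helper name, and a real implementation would then filter the list against files actually present on disk.

```python
from datetime import datetime, timedelta

def expected_metric_files(start, hours, markets):
    """List the per-market file paths the dashboard should load for
    `hours` consecutive hours beginning at `start` (UTC)."""
    paths = []
    for h in range(hours):
        ts = start + timedelta(hours=h)
        for symbol in markets:
            paths.append(
                f"metrics/{ts:%Y}/{ts:%m}/{ts:%d}/{ts:%H}_{symbol.replace('/', '-')}.yaml"
            )
    return paths
```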
Data Models
Market Data
# Market Data Structure
market_data:
symbol: ETH/USDT
timeframe: 1h
source: KUCOIN
timestamp: 2025-01-18T12:00:00Z
candles:
- timestamp: 2025-01-18T11:00:00Z
open: 3100.50
high: 3120.75
low: 3095.25
close: 3115.00
volume: 1250.75
- timestamp: 2025-01-18T10:00:00Z
open: 3095.00
high: 3105.25
low: 3088.50
close: 3100.50
volume: 980.25
System Configuration
# System Configuration Example
system_config:
capital:
mode: SINGLE_GRID
min_free_balance_pct: 3
regime:
confidence_stop_threshold: 0.55
confidence_strong_threshold: 0.75
automation:
grid_creation:
enabled: false
min_confidence: 0.80
timeouts:
transition_minutes: 30
trend_minutes: 15
cooldowns:
after_grid_stop_minutes: 60
after_failed_setup_minutes: 120
notifications:
n8n:
enabled: true
webhook_url: "https://n8n.example.com/webhook/regime-management"
grids:
- id: eth-primary
exchange: kucoin
symbol: ETH/USDT
enabled: true
role: PRIMARY
allocation:
preferred: MAX # Maximum deployable capital after reserve and fee buffer, bounded by absolute_cap if present
absolute_cap: null
automation:
allow_auto_create: false
constraints:
max_concurrent: 1
Account and Trading
# Account Balance
account_balance:
total_usdt: 1000.00
available_usdt: 950.00
locked_usdt: 50.00
reserve_required: 30.00 # 3% of total
deployable_capital: 920.00
# Trade Record
trade:
id: trade_12345
grid_id: eth-primary
symbol: ETH/USDT
side: BUY
quantity: 0.1
price: 3100.00
fee: 0.31 # Actual fee from exchange fill
timestamp: 2025-01-18T12:05:00Z
# Fee Handling Examples
fee_estimate:
estimated_pct_per_month: 4.2
estimated_usd_per_month: 12.50
basis: "Based on 8 trades/day at 0.1% fee rate with 920 USDT capital"
used_for: GRID_SETUP_ONLY # Never used for decisions or automation
fee_actual:
trade_id: trade_12345
fee_usdt: 0.31
fee_rate: 0.001 # 0.1%
source: KUCOIN_FILL
used_for: EVALUATION_ONLY
Fee Handling Rules:
- Estimates: Used only in Grid_Setup recommendations for user information
- Actuals: Pulled from Trade.fee after execution
- Evaluation: Uses actual fees only, never estimates
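The evaluation rule (actual fees only, never estimates) can be illustrated with a small helper; the trade dicts mirror the Trade Record structure above, and the function names are hypothetical.

```python
def realized_fees(trades):
    """Sum actual exchange fees from executed fills. Estimates are never
    consulted here, per the 'evaluation uses actuals only' rule."""
    return sum(t["fee"] for t in trades)

def net_realized_pnl(gross_pnl, trades):
    """Net PnL for evaluation = gross realized PnL minus actual fees."""
    return gross_pnl - realized_fees(trades)
```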
Grid Ladder Calculation
Known Issue - Grid Ladder Spacing Calculation:
The grid ladder calculation in _compute_grid_ladder (metrics/history.py) has been updated to match KuCoin’s actual grid behavior, but several aspects remain KuCoin-specific and should be made configurable for other exchanges:
KuCoin-Specific Behaviors:
- Price Rounding: KuCoin rounds DOWN to the nearest 0.05 tick size
  - Implemented in _apply_kucoin_price_rounding()
  - Other exchanges may use different tick sizes or rounding modes
- Level Numbering: KuCoin numbers levels 1-N from highest to lowest
  - Level 1 = highest buy price (top of grid)
  - Level N = lowest buy price (bottom of grid)
  - Other exchanges may number differently
- Ladder Ordering: KuCoin displays level 1 first (highest at top)
  - Implemented via list(reversed(ladder))
  - Other exchanges may display differently
Generic Implementation:
# Generic spacing formula (exchange-agnostic)
spacing = (upper - lower) / N
# Buy prices: lower, lower+spacing, ..., lower+(N-1)*spacing
# Sell prices: buy price of level above (or upper bound for top level)
Current Implementation (KuCoin-specific):
# 1. Calculate generic even spacing
buy_prices = _build_even_grid_prices(lower, upper, N)
# 2. Apply KuCoin rounding (round DOWN to 0.05)
buy_prices = _apply_kucoin_price_rounding(buy_prices)
# 3. Number levels from highest to lowest (N, N-1, ..., 1)
level = N - i
# 4. Reverse ladder so level 1 appears first
return list(reversed(ladder))
Future Refactoring:
- Move exchange-specific logic to src/exchanges/kucoin.py
- Create an exchange interface for grid ladder calculation
- Make tick size, rounding mode, and level numbering configurable per exchange
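Under the generic formula above, an exchange-agnostic ladder builder with a pluggable rounding step might look like this. It is a sketch, not the actual _compute_grid_ladder: the function name and Decimal-based tick handling are illustrative assumptions.

```python
from decimal import Decimal, ROUND_DOWN

def even_grid_buy_prices(lower, upper, n, tick=Decimal("0.05")):
    """Evenly spaced buy ladder from `lower` upward: spacing = (upper - lower) / n.
    The rounding step is the exchange-specific part; KuCoin rounds DOWN to
    its 0.05 tick, other exchanges would supply a different tick or mode."""
    lower, upper = Decimal(lower), Decimal(upper)
    spacing = (upper - lower) / n
    raw = [lower + i * spacing for i in range(n)]
    # Exchange-specific: quantize each level down to the tick size
    return [(p / tick).to_integral_value(rounding=ROUND_DOWN) * tick for p in raw]
```

For the eth-primary configuration above (2850.00 to 3150.00, 7 grids) this yields buy levels starting 2850.00, 2892.85, 2935.70, and so on.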
Metrics and Dashboard Data
# System Snapshot Structure
system_snapshot:
timestamp: 2025-01-18T12:00:00Z
collection_id: snap_2025-01-18T12-00
account:
total_usdt: 1000.00
available_usdt: 950.00
locked_usdt: 50.00
deployable_capital: 920.00
reserve_required: 30.00
grids:
- grid_id: eth-primary
symbol: ETH/USDT
status: ACTIVE
capital_allocated: 920.00
unrealized_pnl: 15.50
total_trades: 12
created_at: 2025-01-17T14:30:00Z
last_trade_at: 2025-01-18T11:45:00Z
market_data:
- symbol: ETH/USDT
current_price: 3115.00
change_24h_pct: 2.3
volume_24h: 125000.00
last_updated: 2025-01-18T11:59:30Z
regime_state:
verdict: RANGE_OK
confidence: 0.78
strength: STRONG
active_recommendations:
- id: rec_2025-01-18T12-00_ETH_1h
type: GRID_MANAGE
action: RUN_GRID
expires_at: null
# Metrics Index Structure
metrics_index:
  version: "1.0"
  created_at: 2025-01-17T14:00:00Z
  last_collection: 2025-01-18T12:00:00Z
  total_snapshots: 47
  collection_status: ACTIVE
  retention_policy:
    keep_all: true
    compress_after_days: 90
  collection_stats:
    successful_collections: 46
    failed_collections: 1
    last_failure: 2025-01-17T18:00:00Z
    failure_reason: "Exchange API timeout"
  data_sources:
    - name: account_balance
      last_successful: 2025-01-18T12:00:00Z
      failure_count: 0
    - name: grid_status
      last_successful: 2025-01-18T12:00:00Z
      failure_count: 0
    - name: market_data
      last_successful: 2025-01-18T11:59:30Z
      failure_count: 1
# Dashboard Configuration
dashboard_config:
  title: "Regime Management System Metrics"
  generated_at: 2025-01-18T12:05:00Z
  data_range:
    start: 2025-01-17T14:00:00Z
    end: 2025-01-18T12:00:00Z
    total_points: 47
  charts:
    - id: account_balance
      title: "Account Balance Over Time"
      type: line
      data_points: 47
      last_value: 1000.00
    - id: grid_performance
      title: "Grid Performance"
      type: multi_line
      active_grids: 1
      total_pnl: 15.50
    - id: regime_confidence
      title: "Regime Confidence Trends"
      type: line
      current_confidence: 0.78
      avg_confidence: 0.72
Correctness Properties
A property is a characteristic or behavior that should hold true across all valid executions of a system: essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.
Before defining the correctness properties, I need to analyze the acceptance criteria from the requirements to determine which are testable as properties, examples, or edge cases.
Based on the prework analysis and property reflection, the following correctness properties ensure the system behaves according to specifications:
Property 1: Regime Classification Validity
For any market data input, the regime classification system should produce exactly one of the four valid verdicts (RANGE_OK, RANGE_WEAK, TRANSITION, TREND) and use only the specified recommendation types (GRID_MANAGE, GRID_SETUP) Validates: Requirements 1.1, 2.1, 2.2
Property 2: Regime-to-Suitability Mapping
For any regime classification, the system should correctly map RANGE_OK to suitable, RANGE_WEAK to caution, and TRANSITION/TREND to unsuitable for grid trading Validates: Requirements 1.3, 1.4, 1.5
Property 3: Default Action Mapping
For any market regime, the system should map RANGE_OK to RUN_GRID, RANGE_WEAK to RUN_GRID_NO_SCALE, and TRANSITION/TREND to STOP_GRID as default actions Validates: Requirements 6.1, 6.2, 6.3
Property 4: Confidence Safety Override
For any system state where confidence falls below the safety threshold, the system should override default actions to STOP_GRID and never use confidence to increase risk exposure Validates: Requirements 3.2, 6.4
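Properties 2 through 4 together define a small, pure mapping, which makes them directly testable. A sketch of that mapping and the safety override; the 0.5 stop threshold is one of the configured values listed in the test-data generators and is used here purely for illustration:

```python
# Property 2: regime verdict -> grid-trading suitability
SUITABILITY = {"RANGE_OK": "suitable", "RANGE_WEAK": "caution",
               "TRANSITION": "unsuitable", "TREND": "unsuitable"}

# Property 3: regime verdict -> default action
DEFAULT_ACTION = {"RANGE_OK": "RUN_GRID", "RANGE_WEAK": "RUN_GRID_NO_SCALE",
                  "TRANSITION": "STOP_GRID", "TREND": "STOP_GRID"}

def resolve_action(verdict: str, confidence: float, stop_threshold: float = 0.5) -> str:
    """Property 4: low confidence overrides the default action to STOP_GRID.

    Confidence can only reduce risk exposure, never increase it: there is no
    branch in which high confidence upgrades the default action.
    """
    if confidence < stop_threshold:
        return "STOP_GRID"
    return DEFAULT_ACTION[verdict]
```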
Property 5: Decision Record Completeness
For any generated recommendation, the system should create an immutable Decision_Record in valid YAML format containing all required fields including configuration version hash and grid definition references Validates: Requirements 4.1, 4.2
Property 6: Grid Setup Specification Completeness
For any Grid_Setup recommendation, the system should include all required fields: number of levels, spacing method, price bounds, stop conditions, and validity window Validates: Requirements 7.1
Property 7: Grid Setup Silence Handling
For any Grid_Setup recommendation that receives no user response, the system should create no grid and allow expiration at the validity window without action Validates: Requirements 7.2
Property 8: Capital Reserve Invariant
For any capital allocation operation, the system should maintain the configured global reserve percentage and never violate capital constraints Validates: Requirements 8.2
Property 9: Single Grid Constraint
For any system operation in SINGLE_GRID mode, the system should maintain at most one active grid at any time Validates: Requirements 8.1
Property 10: Capital Deployment Permission
For any capital deployment request, the system should require explicit user permission and never deploy capital automatically by default Validates: Requirements 9.2
Property 11: Timeout Assignment
For any Grid_Manage recommendation, the system should assign 30-minute timeout for TRANSITION verdict, 15-minute timeout for TREND verdict, and no timeout for RANGE_OK/RANGE_WEAK verdicts Validates: Requirements 10.1, 10.2
Property 12: Cooldown Enforcement
For any grid stop operation, the system should enforce a 60-minute cooldown before allowing new grid creation for that specific grid_id Validates: Requirements 10.3
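Properties 11 and 12 reduce to two small time rules. A sketch under the stated values (30/15-minute timeouts, 60-minute cooldown); the state-tracking dict is illustrative:

```python
from datetime import datetime, timedelta

# Property 11: timeouts apply only to the unfavorable verdicts
TIMEOUT = {"TRANSITION": timedelta(minutes=30), "TREND": timedelta(minutes=15)}

def timeout_for(verdict: str):
    """Return the recommendation timeout, or None for RANGE_OK / RANGE_WEAK."""
    return TIMEOUT.get(verdict)

# Property 12: 60-minute cooldown per grid_id after a stop
COOLDOWN = timedelta(minutes=60)

def restart_allowed(grid_id: str, last_stop: dict, now: datetime) -> bool:
    """A grid_id may only be recreated 60 minutes after its most recent stop."""
    stopped = last_stop.get(grid_id)
    return stopped is None or now - stopped >= COOLDOWN
```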
Property 13: API Security Validation
For any API key configuration, the system should reject keys with withdrawal permissions and require trade permissions for grid operations Validates: Requirements 11.1
Property 14: Exchange Interface Compliance
For any exchange interaction, the system should use the abstract Exchange_Interface and never bypass the interface layer Validates: Requirements 12.1
Property 15: Trade Monitoring Consistency
For any active grid, the system should poll trade history at 2-5 minute intervals and use monotonic ID watermarks to detect new fills without duplication Validates: Requirements 13.1, 13.2
Property 16: Notification Deduplication
For any recommendation ID, the system should send at most one notification per recommendation and prevent duplicate notifications Validates: Requirements 13.3
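The watermark and deduplication logic behind Properties 15 and 16 can be sketched in a few lines. Trade and notification shapes here are illustrative, not the actual data model:

```python
def new_fills(trades: list, watermark: int):
    """Property 15: detect new fills via a monotonic trade-ID watermark.

    Only trades with id strictly greater than the watermark are new, so
    re-polling the same history never reports a fill twice.
    Returns (new fills in id order, advanced watermark).
    """
    fresh = sorted((t for t in trades if t["id"] > watermark), key=lambda t: t["id"])
    new_watermark = fresh[-1]["id"] if fresh else watermark
    return fresh, new_watermark

def notify_once(rec_id: str, sent: set, send) -> bool:
    """Property 16: at most one notification per recommendation ID."""
    if rec_id in sent:
        return False  # duplicate suppressed
    send(rec_id)
    sent.add(rec_id)
    return True
```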
Property 17: Evaluation Schedule Compliance
For any Decision_Record, the system should perform evaluations at exactly 24-hour, 72-hour, and 7-day horizons after recommendation creation Validates: Requirements 14.1
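The fixed evaluation horizons make the schedule a pure function of the recommendation's creation time, a minimal sketch:

```python
from datetime import datetime, timedelta

# Property 17: evaluations at exactly 24 hours, 72 hours, and 7 days
EVALUATION_HORIZONS = (timedelta(hours=24), timedelta(hours=72), timedelta(days=7))

def evaluation_schedule(created_at: datetime) -> list:
    """Return the three evaluation timestamps for a Decision_Record."""
    return [created_at + horizon for horizon in EVALUATION_HORIZONS]
```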
Property 18: Currency Conversion Consistency
For any USD-denominated calculation, the system should consistently treat USDT as USD proxy and record the conversion method Validates: Requirements 14.2
Property 19: Metrics Collection Webhook
For any n8n webhook trigger, the metrics collector should respond with collection status and successfully collect system snapshots Validates: Requirements 15.1, 15.8
Property 20: Grid Metrics Completeness
For any set of active grids, the collected metrics snapshot should contain all required fields (grid_id, symbol, active status, capital allocated, profit/loss) for each grid Validates: Requirements 15.2
Property 21: Account Metrics Completeness
For any account state, the collected metrics snapshot should contain all required balance fields (total USDT, available USDT, locked USDT, deployable capital) Validates: Requirements 15.3
Property 22: Market Data Completeness
For any set of tracked symbols, the collected metrics snapshot should contain all required market data fields (current price, 24h change, volume) for each symbol Validates: Requirements 15.4
Property 23: Regime State Completeness
For any regime state, the collected metrics snapshot should contain all required regime fields (verdict, confidence score, active recommendations) Validates: Requirements 15.5
Property 24: Metrics Storage Format
For any collected snapshot, the system should store it as valid YAML with timestamp-based filename in Git version control Validates: Requirements 15.6
Property 25: Collection Error Response
For any metrics collection failure, the system should return error status to n8n, log the error, and continue system operation without interruption Validates: Requirements 15.7
Property 26: N8N Response Format
For any metrics collection request from n8n, the system should respond with properly formatted status and metadata for workflow tracking Validates: Requirements 15.8
Property 27: Dashboard HTML Validity
For any dashboard generation, the system should create a valid static HTML page with embedded JavaScript charts Validates: Requirements 16.1
Property 28: Account Chart Presence
For any generated dashboard, it should contain time-series charts for total balance, available balance, and deployed capital Validates: Requirements 16.2
Property 29: Grid Chart Presence
For any generated dashboard, it should contain charts for active grid count, total capital deployed, and profit/loss trends Validates: Requirements 16.3
Property 30: Market Chart Presence
For any generated dashboard, it should contain price charts for tracked symbols with regime classification overlays Validates: Requirements 16.4
Property 31: Regime Chart Presence
For any generated dashboard, it should contain confidence score trends and regime classification distribution charts Validates: Requirements 16.5
Property 33: Previous Hour Analysis Window
For any collection time T, the system should analyze the previous complete hour (HH:00:00 to HH:59:59) and store results in a file named for that previous hour Validates: Requirements 15.2
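The previous-hour window of Property 33 is a simple truncate-and-subtract computation, sketched here:

```python
from datetime import datetime, timedelta

def previous_hour_window(collection_time: datetime):
    """Property 33: for collection time T, analyze the previous complete hour.

    Returns (hour_start, hour_end) covering HH:00:00 to HH:59:59 of the hour
    before T; the stored file is named for hour_start.
    """
    hour_start = collection_time.replace(minute=0, second=0, microsecond=0) - timedelta(hours=1)
    hour_end = hour_start + timedelta(minutes=59, seconds=59)
    return hour_start, hour_end
```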
Property 34: Minute-Level Price Completeness
For any market data collection for an hour, the system should capture exactly 60 minute-level closing prices for that hour period Validates: Requirements 15.5
Property 35: Per-Market File Structure
For any metrics collection involving multiple markets, the system should create separate YAML files for each market rather than storing data in arrays within a single file Validates: Requirements 15.8
Property 36: Collection Idempotency
For any time period, running metrics collection multiple times should update the existing files for that period rather than creating duplicate files Validates: Requirements 15.10
Property 37: Dashboard Minute-Level Granularity
For any generated dashboard, the market price charts should display minute-level price data points for detailed analysis within each hour Validates: Requirements 16.4
Property 38: Multi-Market Data Aggregation
For any dashboard generation, the system should successfully load and aggregate data from individual market files to create comprehensive multi-market visualizations Validates: Requirements 16.7
Property 39: Grid Configuration Parameter Completeness
For any grid configuration definition, the system should capture and store all detailed parameters including price bounds, entry price, grid levels, amount per grid, profit percentages, stop-loss settings, take-profit settings, and reinvestment status Validates: Requirements 17.1, 17.2
Property 40: Grid Configuration Update Integrity
For any grid configuration update, the system should maintain version history with timestamps and preserve data integrity while allowing parameter modifications Validates: Requirements 17.3, 17.4
Property 41: Performance Calculation Accuracy
For any performance calculation, the system should use actual grid configuration parameters (amount per grid, profit percentages) rather than estimates Validates: Requirements 17.5
Property 42: Grid Behavior Validation
For any active grid with stored configuration, the system should validate that actual grid behavior matches the stored configuration parameters Validates: Requirements 17.6
Property 43: Stop-Loss Monitoring Activation
For any grid configuration that includes stop-loss settings, the system should monitor market conditions relative to the configured stop-loss price Validates: Requirements 17.7
Property 44: Reinvestment Calculation Consistency
For any grid configuration with profit reinvestment enabled, the system should account for compounding effects in performance calculations Validates: Requirements 17.8
Property 45: Multi-Grid Configuration Isolation
For any symbol with multiple active grids, the system should maintain separate detailed configurations for each grid instance without interference Validates: Requirements 17.9
Property 46: Configuration Version Consistency
For any recommendation that references grid configuration, the system should use the specific version of configuration that was active at recommendation time Validates: Requirements 17.10
Property 47: Metrics Grid Configuration Completeness
For any hourly metrics collection, the system should capture current grid configuration parameters for all active grids including all detailed parameters specified in requirements Validates: Requirements 17.11, 15.4
Property 48: Historical Configuration Preservation
For any hourly metrics storage, the system should include grid configuration snapshots to enable historical analysis of how configuration changes affect performance Validates: Requirements 17.12
Property 49: Enabled Status Consideration in Recommendations
For any grid configuration with enabled status, the system should generate recommendations that are appropriate for that enabled status (e.g., “enable grid” for disabled grids with favorable conditions, no action for disabled grids with unfavorable conditions) Validates: Requirements 5.6, 5.7, 5.8
Property 50: Disabled Grid with Favorable Conditions
For any disabled grid when market conditions are favorable (RANGE_OK or RANGE_WEAK), the system should generate an actionable recommendation to enable the grid Validates: Requirements 5.7, 15.23
Property 51: Disabled Grid with Unfavorable Conditions
For any disabled grid when market conditions are unfavorable (TRANSITION or TREND), the system should not generate actionable recommendations since the grid is already in a safe state Validates: Requirements 5.8, 15.24
Property 52: Grid Repositioning Recommendation
For any enabled grid where the current price is significantly outside the configured grid bounds, the system should recommend stopping the current grid and creating a new grid with updated price bounds aligned to current market conditions Validates: Requirements 5.9, 15.25
Property 53: Metrics Collection Completeness for All Grids
For any metrics collection cycle, the system should include both enabled and disabled grids in the collected data to provide complete system visibility Validates: Requirements 15.21
Property 54: Grid Stop Event Recording
For any grid stop with verdict TRANSITION or TREND, the system should record the stop event with timestamp and stopping verdict Validates: Requirements 17.1
Property 55: Sequential Gate Evaluation Requirement
For any grid restart eligibility evaluation, the system should require all three mandatory gates to pass sequentially before allowing CREATE_GRID recommendations Validates: Requirements 17.2
Property 56: Gate 1 Passage Logic Consistency
For any Gate 1 evaluation, the system should apply the configured passage logic (consecutive-bar or majority-of-window) consistently Validates: Requirements 17.3
Property 57: Gate 1 Condition Evaluation Completeness
For any Gate 1 evaluation, the system should evaluate all specified conditions (TrendScore, ADX, normalized slope, Efficiency Ratio, higher highs/lower lows) Validates: Requirements 17.4
Property 58: Gate 1 Blocking Behavior
For any Gate 1 failure, the system should classify regime as TRANSITION or TREND, block evaluation of all subsequent gates, and not proceed to Gate 2 Validates: Requirements 17.5
Property 59: Gate 1 to Gate 2 Progression
For any Gate 1 passage, the system should proceed to evaluate Gate 2 Validates: Requirements 17.6
Property 60: Gate 2 Condition Evaluation Completeness
For any Gate 2 evaluation, the system should evaluate all specified conditions (MeanRevScore, lag-1 autocorrelation, OU half-life, z-score excursions, oscillations) Validates: Requirements 17.7
Property 61: Gate 2 Blocking Behavior
For any Gate 2 failure, the system should classify regime as RANGE_WEAK, block evaluation of Gate 3, and not allow grid creation Validates: Requirements 17.8
Property 62: Gate 2 to Gate 3 Progression
For any Gate 2 passage, the system should proceed to evaluate Gate 3 Validates: Requirements 17.9
Property 63: Gate 3 Condition Evaluation Completeness
For any Gate 3 evaluation, the system should evaluate all specified conditions (ATR percentage thresholds, volatility expansion, BB bandwidth, volatility percentile) Validates: Requirements 17.10
Property 64: Gate 3 Failure Behavior
For any Gate 3 failure, the system should classify regime as RANGE_WEAK and not allow grid creation Validates: Requirements 17.11
Property 65: All Gates Pass Regime Transition
For any evaluation where all three gates pass, the system should transition regime classification to RANGE_OK Validates: Requirements 17.12
Property 66: All Gates Pass Grid Creation Permission
For any evaluation where all three gates pass, the system should allow CREATE_GRID recommendations with conservative initial parameters as defined by system configuration Validates: Requirements 17.13
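The sequential blocking behavior of Properties 55 through 66 can be condensed into one control-flow sketch. The per-gate condition checks (TrendScore, MeanRevScore, ATR thresholds, etc.) are assumed to be precomputed booleans here:

```python
def evaluate_restart_gates(gate_results: dict) -> dict:
    """Sequential mandatory gates for grid restart eligibility.

    A failing gate fixes the regime classification, blocks all later gates,
    and denies CREATE_GRID; only a full pass yields RANGE_OK and permission.
    gate_results maps 'gate1'/'gate2'/'gate3' to precomputed pass booleans.
    """
    evaluated = []
    for gate, fail_regime in (("gate1", "TRANSITION/TREND"),  # Gate 1 failure
                              ("gate2", "RANGE_WEAK"),        # Gate 2 failure
                              ("gate3", "RANGE_WEAK")):       # Gate 3 failure
        evaluated.append(gate)
        if not gate_results[gate]:
            return {"regime": fail_regime, "evaluated": evaluated,
                    "allow_create_grid": False}
    # All three gates passed: transition to RANGE_OK, allow CREATE_GRID
    return {"regime": "RANGE_OK", "evaluated": evaluated, "allow_create_grid": True}
```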
Property 67: Fresh Grid Parameter Calculation
For any new grid created after passing restart gates, the system should compute grid parameters from current market structure and not reuse the stopped grid’s center, spacing, or bounds Validates: Requirements 17.14
Property 68: Gate Evaluation Output Completeness
For any gate evaluation, the system should record gate evaluation results in regime analysis output including which gates passed, which failed, and specific metrics for each gate Validates: Requirements 17.15
Property 69: Gate State Initialization
For any grid stop, the system should initialize gate tracking state with all gates marked as not passed Validates: Requirements 17.16
Property 70: Per-Symbol Per-Timeframe Threshold Customization
For any gate threshold configuration, the system should allow per-symbol and per-timeframe threshold customization Validates: Requirements 17.17
Property 71: Bar-by-Bar Gate Status Tracking
For any gate requiring multiple bars for passage, the system should track bar-by-bar gate status and require consecutive or majority passage as configured per gate configuration Validates: Requirements 17.18
Property 72: Minimum Time Floor Enforcement
For any grid stop, the system should enforce a configurable minimum time floor (in bars) before beginning gate evaluation to prevent stop-restart churn Validates: Requirements 17.19
Property 73: Confidence Monotonicity Guard
For any confidence calculation when regime remains unchanged, the system should apply a configurable maximum delta constraint to prevent wild bar-to-bar confidence jumps Validates: Requirements 17.20
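The monotonicity guard of Property 73 is a symmetric clamp on the bar-to-bar change. A sketch; the 0.1 maximum delta is an illustrative placeholder for the configured value:

```python
def guarded_confidence(previous: float, raw: float, max_delta: float = 0.1) -> float:
    """Property 73: while the regime is unchanged, confidence may move at most
    max_delta per bar, in either direction, relative to the previous value."""
    return min(previous + max_delta, max(previous - max_delta, raw))
```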
Property 74: Confidence Ignored for Restart Grid Creation
For any CREATE_GRID permission evaluation after restart gates pass, the system should use gate passage as the sole authority and not use confidence scores to override or influence the decision Validates: Requirements 17.21
Error Handling
The system implements comprehensive error handling with fail-safe defaults:
Exchange API Errors
- Connection failures: Retry with exponential backoff, fallback to cached data for regime classification
- Rate limiting: Implement request queuing and respect API limits
- Invalid responses: Log errors, use last known good data, escalate to STOP_GRID if data is stale
- Authentication errors: Fail immediately, require manual intervention
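The retry-with-exponential-backoff pattern referenced above can be sketched as a small wrapper. Attempt counts and delays are illustrative defaults, and the injectable `sleep` parameter exists so tests can run without real delays:

```python
import random
import time

def with_backoff(fn, max_attempts: int = 5, base_delay: float = 1.0, sleep=time.sleep):
    """Retry a flaky call with exponential backoff plus jitter.

    Delays grow as base_delay * 2**attempt, scaled by random jitter so that
    concurrent retries do not synchronize. When the final attempt fails the
    exception propagates, letting the caller fall back to cached data.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of retries
            sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
```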
Configuration Errors
- Invalid grid definitions: Reject configuration updates, maintain current valid configuration
- Missing required fields: Use safe defaults where possible, fail validation otherwise
- Version conflicts: Prevent configuration updates that would break active grids
Decision Record Errors
- Git commit failures: Queue records locally, retry commits to market-maker-data repository, alert on persistent failures
- YAML serialization errors: Validate before writing, use backup serialization format
- File system errors: Implement redundant storage, alert on write failures
- Repository access errors: Validate SSH keys and repository permissions, fallback to local storage with sync retry
- Network connectivity: Queue operations locally during outages, batch sync when connectivity restored
- Git conflicts: Use conflict resolution strategy from configuration, alert on unresolvable conflicts
- Deployment mode errors: Graceful fallback from production to local mode on repository access failures
Grid Management Errors
- Order placement failures: Retry with adjusted parameters, escalate to manual intervention
- Partial fills: Continue monitoring, adjust grid state accordingly
- Stop condition failures: Force manual stop, alert immediately
Metrics Collection Errors
- Scheduled collection failures: Log error, continue with next scheduled collection
- Partial data collection: Collect available data, mark missing fields as null, continue operation
- Storage failures: Retry with exponential backoff, queue locally if market-maker-data repository unavailable
- Dashboard generation failures: Log error, serve last known good dashboard, retry on next cycle
- Data corruption: Validate YAML before storage, use backup serialization if needed
- Repository sync failures: Queue data locally, retry sync to market-maker-data repository, alert on persistent failures
Testing Strategy
The system employs a dual testing approach combining unit tests for specific scenarios and property-based tests for comprehensive coverage:
Property-Based Testing Framework
The system uses fast-check for the TypeScript/JavaScript implementation, providing:
- Minimum 100 iterations per property test
- Automatic shrinking of failing test cases
- Custom generators for financial data types
- Integration with Jest testing framework
Property Test Configuration
Each property test is configured with:
# Example Property Test Configuration
property_test:
  name: "Regime Classification Validity"
  description: "For any market data input, regime classification produces valid verdict"
  iterations: 100
  generators:
    market_data:
      symbol: "ETH/USDT"
      timeframe: "1h"
      candle_count: 24
      price_range: [2000, 5000]
      volatility_range: [0.01, 0.05]
  assertion: "classification.verdict in ['RANGE_OK', 'RANGE_WEAK', 'TRANSITION', 'TREND']"
  tag: "Feature: regime-management, Property 1: Regime Classification Validity"
Unit Testing Focus
Unit tests complement property tests by covering:
- Integration points: Exchange API interactions, notification delivery
- Edge cases: Empty market data, network timeouts, malformed responses
- Error conditions: API failures, configuration errors, file system issues
- Specific examples: Known market conditions that should produce specific outcomes
Test Data Generation
Custom generators create realistic test data:
# Market Data Generator Configuration
market_data_generator:
  base_config:
    symbol: "ETH/USDT"
    timeframe: "1h"
    candle_count: 24
  scenarios:
    trending_up:
      price_drift: 0.02 # 2% upward drift per hour
      volatility: 0.015
      mean_reversion: 0.3
    ranging:
      price_drift: 0.0
      volatility: 0.01
      mean_reversion: 0.8
      support_resistance: true
    volatile_breakdown:
      price_drift: -0.03
      volatility: 0.04
      mean_reversion: 0.2
# Configuration Generator
config_generator:
  capital:
    min_free_balance_pct: [1, 3, 5, 10]
  regime:
    confidence_stop_threshold: [0.4, 0.5, 0.6]
    confidence_strong_threshold: [0.7, 0.75, 0.8]
# Decision Record Generator
decision_record_generator:
  recommendation_types: [GRID_MANAGE, GRID_SETUP]
  verdicts: [RANGE_OK, RANGE_WEAK, TRANSITION, TREND]
  confidence_range: [0.1, 0.95]
  include_competing_verdicts: true
# Metrics Snapshot Generator
metrics_snapshot_generator:
  account_balance:
    total_usdt_range: [100, 10000]
    available_pct_range: [0.7, 0.95]
    locked_pct_range: [0.05, 0.3]
  grid_states:
    count_range: [0, 3]
    capital_range: [100, 5000]
    pnl_range: [-100, 200]
    trade_count_range: [0, 50]
  market_data:
    price_range: [1000, 5000]
    change_24h_range: [-10, 10]
    volume_range: [10000, 1000000]
  regime_states:
    verdicts: [RANGE_OK, RANGE_WEAK, TRANSITION, TREND]
    confidence_range: [0.1, 0.95]
    recommendation_probability: 0.3
Testing Infrastructure
- Mock Exchange Interface: Simulates various exchange conditions and failures
- Time Manipulation: Controls system time for testing timeouts and cooldowns
- Git Repository Mocking: Tests decision record persistence without actual operations on market-maker-data repository
- Notification Capture: Intercepts and validates notification content and delivery
- Metrics Collection Mocking: Simulates scheduled collection cycles and data sources
- Dashboard Validation: Parses generated HTML and validates chart presence and data
- File System Mocking: Tests YAML storage and retrieval without actual file operations
- Repository Access Mocking: Simulates market-maker-data repository connectivity and permission scenarios
Continuous Integration
All tests run automatically on:
- Pull request creation and updates
- Main branch commits
- Scheduled daily runs with extended property test iterations
- Pre-deployment validation in staging environment
The testing strategy ensures that both individual components work correctly (unit tests) and that the system maintains its correctness properties across all possible inputs (property tests), providing confidence in the system’s reliability and safety.
Appendix: Grid Ladder Status Calculation
Overview
The grid ladder status calculation determines the state of each grid level at the end of an hourly analysis period. This is used for reporting and visualization purposes in the metrics collection system.
Path-Aware Heuristic
Grid trading is path-dependent: the status of a level depends on how price moved during the hour, not just where it ended. However, simulating every minute would be complex and could involve multiple fills per level.
Compromise: Use hourly OHLC (Open/High/Low/Close) prices to approximate end-of-hour status without full minute-by-minute simulation.
Algorithm
For each grid level with buy_price and sell_price:
def compute_status(buy_price, sell_price, high, low, close):
    """
    Determine grid level status using a path-aware heuristic.

    Core principle: status reflects the LAST UNPAIRED ACTION.
    - If the last event was a BUY fill -> status = SELL (holding inventory, waiting to sell)
    - If the last event was a SELL fill -> status = BUY (sold inventory, waiting to buy back)
    - If neither ever filled -> status = BUY (waiting to enter position)

    Args:
        buy_price: Level's buy order price
        sell_price: Level's sell order price
        high: Highest price during the hour
        low: Lowest price during the hour
        close: Closing price (last minute of hour)

    Returns:
        "BUY" or "SELL" status
    """
    # Check if price touched the sell level (would have sold)
    touched_sell = high >= sell_price
    # Check if price touched the buy level (would have bought)
    touched_buy = low <= buy_price

    if touched_sell and touched_buy:
        # Price oscillated through the entire level - both orders filled.
        # Use close to determine which action happened LAST.
        if close >= sell_price:
            return "BUY"   # Ended above sell level, last action was SELL
        elif close <= buy_price:
            return "SELL"  # Ended below buy level, last action was BUY
        else:
            return "SELL"  # Ended in range, last action was BUY (holding inventory)
    elif touched_sell:
        # Only the sell order filled - last unpaired action was SELL
        return "BUY"   # Sold, waiting to buy back
    elif touched_buy:
        # Only the buy order filled - last unpaired action was BUY
        return "SELL"  # Bought, waiting to sell
    else:
        # No orders filled during the hour.
        # Default: assume the position was never entered (conservative).
        # Note: this may be incorrect if the grid was initialized with an entry
        # price between buy and sell (see Scenario 15), but this heuristic has
        # no access to the entry price.
        return "BUY"   # Waiting to buy (never bought at this level)

Status Semantics (Last Unpaired Action)
The status reflects the last unpaired trading action at this grid level:
- “BUY” status: last action was SELL (or never entered)
  - Sell order filled → sold inventory, now waiting to buy back
  - Neither order filled → never entered position, waiting to buy
  - Key insight: BUY status means “no inventory at this level”
- “SELL” status: last action was BUY
  - Buy order filled → holding inventory, waiting to sell
  - Key insight: SELL status means “holding inventory at this level”
Critical Rule: Last Unpaired Action
status = BUY if sell_price crossed since last buy (or never bought)
status = SELL if buy_price crossed since last sell (or bought but not sold)
This rule:
- ✓ Correctly handles completed cycles (buy → sell → back to BUY)
- ✓ Avoids false “SELL” states when position was never entered
- ✓ Works cleanly with minute OHLC data
- ✓ Tracks inventory state accurately
Limitations
- Does not track multiple fills within the hour
- Approximates final state, not exact order sequence
- Assumes standard grid behavior (buy low, sell high)
Accuracy
Good approximation for end-of-hour status reporting without full simulation. Suitable for:
- Historical analysis
- Dashboard visualization
- Performance reporting
Not suitable for:
- Real-time trading decisions (use actual exchange order status)
- Precise profit calculations (use actual fill data)
- Intra-hour analysis (would need minute-by-minute data)
Test Scenarios
The following scenarios validate the algorithm across different price movements and path directions. Grid configuration: 7 levels, upper=3150.0, lower=2850.0.
Grid levels:
- Level 1: buy=3107.10, sell=3150.00
- Level 2: buy=3064.25, sell=3107.10
- Level 3: buy=3021.40, sell=3064.25
- Level 4: buy=2978.55, sell=3021.40
- Level 5: buy=2935.70, sell=2978.55
- Level 6: buy=2892.85, sell=2935.70
- Level 7: buy=2850.00, sell=2892.85
Scenario 1: Rising Above Grid
Path: Price rose from 3140 to 3150.01
- Open: 3140.00, High: 3150.01, Low: 3135.00, Close: 3150.01
- All levels: high >= sell_price → all sell orders filled → All BUY
Scenario 2: Falling Above Grid (Ended at Upper Bound)
Path: Price fell from 3200 to 3150.01
- Open: 3200.00, High: 3200.00, Low: 3150.01, Close: 3150.01
- All levels: high >= sell_price → all sell orders filled → All BUY
- Same result as Scenario 1 despite different path
Scenario 2a: Falling Into Level 1 (From Above Grid)
Path: Price fell from above grid into level 1’s range
- Open: 3150.59, High: 3158.76, Low: 3139.71, Close: 3139.71
- Level 1: buy=3107.10, sell=3150.00
- high=3158.76 >= sell=3150.00 → sell order filled → BUY
- close=3139.71 in range [3107.10, 3150.00)
- Levels 2-7: high >= sell_price → All BUY
- Variant of Scenario 2 - fell from above but ended inside level 1
Scenario 3: Rising Through Level 2
Path: Price rose from 3100 to 3107.11
- Open: 3100.00, High: 3120.00, Low: 3100.00, Close: 3107.11
- Level 1: high=3120 < sell=3150, low=3100 <= buy=3107.10 → SELL (touched buy)
- Levels 2-7: high >= sell_price → BUY
Scenario 4: Falling Through Level 2
Path: Price fell from 3150 to 3107.11
- Open: 3150.00, High: 3150.00, Low: 3107.11, Close: 3107.11
- Level 1: high=3150 >= sell=3150 → BUY (touched sell level)
- Levels 2-7: high >= sell_price → BUY
- Different result from Scenario 3 - path matters!
Scenario 5: Rising Through Level 3
Path: Price rose from 3060 to 3064.26
- Open: 3060.00, High: 3070.00, Low: 3060.00, Close: 3064.26
- Levels 1-2: high < sell_price, low=3060 <= buy_price → SELL (touched buy)
- Levels 3-7: high >= sell_price → BUY
Scenario 6: Falling Through Level 3
Path: Price fell from 3100 to 3064.26
- Open: 3100.00, High: 3100.00, Low: 3064.26, Close: 3064.26
- Level 1: high=3100 < sell=3150.00, low=3064.26 <= buy=3107.10 → SELL (touched buy)
- Level 2: high=3100 < sell=3107.10, low=3064.26 > buy=3064.25 → neither order filled → BUY
- Levels 3-7: high >= sell_price → BUY
- Differs from Scenario 5 at Level 2: the low stopped just above the buy price, so that level was never entered
Scenario 7: At Lower Bound (Falling)
Path: Price fell to lower bound
- Open: 2860.00, High: 2860.00, Low: 2850.00, Close: 2850.00
- All levels: high < sell_price → no sell orders filled
- Level 7: low=2850 <= buy=2850 → SELL (touched buy)
- Levels 1-6: low=2850 <= buy_price → SELL (touched buy)
- All SELL
Scenario 8: Below Grid (Falling)
Path: Price fell below grid
- Open: 2860.00, High: 2860.00, Low: 2849.99, Close: 2849.99
- All levels: low <= buy_price → all buy orders filled → All SELL
Scenario 9: Oscillation Through Level 2 (Ended in Range)
Path: Price oscillated through entire level
- Open: 3080.00, High: 3120.00, Low: 3060.00, Close: 3080.00
- Level 2: buy=3064.25, sell=3107.10
- high=3120 >= sell=3107.10 → touched sell
- low=3060 <= buy=3064.25 → touched buy
- close=3080 in range [3064.25, 3107.10) → SELL (holding inventory)
Scenario 10: Oscillation Through Level 2 (Ended Above)
Path: Price oscillated but ended above range
- Open: 3080.00, High: 3120.00, Low: 3060.00, Close: 3110.00
- Level 2: buy=3064.25, sell=3107.10
- high >= sell, low <= buy → both filled
- close=3110 >= sell=3107.10 → BUY (last action was sell)
Scenario 11: Oscillation Through Level 2 (Ended Below)
Path: Price oscillated but ended below range
- Open: 3080.00, High: 3120.00, Low: 3060.00, Close: 3062.00
- Level 2: buy=3064.25, sell=3107.10
- high >= sell, low <= buy → both filled
- close=3062 <= buy=3064.25 → SELL (last action was buy)
Scenario 12: Falling Into Level 1 From Above
Path: Price fell from above grid into level 1
- Open: 3150.59, High: 3158.76, Low: 3139.71, Close: 3139.71
- Level 1: buy=3107.10, sell=3150.00
- high=3158.76 >= sell=3150.00 → sell filled
- low=3139.71 > buy=3107.10 → buy NOT filled
- close=3139.71 in range [3107.10, 3150.00) → BUY (sold, waiting to buy back)
- All other levels: high >= sell_price → All BUY
Scenario 13: Rising Into Level 7 From Below
Path: Price rose from below grid into level 7
- Open: 2849.00, High: 2860.00, Low: 2840.00, Close: 2860.00
- Level 7: buy=2850.00, sell=2892.85
- high=2860.00 < sell=2892.85 → sell NOT filled
- low=2840.00 <= buy=2850.00 → buy filled
- close=2860.00 in range [2850.00, 2892.85) → SELL (bought, waiting to sell)
- All other levels: low <= buy_price → All SELL
Scenario 14: Entry Price Between Buy and Sell (No Boundary Touch)
Path: Grid started with entry price between level’s buy and sell, price stayed in range
- Entry: 2931.64, Open: 2930.00, High: 2941.19, Low: 2926.12, Close: 2936.47
- Level 6: buy=2892.85, sell=2935.70
- entry=2931.64 in range [2892.85, 2935.70) → buy order already filled at grid start
- high=2941.19 >= sell=2935.70 → sell order filled during hour
- close=2936.47 >= sell=2935.70 → BUY (sold, waiting to buy back)
- Level 5: buy=2935.70, sell=2978.55
- entry=2931.64 < buy=2935.70 → buy order NOT filled at start
- high=2941.19 < sell=2978.55 → sell order NOT filled
- close=2936.47 in range [2935.70, 2978.55) → SELL (waiting to sell)
- Levels 1-4: entry < buy_price, high < sell_price → All SELL
- Level 7: entry > sell_price → BUY (already completed cycle)
Scenario 15: Entry Price Between Buy and Sell (Price Stays in Range)
Path: Grid started with entry price between level’s buy and sell, price never touched boundaries
- Entry: 2931.64, Open: 2930.00, High: 2933.00, Low: 2926.12, Close: 2932.00
- Level 6: buy=2892.85, sell=2935.70
- entry=2931.64 in range [2892.85, 2935.70) → buy order already filled at grid start
- high=2933.00 < sell=2935.70 → sell order NOT filled
- low=2926.12 > buy=2892.85 → buy order NOT filled again
- close=2932.00 in range [2892.85, 2935.70) → SELL (holding inventory, waiting to sell)
- This is the missing scenario that caused the bug!
Key Insights from Scenarios
- Path direction matters when high/low touch boundaries (Scenarios 3 vs 4)
- Same close price can yield different status depending on high/low
- Oscillation scenarios require close price to disambiguate (Scenarios 9-11)
- Rising vs falling paths with same close can have same result if neither touches boundaries (Scenarios 5 vs 6)
- Entry from outside grid bounds requires special attention (Scenarios 12-13)
- Falling into top level: all levels BUY (sold at all levels)
- Rising into bottom level: all levels SELL (bought at all levels)
- Entry price between buy and sell requires special handling (Scenarios 14-15)
- If entry is between buy and sell, the buy order was already filled at grid initialization
- Status depends on whether sell order filled during the hour
- If neither boundary touched, status is SELL (holding inventory)
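Taken together, the insights above suggest a single per-level classifier. The following is a minimal sketch of that rule set; the names, signature, and fallback ordering are assumptions inferred from the scenarios, not the system's actual implementation:

```python
def classify_level(entry: float, high: float, low: float, close: float,
                   buy: float, sell: float) -> str:
    """Return the pending order side ("BUY" or "SELL") for one grid level.

    Encodes the scenario rules: which boundaries the candle touched,
    where the grid's entry price sat relative to the level, and the
    close price as a tie-breaker when both boundaries were touched.
    """
    sold = high >= sell    # sell boundary touched this hour
    bought = low <= buy    # buy boundary touched this hour
    if sold and bought:
        # Oscillation: close disambiguates the last fill (Scenarios 9-11)
        return "BUY" if close >= sell else "SELL"
    if sold:
        return "BUY"       # sold, waiting to buy back (Scenario 12)
    if bought:
        return "SELL"      # bought, holding inventory (Scenario 13)
    # Neither boundary touched this hour: fall back to the entry price.
    # Entry at or above the sell means the level already completed a
    # cycle (Scenario 14, level 7); otherwise the level is holding or
    # awaiting inventory and stays SELL (Scenario 15, the bug case).
    if entry >= sell:
        return "BUY"
    return "SELL"
```

Checked against the worked numbers: Scenario 15's level 6 (no boundary touched) returns SELL, and Scenario 14's level 6 (sell filled during the hour) returns BUY.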
Storage Organization Options:
The design supports flexible storage organization to optimize for different access patterns:
Hourly-Based Storage Organization (Implemented):
The system uses separate files per data type within hourly folders to optimize for different access patterns and reduce duplication of configuration and account data:
snapshots/
├── 2025/01/18/17/
│ ├── configuration.json # System configuration snapshot (shared across all symbols)
│ ├── account.json # Account balance snapshot (shared across all symbols)
│ ├── ETH-USDT_1m.json # 1-minute data snapshot for ETH/USDT
│ ├── BTC-USDT_1m.json # 1-minute data snapshot for BTC/USDT
│ ├── ETH-USDT_15m.json # 15-minute data snapshot for ETH/USDT
│ ├── BTC-USDT_15m.json # 15-minute data snapshot for BTC/USDT
│ ├── ETH-USDT_1h.json # 1-hour data snapshot for ETH/USDT
│ ├── BTC-USDT_1h.json # 1-hour data snapshot for BTC/USDT
│ └── ...
├── 2025/01/18/16/
│ ├── configuration.json # System configuration snapshot
│ ├── account.json # Account balance snapshot
│ ├── ETH-USDT_4h.json # 4-hour data snapshot (covers 16:00-19:59)
│ ├── BTC-USDT_4h.json # 4-hour data snapshot
│ └── ...
└── 2025/01/18/15/
├── configuration.json # System configuration snapshot
├── account.json # Account balance snapshot
└── ...
Benefits of Hourly-Based Organization:
- Reduced Duplication: Configuration and account data stored once per hour, not per symbol/timeframe
- Efficient Access: All data for a specific hour is co-located in one directory
- Simplified Structure: No nested timeframe folders, cleaner organization
- Atomic Collection: All data for an hour can be collected and stored atomically
- Easy Cleanup: Entire hour directories can be removed for retention management
- Cross-Symbol Analysis: Configuration and account data easily accessible for multi-symbol analysis
File Type Descriptions:
- configuration.json: System configuration including grid configs, regime settings, tracked symbols, and other system parameters (shared across all symbols for the hour)
- account.json: Account balance data including total USDT, available USDT, locked USDT, and deployable capital (shared across all symbols for the hour)
- {SYMBOL}_{TIMEFRAME}.json: Market-specific data including minute-level prices, historical OHLCV data, and technical analysis results for the specific symbol and timeframe
This approach aligns with the data collection patterns:
- Configuration data: Changes infrequently, shared across all symbols
- Account data: Updated hourly, shared across all symbols
- Market data: Symbol-specific, multiple timeframes per symbol
Monthly Cleanup and Re-aggregation:
The system implements automated maintenance processes:
- Retention Enforcement: Remove files beyond retention periods per timeframe
- Data Compression: Compress older snapshots to reduce storage overhead
- Re-aggregation: Combine multiple small files into larger archives for long-term storage
- Index Maintenance: Update metadata indexes for efficient historical queries
- Integrity Checks: Verify data consistency and repair any corruption
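Retention enforcement benefits directly from the hourly layout: whole hour directories can be pruned once past the cutoff. A sketch follows; the retention policy, root layout, and function name are assumptions, and per-timeframe retention would instead match {SYMBOL}_{TIMEFRAME}.json files within each hour:

```python
import shutil
from datetime import datetime, timedelta, timezone
from pathlib import Path

def prune_snapshots(root: Path, retention_days: int, now: datetime) -> list[Path]:
    """Remove hour directories (root/YYYY/MM/DD/HH) older than retention."""
    cutoff = now - timedelta(days=retention_days)
    removed = []
    for hour_dir in sorted(root.glob("*/*/*/*")):
        if not hour_dir.is_dir():
            continue
        y, m, d, h = hour_dir.relative_to(root).parts
        stamp = datetime(int(y), int(m), int(d), int(h), tzinfo=timezone.utc)
        if stamp < cutoff:
            shutil.rmtree(hour_dir)  # the hourly layout makes cleanup one rmtree per hour
            removed.append(hour_dir)
    return removed
```

Because each hour is self-contained, pruning never leaves a half-deleted snapshot behind, which keeps the "Easy Cleanup" property noted above.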
Implementation Considerations:
- Atomic Operations: Ensure snapshot writes are atomic to prevent partial data
- Concurrent Access: Handle multiple processes accessing snapshots simultaneously
- Error Recovery: Graceful handling of corrupted or missing snapshot files
- Backup Strategy: Regular backups of snapshot data with point-in-time recovery
- Monitoring: Track storage usage, access patterns, and cleanup effectiveness
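The atomic-operations consideration is commonly implemented as write-to-temp-then-rename within the target directory. The sketch below assumes POSIX rename semantics on the snapshot volume; names are illustrative:

```python
import json
import os
import tempfile
from pathlib import Path

def write_snapshot_atomic(path: Path, payload: dict) -> None:
    """Write JSON so readers never observe a partially written file."""
    path.parent.mkdir(parents=True, exist_ok=True)
    # Temp file in the same directory, so the rename stays on one filesystem.
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
            f.flush()
            os.fsync(f.fileno())  # durability before the rename
        os.replace(tmp, path)     # atomic replace on POSIX filesystems
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file on failure
        raise
```

Concurrent readers then see either the previous complete snapshot or the new one, never a truncated file, which also simplifies the error-recovery and integrity-check processes listed above.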