Article 003: Development used to be synchronous

Key Message: IDE-centric work assumed tight feedback loops. Async breaks that assumption — and challenges human cognitive capacity in new ways.

Series: A — Noticing the shift (Part 3 of 5)
Publish Date: Monday 2026-02-02


Story Outline

Opening hook

The tight feedback loop: change, observe, adjust. Small increments, immediate response. Each iteration was small enough to fit comfortably in your head. You could hold the change, understand it, course-correct instantly.

The shift: scale of change per iteration

  • Sync: keystroke-by-keystroke, line-by-line
  • Async: commit-by-commit, feature-by-feature, sometimes system-by-system
  • The amount of change between feedback moments grows dramatically
  • Pivot cost goes from ≈0 to potentially days of work

The upstream benefit: implementation as exploration

  • Implementation is now cheap — AI builds quickly
  • So you can build to learn: see a working version, evaluate, refine your understanding
  • Then update requirements/design based on what you learned
  • Then implement again
  • Working software becomes a tool for discovering requirements, not just fulfilling them
  • The iteration loop moves upstream: build → learn → refine reqs → build again
  • Prototyping shifts from lo-fi mockups to actual working systems
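
The loop above can be sketched in a few lines. Everything here is a hypothetical placeholder, not a real API: `build` stands in for an AI agent producing a working version, and `evaluate` stands in for humans using the prototype and discovering a requirement nobody had specified.

```python
# Illustrative sketch of the upstream loop: build -> learn -> refine -> build.
# All names and return values are invented for illustration.

def build(requirements):
    """Placeholder for an AI agent turning requirements into a working version."""
    return {"implements": list(requirements)}

def evaluate(prototype):
    """Placeholder for humans using the prototype and finding gaps.

    Here, using the prototype surfaces one requirement nobody wrote down.
    """
    if "export to CSV" not in prototype["implements"]:
        return ["export to CSV"]
    return []

def iterate(requirements, max_cycles=5):
    for cycle in range(max_cycles):
        prototype = build(requirements)
        gaps = evaluate(prototype)
        if not gaps:
            return prototype, cycle + 1       # requirements have converged
        requirements = requirements + gaps    # refine requirements from what we learned
    return prototype, max_cycles

final, cycles = iterate(["import data", "chart totals"])
print(cycles)  # converges once the discovered requirement is folded back in
```

The point of the sketch is where the learning lives: not in the code, but in the updated requirements list that each cycle feeds back into the next build.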

The human context window problem

  • Each async iteration produces a large change
  • Human working memory: ~4-7 items, a few thousand words at best
  • AI context windows: 100k, 200k, 1M+ tokens — massively exceeding ours
  • But: humans can hold a system conceptually — we work in abstractions, don’t need line-by-line
  • The real gap: AI can/could hold the entire business and industry in context
    • Domain knowledge, regulations, competitor approaches
    • Historical patterns, customer behavior, industry standards
    • Context no single human fully holds
  • The asymmetry isn’t code comprehension — it’s business/domain comprehension
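
Back-of-envelope arithmetic makes the mismatch concrete. The figures below are illustrative assumptions (a mid-sized model window, a common words-per-token rule of thumb), not measurements:

```python
# Rough arithmetic on the context mismatch. Numbers are assumptions, not data.
WORDS_PER_REVIEW_PASS = 3_000   # "a few thousand words at best" for a human
AI_CONTEXT_TOKENS = 200_000     # a mid-sized current model window
WORDS_PER_TOKEN = 0.75          # common rule of thumb for English text

ai_words = AI_CONTEXT_TOKENS * WORDS_PER_TOKEN
ratio = ai_words / WORDS_PER_REVIEW_PASS
print(f"AI window ~ {ai_words:,.0f} words, roughly {ratio:.0f}x a careful human pass")
```

Even at these conservative assumptions the gap is well over an order of magnitude, and that is before counting the business and domain context that never fits in anyone's head.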

What happens when change exceeds human context?

  • Can you still hold the whole diff in your head? When can’t you?
  • Does review become sampling? When is comprehensive still possible?
  • Do you trust patterns over line-by-line inspection?
  • Does experience become “knowing where to look”?
  • What role do tests, linting, CI play — safety net or substitute for understanding?
  • Does the skill shift from “reading code” to “triaging risk”?
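
"Triaging risk" can be made concrete with a toy heuristic: rank changed files by a risk score and spend close-reading attention on the top few. The scoring rules, weights, and file names below are invented for illustration; a real tool would draw on ownership, test coverage, and incident history.

```python
# Hedged sketch of review-as-triage: rank a diff by an invented risk heuristic
# and return the files most worth a line-by-line read.

def risk_score(path, lines_changed):
    score = float(lines_changed)
    if "auth" in path or "billing" in path:
        score *= 3.0   # money- and security-critical paths get extra weight
    if path.startswith("docs/"):
        score *= 0.2   # documentation churn is usually low risk
    return score

def triage(diff_stats, budget=2):
    """Pick the files worth close reading, given a limited attention budget."""
    ranked = sorted(diff_stats, key=lambda fs: -risk_score(*fs))
    return [path for path, _ in ranked[:budget]]

changed = [("auth/session.py", 40), ("docs/readme.md", 200), ("billing/invoice.py", 15)]
print(triage(changed))  # the large docs change loses to small critical-path changes
```

Note what the heuristic encodes: raw diff size is a poor proxy for risk, which is exactly why experience ("knowing where to look") beats exhaustive reading at this scale.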

The abstraction alignment problem (thread → 007, 014)

  • Human directs at intent/architecture level
  • AI works at implementation level
  • You can’t verify alignment at the detail level — context too small
  • Risk of false agreement: both think they understand, neither does
  • Misalignment surfaces late, as bugs or “that’s not what I meant”
  • (More on this in 007: where to attend; 014: experience recognizes patterns)

Implications

  • Upfront clarity less critical? Build to discover, not specify then build
  • But: need enough direction to avoid wasted iterations
  • Does review change character?
  • Does experience = better intuition about what to check?
  • Do tools need to help summarise change, not just show diffs?
  • Does trust shift from inspection to process?
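
What "summarise change, not just show diffs" might look like at its simplest: collapse per-file stats (the `added<TAB>removed<TAB>path` format that `git diff --numstat` emits) into per-area churn totals, so a reviewer sees where the change landed before reading any lines. The input here is hard-coded sample data, not a real diff:

```python
# Sketch of a change-summarising tool: per-file numstat lines -> per-area churn.
from collections import defaultdict

def churn_by_area(numstat_lines):
    totals = defaultdict(int)
    for line in numstat_lines:
        added, removed, path = line.split("\t")
        area = path.split("/")[0]               # group by top-level directory
        totals[area] += int(added) + int(removed)
    # Largest-churn areas first, so attention goes where the change is.
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

sample = ["120\t30\tapi/routes.py", "15\t2\tapi/models.py", "300\t5\tfrontend/app.tsx"]
print(churn_by_area(sample))
```

A real tool would go further (semantic grouping, intent-level summaries), but even this directory-level rollup answers the first triage question: where did the change happen?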

Close

The tight loop fit human cognition. Each change was small enough to understand completely. Async changes the scale, but quality, learning, and accountability aren't negotiable. They're requirements any AI-based process must still meet. The question isn't whether we need them. It's how we achieve them when the old mechanisms no longer fit.


Notes

  • Core insight: human context window vs change-per-iteration mismatch
  • AI context windows massively exceed ours (100k → 1M+ vs ~4-7 items)
  • Review becomes probabilistic, not exhaustive
  • Experience = knowing where to sample
  • Connects to 002: work continues without you, and now it’s too big to fully review
Theme | Introduced here | Explored in
Context window mismatch | |
Abstraction alignment | Briefly | 007, 014
Review as sampling | | 007 (attention)
Experience matters more | Hinted | 014

Key tension

  • Iterate on requirements/design cheaply (good)
  • Each implementation iteration may exceed review capacity (challenge)
  • Upstream clarity becomes critical