Article 012: The goalkeeper’s dilemma: when AI writes 95% of your code, can you still execute the critical 5%?

Key Message: Automation makes you rusty at the work you stop doing. The 5% of coding AI can’t handle becomes the hardest 5% you need to execute yourself.

Series: C — Tensions & Implications (Part 1 of 6)
Publish Date: 2026-04-06 (weekly cadence; Article 011 published 2026-03-30)


Story Outline

Opening: The goalkeeper problem

Phase C opener note: This is the first article of a new phase. Do NOT open by picking up a thread from Article 011 or framing this as “continuing” from Phase B. Phase C begins a distinct mode: examining what the shift breaks, complicates, or challenges. The tone shifts from observational/reframing (Phase B) to tension-first. Open by dropping the reader directly into the discomfort — the hesitation at the keyboard, the moment where the skill is needed and isn’t there. No warm-up from earlier articles; earn the reader’s attention fresh. Readers may arrive without having read 011 or earlier articles; the opening must stand on its own as a Phase C piece.

The setup:

  • You’re reviewing AI-generated code for the 47th time this week
  • Tests pass, logic looks sound, you approve and move on
  • Then you hit an edge case AI can’t handle
  • You need to write the fix yourself
  • Your hands hesitate over the keyboard
  • How long has it been since you actually wrote code from scratch?

The tension:

  • AI handles 95% of routine coding
  • You mostly review, direct, supervise
  • But when AI fails or hits complexity it can’t handle, YOU need to write code
  • Can you still execute when needed?
  • Have your skills atrophied from disuse?

The goalkeeper analogy:

  • A goalkeeper spends 85 minutes watching the game
  • They must stay sharp for 3-4 critical moments
  • Long stretches of passive monitoring
  • Punctuated by rare, high-stakes intervention
  • If they don’t train between matches, their reflexes rust

The parallel:

  • You mostly watch AI code
  • Must act decisively in rare critical moments
  • Passive supervision punctuated by hands-on execution
  • If you don’t practice, your coding muscles atrophy

The shift: From efficiency gain to capability loss

What we thought automation would do:

  • Handle boring repetitive tasks
  • Free us for creative complex work
  • Make us more productive
  • Amplify our capabilities

What automation actually does (Bainbridge 1983):

  • Removes humans from the loop
  • Leaves them with passive monitoring
  • Rare crises requiring skills they no longer practice
  • “Long stretches of passive monitoring punctuated by rare, high-stakes crises they were increasingly unprepared for”

The industrial parallel:

  • Factory automation in the 1980s
  • Operators moved from doing to monitoring
  • When machines failed, operators couldn’t step in
  • Their manual skills had degraded from lack of practice
  • Same dynamic, different domain

The software version:

  • AI handles most implementation
  • Developers move from coding to reviewing
  • When AI fails, developers must code
  • But their hands-on skills have degraded
  • The deskilling problem

Why this is worse than it sounds:

  • The 5% AI can’t handle is the HARD 5%
  • Edge cases, performance issues, architectural decisions
  • Exactly the moments you need your sharpest skills
  • Exactly the moments those skills are most rusty

Evidence: What deskilling looks like

Example 1: The simple bug that isn’t simple anymore

  • Junior dev encounters authentication edge case
  • AI generates solution that breaks another module
  • Dev needs to debug and fix manually
  • Realizes they don’t remember how authentication flow works
  • They’ve been supervising AI implementations for months
  • Never had to trace the full path themselves
  • The knowledge has faded
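
To make this failure mode concrete, here is a minimal sketch (the function, the session store, and the edge case are all hypothetical, invented for illustration): a session check whose expiry branch has a side effect, the kind of cross-module interaction you only notice by tracing the full path yourself rather than reviewing each piece in isolation.

```python
import time

# Hypothetical in-memory session store (invented for illustration).
SESSIONS = {"tok-1": {"user": "ann", "expires": time.time() + 3600}}

def check_session(token):
    """Return the user for a valid token, None otherwise."""
    session = SESSIONS.get(token)
    if session is None:
        return None          # unknown token: treat as logged out
    if session["expires"] < time.time():
        del SESSIONS[token]  # expired: clean up, then treat as logged out
        return None
    return session["user"]

# The edge case: checking an expired token MUTATES the store. Any code
# elsewhere that re-checks the same token now sees "unknown" instead of
# "expired" - exactly the kind of interaction a locally-correct AI fix
# can silently break, and that only tracing the flow reveals.
SESSIONS["tok-2"] = {"user": "ben", "expires": time.time() - 1}
assert check_session("tok-2") is None
assert "tok-2" not in SESSIONS
```

Both branches return None, so a reviewer who only reads the function's return values never sees the difference; the bug lives in the side effect.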

Example 2: Performance degradation

  • Database queries getting slower
  • AI-generated code looks fine in isolation
  • Someone needs to profile, identify bottleneck, optimize
  • Team realizes no one has done database performance work in 6 months
  • AI handles most queries adequately
  • The skill of spotting N+1 queries has atrophied
  • When it matters, no one’s sharp
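
The N+1 pattern is concrete enough to sketch. A minimal, self-contained illustration (schema and data invented for the example) of why each query looks fine in isolation, and what the single-query fix looks like:

```python
import sqlite3

# In-memory database with a toy authors/books schema (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO books   VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

def titles_n_plus_one():
    # 1 query for the authors, then N more queries, one per author:
    # the classic N+1 pattern. Each query is individually fine.
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM books WHERE author_id = ? ORDER BY id",
            (author_id,),
        )
        out[name] = [title for (title,) in rows]
    return out

def titles_single_query():
    # One JOIN replaces all the per-author round trips.
    query = """
        SELECT a.name, b.title FROM authors a
        JOIN books b ON b.author_id = a.id ORDER BY b.id
    """
    out = {}
    for name, title in conn.execute(query):
        out.setdefault(name, []).append(title)
    return out

# Same result; wildly different query counts as the tables grow.
assert titles_n_plus_one() == titles_single_query()
```

Spotting this in review means noticing the query inside the loop, not reading either query on its own - which is precisely the pattern-recognition skill the example says goes unused.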

Example 3: The deployment that breaks

  • Production deployment fails halfway
  • Need to manually rollback, diagnose, fix
  • Team struggles with commands they used to know by heart
  • AI handles deployments 95% of the time
  • The muscle memory for emergency response is gone
  • High-stakes moment, rusty execution

The pattern:

  • AI handles routine well enough
  • Edge cases and failures are rare
  • When they happen, you need manual intervention
  • Your manual skills have degraded from disuse
  • The moment you most need capability, you have least

The false confidence trap:

  • Things work smoothly for weeks
  • You review AI output, it’s mostly fine
  • You feel productive, efficient
  • Then crisis hits
  • You discover your skills have eroded
  • Too late to train when the match is already on

Implications: The goalkeeper’s training regimen

The core problem:

  • Can’t prevent the skills gap (AI will keep improving)
  • Can’t go back to manual coding for everything (economically impossible)
  • Must find ways to maintain capability despite reduced practice
  • Need deliberate training, not just passive monitoring

Principle 1: Deliberate practice during downtime

  • Goalkeepers don’t stand idle between plays
  • Constant micro-adjustments, positioning, mental rehearsal
  • Translation: Don’t just passively review AI code
  • Actively engage: rewrite sections mentally, spot alternative approaches
  • Ask “how would I do this?” before seeing AI’s solution
  • Maintain active coding mindset even when not typing

Principle 2: Scenario drilling

  • Coaches fire shots from unpredictable angles
  • Practice rare high-stakes situations repeatedly
  • Translation: Create your own coding challenges
  • Work through edge cases manually
  • Practice debugging without AI assistance
  • Drill the skills you rarely need but must execute perfectly
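
One hedged illustration of what such a drill might look like (the function and test cases are invented for the example): a small routine with overlap-boundary traps, to be predicted on paper and hand-traced before running, with no assistant in the loop.

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals (a classic boundary drill)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# The drill: predict each result before running, paying attention to
# the touching-endpoint case ([1, 4] and [4, 5]) and the contained
# interval ([5, 5] inside the merged [1, 5]).
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[5, 5], [1, 4], [4, 5]]) == [[1, 5]]
```

The value is not the function itself but the forced prediction step: committing to an answer before execution is what exposes which boundary intuitions have faded.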

Principle 3: Pattern recognition training

  • Goalkeepers study opponents, common attack patterns
  • Build mental models before the match
  • Translation: Build libraries of common failures
  • Catalog AI mistakes and edge cases
  • Study architectural antipatterns
  • Develop pattern recognition for what breaks

Principle 4: Staying warm

  • Physical warm-ups between action
  • Mental engagement even when ball is far away
  • Translation: Regular small hands-on tasks
  • Don’t wait for crisis to code manually
  • Weekly coding exercises, even trivial ones
  • Keep the muscle memory fresh

Principle 5: Post-crisis analysis

  • Review what worked, what didn’t
  • Adjust training based on actual performance
  • Translation: Track your coding struggles
  • What took longer than it should have?
  • What knowledge did you need to relearn?
  • Calibrate your training to your actual gaps

The uncomfortable truth:

  • You can’t just “work naturally” and maintain skills
  • Requires deliberate, intentional practice
  • Practice that feels inefficient in the short term
  • But prevents capability loss in the long term

The team dimension:

  • Junior developers face this immediately
  • They supervise AI before building hands-on foundation
  • Senior developers lose edge from lack of practice
  • Both need different interventions
  • One needs to build skills, one needs to maintain them

Close: Efficiency vs capability

The tradeoff we didn’t see coming:

  • Automation promised pure efficiency gains
  • Assumed we’d keep our capabilities intact
  • Bainbridge showed this was false 40 years ago
  • We’re rediscovering the same lesson

The questions we’re left with:

  • How much practice is enough to stay sharp?
  • Which skills are worth maintaining vs letting atrophy?
  • Can you train for rare events when you can’t predict which ones?
  • Is there a minimum viable coding practice?

The goalkeeper knows:

  • They’ll face 3-4 critical moments per match
  • They must train for all of them
  • Can’t predict which ones will come
  • Must be ready for any of them
  • Training is the job, not just the matches

For developers:

  • You’ll face edge cases and failures
  • You must train for manual execution
  • Can’t predict which skills you’ll need
  • Must maintain readiness across the board
  • Supervision isn’t the whole job
  • Practice is

Phase C tone check: The “Practice is” line above is too declarative — it resolves the tension. Phase C articles should name the dilemma without prescribing a solution. The training-regimen section (Principles 1–5) risks feeling like a how-to guide. Treat it as a way of showing the shape of the problem, not a prescription. The close should land on the unresolved tension, not an answer.

The tension remains:

  • More automation = more efficiency = less practice
  • Less practice = degraded skills = worse at critical moments
  • Critical moments are exactly when you need sharp skills
  • Use it or lose it
  • The question: what’s worth using?

Notes

Key Reference

Bainbridge, L. (1983). “Ironies of Automation”, Automatica, Vol. 19, No. 6, pp. 775–779

  • Archived PDF: https://web.archive.org/web/20200526003628/https://ise.ncsu.edu/wp-content/uploads/2017/02/Bainbridge_1983_Automatica.pdf
  • Seminal work on the paradoxes of industrial process automation
  • Core insight: Automation removes humans from the loop, then leaves them unprepared for the rare crises it can’t handle
  • Paraphrase of the core irony (verify exact wording against the PDF before quoting verbatim): “Automation, which was inherently designed to remove humans from the loop, left them with the worst possible job, i.e., long stretches of passive monitoring punctuated by rare, high-stakes crises they were increasingly unprepared for”
  • Same dynamic appearing in software development with AI coding assistants

Threads from earlier articles

| From | Theme | Pick up here |
| --- | --- | --- |
| 006 | Shift to supervision | Supervision assumed you kept execution capability - what if you don’t? |
| 007 | Attention allocation | You allocate attention to review, but do you allocate time to practice? |
| 010 | Judgment relocated earlier | 010 argues judgment isn’t lost, just moved to the prep phase. 012 complicates this: if your execution skills have atrophied, does the quality of that earlier judgment degrade too? |
| 011 | Phone as diagnostic | 011 ends with “the reservoir drains if it’s never replenished” — that warning is the premise for 012. Do NOT repeat it; take it as established and investigate the mechanism of the draining (Bainbridge) and what it means for the hard 5%. |

Differentiation note: Article 011 already names the risk. Article 012’s job is to explain why it happens structurally (Bainbridge 1983 — ironies of automation), what specifically erodes (not skill in general but the rare high-stakes execution skills), and why AI-era deskilling is harder to notice and counter than factory-era deskilling. The Bainbridge reference is what makes this piece distinctive.

Potential examples to develop

  • Database performance optimization (specific N+1 query example)
  • Authentication/security edge cases
  • Deployment emergency scenarios
  • Debugging production incidents
  • Performance profiling with actual tools
  • Tracing through unfamiliar code

Questions to explore

  • How do junior devs learn if they supervise before they execute?
  • What’s the minimum coding practice to maintain capability?
  • Which skills atrophy fastest? Which persist?
  • Can AI help you train (or does it prevent training)?