Article 012: Social Media
URL: https://thelongrun.work/articles/012-the-goalkeepers-dilemma
Medium
Tags (5 max)
- Technology
- Software Engineering
- Future Of Work
- Skills
- Artificial Intelligence
Subtitle
“Automation makes you rusty at the work you stop doing. The 5% AI can’t handle becomes the hardest 5% you need to execute.”
LinkedIn Posts
Post 1: Publish-day (2026-04-06)
A goalkeeper spends 85 minutes watching.
Three or four moments per match where everything depends on execution. The rest is positioning, tracking, waiting.
If they trained the same way they played — mostly passive, rarely called upon — they wouldn’t stay sharp. They’d stay in the habit of watching.
Sound familiar?
AI now handles most of the routine coding. You review, direct, approve. The supervision is real work. But the moments that actually need hands-on coding are rare enough that you can go weeks without writing anything difficult from scratch.
And when those moments come — when AI fails on the eleventh iteration of the same prompt and you actually need to trace the problem yourself — you reach for a fluency the supervision phase has been quietly depleting.
In 1983, Lisanne Bainbridge wrote about factory automation doing the same thing to operators. She called it the “Ironies of Automation.” The system that needs you to intervene effectively is the same system reducing your opportunities to practise intervening.
The skills you stop practising are the ones you need most when AI fails.
New article: The Goalkeeper’s Dilemma → https://thelongrun.work/articles/012-the-goalkeepers-dilemma
Which skills have you noticed getting rustier?
Visual idea: A goalkeeper mid-match, watching play unfold at the other end of the pitch — passive, alert, waiting. Caption: “85 minutes of this. Then everything depends on you.”
Post 2: Insight (target: 2026-04-13)
AI-era deskilling is harder to spot than the factory version.
When a factory operator stopped working the machinery manually, they knew. The gap had a shape. They hadn’t touched the controls in six months — that was legible.
When a developer spends their days reviewing AI-generated code, it feels like coding. You’re in the IDE. You’re reading implementation. You’re making decisions about correctness.
The smooth weeks don’t feel like depletion. They feel like productivity.
The problem only becomes visible when something fails and the manual skill is needed. At which point the depletion has already happened.
Recognition is easier than recall. You can read a language well after years without speaking it. Speaking it is harder than you expect.
Visual idea: Two parallel timelines — “what the work feels like” (review, approve, ship) vs “what’s quietly changing” (manual skill fading). Simple diagram or text card.
Post 3: Reflection (target: 2026-05-04)
Since writing about the goalkeeper’s dilemma, I keep noticing the same pattern in different places.
The junior developer who can evaluate AI output accurately but struggles to produce equivalent code without assistance. The senior engineer who reviews architecture well but hesitates when asked to sketch a design from scratch.
Bainbridge’s insight from 1983 keeps proving durable: the system that needs you to intervene effectively is the same system that has been reducing your opportunities to practise intervening.
The question I keep coming back to: is there a minimum viable coding practice? Some baseline of hands-on work that preserves execution capacity? I don’t have a clean answer. But I suspect the people figuring it out are the ones already noticing the early signs of the problem.
Visual idea: A real moment — notebook with a handwritten problem, or working at a desk without the usual AI tools visible. Something that suggests deliberate practice.