There is a particular kind of hesitation that sometimes happens when you sit down to write code you haven’t written in a while. Not the ordinary uncertainty of an unfamiliar problem, but something more disorienting: the sense that the movements should be automatic and aren’t. You know what you need to write. The language is familiar. But your hands pause above the keyboard in a way they didn’t used to, and the fluency you expected isn’t quite there.
This experience is becoming more common, and not because developers are losing their skills. Something more specific is happening. When AI handles most of the routine implementation work, a developer’s role shifts toward review, direction, and oversight. Most of the time this feels like progress: less boilerplate, more judgment, faster delivery. But a structural problem is accumulating quietly. The skills you stop practising are the ones you need most when AI fails. And AI, reliably, fails at the hardest things.
The pattern Bainbridge named
Lisanne Bainbridge identified this exact dynamic in 1983, in a paper about factory automation. She called it “Ironies of Automation,” and the central irony she described was this: automation, designed to remove humans from the loop, left them with the worst possible version of the job. They were still required to monitor systems, catch failures, and intervene in emergencies. But they were no longer practising the manual skills those emergencies demanded. The work they were asked to do, responding effectively under pressure to rare and unexpected failures, was precisely the work their new role had stopped preparing them for.
Factory operators in the 1980s moved from doing to watching. They understood the systems they supervised, at least in the abstract, because they’d learned those systems through hands-on operation. But the hands-on operation had moved to machines. When something went wrong and a machine needed manual override, operators reached for skills that months of passive monitoring had allowed to deteriorate. The automation designed to make their work easier had made the hard moments harder.
The parallel to AI-assisted software development is uncomfortable in its closeness. A developer who spends most of their day reviewing AI-generated code is doing something genuinely useful: catching logical errors, identifying security gaps, verifying that the implementation matches the intent. But they are not debugging from first principles. They are not tracing state through a complex authentication flow. They are not writing performance-critical code under time pressure. And those are the things AI still cannot reliably handle: the edge cases, the security boundaries, the places where something subtle is wrong and the model keeps producing plausible-looking variations on the same mistake.
The goalkeeper problem
A goalkeeper spends most of a football match watching. Eighty-five minutes of positioning, tracking, and alertness, punctuated by three or four genuine interventions where everything depends on execution. If the goalkeeper trained the same way they played in a match, mostly reactive and rarely called upon, they wouldn’t stay sharp. They’d stay in the habit of watching. The work of being a goalkeeper is the training between the critical moments, not just the critical moments themselves.
There’s a version of the developer’s situation that maps onto this fairly directly. The supervision is real work; it requires attention, experience, and judgment. But the moments that actually demand hands-on coding are rare enough that you can go weeks without writing anything difficult from scratch. And when those moments come, when AI generates broken output for the eleventh iteration of the same prompt and you need to actually trace the problem yourself, you reach for a fluency that the supervision phase has been quietly depleting.
What the depletion looks like
Consider a few patterns that are beginning to surface.
A developer encounters an authentication edge case. AI generates a solution that passes the obvious tests but breaks a subtle session-handling rule. The developer needs to trace the full authentication flow manually to understand why. They find they can’t hold the execution path in their head the way they once could. They’ve been reviewing AI implementations for months and haven’t had to follow the logic themselves. The knowledge is there, but it has receded. What used to be automatic now requires effort.
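The kind of subtle session-handling rule at issue might be something like session fixation: a login handler that authenticates correctly and passes the obvious tests, but reuses the pre-login session ID instead of rotating it. The sketch below is purely illustrative; the `Session` class and handler names are invented, not from any real framework.

```python
import secrets

class Session:
    """Toy session store, for illustration only; real frameworks differ."""
    def __init__(self):
        self.id = secrets.token_hex(16)  # random session identifier
        self.data = {}

def login_vulnerable(session, user):
    # Passes the obvious test: the user ends up logged in.
    # But the pre-login session ID survives, so an attacker who planted
    # that ID before login now shares the authenticated session
    # (session fixation).
    session.data["user"] = user
    return session

def login_safe(session, user):
    # The subtle rule: rotate the session ID at the privilege boundary,
    # discarding whatever ID existed before authentication.
    fresh = Session()
    fresh.data["user"] = user
    return fresh

pre_login = Session()
assert login_vulnerable(pre_login, "alice").id == pre_login.id  # ID kept

pre_login = Session()
assert login_safe(pre_login, "alice").id != pre_login.id  # ID rotated
```

Spotting the difference in review is easy once it is pointed out; tracing why the first version is dangerous, through the full flow, is the skill that recedes.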
A team notices database queries slowing over several weeks. The AI-generated code looks fine in isolation. Someone needs to profile the queries, identify what’s producing N+1 patterns, and understand the query planner’s behaviour. The team realises nobody has done serious database performance work in half a year. The AI handles most queries adequately, so this kind of low-level investigation has essentially stopped. When it matters, the skill is rusty in a way nobody noticed it becoming.
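The N+1 pattern mentioned here is easy to demonstrate in miniature. A sketch using Python's built-in sqlite3, with invented table names: fetching N parent rows and then issuing one query per row produces N+1 round trips, where a single JOIN would do. The trace callback counts the statements actually executed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO books VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

queries = []  # record each executed statement
conn.set_trace_callback(lambda stmt: queries.append(stmt))

# N+1: one query for the authors, then one more per author for their books.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, _name in authors:
    conn.execute("SELECT title FROM books WHERE author_id = ?",
                 (author_id,)).fetchall()
n_plus_one = len(queries)  # 1 + N queries; 3 with two authors

# The same data in a single statement.
queries.clear()
rows = conn.execute("""
    SELECT a.name, b.title
    FROM authors a JOIN books b ON b.author_id = a.id
""").fetchall()

assert n_plus_one == 3 and len(queries) == 1
```

In isolation each of the N+1 queries looks fine, which is exactly why review-only workflows miss it; the pattern only appears when someone profiles the whole request.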
A production deployment fails partway through. The rollback procedure needs to be executed manually, which requires understanding the exact state of the system and the implications of each step. The team struggles with commands they used to run from memory. Automated pipelines have handled deployments for months, and the muscle memory for the edge cases has degraded. The high-stakes moment arrives and the execution is uncertain.
The pattern across these examples is consistent. AI handles routine work well enough that the corresponding manual skills stop being exercised. The skills don’t disappear, but they lose the sharpness that comes from regular use. And this creates a specific asymmetry: the work AI fails at tends to be the work that demands the most from those skills. The easy things get automated. The hard things remain human. The hard things are exactly where degraded skills cost the most.
Why this is harder to see than the factory version
Factory operators had some visible warning signals. The manual skill loss had a shape they could recognise: if you hadn’t operated the machinery in six months, you knew you hadn’t operated it. The gap was legible.
AI-assisted deskilling is harder to notice. Reviewing AI code all day feels like coding. You’re in the IDE, reading implementation, making decisions about correctness. The texture of work is present even when the actual exercise of manual coding isn’t. The smooth weeks, when AI output is good and nothing breaks, don’t feel like depletion. They feel like productivity. The problem only becomes visible when something fails and the manual skill is needed, at which point the depletion has already happened.
There is also a confidence problem. When code reviews are going well and AI output is mostly correct, it is easy to feel that your understanding is intact. You approved the output, so you clearly understood it well enough to approve it. But understanding output well enough to approve it isn’t the same as being able to produce it. Recognition is easier than recall. You can still read a language fluently years after you stopped speaking it; speaking it again is harder than you expect.
For developers earlier in their careers, the problem is different but compounds in its own way. They may be supervising AI before they’ve built the hands-on foundation that makes supervision meaningful. They know what good output looks like from studying it; they haven’t had the experience of producing it. The capacity to evaluate execution is built, in part, through the experience of executing, and the conditions that erode that capacity in senior developers are also preventing it from forming properly in junior ones.
How goalkeepers stay sharp
Goalkeeping coaches have developed structured responses to this exact problem. The answer isn’t to change the nature of match play; it’s to build a parallel training practice that compensates for its passivity. Keepers drill specific scenarios repeatedly: penalty saves, one-on-ones, crosses under pressure. They work with coaches who fire shots from unpredictable angles, practising the rare and difficult rather than just the common and manageable. Between actions during a match, they stay physically and mentally engaged: adjusting position, tracking movement, maintaining the readiness that passive watching alone would erode.
The parallel asks an uncomfortable question of software developers. The goalkeeper’s training regimen exists because the sport created a recognised structural need and built an institution around it. Coaches, drills, training sessions: these are formal, budgeted, expected. Nobody questions why a goalkeeper is running drills instead of sitting on a bench waiting to play. The training is understood as the job.
For developers, there is no equivalent institution. Deliberately writing code without AI assistance, practising debugging from first principles, drilling the skills most likely to be needed in emergencies: these activities have no recognised structural home. They are not allocated in sprint planning. They are not treated as part of the job in the way that goalkeeper training is. An individual developer can decide to practise manually, and many do, but it requires going against the grain of an environment optimised for throughput. The goalkeeper’s answer to the deskilling problem is available in principle. What’s absent is the context that makes it a normal part of professional practice rather than an individual act of discipline.
The irony Bainbridge named, arriving again
Bainbridge called her paper “Ironies of Automation” because the problem she was describing was structural, not incidental. Automation creates the conditions for the skill loss it depends on you not having. The system that needs you to intervene effectively is the same system that has been reducing your opportunities to practise intervening.
The response, “just practise more,” is not wrong, but it’s incomplete. It restates the problem without resolving it. If practising manually costs time and effort that automation was supposed to save, then deliberate practice works against the efficiency gain that justified the automation. The goalkeeper trains between matches because that’s the job. But developers are not goalkeepers: their organisations are not necessarily paying them to stay sharp for the rare moment. They’re paying for working software, and AI increasingly produces it faster.
This is the tension the goalkeeper analogy is pointing at. Not a problem to be engineered away, but a structural difficulty to be held. More automation means more efficiency and less practice. Less practice means slower, less reliable execution when it matters. The moments where execution needs to be sharpest are exactly the moments the automation was inadequate for. That’s not a coincidence; it’s the shape of the problem. Bainbridge saw it in factories. We’re meeting it again, differently dressed.
The questions this leaves open don’t have clean answers. How much manual practice is enough to stay genuinely competent at things you rarely do? Which skills are maintained by supervision and which need direct exercise to stay sharp? Is there a minimum viable coding practice that preserves execution capability without abandoning the efficiency gains that automation provides? These are not rhetorical questions. The developers who will figure this out are probably already experiencing the early versions of the problem, without quite being able to name what they’re noticing.