The IDE is where creation happens. It always has been. You open a file, you type, and code appears line by line. It’s immediate, tactile, visible — you’re there when it happens. The IDE isn’t just where you view your work; it’s where you make it. This feels definitional, so fundamental it barely needs stating: a workshop, a studio, the place where things come into being through direct effort. But something is shifting. Not what the IDE can do, but what it’s for.

Still writing, differently
You still open your IDE daily. You still write code. But increasingly, the code you’re looking at isn’t code you just wrote. An agent worked overnight. A colleague pushed changes. A refactoring tool ran while you were in a meeting. Work happened without you, and now you’re deciding: keep it or change it? The primary action shifts — from writing to reviewing, from generating to deciding, from making to approving or redirecting. The IDE becomes an intervention surface, a place where you step in to assess and steer, not where everything originates.
This shift shows up most clearly when you notice how your days actually begin. You wake up to a pull request created overnight by an agent. The diff is substantial — hundreds of lines, maybe dozens of files. Your job isn’t to write it; it’s to decide if it’s right. You scan for patterns, check alignment with the intent you specified yesterday, look for edge cases the agent might have missed. You intervene — commenting on a subtle bug, tweaking an API signature, sometimes rejecting an entire approach and redirecting toward something better. But you don’t author from scratch. The IDE is where you exercise judgment, not where you do the initial drafting. This feels different from the old rhythm, where you’d sit down to a blank file and start typing.
The same pattern appears throughout the day. You hit a build failure mid-afternoon. You open your IDE, inspect the logs, trace the problem to a dependency conflict in code you didn’t write — perhaps an agent updated it, perhaps a teammate, perhaps an automated refactoring tool made changes while you were focused elsewhere. You fix the issue in minutes and close the editor. The IDE was open briefly, just long enough to intervene and correct course. You didn’t spend hours there crafting something new. You stepped in, made a targeted change, and left. This happens more and more: short bursts of focused correction rather than long stretches of sustained creation.
Sometimes you don’t even need the IDE at all. You review a pull request directly on GitHub, scanning the diff view in your browser, leaving comments on specific lines. Or you open a web-based editor like OpenCode, make a small correction, commit directly from the browser, and move on. For certain kinds of review work — checking logic, approving straightforward changes, catching obvious problems — the full power of a local IDE feels like overkill. The browser is enough. Your phone is sometimes enough. The work has become lightweight enough that the heavyweight tools aren’t always necessary. This would have felt wrong a few years ago, almost unprofessional — “real” development happened in a proper IDE. Now it’s just practical. If the primary task is judgment rather than creation, the tools can be simpler. You don’t need autocomplete when you’re not typing much. You don’t need a debugger when you’re just reading. The shift in what you’re doing changes what tools you actually need.
What becomes harder to hold
When work happens without you watching, certain kinds of understanding become more difficult to maintain. You used to build features iteratively, making small changes and seeing their effects immediately. Each decision was informed by the last. The architecture emerged through dozens of micro-adjustments that felt natural because you were there for all of them. Now an agent might implement an entire feature overnight based on a specification you wrote yesterday. The implementation is complete when you return to it, all decisions already made, all trade-offs already chosen. You’re reviewing a finished thing rather than shaping an evolving one. This changes what you can know about it. The “why” behind each choice isn’t obvious because you didn’t make those choices. You have to reconstruct the reasoning, infer the logic, guess at the constraints the agent was optimizing for. Sometimes the choices are good. Sometimes they’re subtly wrong in ways that take time to notice. The skill shifts from building correctly to recognizing correctness after the fact.
What this means for how tools should work
Today’s IDEs are optimized for authoring. Autocomplete suggests what you might type next. Syntax highlighting shows errors as you write them. Inline suggestions offer completions for the line you’re currently crafting. The entire interface assumes you’re building code keystroke by keystroke, with the IDE supporting each small decision you make in real time. If you’re spending more time reviewing than writing, different tools start to matter. Diff views become critical — you need to see what changed, understand the scope of modifications, spot patterns across dozens of files simultaneously. Context summaries matter more than they used to: “Where did this come from? Who made this decision? What was the intent behind this change?” Navigation shifts focus — “why was this changed?” becomes as important as “where is this function used?” The questions you’re asking of your codebase have changed, but the tools haven’t fully caught up yet.
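That at-a-glance sense of scope can be approximated with plumbing that already exists. As a minimal sketch (the thresholds, categories, and sample paths are illustrative, not from any particular tool), a script might parse the output of `git diff --numstat` — real git plumbing that emits one `added<TAB>deleted<TAB>path` line per file — and surface where review attention should go first:

```python
# Sketch: summarize the scope of a change from `git diff --numstat` output.
# The numstat format is real: "<added>\t<deleted>\t<path>" per file, with
# "-" in place of counts for binary files. The categories are illustrative.

def summarize_numstat(numstat: str) -> dict:
    """Return file count, total churn, and the highest-churn files in a diff."""
    files = []
    for line in numstat.strip().splitlines():
        added, deleted, path = line.split("\t")
        if added == "-":          # binary file: git reports no line counts
            continue
        files.append((path, int(added) + int(deleted)))
    files.sort(key=lambda f: f[1], reverse=True)
    return {
        "files_changed": len(files),
        "total_churn": sum(churn for _, churn in files),
        "hotspots": files[:3],    # the files worth reading closely first
    }

# Hypothetical diff from an overnight agent run:
sample = "120\t4\tsrc/api.py\n2\t2\tREADME.md\n-\t-\tlogo.png\n8\t30\tsrc/util.py"
print(summarize_numstat(sample))
```

In practice the input would come from `git diff --numstat main...feature-branch`; the point is that a review-first workflow wants churn and hotspots surfaced before any line of the diff is read.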
The skills that matter shift in response to this new reality. Writing well still matters, of course. Clear code, good naming, thoughtful structure — these haven’t stopped being important. But reviewing well starts to matter more. Can you assess a large diff quickly, pulling out the meaningful changes from the mechanical ones? Can you spot subtle misalignments between the intent you specified and the implementation that resulted? Do you know what parts of a change deserve close scrutiny and what parts you can safely trust? Code review used to be an occasional skill, something you applied when teammates opened pull requests and needed feedback. Now it’s constant, applied to your own work-in-progress as much as others’, exercised multiple times per day rather than a few times per week. The judgment required is different too. You’re not just checking whether code is correct — you’re evaluating whether it matches an intent that might have been expressed verbally or in prose, assessing alignment between what you meant and what got built.
Visualizing change becomes essential in ways it wasn’t before. When you wrote code yourself, you understood the relationships intuitively — you knew which modules depended on what you were changing because you’d built those connections. You could trace the impact in your head. When reviewing code you didn’t write, especially across multiple concurrent projects, that intuitive map no longer exists. You need tools that show you the blast radius of a change, the dependency graph of what touches what, the ripple effects that aren’t obvious from looking at a diff alone. An agent refactors a shared utility function used across three projects — which features break? Which tests need updating? What integration points are affected? The questions aren’t new, but answering them without having built the system yourself requires different tooling. Dependency visualizations, impact analysis, cross-project relationship mapping — these shift from nice-to-have debugging aids to essential review infrastructure. The challenge isn’t just understanding what changed in isolation, but understanding how that change propagates through a system you’re supervising rather than intimately building. When you’re juggling five concurrent feature branches, each touching different parts of the codebase, the spatial and relational understanding you used to carry in your head needs to be externalized into tools that can show you what you can no longer see directly.
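The blast-radius question can be made concrete even without dedicated tooling. As a rough sketch (the module names and the forward dependency map are hypothetical; a real system would derive the map from import statements or build metadata), inverting a dependency graph and walking it transitively answers “who is affected if this changes?”:

```python
# Sketch: invert a forward dependency map ("module -> what it imports")
# into a reverse map ("module -> who imports it"), then walk it
# transitively to estimate the blast radius of a change.
from collections import defaultdict

def blast_radius(deps: dict[str, set[str]], changed: str) -> set[str]:
    """All modules that transitively depend on `changed`."""
    reverse = defaultdict(set)
    for module, imports in deps.items():
        for imported in imports:
            reverse[imported].add(module)
    affected, frontier = set(), [changed]
    while frontier:
        for dependent in reverse[frontier.pop()]:
            if dependent not in affected:
                affected.add(dependent)
                frontier.append(dependent)
    return affected

# Hypothetical project: an agent just refactored shared.util overnight.
deps = {
    "billing":   {"shared.util"},
    "reports":   {"billing"},
    "auth":      {"shared.util"},
    "dashboard": {"reports", "auth"},
}
print(sorted(blast_radius(deps, "shared.util")))
# ['auth', 'billing', 'dashboard', 'reports']
```

The transitive walk is the part intuition used to do for free: `dashboard` never imports `shared.util` directly, but it still sits inside the blast radius.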
Testing becomes more important and more complicated at the same time. When you wrote code yourself, you tested as you went — running it locally, checking edge cases, validating behavior before committing. When an agent generates code overnight, you wake up to something that claims to work but that you haven’t run yet. Did the agent write tests? Are they comprehensive? Do they actually validate the behavior you care about, or just the behavior that’s easy to test? You need a way to quickly spin up the change in isolation, see it running, verify it does what you intended. This matters more than it used to because you can’t trust your intuition about code you didn’t write. But it’s also harder because the infrastructure for testing might not be designed for this workflow. Standing up a test environment to validate a single change can take longer than reviewing the code itself. Preview environments, isolated test instances, quick deployment pipelines — these become essential infrastructure, not nice-to-haves. The ability to test changes rapidly, without needing the full local development environment running, becomes a bottleneck in the review process. If you can’t easily verify that the code works, you’re left either trusting blindly or spending significant time setting up the ability to check.
How your relationship to the codebase changes
You used to write most of the code yourself, or at least be present when it was written. You could hold it in your head — not line by line, which was never really possible for large systems, but structurally, conceptually. You understood the architectural choices because you’d made them. You knew why certain modules existed because you’d created them to solve problems you’d encountered directly. The code felt familiar because you’d built it, shaped it, lived with it through multiple iterations. Now code arrives from elsewhere. Agents generate features overnight. Automation refactors modules while you’re in meetings. Teammates working asynchronously push changes to shared repositories. You can’t hold it all in your head the same way because you weren’t there for the decisions. The context isn’t embedded in your memory through the act of creation.
This changes what you rely on. You become more dependent on tools to reconstruct context when you need it. Blame annotations showing who changed what and when. Commit messages explaining why decisions were made. Documentation — which suddenly matters much more when you can’t just remember the reasoning. The IDE shifts from being a canvas where you paint to being a lens through which you look, trying to understand what’s already there. You’re navigating someone else’s mental model as much as your own. The codebase becomes less like an extension of your thinking and more like a shared artifact that you supervise and maintain alongside others — human and otherwise.
A different posture, a different pace
The shift resembles the transition from individual contributor to manager — moving from doing the work yourself to supervising others doing it. The parallel is obvious: less direct execution, more oversight and direction. But there’s a key difference that makes the comparison incomplete: the pace and density of decisions. Management operates on timescales of days or weeks. You assign tasks, check in periodically, review outcomes when they’re complete. You have thinking time between decisions. The feedback loop is long enough that you can step back, reflect, consult with others before choosing a direction. AI-assisted work operates on timescales of minutes or hours. An agent produces changes overnight. You review them in the morning over coffee. You approve some, redirect others, request clarifications on a few. Another task starts before lunch. By afternoon, you’re reviewing again. The volume of change to process is much higher than traditional management. The cognitive load is different — less time to reflect between decisions, more rapid-fire judgment calls, a constant stream of “yes, no, maybe, try again” flowing through your attention.
This creates a tension that’s showing up in how people experience the shift. Some engineers still feel the craft of writing software strongly — the tactile satisfaction of building something line by line, the intimacy of direct creation. For them, reviewing agent-generated code can feel like a loss, a step away from what made the work meaningful. Others, especially those who transitioned to leadership years ago, recognize they’ve been creating less directly for a long time already. They shaped systems through delegation, through architectural decisions, through setting direction for teams. They still feel like creators of code, just with more layers between them and the keystrokes. Both perspectives are real, both are held strongly across the engineering community, and both face different challenges as the world shifts. The first group is grappling with a change in daily practice that threatens something fundamental about why they chose this work. The second group is watching the abstraction layer move again, wondering whether the new distance is qualitatively different from the old one, whether the skills that mattered in human-to-human delegation still apply when the delegation is human-to-AI.
You might spend an hour reviewing changes an agent made, correcting course where needed, approving what looks right, redirecting what doesn’t quite match your intent. It’s productive work. It’s valuable. The output is meaningful, sometimes more than you could have produced yourself in the same time. But it feels different from spending an hour building something from scratch, seeing it take shape under your hands, making each small decision consciously as you go. The IDE is open either way. The screen looks similar. But the posture is different. You’re evaluating rather than creating, supervising rather than executing, making judgment calls about work that happened without you rather than shaping work in real time through direct action. Both are skilled activities. Both require expertise. They’re just not the same thing, and pretending they are misses something important about how the work has changed.
Not a loss, but different
This isn’t about replacement or reduction. Intervention is still creation. Deciding what to keep, what to change, and what direction to take next — that’s creative work, demanding just as much skill and judgment as writing code from scratch. In some ways it demands more, because you’re operating at a higher level of abstraction, shaping systems rather than functions, directing outcomes rather than implementing steps. The workshop is becoming a checkpoint. Both matter, and the transition from one to the other changes what your day feels like, what expertise means, and where your attention goes.
There’s still creation happening, just at a different layer. Instead of creating implementations, you’re creating constraints, directions, specifications that agents elaborate into working systems. Instead of crafting each line, you’re shaping the intent and then refining the result. The creative act hasn’t disappeared; it’s moved upstream. What you make isn’t code directly — it’s the frame within which code gets made, the boundaries that guide automated work toward useful outcomes. This is creation too, just of a different kind. The value you provide hasn’t diminished; it’s concentrated differently, focused more on judgment and less on execution.
Where this leaves us
The IDE hasn’t stopped being central. It’s still where you go when work needs attention, when something requires a decision, when an outcome needs evaluation. But the nature of that attention is changing. You’re intervening more than originating. Reviewing more than drafting. Deciding more than doing. The tools we use, the skills we cultivate, and the rhythms we develop will need to adapt to this reality. Not because the old ways were wrong, but because the work itself is shifting underneath us — not gradually, not subtly, but fast enough and substantially enough that the old patterns no longer fit quite right.
There’s another shift worth naming: the breadth of what you can tackle concurrently expands dramatically when you’re directing rather than implementing. When you had to write every line yourself, you worked on one thing, maybe two. Your capacity was bounded by your keystrokes, your working memory, the hours in your day. Now agents can work in parallel on multiple tasks while you supervise all of them. You might have five features in flight, three refactorings underway, two bug investigations running simultaneously. The theoretical productivity is enormous. But the cognitive load is different in kind, not just degree. Your attention fragments across more contexts. Your headspace never fully clears because there’s always something waiting for review, always another decision pending. The boundaries between work and rest blur because you can always check in, always respond to one more completed task, always redirect one more agent. Personal satisfaction shifts too — the deep focus of building something start to finish, the clear sense of completion, these become rarer. You’re juggling more, accomplishing more in aggregate, but feeling it less. This isn’t inherently bad, but it’s also not free. The impact on ourselves and on others — on rest, on satisfaction, on what it feels like to do this work — deserves attention as much as the productivity gains do.
This is worth paying attention to, not because it requires immediate action, but because understanding the shift helps navigate it. The IDE as intervention surface isn’t the future; it’s already here for many of us, showing up in small ways throughout the day. Recognizing it, naming it, sitting with what it means — that’s the first step toward working with it intentionally rather than simply reacting as the ground moves.