Article 010: Social Media

URL: https://thelongrun.work/articles/010-what-long-running-agents-change


Medium

Tags (5 max)

  1. Technology
  2. Artificial Intelligence
  3. Future Of Work
  4. Software Engineering
  5. Productivity

Subtitle

“Persistence matters more than intelligence. Agents extend work across time, not capability.”


LinkedIn Posts

Post 1: Publish-day

The conversation about AI agents is almost entirely about capability.

Can it write production code? Can it reason through complex problems? Can it handle a full feature end to end?

These are fair questions. But they’re not the most interesting ones.

The more important shift is temporal. Long-running agents change when work happens, not just how well it’s done.

A capable agent working overnight doesn’t replace your judgment. It moves your judgment earlier: into the spec you approve, the tests you lock, the constraints you document before you step away.

But here’s what most people miss: you also need to review the agent’s assumptions, not just its output. An agent that documents its assumptions as it works — what it interpreted, what trade-offs it made, what it decided when the spec was silent — gives you something concrete to check. Wrong assumptions compound. Clean code that solves the wrong problem still passes tests.

The practical discipline: have agents write their assumptions to a file continuously, with the impact of each choice. When you find one that’s wrong, you can trace exactly what it affected.

What extends across time isn’t execution capacity. It’s the quality of the artifacts — and the visibility of the assumptions made along the way.

Persistence matters more than intelligence. Agents extend work across time, not capability.

New article: https://thelongrun.work/articles/010-what-long-running-agents-change

Visual: Timeline showing a working day split into three phases:

  • Left: “4pm — Setup” (human writing spec, locking tests, documenting constraints)
  • Center: “2am — Execution” (agent working, with an assumptions.md file growing alongside the code)
  • Right: “9am — Review” (human reading assumptions.md, tracing one incorrect assumption to its impact)

Post 2: Insight (target: 2026-03-30)

When an agent works overnight and you review the output in the morning, what are you actually checking?

Most people check whether the code works. Does it compile? Do the tests pass? Does the output look right?

That’s necessary but insufficient.

The harder question: did the agent understand the problem the same way you do? Every ambiguity in a spec is a place where the agent made an interpretive choice. Every decision it didn’t escalate is an assumption about what you would have chosen.

Wrong assumptions compound. An agent that misreads a requirement early produces internally consistent work that looks correct but solves the wrong problem. The tests pass. The code is clean. But the premise was off.

The fix: have agents document their assumptions continuously — not just upfront, but as they encounter decisions during execution. What did they interpret? What did they choose? What was the impact?

Review the assumptions, not just the output.

https://thelongrun.work/articles/010-what-long-running-agents-change

Visual: Screenshot of an assumptions.md file with 4-5 entries — each showing the assumption, the decision made, and the impact on the implementation. One entry highlighted/crossed out as “incorrect” with an arrow pointing to the code it affected.
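For reference when producing this visual, here is a hypothetical mock of what the entries could look like. The file structure, scenario, and every detail below are invented for illustration; only the three fields (assumption, decision, impact) come from the post copy:

```markdown
# assumptions.md (hypothetical mock)

1. Assumption: "retry" on a failed upload means re-sending the whole file.
   Decision: implemented a full re-send rather than a resumable upload.
   Impact: bandwidth usage and the progress-bar logic.

2. Assumption: timestamps in the export should use the server's timezone.
   Decision: formatted all dates as UTC with an offset suffix.
   Impact: every date field in the CSV export.

3. ~~Assumption: duplicate rows should be silently dropped.~~  ← INCORRECT
   Decision: added deduplication in the import step.
   Impact: the import step and the row-count report.  → arrow to affected code
```

Entry 3 is the one to render as highlighted/crossed out, with the arrow pointing to the code it affected.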


Post 3: Reflection (target: 2026-04-20)

[Draft post content]

Visual: Photo of a morning review session — coffee, laptop showing a git log and an assumptions file side by side. The mundane reality of the new review rhythm.