Are we building agents, or are we going to be replaced by them?
- Chris Green
A lot of the work many of us do can already be done by AI agents. Not perfectly, not end-to-end, but well enough to be useful. For a long time that gap between “useful” and “reliable” was where human job security lived.
What’s different as we move into 2026 is that the capability is finally catching up with the hype.
The tooling is better. Models are better. Orchestration is better. Memory, retrieval, tool use, scheduling, retries, and guardrails are no longer research projects. They’re products. The promise of automation is now strong enough that it’s not going to be politely ignored or deferred to “later”.
That creates an uncomfortable fork in the road.
One option is to opt out. To decide this is all overblown, unreliable, or philosophically distasteful. The problem is that refusing to take part doesn’t pause the system. It just means decisions about how work is automated will be made without you. In many cases, that leads not to protection but to replacement.
The more pragmatic response is to understand how to work with agents rather than pretending they’re not coming.
In the short to medium term, people who can build, adapt, and maintain agents are in a commanding position. Not because they are “AI experts”, but because they understand how to translate messy real-world work into systems that can cope with partial information, failure modes, and trade-offs.
Crucially, this advantage doesn’t disappear just because more tools become available.
Yes, anyone can now spin up an agent or copy-paste an n8n workflow. That lowers the barrier to entry, but it doesn’t eliminate the hard part. In fact, it increases demand for agents that are actually good. Thoughtful agents. Boring agents. Agents that fail quietly, log properly, escalate at the right moment, and don’t hallucinate their way into production disasters.
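To make that concrete, here’s a minimal sketch of the boring pattern being described: retry with backoff, log every failure, and hand off to a human once retries run out rather than pressing on. The names (`run_step`, `attempt_fn`) and the retry numbers are hypothetical placeholders, not a reference implementation.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")


class EscalationRequired(Exception):
    """Raised when the agent should stop and hand off to a human."""


def run_step(task, attempt_fn, max_retries=3, backoff_s=2.0):
    """Run one agent step with retries, logging, and escalation.

    `attempt_fn` is any callable that tries the task once and raises
    on failure (a stand-in for a model or tool call).
    """
    for attempt in range(1, max_retries + 1):
        try:
            result = attempt_fn(task)
            log.info("step ok: task=%r attempt=%d", task, attempt)
            return result
        except Exception as exc:
            # Fail quietly: record the failure, back off, try again.
            log.warning("step failed: task=%r attempt=%d err=%s",
                        task, attempt, exc)
            time.sleep(backoff_s * attempt)
    # Escalate at the right moment: out of retries, raise for a human
    # instead of guessing a result into production.
    log.error("escalating: task=%r after %d attempts", task, max_retries)
    raise EscalationRequired(task)
```

The specific code matters less than the shape of it: logging and escalation are designed in from the start, not bolted on after the first production incident.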
A lot of the workflows being shared today look impressive. They demo well. They sound smart. They’re often 80–90% there. That last 10–20% is where most of the real work lives: edge cases, data quality, ambiguity, incentives, and organisational reality. Without that work, automation is pointless. You’ve just replaced slow failure with fast failure.
Some teams will get away with deploying agents that do mediocre work quickly. In some contexts, that will genuinely be an improvement. But in many cases, all that’s happening is pre-AI slop being replaced by AI slop. Faster, cheaper, and more confidently wrong.
The real leverage comes from understanding not just how to build agents, but how to design good workflows. That means understanding the problem deeply, appreciating the systems around it, and working within constraints rather than complaining they exist. Agents don’t magically fix broken processes. They faithfully automate them.
So the question for 2026 isn’t “will agents replace us?”
It’s whether we’re willing to understand the work well enough to shape how they’re used.
Because if you don’t, someone else will. And they won’t ask for your input.