How do you manage bots in the age of AI and agents?
- Chris Green
- 2 days ago
Bot management used to be a relatively contained problem. A small number of well-understood search engine crawlers, a handful of bad actors, and some fairly blunt controls were usually enough. Not any more.
Today, there are more bots that matter for visibility, discovery, and commercial outcomes than ever before. It’s no longer just Google and Bing. It’s a growing ecosystem of crawlers, scrapers, training bots, AI search providers, agents, and hybrid systems operating on behalf of users, platforms, and models.
This shift fundamentally changes how bot management needs to be approached.
A more complex bot landscape
Modern visibility depends on interacting with a much broader range of bots than traditional SEO ever required. Some of these bots drive discoverability. Some inform AI-generated answers. Some are used for training. Others operate as agents acting on behalf of users. They don’t all behave the same way, and they don’t all deserve the same treatment.
At the same time, CDN providers such as Akamai, Cloudflare, and Fastly are increasingly positioned as the default line of defence. They are well equipped to detect, classify, and manage automated traffic at scale. However, their incentives don’t always align with yours.
A CDN’s primary goal is to protect infrastructure, performance, and availability. Your goals may be more nuanced. You might want to allow certain providers to crawl for discovery but not for training. You might want to support transparent, well-behaved agents while blocking opaque scraping. Assuming that a CDN’s default bot rules are “good enough” is increasingly risky.
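In practice, much of the discovery-versus-training split is expressed through per-user-agent rules. As a rough sketch (crawler tokens change and new ones appear regularly, so verify each against the provider's current documentation before relying on it), a robots.txt that welcomes search crawlers but opts out of training crawlers might look like:

```
# Allow conventional search crawlers
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Opt out of AI training crawlers (tokens per provider docs; verify before use)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Google's separate control for AI training use of content
User-agent: Google-Extended
Disallow: /
```

Remember that robots.txt is advisory: well-behaved bots honour it, but it is a statement of policy rather than an enforcement mechanism, which is one reason CDN-level controls still matter.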
Bot management is no longer just a technical consideration
Even when you technically can control bots, the harder problem is deciding how you want to.
Bot access now raises business-level questions:
- Who do we want to be visible to?
- Who are we willing to train?
- Which agents do we trust to act on behalf of users?
- Where do we draw the line between access and protection?
These decisions can’t be made solely by security or infrastructure teams. They require alignment between engineering, SEO, analytics, legal, and commercial stakeholders.
Without that alignment, bot management becomes reactive, inconsistent, and often counterproductive.
If you don’t have a CDN that supports granular bot controls, and you don’t have the capability to collect, analyse, and interpret bot traffic data, you are already falling behind.
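Even a simple pass over server logs can show which bots you are actually serving. The sketch below is illustrative only: the signature table is a hypothetical, hand-maintained list (a real deployment should use a regularly updated verified-bot feed), and it assumes common log format with the user agent as the final quoted field.

```python
import re
from collections import Counter

# Hypothetical user-agent substrings mapped to coarse categories.
# Real deployments should maintain an up-to-date, verified list.
BOT_SIGNATURES = {
    "Googlebot": "search",
    "Bingbot": "search",
    "GPTBot": "ai-training",
    "CCBot": "ai-training",
    "ChatGPT-User": "ai-agent",
}

def classify_user_agent(ua: str) -> str:
    """Return a coarse bot category for a user-agent string."""
    for token, category in BOT_SIGNATURES.items():
        if token.lower() in ua.lower():
            return category
    return "human-or-unknown"

def summarise(log_lines):
    """Count requests per bot category from common-log-format lines."""
    ua_pattern = re.compile(r'"([^"]*)"\s*$')  # last quoted field = user agent
    counts = Counter()
    for line in log_lines:
        match = ua_pattern.search(line)
        counts[classify_user_agent(match.group(1) if match else "")] += 1
    return counts
```

Substring matching on user agents is deliberately crude: it catches self-identifying bots but says nothing about agents that spoof browser strings, which is exactly the gap that dedicated detection tooling exists to fill.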
Why the bot management problem accelerates in 2026
This isn’t a static challenge. The number of bots, user agents, and agent-based systems that matter is going to increase significantly. Google has already indicated that managing this landscape is one of the biggest challenges ahead, and they are closer to the problem than most.
If managing crawler behaviour already feels complex, agents make it harder. Some agents may identify themselves clearly in the future, accepting the risk of being blocked in exchange for trust and access. Businesses with mature bot strategies may actively prefer these transparent, well-behaved actors.
Others will not. Agents that pass through end-user user-agent strings are far harder to identify. They pollute analytics by triggering tracking pixels, distort behavioural data, and force sites into an escalating cat-and-mouse game of mitigation.
Captchas, proof-of-work challenges, bot labyrinths, and other advanced defences are expensive, imperfect, and come with real splash damage. They frustrate genuine users, increase friction, can block real transactions, and still fail often enough to undermine confidence in the data.
The hidden costs of doing nothing
Poor bot management doesn’t just affect visibility. It makes diagnosing crawler issues harder. It reduces the reliability of web analytics. It increases hosting and infrastructure costs. And in the worst cases, it actively impedes real users trying to complete legitimate actions, whether directly or through agents.
Once these problems compound, they are far harder to untangle.
The organisations that will cope best in 2026 are the ones already having grown-up conversations about bots: what they want, what they allow, what they block, and why.
If you’re not having those discussions now, the question isn’t whether this will become a problem. It’s how much damage will already be done by the time you start addressing it.