The bottleneck has shifted. Most marketing teams are past asking whether to use AI agents; the new question on everyone's mind is why the agents they have aren't working as promised. The answers are far less technical than you might expect.
At Optimizely's recent Agents in Action event, Daniel Hulme (Chief AI Officer at WPP), who has spent 25 years building and deploying AI systems at scale, made an observation that stuck: "We get excited about technology, and then we tend to apply that technology to solving the wrong problem."
Below are six mistakes marketing teams make when introducing AI agents. All of them are avoidable.
A new agent capability gets demoed, something clicks, and the question immediately becomes: where can we apply this? Working backwards from a tool almost always leads you to the wrong destination. Start with the friction that's actually costing you. Then ask whether an agent is genuinely the right solution.
In traditional software development, roughly 80% of total effort goes into testing, not building. That ratio doesn't change with AI agents, and most teams don't realise it. Agents operating inside marketing workflows need to be tested, monitored, and iterated on continuously, not shipped and forgotten.
Teams do QA. They define what failure looks like. What they rarely do is model what happens when an agent performs exactly as intended and causes a problem anyway. An agent optimising campaign targeting with perfect precision could, over time, create audiences so tightly defined that they reinforce bias and collapse creative range.
The people who get the most out of AI agents aren't always the most technically fluent — they're the most contextually rich. Someone with a background in art history, anthropology, or geopolitics can surface references and cultural resonances a narrowly trained specialist might miss entirely. The person orchestrating your agent workflows might be your most generalist thinker, not your most technical hire.
Time saved is easy to report but often misleading. In marketing, there is never a shortage of work. Efficiency gains don't produce slack — they produce capacity for more ambition.
The real case for AI agents isn't operational efficiency; it's coverage: the campaigns that didn't run, the content variants that never got tested, the audiences that were never reached.
Teams with clear governance structures deploy faster, not slower, because they've already thought through the questions that would otherwise stop them. At WPP, Daniel's team runs every agent through four pre-deployment questions before anything ships.
The technology is no longer the hard part. The hard part is everything around it.