There is an uncomfortable truth at the centre of most AI adoption programmes for revenue teams: if your underlying process is broken, AI will make it faster, more consistent, and more broken at scale.
Automating a bad process does not produce a good process. It produces a bad process that runs efficiently. The output is no better. There is just more of it, arriving faster, with greater consistency and lower unit cost.
This should be obvious. It gets ignored constantly.
How it happens
A company decides to use AI for outreach personalisation. They train the tool on their existing messaging. The existing messaging is generic, feature-led, and targeted at anyone who vaguely resembles their ICP. The AI generates more of it, faster, at higher volume.
Response rates don’t improve. In many cases they worsen, because the generic messaging is now arriving at higher frequency and prospects are more efficiently irritated by it. The team concludes that AI doesn’t work for outreach.
AI worked fine. The outreach didn’t. That’s a different problem, with a different solution, and using AI to do more of it faster is not that solution.
The question that needs to come first
Before you automate anything, the question is whether the underlying activity is worth automating. If your proposal template produces mediocre proposals, fix the template before you use AI to generate mediocre proposals faster. If your outreach messaging doesn’t resonate, fix the messaging before you use AI to send more of it. If your discovery process produces shallow qualification, fix the process before you use AI to scale shallow qualification.
This requires a moment of honesty that most teams skip. The assumption going into an AI initiative is usually that the existing process is fundamentally sound and just needs to be made more efficient. Sometimes that’s true. Often it isn’t. And the way to find out is to examine the process critically before touching the technology — which is not how most AI pilots are designed.
What good looks like
Good AI adoption in a commercial context starts by asking whether the process is worth accelerating. What are we trying to produce? Is our current approach to producing it actually good? If not, what would good look like?
Only once those questions are answered does the technology question become productive: given that we know what good looks like, where does AI create leverage in producing it consistently and at scale?
The process quality question and the AI integration question happen close together, but the thinking about quality has to come first. Otherwise you are optimising the wrong thing — and the optimisation makes the wrongness harder to see, not easier.
The competitive implication
This matters beyond your own P&L because the companies getting this right are compounding a qualitative advantage. Better outreach gets better response rates. Better proposals win more deals. Better pipeline data produces better forecasting. These improvements compound quarter on quarter in ways that are increasingly difficult to replicate.
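The compounding claim is easy to sanity-check with toy numbers. A minimal sketch, assuming a hypothetical 2% baseline response rate and a 10% relative quality improvement per quarter (both figures invented purely for illustration):

```python
# Illustrative only: hypothetical numbers showing how a small per-quarter
# improvement compounds against a flat baseline over two years.
flat_rate = 0.02        # assumed 2% response rate, unchanged each quarter
improving_rate = 0.02   # same starting point
quarterly_gain = 0.10   # assumed 10% relative improvement per quarter

for quarter in range(8):
    improving_rate *= 1 + quarterly_gain

gap = improving_rate / flat_rate
print(f"After 8 quarters: {improving_rate:.2%} vs {flat_rate:.2%} "
      f"({gap:.2f}x the baseline)")
```

With these invented inputs, the improving team ends up at a little over twice the baseline after eight quarters. The specific numbers are not the point; the shape of the curve is. A flat process stays flat, and the gap widens every quarter it is left unexamined.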
The companies getting it wrong are running their mediocre process at higher volume and wondering why nothing is changing. Some of them are also, not incidentally, annoying more prospects more efficiently than before.
AI is a multiplier. Before you deploy it, the question worth asking is: what exactly is it multiplying? Make sure the answer is something worth multiplying. Because it will scale whatever it touches — the good and the bad with equal enthusiasm.
Still here? Good. You might be exactly my kind of client.