Attribution roulette.
One customer. One sale. Six attribution models. Six entirely different stories about which channel did the work.
Your next budget meeting will be decided by whichever story your dashboard happens to be telling. Spin the wheel below and see what's at stake.
Same sale. Six verdicts.
Pick a journey. Hit spin. Watch the same £180, £1,240 or £95 sale get sliced six different ways by the six attribution models any agency or platform will hand you. The "winner" channel changes. The CFO's budget conversation changes. The conversion does not.
The game you didn't sign up for
Attribution is not measurement. It is allocation. The sale is real. The split of credit across the channels that touched the customer beforehand is a hypothesis dressed up as a number. Every model in your dashboard has a built-in opinion about how influence flows. None of them can prove they're right.
Last Click thinks the closer matters most. First Click thinks the discoverer matters most. Linear pretends every touch is equal. Time Decay punishes anything older than a week. Position Based splits the difference politically. Data-Driven hides its weights inside a black box and tells you to trust it. Each one will, for the same conversion, hand a different channel a different cheque.
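The five rule-based models above can be sketched in a few lines of Python. Everything here is an illustration, not anyone's production logic: the journey, the timestamps, the 7-day half-life, and the 40/20/40 position split are assumptions, and Data-Driven is deliberately left out because its weights cannot be inspected.

```python
# One hypothetical journey ending in a £180 sale. Channels and ages
# are illustrative only.
SALE = 180.0
TOUCHES = ["TikTok", "YouTube", "Email", "Google Brand"]  # oldest → newest
AGES_DAYS = [21, 14, 3, 0]  # days before the conversion

def last_click(touches, sale):
    """All credit to the final touch."""
    return {t: (sale if i == len(touches) - 1 else 0.0)
            for i, t in enumerate(touches)}

def first_click(touches, sale):
    """All credit to the first touch."""
    return {t: (sale if i == 0 else 0.0) for i, t in enumerate(touches)}

def linear(touches, sale):
    """Equal credit to every touch."""
    return {t: sale / len(touches) for t in touches}

def time_decay(touches, ages, sale, half_life=7.0):
    """Credit halves for every `half_life` days before the sale (assumed 7)."""
    weights = [0.5 ** (age / half_life) for age in ages]
    total = sum(weights)
    return {t: sale * w / total for t, w in zip(touches, weights)}

def position_based(touches, sale):
    """40% to first, 40% to last, remaining 20% split across the middle."""
    credit = {t: 0.0 for t in touches}
    credit[touches[0]] += 0.4 * sale
    credit[touches[-1]] += 0.4 * sale
    middle = touches[1:-1]
    for t in middle:
        credit[t] += 0.2 * sale / len(middle)
    return credit

for name, result in [
    ("Last Click", last_click(TOUCHES, SALE)),
    ("First Click", first_click(TOUCHES, SALE)),
    ("Linear", linear(TOUCHES, SALE)),
    ("Time Decay", time_decay(TOUCHES, AGES_DAYS, SALE)),
    ("Position Based", position_based(TOUCHES, SALE)),
]:
    print(f"{name:>14}: " + ", ".join(f"{t} £{c:.0f}" for t, c in result.items()))
```

Same £180, five different cheques: Last Click pays Google Brand everything, First Click pays TikTok everything, and the other three split the money according to their own built-in opinion about how influence decays.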
Why this is not academic
Budget allocations are made on the basis of attributed revenue. If Last Click says Google Brand drove £400k and Time Decay says it drove £120k, those two reports produce two completely different conversations about whether to invest more, hold steady, or cut. The same brand. The same quarter. The same money already spent.
This is the structural reason brands over-fund their closers and under-fund their builders. Last Click is the default. Brand search and direct traffic look heroic. Meta, TikTok and YouTube look expensive. Budget moves accordingly. A year later, demand softens, and nobody understands why.
"Data-Driven" isn't truth, it's marketing
Google's Data-Driven Attribution is the model most agencies will recommend. It sounds objective. It isn't. It uses a proprietary algorithm you cannot inspect, fed by data Google can see, weighted in ways Google has chosen, and validated against outcomes Google has selected. It systematically credits Google channels because Google channels are what it can observe.
This is not a conspiracy. It is a structural conflict of interest. The platform that sells you the media also grades the media's homework. You wouldn't accept that arrangement from any other supplier. Don't accept it here.
What to do instead
- Pick the model that matches the decision. Use Last Click for closing efficiency. Use First Click for discovery investment. Never use one model for both.
- Run holdout tests. Switch a channel off in a defined geo for a defined period. Measure total revenue, not attributed revenue. This is the only method that doesn't depend on a model's assumptions.
- Triangulate. Compare what your platform attribution says, what your media-mix model says, and what your holdout test says. If three methods disagree, you've found a real question. If they agree, you've found a real answer.
- Distrust convenient answers. If your attribution model tells you the channel that costs the platform money is also the channel doing all the work, ask harder questions.
- Anchor budget on contribution margin, not attributed revenue. See the contribution margin problem.
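The holdout arithmetic is simpler than any attribution model. A minimal sketch, with entirely hypothetical figures: switch the channel off in a test geo, use a matched control geo to estimate what the test geo would have earned anyway, and read the difference in total revenue.

```python
# Hypothetical geo holdout: one channel switched off in the test geo
# for four weeks. All revenue figures are invented for illustration.
baseline = {"test": 100_000.0, "control": 400_000.0}  # four weeks before
holdout  = {"test":  88_000.0, "control": 404_000.0}  # four weeks during

# Scale the control geo's growth onto the test geo to estimate the
# counterfactual: what the test geo would have done with the channel on.
expected_test = baseline["test"] * (holdout["control"] / baseline["control"])

# Negative lift means switching the channel off cost real revenue,
# regardless of what any attribution model credited it with.
lift = holdout["test"] - expected_test

print(f"expected test-geo revenue: £{expected_test:,.0f}")
print(f"incremental effect of the channel: £{-lift:,.0f}")
```

In this invented example the channel was incrementally driving about £13,000 over the period. That number comes from total revenue and a control group, not from a model's opinion about credit, which is exactly why it is the one to triangulate against.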
Related Reading
More on the gap between what your dashboard reports and what actually drove the sale.