
    Nobody Can Prove Google Ads Works (Including Us)

    January 2026 · 14 min read

    Attribution is fundamentally broken. Every "proven" ROAS figure is partially fiction. We are all making educated guesses with incomplete data, and agencies that claim certainty are lying.

    This is the most uncomfortable truth in digital marketing. It threatens the foundation of performance measurement, agency accountability, and budget allocation. Which is why nobody wants to say it clearly.

    We will say it anyway.

    An Uncomfortable Admission

    When we show clients ROAS reports, we are presenting educated estimates, not measured facts. The reported numbers reflect what Google's attribution model believes happened, filtered through assumptions that cannot be verified.

    Every agency in the industry operates under the same limitation. The difference is whether they admit it.

    "We do not measure what happened. We measure what the tracking systems believe happened, filtered through models designed by platforms with their own interests."

    This is not nihilism. The data is useful. But useful is different from true.

    Why All Attribution Models Are Wrong

    Attribution attempts to answer: "Which marketing touchpoints caused this sale?" The problem is that causation cannot be directly observed.

    Last-Click Attribution

    Credits the final touchpoint before conversion. Ignores everything that built awareness and consideration. Favours brand search and remarketing while undervaluing prospecting.

    Data-Driven Attribution

    Uses machine learning to distribute credit. Better than rules-based models, but still trained on biased data and makes assumptions about counterfactuals it cannot observe.

    Multi-Touch Attribution

    Spreads credit across touchpoints. Better philosophy, but the credit distribution is arbitrary. Why 40/20/40? Why not 30/40/30? Nobody knows.

    Every model makes assumptions that cannot be tested:

    • What would have happened without this ad?
    • Did the ad cause the sale, or was the customer already decided?
    • How should we weight touchpoints we cannot track?
    • What about influences outside the tracking window?

    These questions have no empirical answers. Every attribution model is a set of guesses pretending to be measurements.
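    To make the arbitrariness concrete, here is a minimal sketch of position-based (40/20/40) multi-touch attribution. The function name and touchpoint labels are ours, not any platform's API; the weights are conventions, not measurements, which is exactly the point.

```python
# Position-based multi-touch attribution: 40% of credit to the first touch,
# 40% to the last, the remaining 20% split evenly across the middle.
# Nothing empirical justifies these weights -- that is the critique above.

def position_based_credit(touchpoints, first=0.4, last=0.4):
    """Distribute conversion credit across an ordered list of touchpoints."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle_share = (1.0 - first - last) / (n - 2)
    credit = {}
    for i, tp in enumerate(touchpoints):
        share = first if i == 0 else last if i == n - 1 else middle_share
        credit[tp] = round(credit.get(tp, 0.0) + share, 4)
    return credit

print(position_based_credit(["display", "generic search", "email", "brand search"]))
# {'display': 0.4, 'generic search': 0.1, 'email': 0.1, 'brand search': 0.4}
```

    Change the `first` and `last` parameters to 0.3 and 0.3 and every "measured" channel contribution shifts, with no new data involved.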

    The Incrementality Gap

    The fundamental measurement problem: we cannot observe the world where the ad did not run.

    The Counterfactual Problem

    If someone clicks an ad and buys, was the ad necessary? Would they have bought anyway through organic search, direct visit, or another channel?

    Attribution counts the conversion. Incrementality asks: was this conversion actually caused by the ad, or just claimed by it?

    Studies consistently show that attributed conversions overstate true incrementality by 20% to 60%. That 8:1 ROAS your report shows might represent 4:1 actual impact.

    This is not pessimism. It is the gap between what we can measure and what actually happened.
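    The arithmetic behind that gap is simple enough to sketch. Assuming the only adjustment is a single incrementality factor (a real holdout analysis is messier), discounting an attributed ROAS looks like this; the function name is illustrative, not a platform API.

```python
# Discount attributed ROAS by an assumed incrementality factor: the fraction
# of attributed conversions the ad actually caused. The factor itself has to
# come from a holdout or geo test -- attribution data alone cannot supply it.

def incremental_roas(attributed_roas: float, incrementality: float) -> float:
    """Scale an attributed ROAS down to its estimated causal contribution."""
    return attributed_roas * incrementality

# The 8:1 report figure from above, under the 20-60% overstatement range:
print(incremental_roas(8.0, 0.5))  # 4.0 -- the "4:1 actual impact"
print(incremental_roas(8.0, 0.8))  # 6.4 -- even modest overstatement bites
```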

    Why Agencies Claim Certainty

    Despite these limitations, most agencies present reports as if the numbers were factual. Why?

    Client expectations

    Clients want certainty. "We think this campaign probably works" is a harder sell than "This campaign delivered 6.2:1 ROAS."

    Accountability theatre

    Precise numbers create the appearance of accountability. "Results may vary significantly" sounds like a disclaimer for failure.

    Competitive positioning

    If competitors claim certainty and you do not, you appear less capable. The race to precision claims is a prisoner's dilemma.

    Self-deception

    Many agency professionals genuinely believe the numbers. They have not examined the assumptions underlying attribution.

    The industry has a collective incentive to maintain the fiction of measurability. Breaking that fiction threatens everyone's business model.

    What Honest Reporting Actually Looks Like

    Instead of pretending certainty, honest reporting presents:

    1. Ranges, Not Points

    "ROAS is likely between 4:1 and 7:1, with our best estimate at 5.5:1." This reflects the uncertainty inherent in measurement.

    2. Explicit Assumptions

    "This assumes 60% incrementality based on our holdout test. If actual incrementality is 40%, the effective ROAS drops to 3.7:1."

    3. Multiple Signals

    "Attributed ROAS shows 6:1. Blended CAC has improved 15%. Holdout suggests 55% incrementality. Overall revenue correlation is 0.7. Together, these suggest positive but uncertain contribution."

    4. Acknowledged Limitations

    "We cannot track 30% of conversions due to privacy restrictions. These numbers represent what we can measure, not necessarily total performance."

    The Honest Alternative

    Instead of: "Your campaign delivered 6.2:1 ROAS."

    Try: "According to Google's data-driven attribution, attributable ROAS is 6.2:1. Adjusting for estimated incrementality (our holdout suggested 55%), the probable true ROAS is between 3.4:1 and 6.2:1. This is a good result, but the precision is illusory."
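    That rewrite can even be generated mechanically. A minimal sketch, assuming the low end scales by the holdout's incrementality estimate and the high end keeps full attributed credit (our bracketing convention here, not an industry standard):

```python
# Turn a point-estimate ROAS into an honest range: the low end assumes only
# the holdout-measured fraction of conversions was incremental; the high end
# assumes full attributed credit.

def honest_roas_range(attributed: float, incrementality: float):
    """Return (low, high) bounds on true ROAS given one incrementality estimate."""
    return attributed * incrementality, attributed

low, high = honest_roas_range(6.2, 0.55)
print(f"Probable true ROAS between {low:.1f}:1 and {high:.1f}:1")
# Probable true ROAS between 3.4:1 and 6.2:1
```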

    Making Decisions with Imperfect Data

    Acknowledging uncertainty does not mean paralysis. It means making decisions appropriately calibrated to confidence levels.

    High Confidence Decisions

    When multiple signals align (attributed results, blended metrics, holdout tests, correlation with revenue), act decisively. Uncertainty is lower.

    Medium Confidence Decisions

    When signals partially align, proceed with monitoring. Increase investment gradually; pull back if contradictory signals emerge.

    Low Confidence Decisions

    When signals conflict or data is sparse, run experiments rather than making permanent changes. Use holdouts and geo tests to build confidence.

    The key insight: the decision framework should match the confidence level. Treating uncertain data as certain leads to overconfident decisions.
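    One way to operationalise that calibration is to count how many independent signals agree and map the agreement rate to an action tier. The thresholds and tier names below are illustrative, not a standard:

```python
# Map signal agreement to a decision tier. Signals might be attributed ROAS,
# blended CAC, a holdout test, and revenue correlation -- the four used above.
# The 75% and 50% thresholds are arbitrary placeholders; set your own.

def confidence_tier(signals_positive: int, signals_total: int) -> str:
    """Classify evidence strength from the fraction of agreeing signals."""
    if signals_total == 0:
        return "run experiments"           # no data: build evidence first
    agreement = signals_positive / signals_total
    if agreement >= 0.75:
        return "act decisively"            # high confidence: signals align
    if agreement >= 0.5:
        return "scale gradually, monitor"  # medium: partial alignment
    return "run experiments"               # low: conflicting or sparse signals

print(confidence_tier(4, 4))  # act decisively
print(confidence_tier(2, 4))  # scale gradually, monitor
print(confidence_tier(1, 4))  # run experiments
```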

    Radical Honesty

    We cannot prove Google Ads works for you. We can show you:

    • What the tracking systems report
    • How that compares to overall business metrics
    • What incrementality tests suggest
    • Where the uncertainty lies
    • What decisions the evidence supports

    This is harder to sell than certainty. But it is honest. And honesty, in the long run, makes for better decisions.

    "The goal is not to prove that Google Ads works. It is to make the best possible decisions with imperfect information. That requires admitting the imperfection."

    If your agency claims they can prove their work delivers specific, precise returns, they are either lying or do not understand the limitations of their data.

    We would rather tell you the truth and work within those constraints than pretend to certainty we do not have.

    Want honest reporting instead of false certainty?

    We will tell you what we know, what we estimate, and what we cannot know. The truth is more useful than precision theatre.
