
    The Search Term Report Is a Rearview Mirror

    You're optimising for where Google went. Not where it's going. In a Smart Bidding world, that's a dangerous distinction.


    The Obsession

    Every agency review deck has the same slide: "Here are the search terms we found. Here are the negatives we added." It feels productive. Granular. Controlled.

    For fifteen years, this was legitimate optimisation. You controlled match types. You sculpted queries. You built elaborate negative keyword lists that shaped exactly which searches triggered which ads. It was manual, painstaking, and it worked.

    That version of Google Ads no longer exists. The search term report is still there. But the system it was designed to control has fundamentally changed.

    What the Report Actually Shows

    The search term report shows you a subset of the queries that triggered your ads - after the fact. Google doesn't show every query. The threshold for inclusion is opaque. In Performance Max (PMax), you see search queries but not the Display, YouTube, or Discovery placements that consumed the same budget.

    Partial Data, Full Confidence

    The report shows you 60-70% of search queries and 0% of non-search placements. Making account-level decisions from this data is like navigating with a map that only shows half the roads.
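To make the visibility gap concrete, here is a minimal sketch of that arithmetic. The spend figures and the 65% report-coverage rate are illustrative assumptions, not Google-published numbers:

```python
# Hypothetical illustration of how partial query visibility skews
# account-level decisions. All figures are assumptions, not Google data.

def visible_share(search_spend, non_search_spend, report_coverage=0.65):
    """Fraction of total spend the search term report actually explains."""
    total = search_spend + non_search_spend
    return (search_spend * report_coverage) / total

# A PMax-heavy account: £6,000 in search auctions, £4,000 on
# Display/YouTube/Discovery placements the report never shows.
share = visible_share(6_000, 4_000)
print(f"{share:.0%} of spend is visible in the report")  # → 39%
```

Under these assumptions, an account that feels well-audited is being steered on barely two-fifths of its budget - the "half the roads" map in numbers.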

    The report isn't wrong. It's incomplete. And incomplete data treated as complete data leads to confidently wrong decisions.

    The Broad Match Reality

    Google has systematically pushed accounts toward broad match. Smart Bidding is designed to work with broad match. The algorithm evaluates intent, context, and user signals in real-time - not keyword strings.

    In this world, a search term that looks "irrelevant" on paper may be converting because the user behind it has strong purchase signals that the algorithm detected and you can't see. Adding it as a negative doesn't just block that query - it blocks the cluster of similar signals the algorithm was learning from.

    The Over-Negation Problem

    Aggressive negative keyword lists in broad match accounts constrain the algorithm's learning. You're not sculpting queries - you're clipping the algorithm's wings based on data it has already moved past.

    Query sculpting was the right approach when you controlled match types. With broad match and Smart Bidding, it's often counterproductive.

    Where Control Actually Lives

    If the search term report is the rearview mirror, what's the windshield? The inputs that shape where Google goes next:

    1. Conversion value rules. Tell Google what a valuable conversion looks like - margin-weighted, not revenue-weighted. The algorithm chases whatever number you give it.
    2. Feed quality. Titles, pricing, imagery, and attributes determine which auctions you enter and how competitive you are. This is the new keyword strategy.
    3. Audience signals. First-party data, customer lists, and exclusions shape who the algorithm targets. Without them, it optimises for the easiest conversions - usually your existing customers.
    4. Conversion actions. Which events you tell Google to optimise toward - and how you weight them - directly shapes bidding behaviour. This is the most under-used lever in most accounts.

    The Windshield, Not the Mirror

    These inputs shape where the algorithm goes next. Search term reports show where it's already been. One is strategic. The other is historical.

    The Negative Keyword Trap

    Negatives still have a role. Blocking genuinely irrelevant categories - "free," "jobs," "DIY" - is reasonable hygiene. But the weekly ritual of mining search terms and adding dozens of negatives has become counterproductive in most Smart Bidding accounts.

    Each negative you add constrains the algorithm's query space. In exact match, that was precision. In broad match, it's restriction. You're telling the algorithm "don't explore here" based on yesterday's data, while the algorithm is making real-time decisions based on today's signals.

    "The agencies still proud of their negative keyword lists are optimising for a version of Google Ads that no longer exists."

    This doesn't mean abandon negatives entirely. It means recognising that adding negatives is no longer synonymous with "optimisation." Sometimes the most productive thing is to leave the algorithm alone and improve the signals it reads.

    Forward-Looking Signals

    The shift is from reactive to proactive. Instead of reviewing what happened and adding blocks, you engineer what happens next by shaping the inputs the algorithm consumes before the auction.

    The Management Shift

    Old model: wait for data → review search terms → add negatives → repeat. New model: engineer feed → set value rules → layer audiences → test creative → monitor outcomes. One is reactive. The other is strategic.

    If your agency's primary weekly activity is still pulling search term reports, they're driving by looking in the rearview mirror. The road ahead is shaped by different inputs entirely.

    Next Steps

    Ask your agency what percentage of their optimisation time goes into search term reviews versus feed quality, value rules, and audience architecture. If the answer skews backward, the management approach is outdated.
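One way to put a number on that question. The time buckets and hours below are illustrative assumptions - substitute your agency's actual figures:

```python
# Rough audit of where weekly optimisation time goes.
# Hours are hypothetical; replace with your agency's real numbers.

weekly_hours = {
    "search term reviews":   4.0,  # backward-looking
    "feed quality":          1.0,  # forward-looking
    "value rules":           0.5,  # forward-looking
    "audience architecture": 0.5,  # forward-looking
}

backward = weekly_hours["search term reviews"]
total = sum(weekly_hours.values())
print(f"{backward / total:.0%} of optimisation time looks backward")  # → 67%
```

In this hypothetical split, two-thirds of the week is spent in the rearview mirror - exactly the skew that signals an outdated management approach.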