Conversion Isn't Linear

Your Funnel Strategy Needs To Catch Up

You're still buying the click, but the decision already happened.

That's the uncomfortable truth sitting underneath Google AI Mode, ChatGPT's move to CPC bidding, and every other AI search development that landed in your inbox this month. The purchase journey didn't just get shorter. The part you could see (the research, the comparison, the consideration) moved somewhere your analytics can't follow.

The Middle of the Funnel Went Dark

Google AI Mode has 75 million daily users, and ads are appearing in 25.5% of results. By the time a shopper sees a sponsored listing, they've already had a full conversation with Google about what they need, what it costs, and whether it's worth it. The ad is showing up at the end of the decision, not the middle of it.

This isn't a Google-specific problem; the same dynamic is playing out across ChatGPT, Perplexity, and Gemini. Buyers are doing their research inside AI conversations, then surfacing already decided, or close to it, while your attribution model logs a click as if nothing unusual happened.

The funnel is still there; it's just happening somewhere you're not measuring.

Fixating on Cost Is the Wrong Reaction

When we started testing Google AI Mode ads at Direct Agents, the first reaction from almost everyone was predictable: CPCs are running 35% higher than traditional search placements. That's expensive.

It's also the wrong frame entirely.

Cost per click means very little when you're meeting a buyer who's already done the consideration work. You're not interrupting their research; you're showing up at the moment they're ready to act. For most categories, that's a straightforward trade, and early results from the brands we're testing with back it up, with revenue lifts averaging 60 to 85%.

The real question isn't whether AI Mode CPCs are high. It's whether your creative, your feed, and your measurement are built for a buyer who already knows what they want.
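
The trade-off above comes down to simple arithmetic: a pricier click still wins on cost per acquisition whenever the conversion-rate lift from an already-decided buyer exceeds the CPC premium. Here is a minimal sketch with hypothetical numbers; the 35% CPC premium is from the text, but the conversion rates are illustrative assumptions, not measured results.

```python
# Hypothetical illustration: higher CPC can still mean lower CPA
# when the buyer arrives already decided. Conversion rates below
# are assumptions for the sake of the arithmetic.

baseline_cpc = 1.00                 # traditional search cost per click
ai_mode_cpc = baseline_cpc * 1.35   # 35% higher, per the text

baseline_cvr = 0.03   # assumed conversion rate, traditional search
ai_mode_cvr = 0.05    # assumed higher rate for already-decided buyers

baseline_cpa = baseline_cpc / baseline_cvr   # cost per acquisition
ai_mode_cpa = ai_mode_cpc / ai_mode_cvr

print(f"baseline CPA: ${baseline_cpa:.2f}")
print(f"AI Mode CPA:  ${ai_mode_cpa:.2f}")
# The pricier click wins whenever the conversion-rate lift
# (here ~1.67x) exceeds the CPC premium (1.35x).
```

The point isn't these particular numbers; it's that CPC alone can't tell you whether the placement is expensive.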

What Actually Needs to Change

If the decision happens before the click, three things have to shift.

Creative needs to close, not educate. If a buyer is already sold on the category and narrowing to a product, your ad copy needs to meet them there. Specific titles, accurate product data, and creative that matches what the AI just recommended: that's what surfaces and converts.

Measurement needs a new baseline. Last-click attribution was always a simplification, but in an AI search world, it's actively misleading. If you're not separating AI referral traffic in your analytics, you can't see what's converting or why, and you're optimizing against numbers that no longer reflect reality.
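
Separating AI referral traffic can start as simply as bucketing sessions by referrer domain. The sketch below assumes you have raw referrer URLs per session; the domain list is illustrative and not exhaustive, and real analytics setups would map these into source/medium dimensions rather than a print loop.

```python
# Minimal sketch: tag sessions whose referrer is a known AI surface,
# so AI-driven traffic stops hiding inside generic referral buckets.
from urllib.parse import urlparse

# Illustrative, not exhaustive; maintain this list as new surfaces appear.
AI_REFERRER_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer_url: str) -> str:
    """Label a session's referrer as a specific AI source, 'other', or 'direct'."""
    if not referrer_url:
        return "direct"
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRER_DOMAINS.get(host, "other")

sessions = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search?q=best+running+shoes",
    "https://www.google.com/",
    "",
]
for ref in sessions:
    print(ref or "(no referrer)", "->", classify_referrer(ref))
```

Once AI referrals are tagged as their own source, you can baseline their conversion rate separately instead of letting last-click attribution fold them into "referral" or "direct."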

Budget allocation needs to follow intent density, not just volume. A channel with fewer impressions and deeper intent signals is worth more than a high-volume channel catching people at the top of a journey they'll finish somewhere else.

What the Testing Actually Showed

We've seen this play out firsthand with the brands we work with. The difference isn't budget; it's whether the product feed is good enough for the AI to work with. If the feed can't accurately match your product to what a buyer just described in their conversation, you don't surface, regardless of what you're bidding.

For Google AI Mode specifically, brands already running Performance Max auto-qualify. There's no separate campaign to build. What we're focused on with our clients is feed quality, asset accuracy, and making sure measurement accounts for where this traffic is actually coming from.

It's less about adding something new and more about making sure the foundation is solid enough for the AI to do its job. That's where the 60 to 85% revenue lift comes from: not a bigger budget, but cleaner inputs.