What Actually Drives Brand Visibility in AI Search
Good Morning,
AI continues to show up in parts of consumer life that didn’t have it until recently: bank account integrations, memory chip supply chains, even the typing layer of your phone.
Last week, we sized the $34B GEO market and noted that most vendors are selling monitoring rather than doing the harder work of changing the answer. This week’s big picture looks at five new studies on what actually drives brand visibility in AI search.
The findings help explain why monitoring-first tools don’t move the needle: brand strength does most of the work, and the engines largely agree on which brands matter.
Let’s get into it.
Vas-
This Week’s Signals
AI & Consumer
1. Perplexity launched a finance hub for consumers. A new Plaid integration lets users connect bank accounts, credit cards, and loans directly to its Computer agent. Search engines are moving past “tell me” into “do it for me,” inserting themselves between consumers and their money. (Read more)
2. Apple’s Mac Mini and Studio went out of stock. A DRAM shortage tied to AI infrastructure demand is delaying the M5 refresh. Memory chips needed for consumer products are being absorbed by GPU clusters training and serving AI models. AI’s resource demand is now visible at the consumer electronics counter. (Read more)
3. Google made AI dictation a system feature. Voice input has been “almost good enough” for a decade. With LLMs cleaning up output in real time, “almost” becomes the default. (Read more)
AI at Large
4. Europe is rolling back AI rules to compete with the US. The EU is loosening regulations it spent years designing. The cost of being the world’s regulator turned out higher than the political cost of rewriting the rules. (Read more)
5. Publishers are facing record AI bot and scraper activity. Third-party scrapers are increasingly hitting publisher sites to feed LLM pipelines, with little traffic returning. It’s the same dynamic as the lead essay, viewed from the supply side: AI is extracting value from the open web faster than it returns visits. (Read more)
6. Lovable has introduced built-in payments powered by Stripe and Paddle. It allows users to monetize apps through subscriptions and one-time payments directly via chat prompts. (Read more)
BIG PICTURE
This week we feature research in two important areas: AI search visibility and enterprise AI adoption priorities.
What the Research Says Drives AI Search Visibility
When an AI summary appears on Google, users click a traditional result 8% of the time, down from 15% (Pew, July 2025). About 60% of searches now end without a click (Bain, February 2025). Zero-click rates rise to 83% when AI Overviews are present (Similarweb). Being mentioned in the AI answer is becoming the new unit of brand presence.
Five studies have now tried to measure what drives brand visibility in AI answers:
Ahrefs analyzed 75,000 brands
Profound analyzed 680 million AI citations
Similarweb broke down six consumer sectors (March)
Seer ran 10,000 questions through GPT-4o
Digital Bloom synthesized the field
The core finding: engines converge on brands more than they converge on sources.
Ahrefs measured a 0.75–0.82 correlation across engines on which brands they mention. Digital Bloom reports only ~11% domain overlap between what ChatGPT and Perplexity cite. Different measurements, same direction.
The brands that win are predictable:
Highest Google rankings (Seer: 0.65 correlation with GPT-4o mentions in finance and SaaS)
Most branded web mentions (Ahrefs: 0.66–0.71)
Strongest YouTube presence (Ahrefs: 0.737, the strongest single factor)
Backlinks don’t matter directly. Ahrefs and Seer both found this independently.
Most GEO vendors are selling engine-by-engine tactics. The research says the engines are more alike than different. (Read the full essay)
AI Priorities Shift by Stage (Predictably)
When organizations first deploy AI, success metrics are defensive: don’t break anything, reduce costs, manage risk. That’s the right starting point, but those metrics stop being the ones that matter surprisingly early.
A recent maturity survey tracked how AI goals shift across three stages: pilot, scaled, and fully adopted.
At the pilot stage, 66.7% of organizations prioritize risk and quality
By full adoption, only 28.1% do, the largest shift in the dataset
Revenue growth moves the opposite direction:
48% at pilot
70% at scaled
60% at full adoption
Customer experience and productivity both exceed 64% in mature stages
Practical takeaway: AI programs have an invisible graduation deadline. If you keep evaluating mature programs with pilot-stage metrics, they look like they’re plateauing when they’re actually progressing. (Read more)