Organic clicks are down 42% on sites where Google AI Overviews appear. If you manage visibility for brands or agency clients, you’ve either already felt this or you will soon.
But here’s the number that matters more: AI-driven traffic to those same sites surged 103% in the same period. Google Discover visits grew 30%. Breaking-news impressions spiked.
The traffic didn’t die. It moved. It moved off the ten-blue-links page and onto AI-generated surfaces — surfaces that most brands and agencies have no visibility into, no measurement for, and no optimization strategy against.
That’s the real story. And it changes what agencies need to offer their clients.
## What Did the Data Actually Find?
A study tracking 64 publisher websites over an extended period found a consistent pattern: as Google AI Overviews appeared above organic results, clicks to traditional organic listings fell sharply. The average drop was 42%. For informational queries — “what is,” “how to,” “best way to” — the drop was even steeper.
This is not surprising in isolation. When an AI engine answers a question directly at the top of the page, fewer users scroll to click a result below it. The click goes to the AI Overview, or it goes nowhere. The user got their answer and moved on.
What was surprising was what happened to the traffic that didn’t disappear.
A meaningful share of it relocated — to surfaces that didn’t exist five years ago. AI-driven referral traffic, where users clicked through from AI-generated content to source sites, grew 103%. Google Discover, which surfaces content based on user interest signals rather than keyword queries, grew 30%. Content optimized for breaking news and topical relevance — the kind of content AI engines prefer to cite in real-time — saw impression growth.
The search ecosystem didn’t contract. It reorganized.
| Traffic Source | Change (vs. pre-AI Overviews baseline) |
|---|---|
| Traditional organic clicks | –42% |
| AI-driven referral traffic | +103% |
| Google Discover visits | +30% |
| Breaking-news impressions | Up significantly |
This reorganization has a direct implication for every agency: your clients’ rankings in Google’s traditional index are no longer the only number that matters. Clients who are not visible in AI Overviews, AI-generated answers, and the citation layer above organic results are losing traffic they don’t know they’re losing.
## Where Did the Traffic Go — And Why?
To understand where traffic went, you need to understand how AI engines decide what to surface.
AI engines — including Google AI Overviews, ChatGPT, Gemini, and Perplexity — do not rank results the way Google’s traditional index does. They synthesize answers. They pull from sources they judge to be authoritative, factual, and citable. They then construct a response and, in many cases, link through to the sources they drew from.
That link-through is the source of the 103% surge in AI-driven traffic. Brands whose content gets cited in AI-generated answers receive referral traffic from those citations. Brands whose content doesn’t get cited receive nothing — not even the ability to measure what they’re missing.
This is why traffic “moved” rather than disappeared: it moved to brands with higher AI visibility. The total query volume didn’t drop. The distribution of who benefits from it shifted.
Google Discover operates on a similar logic. It surfaces content based on what it predicts a user will find valuable — signals derived from search history, engagement, and content quality. Original, expert-driven content that performs well in AI citation tends to perform well in Discover too. The signals overlap.
Breaking-news traffic is the third destination. AI engines give real-time, topically relevant content higher weight when synthesizing answers about current events. Brands and publishers that produce timely, factually anchored content on developing topics capture this surface.
In every case, the underlying principle is the same: AI engines are routing attention to content they trust. Trust, in this context, is earned through editorial quality, original data, authoritative third-party citations, and structured, factual presentation — not through keyword density or backlink volume alone.
## What Do AI Engines Actually Cite?
This is where the data gets most actionable.
Citation analysis from research across major AI platforms shows a sharp divide in what gets cited:
- Press releases: 0.32% citation rate
- Editorial and original research content: 81% citation rate
That gap is not a rounding error. It reflects how AI engines evaluate source quality. A press release is promotional, often lacks original data, and is structurally formatted as an announcement rather than an authoritative claim. AI engines rarely cite them because they rarely contain what AI engines are looking for: specific, verifiable, expert-anchored information.
Editorial content — long-form articles, original research, expert commentary, analysis backed by data — gets cited at a rate 250x higher. That’s the content AI engines trust to synthesize answers from.
What this means for content strategy:
Your clients’ owned content needs to look more like journalism and research, and less like marketing. That doesn’t mean abandoning brand voice — it means structuring content so it contains the kinds of factual claims and authoritative context that AI engines extract and cite.
Practically, this looks like:
- Define and quantify. Articles that include clear definitions (“Share of Answer is the percentage of AI responses that mention a brand in a given category”) give AI engines citable units of information.
- Lead with data. Proprietary stats, original survey findings, and benchmark data are high-citation assets. An article that says “brands in our study saw an average 42% drop in organic clicks” is citable. An article that says “many brands are struggling with AI search” is not.
- Source authority. AI engines weight content that is itself cited by other authoritative sources. Third-party editorial coverage — trade press, industry publications, respected blogs — is a citation multiplier.
- Structure for extraction. Lists, tables, definitions, and numbered sequences are easier for AI engines to extract and incorporate into synthesized answers. Dense prose with buried claims is not.
## What Should Agencies Do About This?
The practical answer: add Generative Engine Optimization (GEO) to every client engagement.
Generative Engine Optimization (GEO) is the discipline of optimizing brand visibility in AI-generated responses. Where SEO targets search engine rankings, GEO targets Share of Answer — the percentage of AI responses across platforms like ChatGPT, Gemini, Perplexity, and Google AI Mode that mention a brand when users ask relevant questions.
Agencies that only measure traditional SEO metrics are now giving clients an incomplete picture. A client could rank #1 in Google organic results and be completely invisible in AI Overviews and every other AI-generated surface. That’s not a complete visibility strategy in 2026.
Adding GEO to your service mix means:
1. Baseline the client’s current AI visibility. Before you can optimize, you need to know where a client stands. How often does ChatGPT mention them? Gemini? Perplexity? When users ask AI engines about their category, do they appear? That baseline requires tracking across platforms — something traditional SEO tools were not built to do.
2. Audit their content for AI citability. Using the citation analysis framework above: what percentage of their owned content contains original data, clear definitions, and structured factual claims? Most brand websites are heavy on promotional copy and light on the kind of authoritative, specific content AI engines cite.
3. Build a citation strategy. Which third-party domains does your client need to appear on? AI engines cite specific domains repeatedly — trade publications, authoritative review sites, respected editorial outlets. A citation strategy identifies the highest-value external sites and builds a plan to earn coverage there. This is off-site SEO reoriented for the AI layer.
4. Measure Share of Answer, not just rankings. The traffic-shift data makes clear that ranking #1 in traditional search and being cited in AI search are not the same thing. Agencies need a KPI for the AI layer. Share of Answer — tracked across multiple platforms, by category, by geography — is that KPI.
## How Do You Measure AI Visibility?
Share of Answer is the metric that captures this: the percentage of AI-generated responses, across a defined set of prompts and platforms, that mention a specific brand.
If a user asks ChatGPT “what’s the best project management tool” and your client appears in 3 out of every 10 responses for that query category across platforms, their Share of Answer is 30%. If they appear in 0, their Share of Answer is 0 — and no amount of Google ranking will fix that.
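The arithmetic behind that example is simple enough to sketch in a few lines. The brand name and responses below are illustrative placeholders, not real measurement data:

```python
def share_of_answer(responses: list[str], brand: str) -> float:
    """Percentage of AI responses that mention the brand (case-insensitive substring match)."""
    if not responses:
        return 0.0
    mentions = sum(1 for r in responses if brand.lower() in r.lower())
    return 100.0 * mentions / len(responses)

# Illustrative: 3 of 10 recorded responses mention the brand -> 30.0
responses = (
    ["Acme PM is a popular choice for small teams."] * 3
    + ["Other tools dominate this category."] * 7
)
print(share_of_answer(responses, "Acme PM"))  # 30.0
```

In practice, detecting a brand mention reliably takes more than a substring check (aliases, misspellings, product names), but the metric itself is just this ratio.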
Measuring Share of Answer requires:
- Defining a relevant set of prompts (the questions users actually ask AI engines about your client’s category)
- Querying those prompts across platforms — ChatGPT, Gemini, Perplexity, Google AI Mode, Google AI Overview, and others
- Recording and analyzing the responses at scale
- Tracking changes in Share of Answer over time as optimization work takes effect
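The four steps above can be sketched as a minimal measurement loop. This is a hedged sketch, not a production tracker: `query_platform` is a hypothetical stand-in for whatever API calls or real-user browser automation actually fetch responses, and the platform list is shortened for illustration:

```python
from collections import defaultdict

PLATFORMS = ["ChatGPT", "Gemini", "Perplexity", "Google AI Mode"]

def query_platform(platform: str, prompt: str, geo: str) -> str:
    # Hypothetical stand-in: in practice this would call each platform's API
    # or drive a real-user browser session from a geo-specific IP.
    raise NotImplementedError

def measure_share_of_answer(prompts, brand, geo="US", fetch=query_platform):
    """Run every prompt on every platform once; return {platform: share-of-answer %}."""
    mentions = defaultdict(int)
    totals = defaultdict(int)
    for platform in PLATFORMS:
        for prompt in prompts:
            response = fetch(platform, prompt, geo)
            totals[platform] += 1
            if brand.lower() in response.lower():
                mentions[platform] += 1
    return {p: 100.0 * mentions[p] / totals[p] for p in PLATFORMS}
```

Storing each run's output with a timestamp gives you the time series needed for the fourth step: tracking how Share of Answer moves as optimization work takes effect.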
This is not something that can be done manually at scale. Tracking meaningful Share of Answer data across thousands of prompts and 8 platforms requires automated, continuous querying with real-user simulation — including geo-specific queries, since AI responses vary by location.
The agencies gaining competitive advantage right now are the ones that have already added this measurement layer. They can show clients a number that captures AI visibility the same way traditional rank tracking captured Google visibility. That capability, right now, is a significant differentiator.
## Key Takeaways
- A 42% drop in organic clicks where AI Overviews appear is real and documented — but it’s not the whole story.
- AI-driven referral traffic grew 103% in the same period. Traffic is moving to AI-generated surfaces, not disappearing.
- AI engines cite editorial and original research content at 81% vs. 0.32% for press releases. Content strategy must prioritize authoritative, data-anchored assets.
- Share of Answer is the KPI that captures AI visibility the way rank tracking captures Google visibility.
- Agencies that add GEO to their service offering can show clients a complete picture of their visibility — and a roadmap to improve it.
## See What AI Says About Your Clients
Ceyo tracks Share of Answer across 8 AI platforms — ChatGPT, Gemini, Perplexity, Grok, Google AI Mode, Google AI Overview, Copilot, and Claude — using real-user simulated queries with geo-specific IPs. You get daily visibility data, competitive benchmarking, citation source intelligence, and an optimization action plan.
The agencies already using Ceyo have the data. Their clients know their AI visibility score. They know which prompts they appear in, which competitors are winning Share of Answer, and which content and citation moves will improve their position.
Book a demo to see what AI says about your clients today.