TL;DR · The 30-second version
AI search citation share is now 17% of all branded discovery for B2B SaaS — up from 4% a year ago. The brands winning aren't the highest-ranked; they're the ones with named authors, primary research, and entity coverage. Wikipedia, Reddit, and primary-research domains accounted for 64% of cited sources.
Five findings that matter.
We ran 3,200 commercial-intent queries — software shortlisting, vendor comparison, “best X for Y” — across ChatGPT (GPT-4o), Perplexity Pro, and Google AI Overviews. Each query was logged with cited sources, query type, and topical category. The full dataset is downloadable under CC-BY 4.0. Five things stand out.
1. AI search is now a real channel.
17% of B2B SaaS discovery now comes through AI search, up from 4% in Q1 2025. For consumer queries the share is lower (around 9%) but growing faster — 5× YoY. Enterprise software is the most-affected category; commodity DTC is the least.
2. The citation pool is concentrated.
64% of all citations across our query set came from just three source clusters: Wikipedia, Reddit threads, and primary-research domains. Brand-owned blog content accounted for 11%, and forum and community sites for a further 14%. News and review sites split the remainder.
3. Named authorship is the strongest single signal.
Pages with a named, schema-marked author were cited 2.4× more often than anonymous pages on otherwise identical topics. Pages where the author had a Wikipedia entry, an Org-schema affiliation, or a verified sameAs profile were cited 4.1× more. This is the cheapest, fastest fix in the report.
The single most-correlated variable with citation share wasn't backlinks, traffic, or domain authority — it was whether the page named a human author and linked them to a verifiable identity graph.
4. Query type matters more than topic.
AI search engines behave very differently depending on what the user is asking for. Comparison queries pull heavily from Reddit and review sites. Definition queries lean almost exclusively on Wikipedia. “How-to” queries pull from named experts and YouTube. Knowing your query mix is the prerequisite to a meaningful AI-search strategy.
| Query type | Sample | Wikipedia | Reddit | Brand | Research |
|---|---|---|---|---|---|
| Definitional “What is …” | 612 | 62% | 8% | 9% | 14% |
| Comparison “X vs Y” | 880 | 11% | 38% | 17% | 14% |
| Shortlist “Best X for Y” | 748 | 7% | 29% | 15% | 22% |
| How-to “How do I …” | 516 | 14% | 19% | 21% | 9% |
| Quantitative “How much / many …” | 444 | 19% | 6% | 7% | 44% |

Shares are of all citations for that query type; rows don't sum to 100% because news, review, and other sources take the remainder.
5. The three engines behave differently.
ChatGPT cites the fewest sources per response (median 3) but weights brand-owned content most heavily. Perplexity cites the most (median 7) and pulls hardest from Reddit. Google AI Overviews favours large, indexed publishers and review sites. One strategy doesn't fit all three.
What this means for your stack.
The implication isn't that classical SEO is dead. It's that citation engineering is now a separate, parallel discipline. Three priorities, ranked by ROI:
One — name your authors, properly.
Add a real human byline to every commercial page. Add Person schema. Cross-link to LinkedIn, Twitter, Wikipedia where they exist. This is a one-week sprint that compounds for years.
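If it helps to see the shape of that markup, here is a minimal sketch of a Person schema block with sameAs links, generated from Python for easy templating. Every name, title, and URL below is a placeholder to swap for your own authors, not a recommendation:

```python
import json

# Minimal Person schema for an author byline. schema.org/Person, worksFor,
# and sameAs are real vocabulary; the values are illustrative placeholders.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                                # placeholder author
    "jobTitle": "Head of Research",                    # placeholder title
    "worksFor": {"@type": "Organization", "name": "Example Corp"},
    "sameAs": [
        "https://www.linkedin.com/in/janedoe",         # placeholder profiles
        "https://twitter.com/janedoe",
        "https://en.wikipedia.org/wiki/Jane_Doe",
    ],
}

# Emit JSON-LD to paste into a <script type="application/ld+json"> tag.
print(json.dumps(author_schema, indent=2))
```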
Two — publish primary research.
One survey or audit per quarter, with a downloadable dataset. Use Dataset schema. Make the methodology page citable on its own URL. Quantitative queries (the highest-converting bucket in our sample) lean on research more heavily than any other source type.
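A Dataset schema block follows the same pattern. Again a minimal sketch with placeholder names and URLs; the license URL is the CC-BY 4.0 deed this report itself publishes under:

```python
import json

# Minimal Dataset schema for a quarterly research release.
# Name, creator, and contentUrl are illustrative placeholders.
dataset_schema = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "AI Search Citation Benchmark",            # placeholder title
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Organization", "name": "Example Corp"},
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.com/data/benchmark.csv",  # placeholder
    },
}

print(json.dumps(dataset_schema, indent=2))
```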
Three — engineer for query types.
Audit your top 50 commercial queries. Classify each by type (definitional, comparison, shortlist, how-to, quantitative). Map your content surfaces to each. Comparison-heavy categories need Reddit-friendly community presence; quantitative-heavy categories need datasets.
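A rough classifier is enough to get started on that audit. The sketch below uses simple keyword patterns for the five query types in the table above; the patterns are assumptions to tune against your own keyword list, not a tested taxonomy:

```python
import re

# Heuristic patterns for the five query types, checked in order.
QUERY_TYPES = [
    ("definitional", re.compile(r"^what is\b", re.I)),
    ("comparison",   re.compile(r"\bvs\.?\b|\bversus\b", re.I)),
    ("shortlist",    re.compile(r"^best\b|\btop \d+\b", re.I)),
    ("quantitative", re.compile(r"^how (much|many)\b", re.I)),
    ("how_to",       re.compile(r"^how (do|to|can)\b", re.I)),
]

def classify(query: str) -> str:
    """Return the first matching query type, or 'other'."""
    for label, pattern in QUERY_TYPES:
        if pattern.search(query):
            return label
    return "other"

queries = ["best CRM for startups", "HubSpot vs Salesforce", "what is lead scoring"]
print({q: classify(q) for q in queries})
# {'best CRM for startups': 'shortlist', 'HubSpot vs Salesforce': 'comparison',
#  'what is lead scoring': 'definitional'}
```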
Appendix
Methodology — how we ran this benchmark.
3,200 queries were sampled from production keyword lists across 14 client engagements (B2B SaaS, e-commerce, financial services, healthcare). Each query was run on three engines, with cited URLs captured via API where available and manual scraping where not. Source types were coded by two analysts independently (Cohen's κ = 0.87). Full dataset, query list, and code are linked below.
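For anyone replicating the inter-rater check, Cohen's κ is straightforward to compute from the two analysts' label lists. The sketch below uses toy labels drawn from the report's source-type categories, not our data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same items, same order."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: product of each coder's marginal label frequencies.
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / n**2
    return (observed - expected) / (1 - expected)

# Toy example only; six items, four source-type categories.
a = ["wikipedia", "reddit", "brand", "research", "reddit", "brand"]
b = ["wikipedia", "reddit", "brand", "research", "brand", "brand"]
print(round(cohens_kappa(a, b), 2))  # 0.77
```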
Quick answers.
Q.01 · Is AI search replacing Google?
No — at least not in our data. Google still drives 71% of B2B SaaS discovery. But AI search is the fastest-growing channel by a wide margin, and Google's own AI Overviews now appear on around 38% of commercial queries we tracked. The right framing is parallel, not replacement.
Q.02 · Can I rank in AI search the way I rank in Google?
No. Classical ranking is about pages; AI citation is about sentences. The unit of optimisation is a quotable, attributable statement — short paragraphs, named authors, schema, datasets. The good news: the same content can serve both, with structural changes rather than a rewrite.
Q.03 · What’s the single highest-ROI fix?
Named authorship with schema and verified sameAs links. In our data, this single change was the strongest correlate of citation share — 2.4× lift on average, 4.1× when the author had a Wikipedia entry. It's also a one-sprint project for most teams.
Q.04 · How do I measure AI-search visibility?
Pick 50 commercial queries. Re-run them weekly across the three engines, log cited URLs, and compute citation share over time. There are now several SaaS tools (Profound, Otterly, Athena Intelligence) that automate this — but a spreadsheet is enough to start.
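The spreadsheet version reduces to a few lines of Python. The sketch below assumes a hand-kept CSV log with week, engine, query, and cited_url columns; the column names and sample rows are illustrative, not a standard format:

```python
import csv
import io
from collections import defaultdict

# A stand-in for your own log file; replace with open("citations.csv").
SAMPLE_LOG = """week,engine,query,cited_url
2026-W18,chatgpt,best crm for startups,https://example.com/blog/crm-guide
2026-W18,chatgpt,best crm for startups,https://en.wikipedia.org/wiki/CRM
2026-W18,perplexity,hubspot vs salesforce,https://www.reddit.com/r/sales/abc
2026-W19,chatgpt,best crm for startups,https://example.com/blog/crm-guide
"""

def citation_share(rows, your_domain):
    """Fraction of cited URLs per (week, engine) pointing at your domain."""
    totals, yours = defaultdict(int), defaultdict(int)
    for row in rows:
        key = (row["week"], row["engine"])
        totals[key] += 1
        if your_domain in row["cited_url"]:
            yours[key] += 1
    return {key: yours[key] / totals[key] for key in totals}

shares = citation_share(csv.DictReader(io.StringIO(SAMPLE_LOG)), "example.com")
for (week, engine), share in sorted(shares.items()):
    print(f"{week}  {engine:<12} {share:.0%}")
```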
Q.05 · Can I download the raw dataset?
Yes. The 3,200-query dataset, including engine, query, cited URLs, and source-type coding, is published under CC-BY 4.0. Link below — attribution required, no commercial restrictions.
What to do next.
Read our companion piece: The hub-and-spoke model, rewritten for 2026 — the architectural template that maps to these findings. Or run a free 30-day citation audit, where we'll show you what your brand is being cited for in ChatGPT, Perplexity, and Google AI Overviews, and where the gaps are.
Why trust this report.
E-E-A-T · Experience · Expertise · Authoritativeness · Trust
Quarterly benchmark, 4th edition. Sample sizes have grown from 800 queries (Q2 ’25) to 3,200 today.
Two senior analysts, inter-rater agreement κ = 0.87. Reviewed by Head of Strategy before publication.
Cited by Search Engine Land, SparkToro, and the Marketing Brew newsletter. Covered in 19 newsletters Q1 ’26.
Methodology open. Dataset CC-BY 4.0. Last reviewed 04 May 2026. Errata logged on this page.