ChatGPT cites by authority, not by SEO merit
ChatGPT's selection logic differs structurally from Google's. Google ranks by domain authority + relevance + user signals; ChatGPT cites by third-party authority + extractability + corpus frequency. A brand can therefore dominate Google for its main query yet be completely absent from ChatGPT responses to the same query. Here is why.
Mechanism 1 — The training corpus
In standard mode, ChatGPT answers from its training corpus, refreshed every six to twelve months. The corpus over-represents Wikipedia, authority-filtered Common Crawl, established press (NYT, WSJ, FT), books, and academic papers. Brand sites are present but under-weighted. Across 10,000 ChatGPT responses analyzed in 2025, 76% of cited brands were mentioned via a third-party source and only 14% via their own corporate site.
Mechanism 2 — Frequency and consistency
A brand mentioned 50 times across diverse corpus sources is more likely to appear than a brand mentioned 5 times, even if the 5 mentions are higher quality. Frequency acts as an importance signal. That is why sector leaders with continuous press coverage dominate ChatGPT responses, while a mid-market firm with an excellent site but thin press coverage remains invisible.
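The frequency effect can be illustrated with a toy scoring model. This is purely a sketch: the formula, the quality scores, and the source names are invented for illustration, not ChatGPT's actual selection mechanism.

```python
# Toy model: frequency-weighted brand salience in a corpus.
# Illustrative only -- the log-scaled formula and all values are invented.
import math

def salience(mentions):
    """Score grows with mention count (log-scaled) times mean mention quality."""
    if not mentions:
        return 0.0
    mean_quality = sum(q for _, q in mentions) / len(mentions)
    return math.log1p(len(mentions)) * mean_quality

# Brand A: 50 mid-quality mentions spread across diverse sources.
brand_a = [("source_%d" % i, 0.5) for i in range(50)]
# Brand B: 5 high-quality mentions only.
brand_b = [("source_%d" % i, 0.9) for i in range(5)]

print(salience(brand_a) > salience(brand_b))  # frequency outweighs quality here
```

Under any scoring of this shape, breadth of mentions eventually dominates per-mention quality, which matches the pattern described above.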
Mechanism 3 — Browse / search mode
ChatGPT Search (launched late 2024, integrated into GPT-4o and beyond) consults the live web. In this mode, selection depends less on the training corpus and more on classic SEO signals + page structure + domain authority. But the same biases persist: authoritative third-party sources (Wikipedia, press) are still preferred over corporate sites.
Mechanism 4 — Page structure
During extraction, ChatGPT prefers structured pages (question-style H1, short intro, lists, schema markup). A page ranked top-1 on Google but unstructured can be ignored as a source in favor of a better-structured top-5 page. This divergence explains the common surprise: "why is this minor brand cited and not us?"
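One concrete way to add the structure described above is schema.org FAQPage markup embedded as JSON-LD. A minimal sketch follows; the question and answer text are placeholders, and the snippet would go in the page's head or body.

```python
# Build a schema.org FAQPage JSON-LD snippet for embedding in an HTML page.
# The question/answer content below is placeholder example text.
import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What does the product do?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A short, extractable answer in the first sentence.",
            },
        }
    ],
}

# Wrap as a script tag ready to paste into the page template.
snippet = '<script type="application/ld+json">%s</script>' % json.dumps(faq_jsonld)
print(snippet)
```

Pairing markup like this with a question-style H1 and a short direct answer in the first paragraph is the "extractable page" pattern the paragraph above refers to.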
Mechanism 5 — Query context
On brand-explicit prompts ("who is X"), corporate sites have their natural place. On open prompts ("best provider in category"), ChatGPT favors recommendation lists drawn from Wikipedia, established press, or specialized guides. A brand excellent in branded SEO but absent from third-party sources appears in responses to the first prompt type but not the second.
ChatGPT US B2B source distribution (Q1 2026)
Wikipedia 30% · trade press 19% · established press 16% (NYT, WSJ, Bloomberg) · corporate sites 13% · academic/.gov 10% · expert blogs 7% · Reddit 3% · other 2%.
How to build third-party authority
Three proven levers: (1) Wikipedia — a dedicated page if eligible (encyclopedic notability demonstrated by 3-5 third-party sources) or strategic mentions in related articles. (2) Earned editorial PR — $2-4k/month for 8-15 trade press hits per year. (3) Flagship studies — a quarterly data study, broadly distributed, generates 30-100 press pickups plus gradual entry into LLM training corpora.
What does not work
Sponsored content is discounted by LLMs. Low-end link building does not improve citation rates. Self-published press releases via PRWire and similar services have near-zero impact. The only levers with measurable ROI are Wikipedia, earned editorial PR, and strong proprietary content (studies, white papers) distributed through PR channels.