When someone asks ChatGPT "What's the best project management tool for remote teams?" the model decides which brands to mention in 200 milliseconds.
It is not a ranking. It is a choice.
ChatGPT does not return a list where you can see everyone. It gives you two or three recommendations, maybe a couple of honourable mentions. If your brand is not in that synthesised answer, your customer never learns you exist.
At the same moment, someone asks Perplexity the same question. Different answer. Different brands cited. Different logic entirely.
And Gemini? Also different.
The brutal fact: 61.9% of the time, these three platforms recommend different brands for the same query. They are not disagreeing on ranking — they are making different decisions about who is worth mentioning at all.
Understanding which signals each platform uses is the difference between being recommended and being invisible.
Signal 1: Consensus Mentions (ChatGPT's Signal)
ChatGPT works like a voting system. It looks across its training data for how many times your brand is mentioned positively by credible sources.
Wikipedia mentions count heavily. Industry news. Third-party reviews. Reddit recommendations. Academic citations. The more times independent sources mention you positively, the higher your "entity score."
This is not about links. It is about mentions.
A brand mentioned by 847 independent sources scores higher than a brand with 50 high-authority backlinks. Volume of credible mentions beats authority of links. ChatGPT wants consensus proof: "Multiple trustworthy voices recommend this brand."
73% of brands are invisible to ChatGPT specifically because they lack distributed mentions. They might have a strong website and good rankings, but they are not mentioned anywhere else.
Signal 2: Fresh, Citation-Ready Sources (Perplexity's Signal)
Perplexity is built on citations. Every answer includes numbered links to sources. The model only recommends brands it can cite directly.
What matters: Recent, credible sources where your brand is discussed or mentioned.
85% of Perplexity's citations are from the last 2 years. Freshness matters because Perplexity pulls from current, active sources. A blog post from 2022 mentioning your brand? Mostly ignored. A Reddit thread from last month recommending your product? High weight.
Perplexity also weights niche authority heavily. A mention in a top-tier industry publication gets more weight than a mention on a general news site. Context matters. If you are an enterprise CRM and Forbes calls you "the best CRM for Fortune 500s," that is a citation.
Signal 3: Structured Data and Schema (Gemini's Signal)
Gemini (Google's AI) is the brand-site-first engine. 52% of its citations come from brand-owned websites. It trusts what you declare about yourself — if it is technically correct.
Structured data (JSON-LD schema) signals to Gemini: "Here is what I am, verified by machine-readable markup."
FAQ schema. Article schema with author. Organisation schema with NAP (name, address, phone) data. Product schema. These tell Gemini: "I know who I am, I have proven it with markup, I am trustworthy."
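To make this concrete, here is a minimal sketch of an Organisation JSON-LD block, built with Python's `json` module. Every value here (the brand name, URL, address, phone number, and social links) is hypothetical; substitute your own verified details.

```python
import json

# Hypothetical Organisation schema carrying NAP (name, address, phone) data.
organisation_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example CRM Ltd",            # hypothetical brand
    "url": "https://www.example.com",
    "telephone": "+44-20-0000-0000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "London",
        "postalCode": "EC1A 1AA",
        "addressCountry": "GB",
    },
    # "sameAs" links support consistency across platforms (Signal 4).
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.youtube.com/@example",
    ],
}

# This JSON goes inside <script type="application/ld+json"> in the page head.
print(json.dumps(organisation_schema, indent=2))
```

The same pattern applies to FAQ, Article, and Product schema: declare the type via `@type` and fill in the properties schema.org defines for it.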
Pages with FAQPage schema get cited more often in AI Overviews. YouTube presence gets cited more. Google Business Profile completeness matters.
It is not subjective. Gemini evaluates technical signals: Is your site fast? Is content mobile-friendly? Are title tags clear? Is schema markup present and valid?
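One of those checks, whether schema markup is present and valid, can be automated. Here is a rough sketch that extracts JSON-LD blocks from an HTML page and keeps only those that parse as valid JSON; the page below is a stand-in, and a real audit would fetch your live URLs instead.

```python
import json
import re

def find_jsonld_blocks(html: str) -> list[dict]:
    """Extract and parse all JSON-LD <script> blocks from an HTML page.

    Returns only blocks that parse as valid JSON; broken markup is
    skipped, mirroring how an engine would ignore invalid schema.
    """
    pattern = re.compile(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for raw in pattern.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            continue  # present but invalid -> no credit
    return blocks

# Stand-in page carrying one valid FAQPage block.
page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage"}
</script>
</head></html>
"""

print([s.get("@type") for s in find_jsonld_blocks(page)])  # -> ['FAQPage']
```

A regex is fine for a quick audit; for production, a proper HTML parser and Google's Rich Results Test give more reliable validation.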
Signal 4: E-E-A-T (All Three Platforms)
Experience. Expertise. Authoritativeness. Trustworthiness.
Every AI system weights E-E-A-T before deciding whether to recommend you.
It is the filter. If you fail the E-E-A-T check, you do not get recommended, even if all other signals are strong.
- Author credentials (published bylines with real names, titles, verifiable background)
- First-person experience language ("we built this," "we tested," "our customer told us")
- Original data (proprietary research, internal metrics, case studies)
- Third-party validation (expert endorsements, media coverage, academic citations)
- Consistency across platforms (LinkedIn, website, YouTube all tell the same story)
Signal 5: Brand Mentions and Sentiment (The Multiplier)
All three platforms monitor what people say about you online.
Positive Reddit mentions. Positive Twitter/X mentions. High review ratings. Positive sentiment in news coverage. All of this moves your recommendation likelihood up.
Conversely, negative sentiment (complaints, criticism, poor reviews) signals distrust to AI.
Ahrefs found that branded web mentions have the highest correlation with AI Overview visibility. More than technical signals. More than backlinks. How people talk about you matters most.
But here is the critical part: Quality over volume. One recommendation from a recognised expert outweighs 100 positive mentions from nobodies.
How to Score Your Current Signals
| Signal | How to Test | If You Score Zero |
|---|---|---|
| ChatGPT | Ask "best [category]" — are you in the top 3? | Not listed = 1/10 |
| Perplexity | Same query — count fresh citations (last 2 years) | No citations = 1/10 |
| Gemini | Check schema with Google's Rich Results Test | No valid schema = 2/10 |
| E-E-A-T | Author credentials? Original data? Expert badges? | None present = 1/10 |
| Sentiment | Search your brand on Reddit, Twitter/X, G2 | Negative = 2/10 |
Add the five scores. Divide by 5.
That is your current AI Visibility /10.
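The arithmetic above takes only a few lines of Python. The example scores here are hypothetical, chosen to match the "weak almost everywhere" profile the table describes.

```python
def ai_visibility_score(scores: dict[str, int]) -> float:
    """Average five per-signal scores (each out of 10) into one /10 figure."""
    expected = {"chatgpt", "perplexity", "gemini", "eeat", "sentiment"}
    if set(scores) != expected:
        raise ValueError(f"need exactly these signals: {sorted(expected)}")
    return sum(scores.values()) / len(scores)

# A hypothetical brand: invisible to ChatGPT and Perplexity,
# decent schema, thin E-E-A-T, mildly positive sentiment.
example = {"chatgpt": 1, "perplexity": 1, "gemini": 4, "eeat": 2, "sentiment": 5}
print(ai_visibility_score(example))  # -> 2.6
```

A 2.6 lands squarely in the 2-4 range most brands occupy.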
Most brands score between 2/10 and 4/10 because they optimised for Google rankings, not AI recommendations. The average brand does not show up in ChatGPT at all.
Scores of 7/10+ are rare. Those brands are winning AI visibility.