AI recommendations have replaced the search engine results page as the first stop for purchase decisions. When someone asks ChatGPT "what's the best CRM for a small agency," Perplexity "best accounting software for freelancers," or Gemini "recommended project management tools" — they get 3-5 curated recommendations, not 10 blue links. If your brand isn't in that shortlist, you don't exist for a growing share of your buyers.
Most brands have no idea where they stand. This guide walks you through a 30-minute manual audit that tells you exactly how visible your brand is across ChatGPT, Perplexity, Gemini, and Claude — and what your results actually mean.
"If your brand doesn't appear in AI recommendations, you don't exist for a growing share of purchase decisions."
The 30-Minute Audit
You need access to four platforms. Open them in separate tabs: chatgpt.com, perplexity.ai, gemini.google.com (or the Google app), and claude.ai. For each platform, use the same queries — this is how you build a comparable picture.
Question 1: the identity check
In each platform, type: "What is [your brand name]?"
Record the response verbatim. Specifically note:
- Is your brand mentioned at all?
- If yes, is the description accurate? (Zero hallucination is a high bar — what matters is whether the core facts are right)
- Is your brand cited with a link back to your website?
- What category is your brand placed in? (This reveals how AI has categorised you — which may not match how you've positioned yourself)
If AI doesn't mention your brand at all, you have a visibility problem — AI hasn't found enough signals to associate your name with your space.
Question 2: the category recommendation
In each platform, type: "What are the best [your category] brands?" or "best [your category] for [your target customer type]?"
For example: "best project management tools for creative agencies" or "best accounting software for UK freelancers."
Record whether your brand appears in the recommendations — and if so, in what position. Position 1 vs. position 5 is a meaningful difference in conversion rate. Also note: is the recommendation explained with reasons, or just named?
Question 3: the head-to-head comparison
Pick your 2-3 most obvious competitors. In each platform, type: "[your brand] vs [competitor]"
Record what AI says. Does it know both brands well enough to compare them? Does it give accurate information about your brand? Does it say something that's wrong — and if so, what?
AI that gives accurate, detailed comparisons indicates high brand visibility. AI that gets basic facts wrong indicates your brand exists in the training data but lacks the depth of signals needed for reliable citation.
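Run as a batch, the three question types above reduce to a single prompt list you paste into each platform in turn — consistency is what makes the results comparable. A minimal Python sketch (the function name, brand, and competitors below are illustrative, not part of any audit tooling):

```python
# Build the full audit query set once, so every platform
# receives exactly the same prompts in the same order.

def build_audit_queries(brand, category, customer, competitors):
    """Return the audit's three question types as a flat list of prompts."""
    queries = [
        f"What is {brand}?",                        # Question 1: identity
        f"What are the best {category} brands?",    # Question 2: category
        f"best {category} for {customer}",          # Question 2: targeted
    ]
    # Question 3: one head-to-head comparison per competitor
    queries += [f"{brand} vs {competitor}" for competitor in competitors]
    return queries

# Example with a fictional brand:
for q in build_audit_queries("Acme PM", "project management tools",
                             "creative agencies", ["Asana", "Trello"]):
    print(q)
```

Paste the same list into all four platforms and record each response before moving on.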
Score your results
For each platform, rate your brand on these four dimensions using a simple Yes / Partially / No:
- Mentioned? — Is your brand referenced in any of the responses above?
- Accurate? — Is the description of your brand correct? (Check product description, founding year, category, key differentiators)
- Recommended? — In the category and comparison questions, is your brand in the actual recommendation shortlist, not just mentioned in passing?
- Cited with link? — Does AI cite your brand with a working URL back to your website?
Combine across all four platforms — four dimensions times four platforms gives sixteen data points. That's your audit score.
What Your Results Mean
Most brands score lower than they expect. After auditing 50+ brands across our platform, the most common result is "not mentioned at all" for the identity question — particularly for brands under 10 years old that built their presence primarily through Google SEO rather than through the broader ecosystem signals that AI platforms weigh.
Not mentioned
This is the most common result. AI has either not encountered enough signals to know your brand exists, or the signals it found don't connect your brand name to your category. You need more structured citations, review activity, and third-party mentions before AI can recommend you.
Mentioned but inaccurate
Your brand exists in AI's knowledge base, but the information is wrong. This is worse than invisible — wrong information spreads and bakes into AI responses that are difficult to update. You need entity clarification: Wikipedia corrections, authoritative content that explains what you actually do, and active review management to correct the narrative.
Mentioned and accurate
Your brand is known and correctly described. Now the work is optimisation: earning more recommendations, earning citations with links, and building the depth of signals that moves you from "mentioned" to "recommended." Read our analysis of the 5 signals that convert a mention into a recommendation.
Common Patterns We See
After running these audits for brands across industries, a few patterns recur:
The geography gap
UK and EU brands that appear prominently in UK/European AI queries are often invisible in US-based AI responses — even with the same brand name. AI platforms localise recommendations. Multi-market brands need signals in each market.
The category misplacement
Many brands are placed in the wrong category by AI — sometimes a step removed ("payment processor" when they should be "expense management"), sometimes completely wrong. Category misplacement means you're never in the shortlist for your actual market.
The comparison gap
When AI is asked "[Brand] vs [Competitor]" for well-known competitors, it knows both well. For smaller competitors it may know one and miss the other entirely. Brands that appear alongside well-cited competitors benefit from the association.
The "founded in" problem
AI frequently hallucinates founding dates, funding rounds, or team members for mid-market brands. This isn't just trivia — it affects perceived credibility and recommendation confidence. Accuracy here requires active Wikipedia maintenance and authoritative content.
Across 50+ brand audits, the median score is under 20 out of 100. Most brands are invisible in at least two of the four major platforms, and even the brands that score well on "mentioned" score poorly on "cited with link" — which means they appear in AI answers but without a direct path back to their website. That's a conversion leak.
From Audit to Action
The audit tells you where you stand. The next question is what to do about it.
AI platforms cite brands that appear in structured data, authoritative backlinks, community mentions, review platforms, and press coverage. The signals are specific and buildable — there's no mystery to it, just work. Brands that score poorly on the audit have a clear improvement path: study the patterns of brands that score well, then engineer the same signals.
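Of the signals listed above, structured data is the one you control directly on your own site. One concrete form is a schema.org Organization block embedded as JSON-LD — a minimal sketch with made-up values (the brand, URLs, and founding date are illustrative):

```python
import json

# Sketch of a schema.org Organization block — the kind of structured
# data that gives AI platforms unambiguous facts about a brand.
# All values below are placeholders, not real entities.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme PM",                                   # hypothetical brand
    "url": "https://www.example.com",
    "foundingDate": "2016",                              # counters the "founded in" problem
    "description": "Project management tools for creative agencies",
    "sameAs": [                                          # authoritative profiles
        "https://en.wikipedia.org/wiki/Example",
        "https://www.linkedin.com/company/example",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag
# in your site's <head>.
print(json.dumps(org, indent=2))
```

Pinning down facts like founding date and category in markup you control is the cheapest way to reduce the hallucinations the audit surfaces.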
The first thing most brands discover after running the manual audit: it's tedious. Four platforms, consistent queries, recording results. It's also revealing — you now know exactly where you stand, which is worth the 30 minutes.