You ask ChatGPT for the best tools in your category. Three competitors come up. Your product doesn't. You've been in the market for years. You have customers. You have a real product. And yet the model acts like you don't exist.
I've seen this with companies in cybersecurity, developer tools, fintech, HR tech. Across all of them the pattern is the same: the companies that appear in ChatGPT didn't optimize for ChatGPT specifically. They accumulated the kind of third-party presence that the model absorbs during training and interprets as legitimacy. The fix is working backwards from that.
These 9 steps are in the order that works. Skipping to step 5 without completing step 2 is why most of these efforts stall.
- ChatGPT won't cite you if it isn't confident about you. Confidence comes from consistent third-party mentions, not your website.
- The sequence matters: authority building before content optimization. Most companies get this backwards.
- Timeline: Perplexity in 2-4 weeks. ChatGPT and Claude in 60-180 days. Start all of them now.
- The fastest single lever: getting into independent comparison articles on Trust Hub platforms in your niche.
- GEO rarely drives referral clicks. It drives pre-formed perception. Buyers arrive already knowing your name.
Why ChatGPT Ignores You
ChatGPT works in two ways. Most of what it knows comes from training data: everything it absorbed before its cutoff date. But it also has a browsing mode that can pull live content from the web. Both matter.
If you weren't mentioned enough in authoritative sources before training, you don't exist in the model's world. And if you block its crawlers, browsing mode can't find you either. In both cases, the result is the same: the model isn't confident about you, so it doesn't recommend you.
If you only exist on your own site, you don't exist in the model's world. If you exist in a few places with inconsistent descriptions, the model can't form a reliable picture of what you do. It hedges. It defaults to the competitor it has seen more often, from more trusted sources, with a cleaner category association.
This is an entity authority problem, not a content problem. The fix requires building authority first, then optimizing how that authority is read. Learn more about how this works in our analysis of why competitors get cited and you don't.
First, figure out where exactly you stand. The diagnosis changes which steps matter most right now.
The 9 Steps
Step 1: Run a Proper Baseline Before Touching Anything
Most companies skip this and spend months optimizing for the wrong thing. Run 30 queries before making any changes. You need to know four things: whether the model knows you exist, what it says when it mentions you, which competitors appear in queries where you don't, and which platforms it cites when answering questions in your category.
Five prompts that give you a complete picture in 30 minutes:
Log the results. Do this in ChatGPT, Claude, and Perplexity. The gaps between platforms tell you where to invest. Run the same 30 queries every two weeks from this point forward so you can measure movement. For a deeper framework on this, see our guide on how ChatGPT rates your brand authority.
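If you want to script the logging, here is a minimal sketch using the openai Python package. The prompts below are illustrative stand-ins, not the five from this playbook, and the model name and output file are placeholders too; you would run the equivalent queries against Claude and Perplexity through their own APIs or UIs.

```python
import csv
import datetime

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Illustrative baseline prompts -- substitute your category, product, and competitors.
PROMPTS = [
    "What are the best tools for [your category]?",
    "What is [Your Product] and what is it used for?",
    "How does [Your Product] compare to [Competitor]?",
    "Which [category] tool would you recommend for [your ICP]?",
    "What are the top alternatives to [Competitor]?",
]

with open("baseline_log.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        writer.writerow([datetime.date.today().isoformat(), prompt, answer])
```

Rerun the script every two weeks and diff the answers. Movement in who gets named is the metric that matters.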
Step 2: Unblock AI Crawlers in 10 Minutes
Check your robots.txt now. In our audits, roughly one in three companies has at least one major AI crawler blocked — often without knowing it. Some have all of them blocked. Cloudflare's "Bot Fight Mode" blocks AI crawlers at the network layer even when your robots.txt is correct.
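To verify programmatically rather than by eye, here is a short sketch using Python's standard-library robots.txt parser against the four crawlers listed below. The domain is a placeholder, and note the caveat: this only reads robots.txt, so a network-layer block like Bot Fight Mode would still pass this check.

```python
from urllib.robotparser import RobotFileParser

CRAWLERS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot"]

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

for bot in CRAWLERS:
    # can_fetch applies the same rules a well-behaved crawler would.
    verdict = "allowed" if rp.can_fetch(bot, "https://example.com/") else "BLOCKED"
    print(f"{bot}: {verdict}")
```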
Every crawler you need to explicitly allow:
User-agent: GPTBot
Allow: /
User-agent: OAI-SearchBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: PerplexityBot
Allow: /
Step 3: Write One Canonical Definition and Deploy It Everywhere
LLMs build their picture of what you do by aggregating signals from every source they've seen. If your G2 profile calls you a "workflow automation platform," your LinkedIn says "AI-powered business tools," and your website says "the operating system for modern teams," the model can't form a confident picture of what you actually are.
Your category label should be specific and mean one thing. Not "comprehensive," not "all-in-one," not "next-generation." The definition should be nearly identical across every platform.
| Platform | Where to update | Time |
|---|---|---|
| Website | Homepage meta description, About page, footer tagline | 30 min |
| LinkedIn | Company page tagline and About section | 10 min |
| G2 | Profile description and category tags | 15 min |
| PeerSpot | Company overview (claim profile first if needed) | 20 min |
| Crunchbase | Short description and long description | 15 min |
| Press releases | Boilerplate paragraph at the end of every future PR | 10 min |
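After updating, a small script can keep the pages you host honest. The definition and URLs below are placeholders, and third-party profiles like G2 or LinkedIn still need a manual check.

```python
import urllib.request

# Placeholder canonical definition and page list -- use your own.
CANONICAL = "Acme is a workflow automation platform for mid-market DevOps teams."
PAGES = [
    "https://example.com/",
    "https://example.com/about",
]

for url in PAGES:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    status = "OK" if CANONICAL in html else "MISSING"
    print(f"{status:7} {url}")
```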
Step 4: Find and Close Your Trust Hub Gaps
A Trust Hub is the cluster of platforms LLMs pull from when answering questions in your category. It's not universal. In a 150-query study across cybersecurity, fintech, dev tools, and general SaaS, the top citation sources were completely different in each niche.
*From Growtika's 150-query study, March 2026 — citation frequency across best-of, comparison, and definition query types*
| Niche | Top citation sources | Most underused |
|---|---|---|
| Cybersecurity | Dark Reading, Bleeping Computer, PeerSpot, Reddit (r/netsec) | PeerSpot — outperforms G2 but rarely prioritized |
| Developer Tools | dev.to, Stack Overflow, Hacker News, GitHub | GitHub discussions — cited more than most expect |
| Fintech | Forbes Advisor, Investopedia, Wikipedia, Reddit | Investopedia — high authority for fintech definitions |
| General B2B SaaS | Wikipedia, PeerSpot, TechRadar, Reddit | PeerSpot — cited 3x more than G2 in best-of queries |
| All niches | Wikipedia | Definition queries — Wikipedia dominated 27 of 50 |
PeerSpot appears consistently across every B2B niche we've tracked, and it outperforms G2 in citation volume in most categories. The catch: it only works with genuine customer reviews. Aim for at least 5 detailed reviews from real customers describing real outcomes.
Identify the 3 Trust Hub platforms where your competitors appear that you don't. Those are your priority gaps. Close them before worrying about content structure or schema markup.
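One low-tech way to organize that gap analysis from your step 1 logs is a simple presence map. The data below is hypothetical.

```python
# Which brands your step 1 queries surfaced on each Trust Hub platform.
# Hypothetical data -- fill this in from your own query logs.
presence = {
    "PeerSpot": {"CompetitorA", "CompetitorB"},
    "Dark Reading": {"CompetitorA", "CompetitorB"},
    "Reddit (r/netsec)": {"CompetitorA", "You"},
    "Wikipedia": {"CompetitorB"},
}

gaps = [platform for platform, brands in presence.items() if "You" not in brands]
print("Priority gaps:", gaps[:3])
```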
Not sure where your Trust Hub gaps are?
We'll run the 150-query audit for your category and show you exactly which platforms to prioritize — and which competitors are already there.
Book a Call →
Step 5: Get Into Independent Comparison Articles
Think about how trust works between people. If you tell someone you're the best at something, they're skeptical. If someone else tells them — someone with no stake in the outcome — they believe it. LLMs work the same way. One editorial placement in a niche publication carries more citation weight than dozens of posts on your own domain.
This is the mechanic behind the 70/30 rule: 70% of your brand mentions should come from independent third-party sources, 30% from self-published.
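You can track the ratio from the same citation log you built in step 1. A toy check, with hypothetical counts:

```python
# Hypothetical mention counts from your citation log.
third_party = 21     # mentions on domains you don't control
self_published = 9   # mentions on your own properties

share = third_party / (third_party + self_published)
print(f"Third-party share: {share:.0%} (target: roughly 70%)")
```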
Three ways to get into comparison articles:
- Find published articles comparing your competitors, email the author directly, explain why your product belongs
- Reach out to niche publications and offer a "how to choose a [category]" piece
- Publish a "[Your Product] vs [Competitor]" page on your own domain with genuine, balanced analysis
Step 6: Echo a Topic Across Every Content Layer You Own
LLMs assign authority by topic, not by company. The way to trigger that: go narrow and echo the same topic across multiple layers of content you control.
Pick one specific niche, use case, industry, or problem your product solves, then build the full content stack around it.
For a practical example of this strategy in action, see our Reddit marketing playbook for AI visibility — one of the most effective community layers.
Step 7: Restructure Your Key Pages for Extraction
LLMs extract structured content. They paraphrase prose. "Reduces MTTR by 40%" becomes "helps teams respond faster." Your exact benchmark — the thing that makes you different — disappears.
The rule that matters most: bullets under 15 words get extracted directly. Over 15 words and the model paraphrases.
| Format | What happens | Fix |
|---|---|---|
| Bullets over 15 words | Paraphrased — your specific claim becomes generic | Cut to under 15 words. Remove qualifiers. |
| Vague claims ("significantly reduces") | Disappear or become meaningless filler | Replace with a number: "reduces by 40%" |
| Long prose description | Compressed into a generic category summary | Replace with the canonical definition (step 3) |
| Data in prose | Paraphrased or dropped | Put in a table — numbers in cells extract verbatim |
| No FAQ schema | Q&As on page are ignored as citation candidates | Add FAQPage schema markup |
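Before publishing, you can lint a draft against the 15-word rule with a few lines of Python. This sketch assumes a local markdown file passed on the command line.

```python
import re
import sys

# Flag markdown bullets longer than 15 words (the threshold from the table above).
with open(sys.argv[1], encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        match = re.match(r"\s*[-*+]\s+(.*)", line)
        if match:
            words = match.group(1).split()
            if len(words) > 15:
                print(f"line {lineno}: {len(words)} words -> {' '.join(words[:8])}...")
```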
Prioritize your homepage, main product page, and any page targeting your primary category keyword. For a comprehensive list of content formatting tactics, see the 15 content strategies that rank in search and get cited by AI.
Step 8: Build Your LLM Sitemap and llms.txt
An LLM Sitemap at /llm-sitemap is a single page that tells AI systems exactly what your company is, what category you're in, what you're not, and who you're for. Read our LLM Sitemap guide for a full walkthrough.
A /llms.txt file at your root is the plain-text equivalent. What each should include:
- Your canonical definition (from step 3)
- Your specific category and target buyer
- Your 3-5 primary use cases with one-sentence descriptions
- What you're not (reduces misclassification)
- Key differentiators with specific metrics
- FAQs sourced from your actual sales conversations
- Links to your most authoritative third-party citations
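Put together, here is a minimal llms.txt sketch following the common convention of an H1 name, a blockquote summary, and markdown sections. Every name, metric, and URL is a placeholder.

```text
# Acme
> Acme is a workflow automation platform for mid-market DevOps teams.

## What Acme is not
- Not a general-purpose project management tool

## Primary use cases
- Incident response automation: cuts MTTR by 40%
- On-call scheduling: replaces spreadsheet rotas

## Key links
- Product overview: https://example.com/product
- Independent reviews: https://www.peerspot.com/products/acme
```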
Step 9: Add FAQ Schema to High-Intent Pages
FAQPage schema markup turns each Q&A pair on a page into a discrete citation candidate. The model can pull a single answer without quoting the whole page.
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [{
"@type": "Question",
"name": "What is [Your Product] used for?",
"acceptedAnswer": {
"@type": "Answer",
"text": "[Your Product] is a [category] for [ICP]. It [primary function]. Unlike [alternative], it [differentiator with specific metric]."
}
}]
}
Also add dateModified schema to every page you update. Retrieval systems treat recency as a reliability proxy. For a complete checklist of schema and structural optimizations, see the 51-Point AI Visibility Checklist.
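For dateModified, a minimal example; the date is a placeholder, and WebPage can be whichever schema.org type the page already declares.

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "dateModified": "2025-06-01"
}
```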
When to Stop Auditing
With the rise of LLMs, people have started running constant AI audits on their websites. Ask Claude to audit your site and it will always find something. The question is whether fixing it will actually move citations.
The real standard is simple. Your site needs to be clean, load fast, speak your brand clearly, and have high-quality content. Beyond that, for AI visibility specifically:
- AI crawlers can access it
- Your canonical definition is consistent everywhere
- Key pages have structured content — bullets under 15 words, specific numbers, FAQ schema
- You have an LLM Sitemap and llms.txt
That's the list. Everything beyond it is diminishing returns. The companies that get cited fastest close those fundamentals and then spend their energy on Trust Hub presence — not endlessly re-auditing their schema.
Content Cannibalization Works Differently for LLMs
For years, content cannibalization was one of the things we obsessed over in SEO: target very similar keywords across multiple pages and Google gets confused about which one to rank.
LLMs work differently. They are quite good at understanding the intent behind a query and matching it to the most relevant piece of content — even when you have multiple pages covering closely related topics. This changes the calculus on the echo strategy from step 6.
The practical upside: you can go narrow and specific without worrying that one page will cannibalize another. Google might struggle to choose between them. ChatGPT will surface whichever is more relevant to the specific query.
Timeline Expectations by Platform
The sequence above applies to all platforms, but the timelines differ significantly. Perplexity retrieves in real time — steps 2, 6, and 7 show results in weeks. ChatGPT updates its training data periodically, so steps 3, 4, and 5 take months to surface.
| Platform | Mechanism | Which steps help most | Timeline |
|---|---|---|---|
| Perplexity | Live retrieval | Steps 2, 6, 7 — structure and crawlability | 2-4 weeks |
| Google AI Overviews | Search index + retrieval | Steps 6, 7, 8 — schema and structure | 4-8 weeks |
| Claude | Training data | Steps 3, 4, 5 — authority and entity clarity | 60-120 days |
| ChatGPT | Training data + web browse | Steps 3, 4, 5 — entity authority and consensus | 60-180 days |
Set realistic expectations internally before you start. The bi-weekly query tracking from step 1 is what gives you the data to show progress at the pace each platform allows.
Run the 5 baseline prompts in ChatGPT, Claude, and Perplexity. Check your robots.txt for OAI-SearchBot. Write your canonical definition. These three actions take under two hours and unblock everything that follows.
The companies that get cited in ChatGPT at the 6-month mark started their authority work on day one. The clock starts when you start, not when you decide GEO matters.
Want us to run this playbook for you?
We've done this for 50+ SaaS companies across cybersecurity, developer tools, fintech, and HR tech. Let's diagnose your situation and build the 90-day plan.
Book a Free Audit →
Yuval Halevi
Helping SaaS companies and developer tools get cited in AI answers since before it was called "GEO." 10+ years in B2B SEO, 50+ cybersecurity and SaaS tools clients.