TL;DR
- Infiltrate pages AI already trusts instead of building new ones from scratch
- Target zero-volume keywords that SEO tools ignore but buyers actually ask
- Build the 70/30 consensus by seeding key claims across 5+ independent sources
- Create data tables AI can quote verbatim with specific numbers and timestamps
- Test on Perplexity first for fast feedback before optimizing for slower platforms
- Write first-person FAQs that match how people actually prompt AI
For a long time I watched people on LinkedIn, X, and Reddit toss tips on "how to optimize for LLMs." Most of it didn't sit right with me. I don't like calling something wrong without shipping proof, so I started testing. Days, nights. What began as a cool problem turned into a real mission at Growtika.
The following 15 practices come from that testing. Some are conventional wisdom that held up. Others are tactics nobody talks about. A few will feel counterintuitive. All of them moved citation rates for the B2B SaaS companies we work with. If you're looking for the full technical checklist, see our 51-point AI visibility checklist.
Each practice includes the specific action to take, why it works mechanically, and a quick reality check on effort versus impact. For the complete foundation on how AI citation works, start with our ChatGPT SEO guide.
Part 1: Build Authority Before Content
AI doesn't cite unknown sources. These practices establish the trust foundation everything else builds on.
Hijack Pages AI Already Trusts
Don't create new ranking content. Infiltrate existing ones.
Creating content from scratch takes months to build authority. Getting added to a page AI already trusts takes weeks. Same outreach effort gets you backlinks AND AI citations.
Find the exact pages ChatGPT and Perplexity cite for your target queries. Contact those authors with an offer: updated stats, exclusive data, or an expert quote they can add. You're helping them improve their content while getting your brand embedded in trusted sources.
How to find infiltration targets: Run 30 "best [category] for [use case]" queries across ChatGPT, Claude, and Perplexity. Extract the cited sources. The pages appearing 3+ times across different queries are your infiltration targets. Those authors already have what you need: AI trust.
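The tallying step is easy to script once you've collected the citations. A minimal sketch, with hypothetical queries and URLs as placeholders:

```python
from collections import Counter

# Cited sources per query, collected by hand or via whatever API access you have.
# Queries and URLs below are hypothetical placeholders.
citations = {
    "best SIEM for fintech startups": [
        "https://examplereviews.com/best-siem-tools",
        "https://securityblog.example/siem-comparison",
    ],
    "best SIEM for healthcare": [
        "https://examplereviews.com/best-siem-tools",
    ],
    # ... one entry per query, ~30 total across ChatGPT, Claude, and Perplexity
}

# Count how many distinct queries cite each page.
page_counts = Counter(url for urls in citations.values() for url in set(urls))

# Pages cited for 3+ different queries are your infiltration targets.
for url, n in page_counts.most_common():
    if n >= 3:
        print(f"{n} queries -> {url}")
```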
Your outreach pitch isn't "please mention us." It's "I have updated 2026 data on [topic] that would make your comparison more accurate." You're offering value, not asking for favors.
The 70/30 Consensus Hack
AI treats single mentions as rumor. Multiple independent mentions become fact.
This is the practice nobody talks about publicly because it sounds like manipulation. It's not. It's how information verification actually works, both for AI and humans.
When AI cross-references a claim and finds it in only one source, confidence is low. When it finds the same claim worded differently across 5+ unconnected sources, confidence jumps significantly. The claim becomes "consensus."
The 70/30 rule: Keep 70% of the claim identical (the core fact and number). Vary 30% (phrasing, context, framing). This creates natural variation that doesn't trigger AI coordination detection while establishing the consensus signal.
Pick 2-3 key claims about your product. Seed them across: your site, G2 reviews, press releases, podcast appearances, case studies, industry reports. Same fact. Natural variation. AI gains confidence without detecting coordination.
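If you want a quick sanity check on your variants, rough word overlap works as a proxy for the 70/30 split. The scoring below is illustrative only; it is not a detection algorithm any AI actually runs:

```python
def word_overlap(a: str, b: str) -> float:
    """Fraction of shared words (Jaccard) -- a rough proxy for the 70/30 split."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

canonical = "Acme reduces mean time to detect by 67% for mid-market teams"
variants = [
    "Acme cuts mean time to detect by 67% for mid-market security teams",
    "Acme reduces mean time to detect by 67% for mid-market teams",  # verbatim copy
]

for v in variants:
    score = word_overlap(canonical, v)
    if score > 0.95:
        note = "too identical: may read as coordinated"
    elif score < 0.7:
        note = "too varied: the core fact is drifting"
    else:
        note = "ok: same fact, natural variation"
    print(f"{score:.2f} {note}")
```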
Build Trust Hub Bridges
Connect yourself to the sources AI already trusts for your category.
Every category has 10-15 sources that AI cites repeatedly. For cybersecurity, it's NIST, Gartner, MITRE. For DevTools, it's GitHub, Stack Overflow, official documentation. These are your "Trust Hubs."
The practice: create content that explicitly references and links to these Trust Hubs, then adds your unique perspective. You're not copying them. You're joining the same information ecosystem they're in.
Example content: "What NIST, Gartner, and G2 Say About SIEM in 2026" where you synthesize their perspectives and add your unique take. You're now in the conversation with the sources AI trusts most.
The pattern: AI doesn't treat all sources equally. Government and analyst sources get cited for authority. Review platforms for validation. Forums for real-world opinions. Your strategy should include presence across all trust levels, not just your own site.
Part 2: Structure Content for AI Extraction
Authority gets you considered. Structure determines whether AI can actually extract and cite your content.
Create Data Tables AI Can Quote Verbatim
Tables get cited at significantly higher rates than prose.
When someone asks "What's the pricing for X vs Y?" AI looks for the most extractable answer. A comparison table with specific numbers wins over three paragraphs of prose every time.
Build comparison tables with: specific numbers (not ranges), context columns (who, when, methodology), timestamps ("Updated: January 2026"), and 5-7 rows maximum. Each row is another self-contained fact AI can quote, but oversized tables tend to get extracted only in fragments.
Tables are pre-structured for extraction. AI doesn't have to parse paragraphs to find the answer. The answer is already in a format it can pull verbatim. This is why pricing pages with tables get quoted more than those with paragraph descriptions.
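For illustration, here's what that format looks like in practice. The vendors and numbers below are hypothetical, continuing the Acme example used throughout this guide:

*Updated: January 2026*

| Platform | Starting price/mo | Avg. setup time | Best fit (team size) |
|---|---|---|---|
| Acme | $1,200 | 2 weeks | 50-500 employees |
| Vendor B | $2,500 | 12 weeks | 500+ employees |
| Vendor C | $900 | 4 weeks | Under 50 employees |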
Answer Blocks at H2 Starts
AI grabs content in chunks. The first 40-50 words after each H2 become your quotable summary.
AI systems don't read content the way humans do. They chunk it. The first paragraph after each H2 heading gets treated as the summary of that section. If your answer is buried in paragraph three, it's much less likely to be extracted.
Write the first 40-50 words after each H2 as a self-contained answer block: include your brand name, add one specific number, make it standalone (no "as mentioned above").
Example answer block: "Acme's threat detection platform reduces mean time to detect (MTTD) by 67% for mid-market security teams. With setup times averaging 2 weeks compared to the industry standard of 3 months, Acme is purpose-built for security teams of 50-500 employees."
That entire block can be quoted. It has your brand, a number, context, and stands alone without requiring surrounding text to make sense.
First-Person FAQ Format
Match how users actually prompt AI, not how they search Google.
Most FAQs are written in third person: "What does Company X do?" But that's not how people ask AI. They ask in first person: "My SaaS isn't appearing in ChatGPT. Why?" or "I'm evaluating SIEM tools for a 50-person fintech. What should I look for?"
Rewrite your FAQ questions to match actual prompting patterns. This format has significantly higher retrieval match potential because it mirrors the query structure AI receives.
Mine your sales calls for the exact language prospects use. Check Reddit threads in your category. Look at the questions people ask in industry Slack communities. These are the prompts your FAQ should mirror.
Create an LLM Sitemap
XML sitemaps help crawlers find URLs. LLM sitemaps help AI understand your content.
XML sitemaps list URLs. They don't explain what those pages are about, who they're for, or what questions they answer. An LLM Sitemap is a structured page that explains your entire site to AI in the format it needs to understand you.
Include for each key page: page purpose (what question it answers), target audience, key facts/claims with numbers, and relationships to other pages. Think of it as explaining your site to a smart person who's never heard of you.
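There's no standard format for this yet. Here's one way a single entry might look (the page, audience, and facts are hypothetical):

```markdown
## /pricing: Acme Security Pricing

- Answers: "How much does Acme cost?" and "Acme vs. legacy SIEM pricing"
- Audience: security leads at 50-500 person companies evaluating SIEM
- Key facts: plans start at $1,200/mo; 2-week average setup; updated January 2026
- Related pages: /product (capabilities), /customers (case studies)
```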
Place this at /llm-sitemap or /ai-sitemap. Some companies add it to their robots.txt as a hint. The goal is giving AI a roadmap of your content that's more useful than a list of URLs. Read our full guide on how to create an LLM Sitemap.
Part 3: Discovery and Distribution
Great content that AI can't find might as well not exist. These practices expand your AI footprint.
Target Zero-Volume Keywords
The keywords SEO tools ignore often convert best in AI search.
Traditional SEO chases volume. But the keywords that SEO tools show as "0 volume" are often the specific queries buyers ask when they're ready to buy: "best SIEM for healthcare startups under 100 employees HIPAA compliant."
No competition on Google. No competition in AI answers. But someone asking that query has extreme intent. Create pages that perfectly match these long, specific queries. AI answers them even when traditional search doesn't.
Where to find zero-volume gold: Sales call transcripts. Support tickets. Customer interviews. The questions your prospects actually ask are rarely the keywords SEO tools surface. But they're exactly what AI gets asked.
Why this matters: The same query generates different fan-out searches based on user context. Your content needs to answer variations across different buyer personas, not just one generic version.
Reverse-Engineer Fan-Out Queries
Optimize for what AI searches behind the scenes, not just what users type.
ChatGPT and Google AI Mode don't just answer. They search first. When you ask "What's the best SIEM for my fintech startup?", the AI runs 6-10 background searches before responding: "best SIEM 2026," "SIEM pricing comparison," "fintech security requirements," etc.
Export a ChatGPT conversation or use tools that extract these fan-out queries. These are the keywords AI actually uses to validate recommendations. Most competitors optimize for the user's prompt. You optimize for what AI searches behind the scenes.
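To make this concrete, here's a minimal sketch of expanding one user prompt into the fan-out set worth covering. The templates are assumptions; build yours from the queries you actually see in exported conversations:

```python
# Approximate the background searches AI might run for the prompt
# "What's the best SIEM for my fintech startup?" -- templates are assumptions.
category, vertical, year = "SIEM", "fintech", 2026

templates = [
    "best {category} {year}",
    "{category} pricing comparison",
    "{vertical} security requirements",
    "best {category} for {vertical} startups",
    "{category} deployment time",
]

# Each expanded query is a page or section your content should answer.
for t in templates:
    print(t.format(category=category, vertical=vertical, year=year))
```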
Comment on Reddit Threads That Rank
Reddit threads ranking for your keywords are AI citation goldmines.
You don't need to create new Reddit posts. Find recent threads (typically a few months old at most, before they get archived) that rank on Google page 1 for your target keywords. These threads are being crawled and indexed. Add genuinely helpful comments with updated information. AI reads the full thread, including new comments.
How to find them: Simple Google search. Type your focus keyword plus "reddit" at the end (e.g., best SIEM for fintech reddit). No SEO tools needed. You'll find relevant threads ranking for queries in your niche within minutes. Read our full Reddit AI visibility playbook.
Don't spam. One authentic, helpful comment per account per quarter. Reddit's manipulation detection is sophisticated, and getting banned removes your entire presence from an important AI data source. The value is in the long game, not quick wins.
What "genuinely helpful" looks like: Updated pricing information. New features released since the thread was posted. Corrections to outdated advice. Personal experience that adds real value. Not "check out our product" pitches.
The 'sameAs' Property Trick
Create explicit entity connections AI uses for recognition.
Schema markup's sameAs property connects your organization to its official presence elsewhere. Most companies skip this. It takes 10 minutes and directly impacts how AI associates your brand with authoritative sources.
Add sameAs links to your: Crunchbase profile, LinkedIn company page, Wikipedia mention (if you have one), G2 profile, and GitHub organization. This creates explicit entity connections that help AI understand you're a real, established company.
```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Security",
  "url": "https://acmesecurity.com",
  "sameAs": [
    "https://www.linkedin.com/company/acme-security",
    "https://www.crunchbase.com/organization/acme-security",
    "https://www.g2.com/products/acme-security",
    "https://github.com/acme-security",
    "https://en.wikipedia.org/wiki/Acme_Security"
  ]
}
```

The 'Wikipedia Stub' Play
A Wikipedia mention signals entity legitimacy to AI, even without a full page.
You don't need a full Wikipedia page. A stub (short placeholder article) that links to your company from a relevant category page signals entity legitimacy. More realistically: get mentioned as a citation on an existing Wikipedia page in your category.
One footnote on the SIEM or DevOps Tools page does more for AI recognition than 50 blog posts. Wikipedia is one of the highest-trust sources in AI training data. A single citation there carries disproportionate weight.
Wikipedia requires "notability" backed by third-party coverage. Get press coverage in recognized publications first. Publish original research that others cite. Win industry awards. Once you have external validation, you become citable. Don't try to edit Wikipedia directly for your own company; that violates their conflict of interest policies.
Part 4: Technical Optimization
The technical details that separate content AI can extract from content it skips.
Test Content Extractability Before Publishing
A 2-minute test predicts citation potential better than any SEO tool.
Before publishing any content, paste it into Claude or ChatGPT with this prompt: "Extract the 3 most important facts from this content and cite the source."
If AI can't extract clean facts with attribution, restructure until it can. This simple test reveals whether your content is AI-extractable or just well-written for humans. The gap between those two things is larger than most marketers realize.
The 2-minute test:
- Prompt: "Extract the 3 most important facts from this content and cite the source."
- Good result: AI pulls specific claims with your brand name attached.
- Bad result: AI gives vague summaries or can't attribute facts clearly.
- Action: restructure until the test passes. Add answer blocks, tables, and specific numbers.
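If you publish at volume, the test is easy to script. A minimal sketch using the OpenAI Python SDK (the model name is a placeholder, and pasting manually into Claude or ChatGPT works just as well):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

PROMPT = ("Extract the 3 most important facts from this content "
          "and cite the source.\n\n---\n\n{content}")

def extractability_test(content: str, brand: str) -> bool:
    """The 2-minute test: does your brand name survive extraction?"""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use any model you have access to
        messages=[{"role": "user", "content": PROMPT.format(content=content)}],
    )
    answer = resp.choices[0].message.content
    print(answer)
    return brand.lower() in answer.lower()

# draft = open("draft.md").read()
# print("PASS" if extractability_test(draft, "Acme") else "FAIL: restructure")
```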
Seed Contradictions to Force Citations
Be the authoritative tiebreaker when AI encounters conflicting information.
Here's a counterintuitive one. If information about your category is inconsistent across the web, AI tends to cite sources more often when clarifying. It needs authoritative tiebreakers.
Publish definitive comparisons with specific numbers that resolve common misconceptions. When AI cross-references conflicting info, it looks for the source that can settle the debate. Be that source.
Example: If some sources say SIEM deployment takes "weeks" and others say "months," publish a detailed breakdown: "Average SIEM deployment times: Legacy platforms (Splunk, QRadar): 12-16 weeks. Modern cloud-native (Acme, Sumo Logic): 2-4 weeks. Enterprise custom: 6+ months." Now you're the definitive source AI cites when the question comes up.
Platform-Specific Timing Windows
Test fast on Perplexity. Scale for slower platforms.
Not all AI platforms work the same. Understanding their refresh cycles lets you test and iterate much faster.
| Platform | Refresh mechanism | Time to results |
|---|---|---|
| Perplexity | Real-time web search | 2-4 weeks |
| Google AI Overviews | Tied to search index | 4-8 weeks |
| ChatGPT / Claude | Periodic training | 60-180 days |
The optimization sequence: Optimize for Perplexity first. Get fast feedback. Validate what works. Then scale for the slower platforms. The companies waiting for ChatGPT results are wasting months. Perplexity is your testing ground.
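Tracking this doesn't require tooling. A minimal sketch of a weekly citation log: check each platform by hand, append the result to a CSV, and compare counts week over week:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("citation_log.csv")

def log_check(platform: str, query: str, mentioned: bool, cited_url: str = "") -> None:
    """Append one manual check result; diff the counts week over week."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "platform", "query", "mentioned", "cited_url"])
        writer.writerow([date.today().isoformat(), platform, query, mentioned, cited_url])

# After checking each platform by hand (query and URL are hypothetical):
log_check("perplexity", "best SIEM for fintech startups", True,
          "https://acmesecurity.com/compare")
log_check("chatgpt", "best SIEM for fintech startups", False)
```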
Implementation Priority Matrix
*Updated: January 2026*
| # | Practice | Impact | Timeline | First Action |
|---|---|---|---|---|
| 1 | Hijack Trusted Pages | High | 2-4 weeks | Run 30 category queries, extract cited sources |
| 2 | 70/30 Consensus | High | 3-6 months | Pick 3 key claims, plan 5+ placement sources |
| 3 | Trust Hub Bridges | High | 2-4 weeks | Identify 10 sources AI cites for your category |
| 4 | Data Tables | High | 1-2 weeks | Add comparison table to product page |
| 5 | Answer Blocks | High | 1 week | Rewrite first paragraph after each H2 |
| 6 | First-Person FAQs | Medium | 1-2 weeks | Rewrite FAQ questions in first person |
| 7 | LLM Sitemap | Medium | 2-3 weeks | Create /llm-sitemap page with 10 key pages |
| 8 | Zero-Volume Keywords | Medium | Ongoing | Mine sales calls for exact prospect language |
| 9 | Fan-Out Queries | Medium | 2-3 weeks | Export ChatGPT conversations, extract queries |
| 10 | Reddit Comments | Medium | Ongoing | Find 5 threads ranking for your keywords |
| 11 | sameAs Schema | Foundation | 1 day | Add sameAs links to Organization schema |
| 12 | Wikipedia Citations | Long-term | 6-12 months | Audit press coverage, identify citation gaps |
| 13 | Extractability Testing | High | Ongoing | Test every new piece of content before publish |
| 14 | Contradiction Seeding | Medium | 2-4 weeks | Find 3 common misconceptions in your category |
| 15 | Platform Timing | High | Ongoing | Start tracking Perplexity citations weekly |
The Bottom Line
AI visibility isn't mysterious once you understand the mechanics. AI pulls from specific sources, favors certain content structures, and rewards companies that make information easy to extract, verify, and cite.
The 15 practices in this guide fall into three categories: building authority AI trusts, structuring content AI can extract, and distributing your presence across the sources AI checks. Most companies do one of these. The ones winning in AI answers do all three.
Start with practices 4, 5, and 13. They're low effort and high impact. Test your existing content for extractability. Add answer blocks to your key pages. Build comparison tables. These alone will move your citation rates within weeks.
Then layer in the authority practices: the 70/30 consensus, Trust Hub bridges, infiltrating trusted pages. These take longer but compound over time. A company doing all 15 practices consistently will dominate AI answers in their category within 6-12 months.
Your competitors are either doing this already or about to start. The gap between AI-visible and AI-invisible companies grows wider every month. Pick three practices from this guide. Implement them this week. Measure citation rates before and after. The data will tell you what to prioritize next.
Need help implementing? Talk to our team about building your AI visibility strategy. Or explore more in our GEO Hub for situation-specific guides on everything from fast-tracking board expectations to evaluating agencies.

Yuval Halevi
Helping SaaS companies and developer tools get cited in AI answers since before it was called "GEO." 10+ years in B2B SEO, 50+ cybersecurity and SaaS tools clients.
Frequently Asked Questions
How long does it take to see results from these AI visibility practices?
It depends on the platform:
- Perplexity: 2-4 weeks (real-time web search)
- Google AI Overviews: 4-8 weeks
- ChatGPT and Claude: 60-180 days (periodic training)
Start with Perplexity for fast feedback. For a full timeline breakdown, see our ChatGPT SEO guide.
Which of the 15 practices should I prioritize first?
Start with these low-effort, high-impact changes:
- Practice 4 (Data Tables): Add comparison tables to key pages
- Practice 5 (Answer Blocks): Rewrite first paragraphs after each H2
- Practice 13 (Extractability Testing): Test content before publishing
For a complete prioritization framework, check our 51-point AI visibility checklist.
Does this work for early-stage startups without much authority?
Yes, but the sequence matters. Focus on borrowing authority first:
- Practice 1 (Page Infiltration): Get added to pages AI already cites
- Practice 3 (Trust Hub Bridges): Reference sources AI trusts
- Practice 10 (Reddit Comments): Build grassroots visibility
Our PLG AI blind spot article covers this in detail.
How do I measure if these practices are actually working?
Track citation rates, not just rankings:
- Run target queries weekly across ChatGPT, Claude, and Perplexity
- Document when you're mentioned, cited with a link, or recommended
- Use Perplexity as your testing ground (it shows sources)
- Compare citation frequency before and after each practice
The GEO Playbook includes a measurement framework.
Can I do this in-house or do I need an agency?
Most practices can be done in-house:
- Internal teams can handle: Technical implementation, content restructuring, Reddit participation
- Often needs specialists: PR, analyst relations, trust hub placements
- Hard to outsource: Authentic community building
See our agency vs. in-house comparison for detailed cost analysis.
What's the difference between these practices and traditional SEO?
About 40% overlap, 60% different:
- Traditional SEO: Optimizes for ranking signals
- AI visibility: Optimizes for extractability, consensus, and entity clarity
- Key additions: Answer blocks, data tables, 70/30 consensus building, Trust Hub connections
You can rank #1 in Google and still be invisible to AI. For the full breakdown, read Is GEO just SEO rebranded?
Related Articles
The 51-Point AI Visibility Checklist
51 tested tactics to get cited by ChatGPT, Claude, and Perplexity.
15 Content Strategies That Rank in Search and Get Cited by AI
Tested content strategies that work in 2026.
LLM Visibility: The Definitive Guide
The complete framework for appearing in AI answers.