Growtika
    Case Study: Real Data

    We Ran 1,000 ChatGPT Queries. A Startup With Zero Citations Now Outranks Competitors Who Raised Millions.

    They couldn't outspend the competition. So we found the gap well-funded companies can't close: vertical focus.

    By Yuval @ Growtika · 16 min read · December 2025

    GEO is still new territory. We've been testing approaches, tracking what works, learning what doesn't. This case study shares one campaign where things clicked.

    The client was a bootstrapped startup competing against companies that raised $100M+. Zero brand recognition. Zero existing authority. Complete invisibility in AI answers.

    12 months later, they became the most-cited vendor in their category based on our query testing. We can't share the client name or the exact data, but we can share the approach and what we learned.

    Here's how we found the gap that well-funded companies struggle to close.

    TL;DR

    • Starting point: Zero LLM citations. Competitors had raised tens of millions and dominated every query.
    • The insight: Well-funded companies can't focus. Their boards want expansion, not vertical dominance. That's the gap.
    • The method: Reverse-engineered competitor sitemaps, generated 1,000 queries using Claude, tracked citations across LLMs.
    • Content approach: FAQ schema, answer blocks, data tables, short paragraphs. Everything in a box. Match the persona.
    • Result after 12 months: Most-cited vendor in the category. 10x more citations than the next closest competitor.
    Chapter 1

    The David vs. Goliath Setup

    Let me paint the picture. Our client was a tiny startup in a niche dominated by established players. These weren't small companies. We're talking competitors who had raised millions in VC funding. Tier-1 investors. Real brand names. Enterprise sales teams. Content operations with dedicated writers.

    Our client had none of that. What they had was a good product. We had a hypothesis: maybe the giants had a weakness we could exploit.

    That's the opportunity gap for smaller companies. They can be laser-focused on specific verticals. They don't need to justify every page with a traffic projection. If you understand the value of category authority and commit to building it over 12+ months, you can become THE definitive source in your niche while well-funded competitors chase broader terms.

    Chapter 2

    How We Built the Query Dataset

    Before writing a single piece of content, we needed to understand the landscape. Not through guesswork. Through data.

    Step 1: Reverse-Engineer the Competition

    I loaded the sitemaps of our client and every major competitor. Extracted all the money pages. The pages designed to capture commercial intent. Product pages, comparison pages, use case pages, integration pages.

    This gave us a map of what everyone thought was important.

    Step 2: Generate Natural Queries

    Here's where it gets interesting. For each money page, I used Claude to generate queries. Not keyword-style queries. Natural language queries. The kind of thing a real person would type into ChatGPT or ask Claude when they're looking for a solution.

    Long sentences were not just accepted, they were preferred. "I'm a solo practitioner looking for something that integrates with my existing workflow and doesn't cost more than $50/month" is how people actually talk to AI. That's what we optimized for.

    This process generated 1,000 unique queries across the category.
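    The query-generation step can be sketched in code. This is a minimal illustration, not the exact prompts or pipeline we used: `build_query_prompt` and `parse_queries` are hypothetical helpers, and the mock response stands in for an actual Claude call.

```python
# Sketch of Step 2: build a prompt per money page, then parse the
# model's line-separated output into individual queries.
# The prompt wording and helper names are illustrative only.

def build_query_prompt(page_url: str, page_summary: str, n: int = 10) -> str:
    """Build a prompt asking an LLM for natural-language buyer queries."""
    return (
        f"Here is a product page: {page_url}\n"
        f"Summary: {page_summary}\n\n"
        f"Write {n} queries a real person might type into ChatGPT when "
        "looking for this kind of solution. Use full, conversational "
        "sentences with constraints (role, budget, integrations), not "
        "keyword fragments. One query per line."
    )

def parse_queries(model_output: str) -> list[str]:
    """Split the model's response into one cleaned query per line."""
    queries = []
    for line in model_output.splitlines():
        # Strip list markers like "1.", "2)", "-" from the start of each line
        line = line.strip().lstrip("0123456789.-) ").strip()
        if line:
            queries.append(line)
    return queries

# Example: parsing a mock model response (no API call made here).
mock = (
    "1. I'm a solo practitioner looking for a tool under $50/month\n"
    "2. What integrates with SimplePractice?"
)
print(parse_queries(mock))
```

Running the real version of this per money page, for every competitor's sitemap, is what produces a query set in the hundreds-to-thousands range.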

    Step 3: Document Who Shows Up

    We ran every query through ChatGPT and Claude. For each one, we documented which domains got cited. First mention. Second mention. Buried in a list. Not mentioned at all.

    At the beginning, our client appeared exactly zero times. Complete invisibility. The funded competitors? They showed up everywhere.

    But the data revealed something else: on specific, constraint-based queries, even the big players had gaps. Nobody owned the long tail.
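    The documentation step (first mention, second mention, buried, absent) amounts to recording each domain's first-mention rank per answer. A minimal sketch, assuming you already have the LLM's answer text for each query; the example answer and domain names are made up:

```python
# Sketch of Step 3: score which domains an LLM answer cites, and in
# what order. Domain names below are hypothetical placeholders.

def citation_positions(answer: str, domains: list[str]) -> dict:
    """Return each domain's first-mention rank (1 = cited first), or None."""
    hits = []
    for d in domains:
        idx = answer.lower().find(d.lower())
        if idx != -1:
            hits.append((idx, d))
    hits.sort()  # earlier character position = earlier mention
    ranks = {d: None for d in domains}
    for rank, (_, d) in enumerate(hits, start=1):
        ranks[d] = rank
    return ranks

answer = ("Top picks: see wikipedia.org for background; vendors include "
          "ourclient.com and competitor-a.com.")
print(citation_positions(
    answer,
    ["ourclient.com", "competitor-a.com", "wikipedia.org", "competitor-b.com"],
))
```

Aggregating these ranks across all queries is what produces a leaderboard like the one in the next chapter.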

    Chapter 3

    The Results: 1,000 Queries Later

    12 months of execution. Here's where we ended up:

    • #1 among all vendors
    • 172 query citations
    • 10x vs. the next closest competitor
    • 12 months to achieve

    Here's the leaderboard from our test:

    | Rank | Domain | Citations | Notes |
    |------|--------|-----------|-------|
    | 1 | wikipedia.org | 248 | Reference/encyclopedia |
    | 2 | arxiv.org | 190 | Academic papers |
    | 3 | Our Client | 172 | #1 among all vendors |
    | 4 | Competitor A | 130 | Established player |
    | 5 | Competitor B | 129 | Well-funded startup |
    | 6 | Competitor C | 127 | Former category leader |
    | 7 | reddit.com | 91 | Community discussions |

    One year ago, our client wasn't on this list at all. They weren't mentioned by ChatGPT. Weren't cited by Claude. Complete invisibility.

    Understanding Long-Tail GEO

    LLMs don't just retrieve the "best" page. They try to match user intent, constraints, and specificity. A user asking "I need a tool for solo practitioners that costs under $50" isn't looking for the generic "enterprise solution."

    This means your content needs to match the persona asking the question. Not a generic professional. A specific role. Someone with that exact problem.

    Chapter 4

    The Content Format: Everything in a Box

    Content format matters for LLM citation. A lot. We tested extensively and found consistent patterns in what gets cited versus what gets ignored.

    What Works for LLM Citations

    | Element | Why It Works | Impact |
    |---------|--------------|--------|
    | FAQ schema | Perfect question/answer format for LLM extraction | +52% citation rate |
    | 40-50 word answer blocks | Ideal chunk size for verbatim quoting | Higher direct citation |
    | Data tables with specifics | LLMs love structured, comparable data | Used in comparisons |
    | Short paragraphs (2-3 sentences) | Easy to extract key information | More readable for AI |
    | Specific numbers | "$49/month" beats "affordable pricing" | Cited in price queries |
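    The FAQ schema row refers to schema.org FAQPage markup in JSON-LD. Here is a minimal example built with Python's `json` module; the question and answer text are hypothetical, not the client's actual content:

```python
import json

# Minimal FAQPage JSON-LD (schema.org). The Q&A text is a hypothetical
# example in the answer-block style: direct answer, details, differentiator.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How much does the tool cost for solo practitioners?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "The solo plan costs $49/month on annual billing. It "
                    "includes unlimited notes and standard integrations, "
                    "with no per-seat fees."
                ),
            },
        }
    ],
}

# This JSON goes inside a <script type="application/ld+json"> tag
# in the page's <head>.
print(json.dumps(faq_schema, indent=2))
```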

    The Answer Block Formula

    Every answer block followed this pattern: direct answer first, then supporting details, then differentiator. 40-50 words. No fluff.
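    A quick way to enforce the 40-50 word target during editing is a simple word-count check. This is a sketch of the idea, not the team's actual tooling, and the sample block is hypothetical:

```python
def check_answer_block(text: str, lo: int = 40, hi: int = 50) -> tuple:
    """Count words and flag whether the block fits the target range."""
    words = len(text.split())
    return words, lo <= words <= hi

# Hypothetical answer block: direct answer, supporting details, differentiator.
block = ("Yes. The tool transcribes sessions and drafts progress notes "
         "automatically, cutting documentation time by several hours a week. "
         "It integrates with SimplePractice and TherapyNotes, stores data "
         "with HIPAA-compliant encryption, and costs $49/month. Unlike "
         "general-purpose dictation apps, it is built specifically for "
         "solo mental-health practitioners.")
print(check_answer_block(block))
```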

    Don't Fear Cannibalization

    One of the biggest objections I hear: "Won't all these pages cannibalize each other?"

    In traditional SEO, maybe. In LLM optimization, no. Each page targets a different persona asking a different question. "[Tool] for Role A" isn't competing with "[Tool] for Role B." They're different queries from different people with different needs.

    Key Principle

    Dive into topics without fear of cannibalization. Create depth. Cover every angle your personas might ask about. More specific content means more query coverage, not divided authority.

    The LLM Sitemap

    One thing we did that few companies do: we created an LLM-optimized sitemap. Not just a list of URLs. A semantic map designed to help AI assistants understand not just what pages exist, but WHY each page matters and WHEN to recommend the product.

    Here's what we included:

    Page groupings by persona and use case. Not just a list of URLs, but organized sections showing "Solutions by Role," "Solutions by Practice Type," "Features & Security." LLMs can navigate to the right content for the right user.

    FAQ sections throughout. Problem-solution pairs that mirror how users actually ask questions. "I'm a therapist drowning in notes. Will this help?" followed by a direct answer. These get extracted and cited.

    "Why we offer this" explanations. For each major feature, we explained WHY it exists and HOW it's different from competitors. This gives LLMs context to recommend appropriately.

    Relationship mapping. Explicit connections showing how features work together. "This integrates with SimplePractice, TherapyNotes, Jane App..." so LLMs can answer integration questions.

    Pricing context with specific numbers. "$49/month on the annual plan" not "contact us for pricing." LLMs can answer cost questions accurately.

    Comparison tables. Honest competitor comparisons with specific pricing, features, and positioning. LLMs use these for "vs" queries.

    The goal: make it easy for AI to understand the entire product ecosystem and recommend it for the right queries to the right personas.
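    One way to render such a semantic sitemap is as a grouped markdown file where every URL travels with a one-line "why this page matters" annotation. The sketch below is illustrative; the section names, paths, and annotations are hypothetical placeholders, not the client's real sitemap:

```python
# Sketch: render persona/use-case groupings into a markdown "LLM sitemap".
# All section names, paths, and descriptions are hypothetical.
sitemap = {
    "Solutions by Role": [
        ("Therapists", "/for-therapists", "Session notes in minutes, $49/month"),
        ("Psychiatrists", "/for-psychiatrists", "Med-management note templates"),
    ],
    "Features & Security": [
        ("Integrations", "/integrations",
         "Works with SimplePractice, TherapyNotes, Jane App"),
    ],
}

def render_llm_sitemap(groups: dict) -> str:
    """Emit one markdown section per grouping, one annotated link per page."""
    lines = ["# Product sitemap for AI assistants", ""]
    for section, pages in groups.items():
        lines.append(f"## {section}")
        for title, path, why in pages:
            # The "why" context is the part LLMs can use to match personas
            lines.append(f"- [{title}]({path}): {why}")
        lines.append("")
    return "\n".join(lines)

print(render_llm_sitemap(sitemap))
```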

    Chapter 5

    The 12-Month Timeline

    This didn't happen overnight. Here's how the visibility built over time:

    Month 1-2

    Foundation work. Technical SEO audit. AI crawler access. Schema markup implementation. Content strategy development. Zero citations.

    Month 3-4

    First wave of persona-specific pages launched. 25 pages covering core use cases. First LLM citations detected on long-tail queries.

    Month 5-6

    Expanded to integrations, additional formats, role-specific pages. 75 total pages. Citation rate climbing on specific queries. Still invisible on head terms.

    Month 7-8

    Comparison pages, competitor alternatives, pricing content. LLM sitemap created. Starting to appear on mid-tail commercial queries.

    Month 9-10

    Blog content, educational resources, templates. 150+ total pages. Consistent citations across ChatGPT and Claude. Ripple effect beginning.

    Month 11-12

    Long-tail dominance established. Starting to appear on broader queries. Final test: 172 citations, 17.2% coverage, #3 overall domain.

    The compound effect is real. Early long-tail wins built semantic authority. That authority influenced mid-tail citations. And those mid-tail wins started pulling up performance on broader queries.

    What We Learned

    After 12 months and 172 citations, here are the principles that made this work:

    1. Vertical Focus Beats Broad Coverage

    When competitors go broad, go deep on a vertical. Own one space completely rather than competing for the broad category. Depth beats breadth in LLM citations.

    2. LLMs Know the Persona

    Write for specific people, not generic audiences. LLMs try to match content to the person asking. Content that speaks directly to a specific role will get cited for that role's queries.

    3. Format for Extraction

    FAQ schema, short paragraphs, data tables, specific numbers. Put everything in a box that's easy for AI to grab and quote. 40-50 word answer blocks are the sweet spot.

    4. Don't Fear Cannibalization

    More specific pages mean more query coverage. Each page serves a different persona asking a different question. Depth builds authority, not confusion.

    5. Compound Effects Are Real

    Long-tail wins influence mid-tail. Mid-tail wins influence head terms. The first 6 months are slow. Months 7-12 are where the hockey stick appears.

    The Bottom Line

    For companies competing against established players: Don't try to out-authority them on broad terms. Find a vertical you can dominate completely. Build such depth that you're the only credible source for specific queries. Let the compound effects build upward.

    For companies with existing authority: Your broad content isn't protecting you on specific queries. Someone is building depth in your verticals right now. The question is whether you notice before they take the long-tail and start climbing toward your head terms.

    For everyone: LLM visibility isn't about volume or domain authority. It's about being the definitive answer for specific questions. Match the persona. Answer the exact question. Format for extraction.

    Yuval Halevi


    Yuval, an SEO expert with over a decade of experience, helps startups simplify their digital marketing. With a focus on practical solutions and a track record as a digital nomad and company builder, he drives growth through effective SEO, growth hacking, and creative marketing.