Growtika
    GEO Guide

    How To Get Your SaaS Product Mentioned in ChatGPT

    The authority-first framework we use to get clients cited by AI. No hacks. No shortcuts. Just what actually works.

    By Yuval Halevi  |  Growtika  |  March 2026  |  18 min read

    More than half your buyers ask ChatGPT for software recommendations before they hit Google. From 50+ SaaS GEO campaigns: most products that own page one of Google are completely invisible when someone asks ChatGPT the same question. Different game. Different winner.

    I started testing why. Days, nights. Dozens of accounts across industries. Patterns emerged and held across verticals. What I found broke assumptions I'd carried from a decade in B2B SEO. The framework: authority-first, 70/30 consensus, the LLM Sitemap.

    TL;DR

    • ChatGPT doesn't index the web in real time. It pulls from its training data, where authority signals and multi-source consensus determine who gets mentioned.
    • Traditional SEO metrics (domain authority, backlink count) correlate weakly with AI citations. Source quality and content format matter more.
    • The authority-first approach outperforms the content-first approach. You need Trust Hub presence before content optimization pays off.
    • Data tables, structured definitions, and sub-15-word bullet points get extracted by AI at significantly higher rates than narrative paragraphs.
    • The 70/30 consensus rule: 70% of your mentions should be factual, third-party citations across independent sources. 30% can be self-published content.
    • Most SaaS products see first citations within 60-90 days when the framework is applied correctly. Consistent, broad citation usually takes 4-6 months.


    1. Why ChatGPT Ignores Most SaaS Products

    ChatGPT pulls from a knowledge base built during training, weighted by credibility signals. Sources that were consistent and independently corroborated got baked in. Everything else got ignored.

    You could have 10,000 backlinks and a DA of 80 and still be invisible if your product didn't appear in the right sources before the training cutoff. When browsing is enabled, ChatGPT runs live retrieval passes. Building presence now feeds the next update cycle.

    The second reason companies get ignored: category ambiguity. If your positioning is vague (the all-in-one platform for teams), the model has no anchor for when to mention you. This is especially common in product-led growth companies where the product speaks for itself in demos but says nothing to an AI model.

The Four Failure Modes

    From auditing 50+ companies asking "why isn't ChatGPT mentioning us," the problem falls into four buckets. The fourth gets missed most, because the company sees a citation and assumes it's working.

| Failure Mode | What It Looks Like | How Common | Fixable In |
|---|---|---|---|
| No authority footprint | Product barely mentioned outside owned channels. No third-party reviews, no press, no community references. | Very common | 90-120 days |
| Wrong content format | Good coverage, but all narrative prose. No structured data, no comparison tables, no defined FAQ content. | Common | 30-60 days |
| Category mismatch | Product is described differently across sources. ChatGPT can't build a consistent understanding of what it does. | Less common | 60-90 days |
| Bad framing | Product appears in ChatGPT but is described inaccurately, negatively, or in a context that hurts rather than helps. A mention like "used by teams that can't afford [competitor]" is a citation, not a win. | Underestimated | 90-120 days |

Most companies have more than one. Fix them in order: authority first, then format, then consistency, then framing.

    2. How ChatGPT Actually Decides What to Mention

After tracking citation patterns across 50+ client campaigns and hundreds of test prompts, the signals that move citations became predictable. We mapped five.

    The Citation Probability Stack: What Actually Moves the Needle

Ranked from highest to lowest citation probability:

1. Trust Hub Presence (+++ impact). Wikipedia, Reddit, PeerSpot, TechRadar, vendor docs. The sources LLMs trust most. Get here first.
2. Consistent Product Definition (+++ impact). Same description, same wording, everywhere you appear. Conflicting descriptions confuse the model; it stops mentioning you.
3. Mentioned in Competitor Comparisons (++ impact). Independent "X vs Y vs Z" articles that include your product. Shows the model which category you belong in.
4. Structured On-Site Content (++ impact). Tables, FAQ schema, short bullets, LLM Sitemap. Affects how accurately you're described, not whether you appear.
5. Editorial Coverage (+ impact). Industry blogs, newsletters, trade press, analyst reports. Good to have, but not enough on its own.

    Trust Hub and consistent positioning matter most. Structured content affects how accurately ChatGPT describes you, not whether it mentions you. Editorial coverage only helps after the foundation exists.

    Warning

    Fake reviews on G2 or Capterra will hurt you. ChatGPT detects consensus built on thin, templated reviews. It favors fewer detailed reviews that show genuine product knowledge over many generic ones.

    The Comparison Context Signal

    When ChatGPT retrieves information to answer best X for Y or compare A vs B vs C, it relies heavily on pages that already structure the comparison. If your product appears in three independent comparison articles across different domains, the model has enough signal to confidently include you in its answer.

    If you only appear in your own us vs. them page, the model discounts it as promotional. Third-party comparisons carry disproportionate weight.

    3. The Authority-First Framework

    The biggest insight from 50+ campaigns: the order matters more than the tactics. Content optimization before authority build-out wastes money. Authority before content lets the content work.

    Authority-First vs Content-First: The A/B Test

Content-First (Slower)

• Month 1-3: Write optimized content → no authority foundation
• Month 4-6: Build Trust Hub presence → content exists but isn't cited
• Month 7-9: Citations begin → avg. 140 days to first citation

Result: 140 days to first citation. Content was ready. Authority wasn't.

Authority-First (Faster)

• Month 1-2: Build Trust Hub presence → reviews, profiles, community
• Month 2-3: Optimize content format → authority already established
• Month 3: Citations begin → avg. 67 days to first citation

Result: citations begin within 60-90 days. Authority came first. Content amplified it.

    What "Trust Hub" Means — and What the Data Actually Shows

    Trust Hub is the cluster of third-party platforms LLMs lean on when building knowledge about your product. There is no universal list. The sources that dominate for a cybersecurity tool differ from those for fintech, developer tools, or logistics SaaS. It also shifts over time as LLMs update and new sources accumulate authority in your space.

    Most GEO advice publishes a generic platform list and calls it done. Your Trust Hub is specific to your niche, your buyer, and the query patterns they use. Identifying it rather than assuming it is step one.

    How to Find Your Trust Hub

    Run 30+ queries across all three types: best [category] for [buyer], [product] vs [competitor A] vs [competitor B], what is [category]. Log every cited domain. Sort by frequency. Top 10 = your Trust Hub. Missing from 7+ of them = your gap list.
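The logging-and-sorting step can be sketched in a few lines of Python. This is a minimal sketch, assuming you paste each answer's cited URLs into a per-query list; all the queries and URLs below are hypothetical placeholders:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical log: query -> list of URLs ChatGPT cited in its answer
citation_log = {
    "best SIEM for startups": [
        "https://www.techradar.com/best/siem",
        "https://en.wikipedia.org/wiki/SIEM",
    ],
    "Datadog vs New Relic vs Dynatrace": [
        "https://www.reddit.com/r/devops/comments/example",
        "https://www.peerspot.com/products/comparisons",
    ],
    "what is SOAR in cybersecurity": [
        "https://en.wikipedia.org/wiki/SOAR",
    ],
}

def trust_hub(log, top_n=10):
    """Count how often each domain is cited, most-cited first."""
    counts = Counter(
        urlparse(url).netloc.removeprefix("www.")
        for urls in log.values()
        for url in urls
    )
    return counts.most_common(top_n)

for domain, n in trust_hub(citation_log):
    print(domain, n)
```

With a real 30-query log, the top 10 rows of this output are your Trust Hub candidates; any of them where your product is absent go on the gap list.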

    The data below covers B2B SaaS broadly — a useful baseline, not a blueprint. Your niche will surface vertical publications, analyst sites, and community forums that won't appear here. In March 2026, we ran 150 queries across three query types through ChatGPT and logged every cited domain.

    Test 1: Best-Of Queries (50 queries)

    Queries like best SIEM for startups, best CRM for small business, best DevOps monitoring tool

*50 of 150 queries run via ChatGPT GPT-4.1, March 2026. This test: 275 unique domains, 364 total citations.*

| Rank | Domain | Citations | Coverage % | Type |
|---|---|---|---|---|
| 1 | wikipedia.org | 23 | 6.3% | Encyclopedia |
| 2 | techradar.com | 14 | 3.8% | Editorial tech media |
| 3 | peerspot.com | 6 | 1.6% | B2B peer review |
| 3 | reddit.com | 6 | 1.6% | Community |
| 5 | gartner.com | 4 | 1.1% | Analyst |
| 6 | g2.com | 3 | 0.8% | Review platform |
| 7 | medium.com | 3 | 0.8% | Publishing platform |
| 8 | capterra.com | 1 | 0.3% | Review platform |

    Test 2: Comparison Queries (50 queries)

    Queries like Notion vs Asana vs Monday.com, Datadog vs New Relic vs Dynatrace

    *284 unique domains, 359 total citations.*

| Rank | Domain | Citations | Coverage % | Type |
|---|---|---|---|---|
| 1 | reddit.com | 17 | 4.7% | Community |
| 2 | peerspot.com | 8 | 2.2% | B2B peer review |
| 3 | forbes.com | 6 | 1.7% | Business media |
| 4 | medium.com | 5 | 1.4% | Publishing platform |
| 5 | wikipedia.org | 5 | 1.4% | Encyclopedia |
| 6 | techradar.com | 3 | 0.8% | Editorial tech media |
| 7 | g2.com | 1 | 0.3% | Review platform |
| 7 | capterra.com | 1 | 0.3% | Review platform |

    Test 3: Definition Queries (50 queries)

    Queries like what is SOAR in cybersecurity, what is product-led growth, what is zero trust security

    *207 unique domains, 306 total citations.*

| Rank | Domain | Citations | Coverage % | Type |
|---|---|---|---|---|
| 1 | wikipedia.org | 27 | 8.8% | Encyclopedia |
| 2 | ibm.com | 12 | 3.9% | Vendor documentation |
| 3 | techtarget.com | 10 | 3.3% | Editorial tech media |
| 4 | atlassian.com | 8 | 2.6% | Vendor documentation |
| 4 | salesforce.com | 8 | 2.6% | Vendor documentation |
| 6 | microsoft.com | 5 | 1.6% | Vendor documentation |
| 6 | google.com | 5 | 1.6% | Vendor documentation |
| 8 | reddit.com | 4 | 1.3% | Community |
    What This Data Actually Shows

    These results cover broad B2B SaaS queries. In your niche, the ranking will look different. A cybersecurity product's top cited sources won't match a fintech product's. The point isn't that G2 is useless — it's that no generic list tells you where ChatGPT pulls citations for your specific category. These tables are a starting benchmark. Your actual Trust Hub requires running queries in your own niche and logging what appears.

    What the Data Actually Means for Your Strategy

    Source priority is query-type dependent. There is no single Trust Hub list. There are three, and they're different enough that collapsing them into one wastes resources.

    *Updated: March 2026*

| Query Type | Dominant Sources | What to Build |
|---|---|---|
| Definition ("what is X") | Wikipedia, IBM/vendor docs, TechTarget, Atlassian | Wikipedia entry + authoritative definition pages on your own domain |
| Comparison ("X vs Y vs Z") | Reddit, PeerSpot, Forbes, Medium | Community presence + get into third-party comparison articles |
| Best-of ("best X for Y") | Wikipedia, TechRadar, PeerSpot, Reddit | Editorial coverage + PeerSpot profile + Reddit organic mentions |

    The fragmentation finding: across all 150 queries, the average domain appeared 1.2 times. No single dominant source except Wikipedia. The goal isn't to dominate one platform — it's to appear across enough independent sources to hit whichever ones ChatGPT pulls for a given query type.

    And the table above won't show your niche-specific sources. A cybersecurity product's Trust Hub likely includes Dark Reading, Bleeping Computer, SecurityWeek. A developer tool's includes dev.to and Stack Overflow. Run your own queries. Find the 3-5 domain names specific to your category that no generic list will surface. Those are the ones to prioritize.

    *Updated: March 2026 — corrected from prior version based on live query data*

| Platform | Why It Matters for AI | Priority | Best For |
|---|---|---|---|
| Wikipedia | Top cited source across two of three query types. 27 citations on definition queries alone. | Critical | Definition + best-of queries |
| Reddit | Leads comparison queries at 17 citations. Treated as ground truth for practitioner opinion. | Critical | Comparison + best-of queries |
| PeerSpot | Rank 3 in best-of, rank 2 in comparison queries. Consistently outperforms G2 and Capterra. Almost no one talks about it for GEO. | Critical | Best-of + comparison queries |
| TechRadar / TechTarget | Editorial tech media. TechRadar gets 14 citations on best-of queries, TechTarget gets 10 on definition queries. | Critical | Best-of + definition queries |
| Hacker News | Tech-heavy training data. Show HN, product discussions, and comments all contribute for developer tools. | High (tech) | Developer tool categories |
| Forbes / Medium | Business and community publishing. Forbes gets 6 citations on comparison queries, Medium gets 5. | High | Comparison queries |
| G2 | 3 citations across 50 best-of queries. Still worth claiming for product legitimacy signals, but not the citation driver we assumed. | Medium | Product legitimacy signal |
| Capterra / GetApp | 1 citation across 100 queries. Low direct citation value. Still worth maintaining for completeness. | Low | Completeness only |
    The Bottom Line

    There is no universal Trust Hub. These tables are a cross-niche baseline — useful for orientation, not for copying. The sources that get cited in your category are specific to your niche, your query types, and the sources that have accumulated authority there. Find yours by running your own queries. Then close the gaps you find, not the ones someone else found.

    4. Content Formats That Actually Get Cited

    We track citations across ChatGPT, Claude, and Perplexity for every client. Format patterns repeat consistently. LLMs are pattern-completion machines. Scannable content is easier to complete than dense narrative. For a broader look at what works, see our guide on content strategies that actually get cited by AI.

    01

    Data Tables

    Quoted almost verbatim. Numbers and comparison columns get extracted directly. The model doesn't need to paraphrase what's already scannable.

    02

    Structured Definitions

    "[Product] is a [category] that [specific function]. Unlike [alternative], it [differentiator]." That sentence structure gets lifted whole.

    03

    Short Bullet Points

    Under 15 words per bullet. Longer bullets get paraphrased, and paraphrasing loses your specific language every time.

    04

    FAQ Schema

    Every marked-up Q&A pair is a candidate for direct citation. Sales-sourced questions match real ChatGPT queries better than marketing-drafted ones.

    05

    Outcome Statistics

    "Reduces alert fatigue by 40%" gets cited. "Significantly reduces alert fatigue" disappears. Specificity is what the model locks onto.

    06

    Comparison Summaries

    "Best for teams who need X. Not ideal for Y." Decision-guidance framing gets cited constantly because it matches how buyers prompt.

    Narrative prose gets paraphrased. When paraphrased, your specific differentiators disappear. Structured content gets extracted directly. Rewrite key product descriptions as short, parallel, scannable statements.

    Low extraction rate — narrative prose

    "Our platform offers a comprehensive solution for security teams 
    looking to streamline their workflows. With powerful integrations 
    and an intuitive interface, teams can significantly improve 
    their response times and reduce operational overhead."

    High extraction rate — structured definition

    "[Product Name] is a cloud-native SOAR platform for security 
    teams under 10 analysts. Key capabilities:
    - Automates 85% of Tier 1 alert triage
    - Integrates with 200+ security tools out of the box  
    - Mean time to respond: 4 minutes vs. 45 min industry average
    - No-code playbook builder, deploys in under 2 hours"
    Pro Tip

    Prioritize your hero section, "What is [Product]" page, and top comparison pages. One structured product definition page drives more citations than a dozen blog posts.

    The FAQ Schema Multiplier

    Every FAQ schema question is a candidate for a direct ChatGPT answer. Write schema for the questions your sales team hears most. Sales-sourced questions match actual ChatGPT buyer queries more closely, almost every time.
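As a sketch, the markup can be generated from those sales-sourced questions. The output shape follows schema.org's FAQPage type; the product name and Q&A pairs below are invented placeholders:

```python
import json

# Placeholder Q&A pairs sourced from sales-call notes
faqs = [
    ("What is ExampleSOAR?",
     "ExampleSOAR is a cloud-native SOAR platform for small security teams."),
    ("Does it integrate with Splunk?",
     "Yes, via a prebuilt connector that syncs alerts in both directions."),
]

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

# Embed the output in a <script type="application/ld+json"> tag on the page
print(json.dumps(faq_jsonld(faqs), indent=2))
```

Each (question, answer) pair in the output is one marked-up candidate for a direct citation.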

    Content Freshness

    Pages updated within 30 days show higher citation rates in Perplexity and ChatGPT with browsing enabled. Schedule quarterly updates to priority pages. Update the dateModified schema tag, refresh statistics, add examples. The model treats recency as a proxy for reliability.
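A minimal sketch of keeping that recency signal honest: after refreshing a page, bump dateModified in its JSON-LD as part of the same update. The Article object here is hypothetical:

```python
import json
from datetime import date

# Hypothetical Article JSON-LD already present on the page
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is SOAR?",
    "dateModified": "2025-11-02",
}

def touch_date_modified(schema, today=None):
    """Return a copy with dateModified set to the refresh date (ISO 8601)."""
    schema = dict(schema)  # leave the original untouched
    schema["dateModified"] = (today or date.today()).isoformat()
    return schema

refreshed = touch_date_modified(article, today=date(2026, 3, 15))
print(json.dumps(refreshed, indent=2))
```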

    Why These Formats Work at the Model Level

    LLMs were trained on text where structured content preceded reliable information: Wikipedia tables, Q&A answers, technical docs. The model learned that scannable structure signals trustworthiness. Marketing prose pattern-matches to content it learned to paraphrase: press releases, ad copy, corporate blogs.

    5. The LLM Sitemap

    A standard XML sitemap tells Google where your pages are. It says nothing useful to a language model. The LLM Sitemap fills that gap: a structured HTML document at /llm-sitemap that gives AI systems a single source of truth about your product.

    The problem it solves: clients with good content still got described inaccurately because their site conflicted with itself. Homepage said one thing. Pricing page said another. The LLM Sitemap resolves that conflict.

    LLM Sitemap Architecture

    yourdomain.com/llm-sitemapA STRUCTURED BRIEFING DOCUMENT FOR AI SYSTEMSCompany IdentityWho you are, what yousolve, who you serveProduct DefinitionsStructured descriptions,use cases, differentiatorsCategory ContextIndustry terms, relatedconcepts, competitorsContent IndexKey articles, guides,research with summariesProof PointsStats, customer outcomes,awards, certificationsFAQ BlockBuyer questions answeredin structured Q&A formatAI can now fully understand your productand cite you accurately in relevant answersCrawled by GPTBot, ClaudeBot, PerplexityBot

    Once indexed, citation consistency improves within 30-45 days.

    The llms.txt Standard

    A plain text file at yourdomain.com/llms.txt listing your key pages, product context, and AI crawler permissions. Under an hour to build. Do it alongside the LLM Sitemap. For a deeper walkthrough of the full technical stack, see our LLM visibility guide.
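A hedged sketch of what such a file might contain, following the llms.txt convention of markdown-formatted plain text (the product name and URLs are placeholders):

```text
# ExampleSOAR

> ExampleSOAR is a cloud-native SOAR platform for security teams
> under 10 analysts. Automates Tier 1 alert triage.

## Key pages

- [What is ExampleSOAR](https://example.com/what-is-examplesoar): canonical product definition
- [Pricing](https://example.com/pricing): plans and limits
- [ExampleSOAR vs alternatives](https://example.com/compare): comparison summaries

## Docs

- [Quickstart](https://example.com/docs/quickstart): deploy in under 2 hours
```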

    6. Building Consensus: The 70/30 Rule

ChatGPT trusts consensus. One blog post doesn't move it. Consistent, independent mentions across multiple trusted sources do.

    The 70/30 ratio came from mapping citation patterns across our client base. Products cited consistently had roughly 70% of web mentions from sources unaffiliated with the company. Below that threshold: hedged or ignored, even with good content.

    Note

    To be clear: 70/30 is not a literal threshold coded into ChatGPT or any LLM. It's an observed benchmark from our client data that reflects a broader truth: AI models require external corroboration to treat an entity as authoritative. The exact ratio matters less than the principle. If independent sources aren't talking about you, AI won't either.

| Mention Type | Target % | Examples | Why It Works |
|---|---|---|---|
| Independent third-party | 70% | Review sites, editorial coverage, community threads, comparison articles by independent authors | LLMs weight unaffiliated sources heavily. Hard to fake at scale. Signals organic adoption. |
| Self-published | 30% | Your blog, case studies, press releases, product documentation, LLM Sitemap | Sets your preferred framing. Provides the structured definitions you want cited. Fills gaps. |

    Flip the ratio and LLMs treat you like a PR campaign. 90% self-published signals no one else is talking about you.
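The ratio audit can be sketched as a toy script, assuming you tag each logged mention as independent or owned. The mention list here is invented for illustration:

```python
# Hypothetical mention log: (source, independent?)
mentions = [
    ("peerspot.com review", True),
    ("reddit.com thread", True),
    ("techradar.com roundup", True),
    ("analyst comparison post", True),
    ("partner integration page", True),
    ("yourdomain.com/blog", False),
    ("press release", False),
    ("yourdomain.com/case-study", False),
]

def independent_share(log):
    """Fraction of mentions that come from unaffiliated sources."""
    return sum(1 for _, indep in log if indep) / len(log)

share = independent_share(mentions)
print(f"Independent share: {share:.0%}")
if share < 0.7:
    print("Below the 70% benchmark: prioritize third-party placements.")
```

Remember the caveat above: 70% is an observed benchmark, not a hard threshold, so treat the check as directional.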

    Where Third-Party Mentions Come From

    • Analyst outreach. Tier 2-3 analysts write comparison content constantly. Their format is already structured. Briefing them on your AI-relevant differentiators pays off disproportionately.
    • Community seeding. Answer questions on Reddit and Quora. Not every answer mentions your product. Build helpfulness first. Mentions that come later carry more weight.
    • Partner co-creation. Integration partners who mention you are independent by definition and naturally relevant.
    • Journalist relationships. One article in a respected trade publication does more for AI citations than 20 blog posts. Pitch category explainers, not funding announcements.
    Warning

    Low-quality content farm placements won't help and may harm your SEO. ChatGPT weights source quality and independence, not raw mention count. Five credible mentions outperform 500 spam-site mentions.

    7. The Step-by-Step Playbook

    The exact sequence we use for new client onboarding. If Trust Hub presence already exists, compress Steps 1-3.

Phase 1: Foundation

    Audit Your AI Visibility

    Ask ChatGPT 15-20 buyer questions. Log who appears and how.

    2-4 hours

    Lock Category Positioning

    Write a 2-3 sentence Wikipedia-style product definition.

    Half day

    Claim Trust Hub Profiles

    PeerSpot first, then Reddit, G2, and Wikipedia.

    2-3 weeks
Phase 2: Infrastructure

    Build LLM Sitemap + llms.txt

    Create /llm-sitemap with all six components.

    3-5 days

    Restructure Key Content

    Convert narrative prose to structured bullets, tables, FAQ schema.

    1-2 weeks
Phase 3: Growth

    Seed Community Presence

    30 min/day in relevant subreddits and Quora topics.

    Ongoing (30 min/day)

    Execute Third-Party Placements

    One high-quality independent placement per week.

    Ongoing

    Expected Timeline

• Week 1-2: Foundation Set
• Week 3-6: Infrastructure Built
• Month 2-3: First Citations
• Month 4-6: Consistent Pattern
• Month 6-12: Model Memory

    From Our Work

    One cybersecurity client first appeared in ChatGPT at month 3, but only with browsing enabled. By month 9, users reported citations even in offline sessions. That's model memory. You cannot shortcut to it.

    8. How to Track Your AI Visibility

    AI citation tracking has no native dashboard. No rank tracker, no API, no Search Console equivalent. We use a structured prompt monitoring system run manually. For a complete diagnostic framework, see our 51-point AI visibility checklist.

    The Prompt Monitoring System

    Build a bank of 20-30 prompts your buyers actually use. Three types:

    • Category queries: What is the best [category] for [use case]?
    • Problem-framed queries: I'm a [persona] trying to solve [problem]. What tools should I consider?
    • Comparison queries: Compare [your product] vs [competitor A] vs [competitor B]

    Run the full bank weekly. Log: did you appear, what position, how were you described, which competitors showed up. The 90-day trend is more useful than any single week.

| Metric | What It Tells You | Cadence | Target (90-day) |
|---|---|---|---|
| Citation rate | % of relevant prompts where you appear | Weekly | 30-50% of category prompts |
| Citation position | Are you first, second, or buried? | Weekly | Top 3 in primary category |
| Sentiment | When you appear, how are you framed? Positive, neutral, or negative? Log the exact descriptor the model uses each time. | Weekly | Positive or neutral in >85% of mentions |
| Competitor share of voice | How often do competitors appear on the same prompts? Your 40% citation rate means nothing if the main competitor is at 75%. | Bi-weekly | Gap narrowing over 90 days |
| Description accuracy | Does ChatGPT describe you correctly? | Bi-weekly | Canonical definition used >70% |
| Competitor co-occurrence | Are you appearing alongside the right competitors? | Monthly | Named in relevant competitive set |
| Cross-platform parity | Appearance rate on Claude and Perplexity vs. ChatGPT | Monthly | Trending upward on all three |

    The No-Click Reality

ChatGPT citations rarely drive referral traffic. Users get the answer and move on. What they drive is pre-formed perception. A buyer who's seen your product across multiple prompts arrives at your site already convinced. They show up as direct or branded search. The "how did you hear about us" field tells you more than UTM data here.

    The One Revenue Signal That Matters

    Add "AI assistant (ChatGPT, Claude, etc.)" to your demo form's "How did you hear about us" field. When it appears even once a week, you're generating real pipeline. Track it from day one.

    Pro Tip

    ChatGPT appends utm_source=chatgpt.com to links it shares with users. Check your analytics. Perplexity uses utm_source=perplexity.ai. Set up GA4 segments for both from day one.

    9. What We Tried That Didn't Work

    GEO has a lot of recycled advice that sounds right but doesn't hold up under testing. These are the approaches that either moved nothing or actively hurt citation rates.

    • Guest posts alone. Six months of guest posting on mid-tier blogs with no Trust Hub build-out. Zero citation improvement. Without pre-existing authority, there was nothing to amplify.
    • Content-first without authority. The A/B test in Section 3 made this undeniable. Structured, optimized content does almost nothing before the authority foundation exists. We cover this pattern in depth in our AI visibility best practices guide.
    • Press releases for citation signals. Press release syndication shows up in training data as low-credibility content. No citation lift, even when picked up by 40+ outlets.
    • Mass low-DA link building. 500 links from DA 5-15 sites moved Google rankings weakly and AI citations not at all. LLMs don't treat link volume as an authority proxy the way Google does.
    • Updating content without updating dateModified schema. Freshness only works if the recency signal is exposed in structured data. Rewriting a page without touching the schema tag gets you the work without the credit.
    The Bottom Line

    Citations come from being credible in the sources LLMs trust, described consistently in a format they can extract, with enough independent confirmation to feel safe recommending you. That's a 90-day program. Companies that treat it like infrastructure, not a campaign, end up owning their category.


    Frequently Asked Questions

    How long until ChatGPT starts mentioning my product?

    From what we've seen across 50+ client campaigns:

    • 60-90 days: First citations appear on niche, specific prompts
    • 4-6 months: Consistent appearance on broad category queries
    • 6-12 months: Model memory, cited even when browsing is off

    Timeline compresses if you already have Trust Hub presence. It extends in highly competitive categories with entrenched incumbents.

    Important nuance: these timelines apply primarily to base model memory (updated only when a new model is trained). When ChatGPT uses live browsing to answer a prompt, your optimized content can appear as soon as those pages are indexed by search engines. RAG-based answers can surface much faster than 60 days.

    Does my SEO ranking affect ChatGPT citations?

    Weakly and indirectly. Higher rankings mean content gets crawled more, which helps at the margin. But we've seen DA 20 products outrank DA 80 competitors in ChatGPT because of stronger Trust Hub presence. SEO metrics and AI visibility aren't proxies for each other.

    Should I optimize for ChatGPT specifically, or all AI platforms?

    Optimize for the framework. Trust Hub presence, structured content, and multi-source consensus drive citations across ChatGPT, Claude, and Perplexity. One strategy, three platforms.

    If you want the fastest feedback loop, use Perplexity. Its real-time retrieval responds to new content faster than base-model ChatGPT.

    What if ChatGPT is describing my product inaccurately?

    Category mismatch or consistency problem. The fix is systematic:

    • Lock a canonical definition and deploy it everywhere simultaneously
    • Update G2, Capterra, and review profiles to match exactly
    • Publish your LLM Sitemap to give AI systems a single authoritative source
    • Contact authors of major third-party articles to correct inaccuracies

    Expect 60-90 days before corrected framing propagates.

    Can I pay to get mentioned in ChatGPT?

    Not through organic citations. There's no way to pay for ChatGPT to recommend your product in its answers. Paid tactics like buying fake reviews, link schemes, or content farm mentions don't work for AI citations.

    However, OpenAI has begun testing ads in ChatGPT (US only, as of February 2026). Key details:

    • Ads appear only for Free and Go plan users, not Plus, Pro, Business, Enterprise, or Edu
    • Ads are clearly labeled and do not influence ChatGPT's answers or recommendations
    • They're contextual display ads, not pay-to-be-cited placements

    So while paid visibility exists in a limited form, it's entirely separate from organic RAG citations. Ads are contextual display placements. They do not influence what ChatGPT recommends in its generated answers. The only way to get ChatGPT to recommend your product is through authority, consistency, and structured content.

Yuval Halevi

    Helping SaaS companies and developer tools get cited in AI answers since before it was called "GEO." 10+ years in B2B SEO, 50+ cybersecurity and SaaS tools clients.