Probably not as often as you'd like — and the reasons are usually fixable. The main barriers are technical (AI crawlers blocked in your robots.txt), structural (content that's vague or marketing-heavy rather than factual and specific), and distributional (your company only appears on your own site rather than being cited across third-party sources). Nobody can guarantee AI search visibility, but you can remove the obstacles that prevent it.
- How ChatGPT, Claude and Perplexity actually find information about companies
- What demonstrably influences AI visibility — and what's still unclear
- The robots.txt issue that's completely blocking many B2B companies from AI search
- Six practical actions B2B teams can take right now
- How to test whether your company currently shows up — a simple monthly method
Something shifted in B2B buying behaviour over the past twelve to eighteen months. A growing number of buyers are opening ChatGPT, Claude, or Perplexity and typing questions like "what's the best CRM for a mid-size manufacturing company?" or "which visitor identification tool integrates with HubSpot?" — and making vendor shortlists based on what comes back.
They're not Googling the same question. They're getting a synthesised answer that recommends specific tools, explains the tradeoffs, and often doesn't include a list of links to explore. If your company isn't in that answer, you weren't considered. The buyer moved on without ever finding your website.
According to research by HG Insights, nearly half of B2B buyers now use AI platforms for vendor research. Gartner has predicted traditional search volume will drop 25% by 2026 as AI tools replace Google for product research. Whether those exact numbers prove accurate or not, the directional shift is real and already visible in how B2B marketing teams are thinking about visibility.
The honest question for any B2B company is: do we actually show up when a buyer asks one of these tools about our category? And if not — what can we do about it?
After fifteen years in industrial B2B marketing and four of those running a HubSpot-centric stack, I've spent time researching this properly. Here's what the evidence actually supports — and where I'm going to be honest that nobody knows yet.
How ChatGPT, Claude and Perplexity Actually Find You
The mechanics matter here because they directly determine what you need to do.
Most AI tools with search capabilities use a technique called Retrieval-Augmented Generation — RAG for short. When someone asks a question, the AI doesn't just pull from its training data. It runs a live web search, retrieves relevant pages, synthesises the information, and generates an answer that cites those sources. This is how ChatGPT Search, Claude's web search mode, and Perplexity all fundamentally work.
The practical implication is significant: these tools are essentially running a search engine before giving you an answer. Which means pages that rank well on Google for a given query are generally more likely to be retrieved and cited by AI tools for the same query. Traditional SEO remains the foundation.
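The retrieve-then-generate loop is easier to grasp with a toy sketch. This is an illustration of the RAG pattern, not any vendor's actual pipeline: "retrieval" here is simple keyword overlap over two hypothetical pages, and "generation" just stitches the retrieved facts together with citations — real systems use a live web index and a language model for both steps.

```python
# Toy illustration of Retrieval-Augmented Generation (RAG).
# Real AI search uses a live web index and an LLM; here "retrieval"
# is keyword overlap and "generation" is string assembly.

PAGES = {
    "https://example.com/product": (
        "Example Co provides visitor identification software for B2B "
        "manufacturers and integrates with HubSpot CRM."
    ),
    "https://example.com/blog": (
        "Ten tips for trade show marketing in industrial B2B."
    ),
}

def retrieve(query: str, pages: dict, top_k: int = 1) -> list:
    """Rank pages by how many query words they contain."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), url)
        for url, text in pages.items()
    ]
    scored.sort(reverse=True)
    return [url for score, url in scored[:top_k] if score > 0]

def answer(query: str, pages: dict) -> str:
    """Synthesise an 'answer' that cites its retrieved sources."""
    sources = retrieve(query, pages)
    if not sources:
        return "No relevant sources found."
    facts = " ".join(pages[u] for u in sources)
    cites = ", ".join(sources)
    return f"{facts} (Sources: {cites})"

print(answer("which visitor identification software integrates with HubSpot", PAGES))
```

Notice what decides the outcome: the page whose text states specific, matching facts gets retrieved and cited; the vaguer page never enters the answer. That's the whole argument for factual clarity in miniature.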
Each major AI platform has its own web crawler that indexes content for this purpose. Claude uses Claude-SearchBot. ChatGPT uses OAI-SearchBot. Perplexity has its own crawler. These bots visit your site, index your content, and make it available for retrieval when relevant queries come in. If your robots.txt file blocks these crawlers — accidentally or deliberately — your content simply won't appear in AI search answers, regardless of how good it is.
💡 Blocking Claude-SearchBot in your robots.txt is the equivalent of blocking Googlebot for Google Search — your pages won't appear when Claude's search surfaces answers. According to Anthropic's own documentation, sites blocking Claude-SearchBot will see reduced "visibility and accuracy in user search results."
What We Know — and What We Don't
This space is less than two years old. There's a significant amount of confident-sounding advice circulating that is actually speculation. Being honest about the boundary between what's established and what's still unclear matters, especially before you spend marketing budget on it.
✓ What we know with confidence
- Traditional SEO ranking is still the foundation — AI search tools retrieve from the indexed web
- Blocking AI crawlers in robots.txt prevents your content from appearing in AI search answers
- Factual, clearly structured content gets extracted and cited more reliably than vague marketing copy
- Being mentioned across multiple third-party sources (G2, LinkedIn, industry publications) strengthens AI model confidence in citing you
- FAQ schema markup helps AI models extract specific answers cleanly
- Content that directly answers a question performs better than content that circles around it
- Traffic from AI referrals (chatgpt.com, claude.ai, perplexity.ai) converts at significantly higher rates than organic Google traffic
⚠ What's still genuinely unclear
- The exact weighting of different signals in AI citation decisions
- Whether optimising specifically for AI search produces measurable results distinct from good SEO
- How AI models handle brand mentions that appear only on low-authority sites
- Whether training data cutoffs affect real-time search retrieval (they shouldn't, but behaviour varies)
- How quickly AI models update their understanding of a company after content changes
- Whether there's a meaningful difference between optimising for ChatGPT vs Claude vs Perplexity
One finding worth noting from a real case study: a B2B SaaS company ranking #1 on Google for "best CRM for small businesses" wasn't being cited by either ChatGPT or Claude for the same query. The content ranked perfectly for traditional SEO but lacked the structural clarity AI models need to extract and cite specific information. Good Google ranking does not automatically translate to AI citation — but it's still the best foundation you have.
The robots.txt Problem Most B2B Sites Have
This is the most actionable and most overlooked issue. Many B2B company websites are inadvertently blocking AI crawlers through overly broad robots.txt rules that were written before AI search existed.
Check your robots.txt file right now by going to yourdomain.com/robots.txt. If you see a blanket Disallow: / for all user agents, or if you don't see explicit entries allowing the main AI crawlers, your content may not be indexed for AI search at all.
The crawlers you want to explicitly allow are Claude-SearchBot (Claude), OAI-SearchBot (ChatGPT), and PerplexityBot (Perplexity) — plus the training crawlers ClaudeBot and GPTBot, if you're comfortable with your content being used for model training.
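A minimal robots.txt that explicitly permits these crawlers might look like the following — a sketch to adapt, not a drop-in file; tighten the Allow paths to match your own site's rules:

```
# AI search crawlers — real-time answer retrieval
User-agent: Claude-SearchBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Training crawlers — allowing these is a separate decision (see below)
User-agent: ClaudeBot
Allow: /

User-agent: GPTBot
Allow: /
```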
Note the distinction: ClaudeBot is used for training data, Claude-SearchBot is used for real-time search indexing. You may legitimately want to block ClaudeBot if you don't want your content used for AI training. But blocking Claude-SearchBot means you won't appear in Claude's search answers. These are separate decisions.
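You can verify what your current rules actually permit with Python's built-in robots.txt parser — a quick local check, with a sample file pasted in for illustration; in practice you'd paste your own robots.txt content or fetch it from yourdomain.com/robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt for illustration: blanket-blocks everyone,
# then re-allows only Claude-SearchBot. Replace with your own file.
ROBOTS_TXT = """
User-agent: Claude-SearchBot
Allow: /

User-agent: *
Disallow: /
"""

AI_CRAWLERS = [
    "Claude-SearchBot",  # Claude real-time search
    "OAI-SearchBot",     # ChatGPT search
    "PerplexityBot",     # Perplexity
    "ClaudeBot",         # Anthropic training crawler
    "GPTBot",            # OpenAI training crawler
]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in AI_CRAWLERS:
    status = "allowed" if parser.can_fetch(bot, "https://example.com/") else "BLOCKED"
    print(f"{bot}: {status}")
```

With the sample rules above, only Claude-SearchBot comes back allowed — every other AI crawler falls through to the blanket Disallow. That's exactly the kind of silent blocking this check is meant to surface.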
Why Your Content May Not Get Cited Even When Found
AI models don't read content the way a human does. They parse structure, identify factual statements, extract relationships between concepts, and synthesise answers. Content that was written for emotional resonance or to build brand awareness often fails this test — not because it's bad writing, but because it doesn't give an AI model confident, extractable facts to work with.
The problem shows up most clearly on homepage and product pages. Consider the difference between these two descriptions of the same company:
"We help B2B manufacturing companies grow faster with smarter digital tools."
"We provide visitor identification software for B2B manufacturers that reveals which companies are viewing your website and integrates directly with HubSpot CRM."
The first is marketing copy. An AI model can't confidently cite it to answer "what tools identify anonymous B2B website visitors?" The second gives the model everything it needs: what the product does, who it's for, what problem it solves, and what it connects to.
This is the most common fixable gap on B2B company websites. Review your homepage, product pages, and about page. Replace vague positioning language with specific, factual statements that directly answer the questions your buyers would ask an AI tool.
Why Third-Party Mentions Matter More Than You Think
AI models are more likely to cite a company they've encountered across multiple credible sources than one that appears only on its own website. This is essentially how trust works for any information source — a claim made by one person is an assertion; a claim corroborated by multiple independent sources is more likely to be accurate.
For B2B companies, the practical sources that matter most are review platforms (G2, Capterra, Trustpilot), LinkedIn company presence and employee activity, industry publications and trade media, comparison and roundup articles on credible sites, and community mentions in relevant forums and discussions.
If your company has a detailed G2 profile with verified reviews, appears in industry comparison articles, and is mentioned regularly in LinkedIn content by your team — you're building the multi-source citation pattern that makes AI models more confident about including you in answers.
If your company exists only on your own website and nowhere else, even well-structured content may not generate citations — because AI models have no corroboration for who you are and what you do.
"One client discovered their company was appearing in ChatGPT with incorrect information about how they compare to competitors — because the AI had no authoritative source to pull from and was synthesising from indirect mentions. The fix was creating dedicated product comparison pages that gave the model accurate information to cite directly."
Six Things B2B Teams Can Actually Do Right Now
Check and fix your robots.txt
Go to yourdomain.com/robots.txt and verify that ClaudeBot, Claude-SearchBot, GPTBot, OAI-SearchBot, and PerplexityBot are allowed. This is the single highest-leverage technical fix — if you're blocking these crawlers, nothing else matters until you fix it.
Rewrite your homepage and product pages for factual clarity
Replace marketing language with specific, factual statements. What does your product do? Who is it for? What problem does it solve? What does it connect to or replace? Each answer should be a clear sentence, not a tagline. Aim for language that could be extracted and cited verbatim without context.
Add FAQ sections to your key pages — with schema markup
FAQ schema is one of the clearest signals you can give AI models about what questions your content answers. Write FAQ sections that use the exact language your buyers would use in an AI prompt. "What does X integrate with?" "How much does X cost?" "How is X different from Y?" Then implement FAQPage schema in JSON-LD so the question-answer structure is machine-readable.
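A minimal FAQPage block in JSON-LD looks like this — it goes inside a script tag of type application/ld+json on the page. The product name, questions, and answers here are placeholders; swap in your own exact buyer language:

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What does ExampleTool integrate with?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "ExampleTool integrates directly with HubSpot CRM."
      }
    },
    {
      "@type": "Question",
      "name": "Who is ExampleTool for?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "ExampleTool is built for B2B manufacturers that want to identify which companies visit their website."
      }
    }
  ]
}
</script>
```

Note that the answer text itself is a complete, citable sentence — the schema tells machines where the answer is, but the answer still has to be extractable on its own.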
Build your presence on third-party platforms
Claim and complete your G2, Capterra, or relevant industry review platform profile. Encourage verified customer reviews. Make sure your LinkedIn company page is complete and active. Get your company mentioned in industry roundup articles. Each credible third-party mention is a corroboration signal for AI models — they're more confident citing companies that appear across multiple sources.
Create content that directly answers category-level questions
Think about what your buyers would ask an AI tool about your industry, your product category, and the problem you solve. Then write articles that answer those exact questions clearly and factually. "What are the best visitor identification tools for B2B?" "How does HubSpot integrate with sales intelligence tools?" Content that directly addresses these questions is more likely to be retrieved and cited for those queries.
Test monthly and track what you find
Create a list of 10-15 questions your target buyers might ask an AI tool — about your category, your specific use case, and your company by name. Run them through ChatGPT, Claude, and Perplexity once a month. Note whether you appear, how you're described, and whether the description is accurate. Also check your Google Analytics referral traffic for sessions from chatgpt.com, claude.ai, and perplexity.ai. Volume will be modest now but the trend is the signal.
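The monthly check can live in a spreadsheet, but if you want consistent records over time, a small script helps. This is a sketch that appends manually observed results to a CSV — the question list, file name, and field names are all illustrative, not a standard:

```python
import csv
from datetime import date
from pathlib import Path

# Illustrative questions — replace with your own 10-15 buyer questions.
QUESTIONS = [
    "What are the best visitor identification tools for B2B?",
    "Which visitor identification tool integrates with HubSpot?",
]

LOG_FILE = Path("ai_visibility_log.csv")
FIELDS = ["date", "platform", "question", "mentioned", "description_accurate", "notes"]

def log_result(platform: str, question: str, mentioned: bool,
               description_accurate: bool, notes: str = "") -> None:
    """Append one manually observed result to the monthly log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "platform": platform,
            "question": question,
            "mentioned": mentioned,
            "description_accurate": description_accurate,
            "notes": notes,
        })

# Example: record one observation after running a question through Claude.
log_result("Claude", QUESTIONS[1], mentioned=True,
           description_accurate=False, notes="Integration described incorrectly")
```

The point isn't the tooling — it's that a dated log lets you see whether the "mentioned" and "description_accurate" columns trend upward after you make the fixes above.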
What to Be Cautious About
⚠️ A growing number of agencies and tools are now selling "AI search optimisation" services. Some of what they offer is legitimate — the technical and content work described above. But be sceptical of any service that guarantees citation rates, promises specific placement in ChatGPT or Claude answers, or charges a premium for practices that are essentially good SEO with different terminology. The field is too new for anyone to have proven proprietary methods. The fundamentals — structured content, third-party citations, technical accessibility — are not secret knowledge.
The Honest Bottom Line
AI search visibility for B2B companies is real, growing in importance, and partially within your control. The steps that make the biggest difference — fixing your robots.txt, making your content factually clear, building third-party presence — are not exotic or expensive. Most of them overlap significantly with good SEO practice you should be doing anyway.
What you can't do is guarantee results. AI models don't publish citation criteria. The field is young. What works today may behave differently in six months as these platforms evolve. Anyone telling you otherwise is selling certainty they don't have.
The right approach for most B2B companies right now is to remove the obvious barriers, build the right content foundations, and monitor what happens — without betting your entire marketing budget on it before the evidence is more mature.
Check robots.txt and allow AI crawlers. Audit your homepage and product pages for factual clarity. Add FAQ schema to your most important pages. Claim and complete your G2 or Capterra profile if you haven't. Set up a simple monthly test — 10 queries across ChatGPT, Claude, and Perplexity — to track where you stand and measure progress. That's a full month of meaningful work before you need to think about anything more sophisticated.