The 5-Level AI Visibility Audit Every Company Should Run

By Rodrigo Murguia

A founder of a customer service startup asks ChatGPT which CRM works best for early-stage companies. The answer names three competitors, explains their strengths, and recommends one. Their company, which has served that exact market for four years, doesn’t appear at all. The founder then runs the same query on Google and gets an AI Overview. This time, their company does show up, but with a pricing model they changed eighteen months ago.

This is what happens when AI doesn’t know your brand. And an AI visibility audit is the only way to find out if it’s happening to you.


Why You Need an AI Visibility Audit

The way people find products is changing. Instead of searching Google, clicking through results, and comparing options, buyers are increasingly asking AI directly. They type a question, get a recommendation, and move on. If AI doesn’t mention you, or mentions you incorrectly, you never had a chance.

Why the shift? Because AI gives answers, not links. It synthesizes information and delivers a recommendation tailored to the question. For buyers, it feels less like research and more like asking a knowledgeable friend.

That convenience is why adoption is growing so fast. 50% of consumers now intentionally seek out AI-powered search, and 44% say it’s their primary source for buying decisions, ahead of traditional search at 31%. Meanwhile, Gartner predicts traditional search engine volume will drop 25% by the end of 2026 due to AI chatbots and virtual agents.

And it’s not just standalone AI tools. Search engines themselves are changing. Google now surfaces AI Overviews at the top of results for most queries, giving users direct answers before they ever see a link. So even if you’re ranking on page one, buyers might never scroll past the AI-generated summary.

Yet many founders and marketing teams continue to assume that if they rank well on Google, they’re covered. But AI recommendations work differently. AI models form opinions based on what they’ve learned from across the web, and that learning doesn’t always match your current positioning, pricing, or product.

This means the only way to know where you stand is to check. The audit below walks you through exactly how.

The Five Audit Categories

A complete AI visibility audit covers five categories, each building on the last. Think of them as levels: you can’t reach Level 5 if you’re failing at Level 1.

For each category, you’ll ask a set of questions across multiple AI models and compare the responses to what’s actually true about your brand. The goal is to identify where AI gets it right, where it gets it wrong, and where it doesn’t know you at all.

1. Existence

The most basic question: Does AI know your company exists?

Ask the following across different AI models:

  • “What is [your company name]?”
  • “Tell me about [your company name].”
  • “Have you heard of [your company name]?”

If AI returns accurate information about who you are and what you do, you pass. If it confuses you with another company, returns outdated information, or says it doesn’t know, you’re at Level 0.
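If you want to make these checks repeatable rather than retyping them each quarter, the prompts above are easy to script. Here's a minimal sketch; the `ask(model, prompt)` function is a placeholder for whatever model-calling code you use, not any specific vendor's API:

```python
# Sketch: generate Level 1 (Existence) audit prompts and collect responses.
# `ask` is a placeholder for your own model-calling function.

EXISTENCE_TEMPLATES = [
    "What is {company}?",
    "Tell me about {company}.",
    "Have you heard of {company}?",
]

def existence_prompts(company: str) -> list[str]:
    """Fill the Level 1 templates for a given company name."""
    return [t.format(company=company) for t in EXISTENCE_TEMPLATES]

def run_existence_audit(company, models, ask):
    """Ask every prompt of every model; returns {model: {prompt: response}}."""
    return {
        model: {p: ask(model, p) for p in existence_prompts(company)}
        for model in models
    }
```

With a real `ask` implementation plugged in, you'd still judge each response against reality by hand; deciding whether AI got you right is the part you can't automate away.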

2. Product Awareness

Let’s say AI recognizes your brand; that’s a good first step. But does it know what you actually sell?

Ask:

  • “What products does [your company] offer?”
  • “What does [your company] do?”
  • “What services does [your company] provide?”

Check whether the response matches your current offerings. AI might describe products that no longer exist, miss your core offering entirely, or conflate your product with a competitor’s. If the response is accurate, you pass.

3. Accuracy

Does AI understand (and describe) your offerings correctly?

This goes deeper than awareness. You’re checking whether AI understands your positioning, pricing model, target audience, and key differentiators.

Ask:

  • “How much does [your product] cost?”
  • “Who is [your product] designed for?”
  • “What makes [your company] different from [competitor]?”
  • “What are the main features of [your product]?”

Compare the responses to your actual positioning. If AI gets the details right, you pass. If it describes you as enterprise-only when you serve startups, or lists features you deprecated a year ago, you have an accuracy problem.

4. Competitive Inclusion

This is where visibility starts to translate into pipeline. Does AI include you when buyers ask for recommendations in your category?

To assess this, ask category-level questions:

  • “What are the best [your category] tools?”
  • “What [your category] do you recommend for [your target audience]?”
  • “Compare the top [your category] options.”

If you appear in these lists alongside your competitors, you pass. If AI recommends your category but leaves you out, you have a competitive inclusion problem.

5. Recommendation

Does AI recommend you ahead of competitors?

This is the highest level. You’re not just included; you’re the answer.

Ask:

  • “What’s the best [your category] for [your target use case]?”
  • “If I need [problem you solve], what should I use?”
  • “Which [your category] would you recommend?”

If AI names you as the top choice, or positions you favorably against competitors for your target use cases, you’ve reached Level 5. This is where AI visibility turns into serious revenue.

How to Score Your AI Visibility Audit

Level 0-1: You’re invisible. AI either doesn’t know you exist or has you confused with something else. This is the most common starting point for growth-stage companies. The fix requires foundational work: getting your brand mentioned on sources AI trusts.

Level 2: You’re known, but not understood. AI has heard of you, but the details are wrong or incomplete. This usually means your first-party content is reaching AI, but third-party validation is missing or outdated. The fix involves both onsite optimization and offsite citation building.

Level 3: You’re accurately represented. AI gets your brand right when asked directly. This is a solid foundation, but it doesn’t mean buyers are finding you. The next step is improving your competitive positioning so AI includes you in category recommendations.

Level 4: You’re in the conversation. When buyers ask for recommendations, you appear. This is where AI visibility starts generating pipeline. The focus now shifts to moving from “one of the options” to “the recommendation.”

Level 5: You’re the answer. AI recommends you by name for your target use cases. This is the goal. Maintaining it requires ongoing monitoring and continued investment in both onsite and offsite presence.
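Because each level builds on the last, your score is simply the highest consecutive level you pass, starting from existence. That rule can be written down in a few lines; the level names below come straight from the five categories, and the pass/fail inputs are whatever you judged manually:

```python
# Sketch: turn per-category pass/fail judgments into a single audit level.
# You can't reach Level 5 while failing Level 1, so we stop at the first failure.

LEVELS = ["existence", "product_awareness", "accuracy",
          "competitive_inclusion", "recommendation"]

def audit_level(results: dict[str, bool]) -> int:
    """Return the highest consecutive level passed (0 if existence fails)."""
    level = 0
    for name in LEVELS:
        if not results.get(name, False):
            break
        level += 1
    return level
```

Note that a pass at a higher level doesn't count if a lower one fails: passing recommendation queries while AI still describes a deprecated product leaves you at the accuracy gap, not Level 5.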

What to Do After Your Audit: The Next Steps

Once you have your baseline, the work begins. Improving your AI visibility score requires effort on three fronts:

Onsite Optimization

Make sure your own content is structured for AI consumption. This means clear product descriptions, explicit audience statements, current pricing and feature information, and structured data that helps AI parse your pages correctly.

If your website says one thing and AI says another, start by fixing your website. AI models look for consistency across your pages, so your homepage, pricing page, and product docs should all tell the same story. Outdated or conflicting information on your own site makes it harder for AI to represent you accurately.

Offsite Citations

AI trusts third-party sources more than first-party claims. Getting accurate information about your brand published on authoritative sites is how you shift AI’s perception. This includes industry publications, review platforms, comparison sites, and analyst coverage.

The brands winning in AI visibility are the ones investing in their offsite presence. A single mention on a trusted outlet can carry more weight than dozens of posts on your own blog. This is also where most companies underinvest, which creates an opportunity if you move early.

Ongoing Monitoring

AI’s understanding of your brand isn’t static. Models update, new content gets indexed, and competitors make moves. While an initial audit gives you a snapshot of where you’re at, ongoing monitoring shows you whether you’re improving, stagnating, or losing ground.

What’s accurate today might be outdated in three months. Regular check-ins let you catch problems before they cost you deals, and they show you whether your efforts are actually working.
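Even for manual check-ins, a lightweight log of dated responses makes drift visible. A sketch of the comparison step, assuming you save each audit's responses as a plain prompt-to-answer mapping:

```python
# Sketch: compare two audit snapshots and report which responses changed.
# Each snapshot maps prompt -> response text, captured at a different date.

def response_drift(old: dict[str, str], new: dict[str, str]) -> dict[str, str]:
    """Return prompts whose responses changed, plus any newly added prompts."""
    drift = {}
    for prompt, answer in new.items():
        if old.get(prompt) != answer:
            drift[prompt] = answer
    return drift
```

Run it after each quarterly audit: an empty result means AI's answers about you are stable, while a populated one tells you exactly which questions to re-verify.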

How Lectern Turns Your Audit into Action

Running a manual audit tells you where you stand. But turning that snapshot into a strategy is harder. You might know the problem, but not how to improve, or you have an idea of what needs to happen, but don’t have the time or relationships to make it happen. Plus, running queries manually across multiple models, documenting responses, and tracking changes over time is tedious work that most teams can’t sustain.

That’s where Lectern comes in.

Lectern is a content intelligence agent that continuously tracks how AI represents your brand across models like ChatGPT, Claude, Gemini, Meta AI, Perplexity, and Grok. It benchmarks you against competitors, searches for the queries that matter for your business, and shows you exactly where the gaps are. Instead of a single snapshot, you get ongoing visibility into how AI’s perception of your brand changes over time.

But while traditional tools in this space hand you a report and leave the execution to you, Lectern handles the work that actually moves the needle: onsite optimization to make your content easier for AI to parse and cite, and offsite publishing across 1,500+ credible outlets to build the third-party validation that AI models trust. That includes publications like Tech Times, GeekWire, and USA Today, the kinds of sources that shape how AI forms recommendations.

Other tools show you the gap, while Lectern closes it.

Frequently Asked Questions

How often should I run an AI visibility audit?

For a manual audit, once a quarter is a reasonable cadence. AI models update regularly, and your competitors are making moves too. If you’ve recently launched a new product, rebranded, or made significant changes to your positioning, run an audit immediately to see how AI is reflecting those changes. Lectern automates this process with continuous monitoring, so you don’t have to remember to check manually.

Do I need to test every AI model?

Focus on the models your customers actually use. For most B2B companies, ChatGPT, Google AI Overviews, and Claude cover the majority of use cases. If your audience skews toward research-heavy buyers, add Perplexity. If they’re active on X, Instagram, or Facebook, check Grok and Meta AI. You don’t need perfect coverage across every model, but you do need to know where your buyers are getting their information.

What if I score at Level 0 or 1?

That’s more common than you might think, especially for growth-stage companies. It means you have foundational work to do: getting your brand mentioned accurately on sources AI trusts. Start with onsite optimization to make sure your own content is clear and structured, then focus on building offsite citations through media coverage, review sites, and industry publications. Lectern helps with both, including access to 1,500+ credible outlets for offsite placements.

Will improving my Google rankings help my AI visibility?

Not directly. SEO and AEO overlap in some areas, but AI models don’t pull from Google’s index the way a search engine does. They form opinions based on patterns across their training data and the sources they consider authoritative. You can rank first on Google for a keyword and still be invisible to AI. Both matter, but they require different strategies.

Why do different AI models give different answers about my brand?

Each model has different training data, different knowledge cutoffs, and different ways of weighing sources. ChatGPT might know your brand well, while Claude has outdated information, or vice versa. This is why testing across multiple models matters. Your AI visibility strategy needs to account for these differences rather than assuming consistency. Lectern tracks your visibility across all these models, so you can spot these gaps without running manual checks on each one.

Start Your AI Visibility Audit Now

Your competitors are likely already tracking their AI visibility. Every day you don’t, they’re building the citation footprint that gets them recommended instead of you, and the gap widens.

Lectern runs your AI visibility audit automatically, tracks how models represent you over time, and handles the execution that improves your score: onsite optimization and offsite publishing across 1,500+ credible outlets.

Sign up for Lectern today to finally know where you stand.

Written by

Rodrigo Murguia

Content Writer

Rodrigo is a content writer based in Buenos Aires, Argentina. With bylines in Village Voice and LA Weekly, he helps brands and professionals tell their story, driven by a passion for amplifying fresh perspectives and giving voice to new ideas.