AI Search Share-of-Voice: Measuring Brand Visibility Across Engines
Share-of-voice began as an SEM and SEO concept. AI search makes it harder to measure and more important to track. Here is how to compute it, what to compare against, and how often to expect it to change.
Why share-of-voice matters more for AEO than SEO
In SEO, share-of-voice (SOV) is your share of total ranking opportunity in your category - the percentage of relevant queries where your brand appears in top results. It is useful but somewhat redundant; ranking position numbers tell most of the story.
In AEO, the dynamics are different. AI engines often cite 0-3 sources in an answer. The competitive set in any given answer is small. Your share of those cited slots, across thousands of queries, is the most direct measure of competitive standing.
A brand can be cited frequently in absolute terms while losing share to a competitor that is cited even more often. Absolute citation count misses this. SOV catches it.
Defining the metric
There is no industry-standard formula for AI SOV. We use a simple, defensible definition:
Brand SOV = (Number of tracked-prompt responses where your brand is cited) / (Number of tracked-prompt responses where any brand in your competitive set is cited)
The denominator excludes responses where the engine cites no brands at all (educational content with no commercial recommendations). It includes all responses where at least one brand in your defined competitive set appears.
This metric is meaningful because it measures the contested space. Responses with no commercial recommendations are not contests anyone wins; responses where competitors appear are.
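To make the definition concrete, here is a minimal Python sketch of the computation. The record shape and field names (engine, cited_brands, and so on) are illustrative assumptions, not a Citevera API.

```python
from typing import Iterable

def brand_sov(responses: Iterable[dict], brand: str, competitive_set: set[str]) -> float | None:
    """Compute SOV per the definition above.

    Each response record is assumed (for illustration) to look like:
        {"prompt": "...", "engine": "perplexity", "cited_brands": ["Acme", "Rival"]}
    """
    contested = 0   # responses citing at least one brand in the competitive set
    brand_hits = 0  # contested responses that cite our brand

    for r in responses:
        cited = set(r["cited_brands"]) & competitive_set
        if not cited:
            continue  # no brands from the set cited: excluded from the denominator
        contested += 1
        if brand in cited:
            brand_hits += 1

    return brand_hits / contested if contested else None
```

Note that a response citing only brands outside the set drops out of the denominator entirely, which is one more reason the competitive set, covered next, deserves real care.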
Choosing the competitive set
The metric is only as good as the competitive set. Three rules.
Include real competitors only. Brands that buyers genuinely consider as alternatives. Not the broadest category list - that produces noisy SOV numbers.
5-12 competitors is the right size. Smaller sets miss the field; larger sets dilute the signal. For most B2B SaaS in a defined category, 6-8 competitors covers the meaningful set.
Update annually. Markets shift. New entrants emerge; older ones decline. Review your competitive set annually and adjust. Treat it as a strategic decision, not a tracking detail.
We sometimes recommend two competitive sets: a "primary" set of 5-7 head-to-head competitors and a "broader" set of 12-15 including adjacent categories. SOV against each set tells different stories.
Per-engine vs. blended SOV
Citation rates differ markedly across engines (covered in our engine-comparison article). Blended SOV averages this away; per-engine SOV reveals it.
For most teams, per-engine SOV is the more actionable metric. A brand at 15% blended SOV might be at 30% on Perplexity, 20% on Claude, 5% on ChatGPT, and 8% on Gemini. The diagnosis is "we are weak on ChatGPT and Gemini" - far more actionable than "we are at 15%."
Per-engine SOV also tracks engine-specific shifts. A model update on Claude can move Claude SOV measurably while leaving other engines stable. Blended numbers smear these shifts.
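Computationally, per-engine SOV is the same formula applied within each engine's responses. A sketch, reusing brand_sov and the hypothetical record shape from the earlier example:

```python
from collections import defaultdict

def per_engine_sov(responses, brand, competitive_set):
    """Split responses by engine, then apply the same SOV formula per group."""
    by_engine = defaultdict(list)
    for r in responses:
        by_engine[r["engine"]].append(r)
    return {
        engine: brand_sov(engine_responses, brand, competitive_set)
        for engine, engine_responses in by_engine.items()
    }
```

Blended SOV falls out of the same data by skipping the grouping step, which is exactly why it can hide large per-engine differences.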
What good SOV looks like
A few benchmarks from our customer data.
Category leader. Typically 25-40% blended SOV in their primary category. Above 40% indicates either a dominant market position or a very small competitive set.
Strong challenger. 10-20% blended SOV. Recognized in the category but not the default answer. This is the most common segment.
Established niche player. 5-12% blended SOV in the primary category, often 20-30% in a narrower sub-category. Not a category leader but a clear answer for specific use cases.
Emerging brand. Below 5% blended SOV. Either new to the category or under-invested in AEO. Most new SaaS brands start here.
These ranges shift by category. Highly competitive categories (CRM, marketing automation) compress everyone toward lower individual SOV because there are more brands; concentrated categories produce higher SOV for the leader.
How SOV moves over time
SOV changes slowly under stable conditions. Quarter-over-quarter SOV shifts of 1-3 percentage points are typical. Larger shifts (5+ points in a quarter) usually indicate a structural change: a major content investment, a major product launch, a competitor falling off, or an engine model update.
Weekly SOV readings are too noisy to act on directly. Track weekly to detect anomalies; review quarterly for strategic decisions. The strategic question is "what was the trend over the last 90 days" - a question a weekly view answers when smoothed but not when read raw.
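One way to get that smoothed 90-day read from a weekly series is a trailing rolling mean. A sketch with pandas, using synthetic numbers and a 13-week window (roughly one quarter) as illustrative assumptions:

```python
import pandas as pd

# Synthetic weekly SOV readings, indexed by week-ending date (illustration only).
weekly_sov = pd.Series(
    [0.14, 0.16, 0.13, 0.15, 0.17, 0.15, 0.16, 0.18, 0.17, 0.19, 0.18, 0.20, 0.19],
    index=pd.date_range("2025-01-05", periods=13, freq="W"),
)

# A 13-week trailing mean approximates a rolling quarter: sustained shifts
# show up in the trend while single-week spikes are damped out.
trend = weekly_sov.rolling(window=13, min_periods=4).mean()
print(trend.dropna().round(3))
```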
What moves SOV
Five investments consistently move SOV in our customer data.
Comparison page coverage. Adding vs-pages targeting underserved competitor pairs lifts SOV on those competitor names. Direct, measurable.
Original research publication. A single high-quality industry survey can lift SOV by 2-5 points sustained for 12+ months. The data citations carry across many prompts.
Topical cluster expansion. Building out sub-topic coverage where competitors are weak. Slower to show in SOV but more durable.
PR placements on authoritative sources. Coverage in industry publications creates third-party citations engines respect. Moves SOV on the topics covered.
Product launches. Major new features or product launches generate fresh content and review activity, which engines pick up. Less reliable than the others but real when it happens.
What does not move SOV: content volume without quality, social-only campaigns, paid acquisition, or thin-content pages. Engines optimize for substance, not noise.
How Citevera scores this
Citevera Monitoring computes blended and per-engine SOV against a customer-defined competitive set. The dashboard shows SOV trend lines per engine, weekly, with quarterly summary cards.
The audit complements this by identifying where structural gaps are likely depressing your SOV - missing comparison pages, schema gaps, freshness issues - and prioritizing the fixes by expected SOV impact.
Together they answer two questions: where am I now (monitoring) and what should I do next (audit). The combination drives the most efficient AEO investment.
Track AI share-of-voice across engines with Citevera Monitoring
Frequently asked questions
How does AI SOV relate to traditional SEO SOV?
They measure overlapping but not identical things. SEO SOV measures ranking presence; AI SOV measures citation share. The two correlate (sites that rank well tend to be cited often) but the correlation is far from 1.0. Track both for complete competitive visibility.
What is "good" SOV for a new brand?
Anything above zero in your first six months is real progress. Focus on trajectory rather than absolute level - a steady climb from 1% to 5% over 12 months is more meaningful than a flat 5% over the same period.
How many tracked prompts do I need to compute reliable SOV?
Minimum 30 prompts per category per engine. Below that, single-prompt outcomes have outsized influence. Above 100 prompts, the metric stabilizes and weekly tracking becomes meaningful.
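A rough binomial approximation (an assumption, since prompt outcomes are not fully independent) shows why: sampling noise shrinks with the square root of the prompt count.

```python
import math

def sov_standard_error(p: float, n: int) -> float:
    """One standard error of a proportion p measured over n contested responses."""
    return math.sqrt(p * (1 - p) / n)

print(round(sov_standard_error(0.15, 30), 3))   # ~0.065 -> about +/- 6.5 points
print(round(sov_standard_error(0.15, 100), 3))  # ~0.036 -> about +/- 3.6 points
```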
Should I track SOV by buyer journey stage?
Yes for B2B. Separate prompts for awareness ("what is X"), consideration ("compare X and Y"), and decision ("how to migrate from X to Y"). SOV by stage reveals where in the journey you are weak. Often awareness SOV is high while decision SOV is low - meaning buyers find you early but pick competitors at the end.
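If each tracked prompt carries a stage tag (a hypothetical field, not a standard one), stage-level SOV is one more grouping over the same formula, reusing brand_sov from the earlier sketch:

```python
def sov_by_stage(responses, brand, competitive_set):
    """Apply the SOV formula within each buyer-journey stage."""
    stages = {r.get("stage") for r in responses if r.get("stage")}
    return {
        stage: brand_sov(
            [r for r in responses if r.get("stage") == stage],
            brand,
            competitive_set,
        )
        for stage in stages
    }
```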
What if my SOV drops suddenly?
First check whether the drop is engine-specific (model update) or universal (content or technical issue). Engine-specific drops resolve themselves over weeks if the engine is recalibrating. Universal drops indicate a real issue - check for crawl blockers, schema breakage, or content removals.
