Monitoring AI brand mentions without a paid tool
A manual and semi-automated workflow to track whether your brand gets cited across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Zero dollars, 30 minutes per week, and an honest accounting of what you give up versus a paid monitoring platform.
Why you should measure at all
If you are doing AEO work, you need a feedback loop. Shipping schema and rewriting ledes without measuring is the same as classic SEO work without rank tracking - you will never know if anything changed. The good news: for a typical SaaS or content brand, a usable measurement program takes about 30 minutes per week and costs zero dollars.
The tradeoff against a paid platform (and there are several good ones) is that the free workflow is sampled, not continuous. You catch the trend; you miss the moments. For a team of one or two, the sampled workflow is plenty. For a marketing team that needs alert-level precision, budget for a tool.
This post is the free workflow: what to track, how to track it, and what to do with the data.
Step 1: build a tracked-query list
The foundation is a fixed list of 10 to 30 queries that your brand should be cited for if AEO is working. Three categories:
- Brand-direct: "what is Citevera", "Citevera pricing", "Citevera alternatives" (keep the alternatives query generic rather than naming specific competitors). These catch mentions when users search specifically for you.
- Category queries: "best AI search optimization tool", "how to audit AEO readiness", "llms.txt generator". These catch mentions when users search the problem space.
- Problem queries: "why is my site not cited by ChatGPT", "how to get quoted by Perplexity", "AI citation monitoring". These catch mentions when users are in the pain-point space your product solves.
Keep the list under 30. More than that and the manual check takes too long and you skip weeks, which is worse than a short reliable list. Fewer than 10 and you miss the topical coverage needed to detect trends.
Store the list in a spreadsheet with columns for: query, category, first date tracked, last date tracked. You will add one more column later.
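If you would rather bootstrap the sheet from a script than by hand, here is a minimal Python sketch - the file name and starter queries are placeholders, and the column layout matches the list above:

```python
import csv
from datetime import date

# Placeholder starter list - swap in your own tracked queries.
QUERIES = [
    ("what is Citevera", "brand-direct"),
    ("best AI search optimization tool", "category"),
    ("why is my site not cited by ChatGPT", "problem"),
]

with open("tracked_queries.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # The four columns from Step 1; the extra column comes later.
    writer.writerow(["query", "category", "first_date_tracked", "last_date_tracked"])
    today = date.today().isoformat()
    for query, category in QUERIES:
        writer.writerow([query, category, today, today])
```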
Step 2: run the weekly manual check
Once a week, in a single sitting, run each query across the major engines. The engines that matter most in 2026, in rough order of reach for most brands:
- Google AI Overviews - the largest volume, triggered on many informational queries.
- ChatGPT - use the web version logged out if you can, to reduce personalization.
- Perplexity - run both the default model and the Pro model if you have access.
- Gemini - the standalone app, tracked separately from Google AI Overviews.
For each (query, engine) pair, note:
- Was your brand mentioned in the answer text?
- Was your brand cited in the sources list (if the engine shows one)?
- What other brands were cited?
- What did the answer say about your topic?
The record is cheap to keep. A simple four-column note works: query | engine | mentioned (y/n) | cited (y/n). The "what other brands were cited" and "what did the answer say" items are free-form notes, not structured data.
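If you keep the log as a flat file instead of a spreadsheet, here is a minimal sketch of the same record - the file name and helper are illustrative, not a prescribed format:

```python
import csv
from datetime import date

LOG_PATH = "mention_log.csv"  # hypothetical file name

def log_check(query: str, engine: str, mentioned: bool, cited: bool,
              others: str = "", notes: str = "") -> None:
    """Append one (query, engine) observation to the weekly log."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            date.today().isoformat(), query, engine,
            "y" if mentioned else "n", "y" if cited else "n",
            others, notes,  # free-form: other brands cited, what the answer said
        ])

# One observation from a weekly sitting:
log_check("best AI search optimization tool", "perplexity",
          mentioned=True, cited=False, others="BrandA, BrandB")
```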
This takes 20 to 30 minutes if your list is 15 queries across 4 engines. Block a weekly calendar slot and be rigorous about showing up. The value compounds with consistency.
Step 3: the personalization problem
Answer engines personalize. Your search history, location, and account affect what you see. This is the main reason the manual workflow is sampled, not truth - you are not measuring "what the average user sees" but "what a user who is you sees".
Reduce personalization with three techniques:
- Use private or incognito browsing. Cookies are the biggest personalization vector.
- Log out of consumer accounts. When checking ChatGPT, sign out or use a secondary account that has no history in your vertical.
- Use a VPN occasionally. Not every week - that is overkill - but once a month check from a different region to see whether your presence is geographically skewed.
Over the course of 8 to 12 weeks of consistent measurement, the noise averages out. Single-week data is noisy; 4-week rolling averages are usable; quarter-level trends are robust.
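To see the smoothing concretely, here is a short pandas sketch over the Step 2 log - it assumes the column layout from the earlier sketch and one sitting per week:

```python
import pandas as pd

# Column layout assumed from the Step 2 sketch (no header row).
cols = ["date", "query", "engine", "mentioned", "cited", "others", "notes"]
df = pd.read_csv("mention_log.csv", names=cols, parse_dates=["date"])
df["cited"] = df["cited"] == "y"

# Citation rate per weekly sitting, then a 4-week rolling mean to damp the noise.
weekly = df.groupby(pd.Grouper(key="date", freq="W"))["cited"].mean()
rolling = weekly.rolling(window=4, min_periods=2).mean()
print(rolling.tail(13))  # roughly a quarter of trend
```

Single weeks will jump around; the rolling column is the one worth reading.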
Step 4: interpretation
What does the data tell you?
Your brand is mentioned in none of the category queries, on any engine. This is the starting state for most brands before AEO work. Focus on structural fixes - schema, llms.txt, direct-answer ledes on your pillar pages. Expect to see the first mentions within 6 to 10 weeks.
Your brand is mentioned in the brand-direct queries but not the category queries. The engines know you exist but do not pick you as the category answer. The fix is topical authority - publishing pillar content on the category, adding internal links, building external citations. This is a 3 to 6 month project.
Your brand is mentioned in the brand-direct queries and occasionally the category queries, but the answer is wrong. The engines have old or bad data about you. Audit your on-site content for accuracy, update the stale pages, regenerate your llms-full.txt. Engines re-crawl and the picture usually corrects within 4 to 8 weeks.
Your brand is mentioned reliably across both categories. Congratulations, AEO is working. The next frontier is citation quality - the tone of the mention, what claims the engine attaches to you - and that is where paid tools start to earn their keep.
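The four scenarios reduce to a small decision table. A sketch, with the thresholds left deliberately crude - the inputs are the mention rates you compute from your own log:

```python
def diagnose(brand_rate: float, category_rate: float, accurate: bool) -> str:
    """Map observed mention rates (0.0 to 1.0) to the scenarios above."""
    if brand_rate == 0 and category_rate == 0:
        return "starting state: ship structural fixes (schema, llms.txt, ledes)"
    if brand_rate > 0 and category_rate == 0:
        return "known but not chosen: build topical authority (3-6 months)"
    if not accurate:
        return "stale picture: update content, regenerate llms-full.txt"
    return "AEO working: shift attention to citation quality"
```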
Step 5: track your own changes
Every time you ship an AEO change - new FAQ block, schema update, rewritten lede, added llms.txt - note the date in the same spreadsheet. Over a few months you should see bumps in citation rate correlated with shipped changes. If you ship and nothing moves after 8 weeks, either the change was too small to matter or something else is blocking you. The data tells you which.
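Here is a sketch of the correlation check, building on the earlier log format - changes.csv is a hypothetical two-column file of ship date and description:

```python
import pandas as pd

# Rebuild the weekly citation-rate series from the Step 2 log.
cols = ["date", "query", "engine", "mentioned", "cited", "others", "notes"]
log = pd.read_csv("mention_log.csv", names=cols, parse_dates=["date"])
log["cited"] = log["cited"] == "y"
weekly = log.groupby(pd.Grouper(key="date", freq="W"))["cited"].mean()

# changes.csv: date, description - one row per shipped AEO change.
changes = pd.read_csv("changes.csv", names=["date", "change"], parse_dates=["date"])

# Compare the citation rate in the window before each change to the window after.
for _, row in changes.iterrows():
    before = weekly[weekly.index < row["date"]].tail(4).mean()
    after = weekly[weekly.index >= row["date"]].head(8).mean()
    print(f"{row['date'].date()}  {row['change']}: "
          f"4w before {before:.0%} -> 8w after {after:.0%}")
```

This is eyeball-grade correlation, not causal proof - overlapping changes share windows - but it is enough to spot the change that moved nothing after 8 weeks.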
This is the practice that turns AEO from "vague best practices" into "measured work". The feedback loop is the point.
What the free workflow gives up
Honestly: three things.
1. Continuous alerting
A paid platform pings you the moment a major engine starts citing you - or stops. The free workflow has a weekly cadence. If something breaks in between, you find out at the next check.
2. Coverage of niche engines
The paid platforms query You.com, Brave, Kagi, and several enterprise-embedded models that most brands do not think about. The free workflow sticks to the big four. For 95 percent of brands, the big four are enough. For a brand in a specialized vertical, the niche engines might be the whole game.
3. Quote-level tracking
Did the engine quote you verbatim, or did it paraphrase? Which specific sentence from your page got picked? Paid tools capture this. The free workflow sees only the final answer, not the source span.
These are real gaps. They are also gaps most brands can live with for the first year of AEO work. Do the free workflow until the data demands more.
Run a free audit to see what changes to ship before your first measurement
How this fits with Citevera
Citevera scores your site's AEO readiness but does not yet run continuous mention monitoring. That is a roadmap item: a monitoring product that runs tracked prompts against the major engines on a schedule. Until it ships (and for brands that do not need daily cadence), the free workflow in this post is what we recommend to early-stage customers.
The practical rhythm: audit the site with Citevera monthly to verify the structural changes. Run the tracked-query workflow weekly to see whether those changes show up in engine citations. The two loops run in parallel and compound.
Frequently asked questions about free AI monitoring
How long until I should expect to see citations?
If the site is in a reasonable starting state, 4 to 8 weeks after shipping the first batch of AEO improvements. If the site was previously blocked by robots.txt or had no structured data, add another month for the re-crawl cycle.
Is spreadsheet-based tracking worth it vs just remembering?
Yes. The discipline of writing the observation down is what makes the trend visible. Memory blurs weeks together and forgets the gaps. The spreadsheet does not.
What if I see mentions but they spell my brand wrong?
That is a real problem, worth fixing fast. The cause is usually that your brand entity is not clearly declared on the site. Add Organization schema with the correct name and sameAs links to LinkedIn and other authoritative sources, then regenerate your llms.txt. The engines pick up the correction within a re-crawl cycle.
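A minimal sketch of that schema block, generated in Python for consistency with the earlier sketches - the name and URLs are placeholders, and the printed JSON goes inside a script tag of type application/ld+json on your site:

```python
import json

# Placeholder values - swap in your real brand name and profiles.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Citevera",  # the canonical spelling you want engines to repeat
    "url": "https://example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://github.com/example",
    ],
}
print(json.dumps(organization, indent=2))
```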
Should I track competitor mentions too?
If you can do it without getting depressed, yes - noting which other brands show up in your category-query answers tells you who the engine treats as the category. If comparing makes you chase the wrong work, skip it. Your own trend line is the thing that matters.
