Troubleshooting
The common failure modes and how to spot them. If you're hitting something not covered here, email letschat@lumear.ai.
My run is stuck on “queued”
This almost always means the run engine's background trigger didn't reach the executor. If the run stays queued for more than 30 seconds, refresh the page; the runs list polls every 3 seconds and will pick up progress as soon as the executor starts. If it's still stuck in the queue after that, ping us with the run ID.
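For context, here is a minimal sketch of that client-side polling loop. It assumes a hypothetical GET /api/runs/:id endpoint that returns a status field; the real runs API may differ.

```ts
// Hypothetical endpoint and response shape, for illustration only.
// The UI polls on a 3-second interval and stops once the executor
// reports anything other than "queued".
type RunStatus = "queued" | "running" | "completed" | "failed";

async function pollRunStatus(runId: string): Promise<RunStatus> {
  while (true) {
    const res = await fetch(`/api/runs/${runId}`); // assumed endpoint
    const { status } = (await res.json()) as { status: RunStatus };
    if (status !== "queued") return status; // executor picked the run up
    await new Promise((resolve) => setTimeout(resolve, 3000)); // 3s poll interval
  }
}
```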
My mention rate is 0%
This is almost always one of two things:
- Aliases too narrow. Open the brand and check the alias list. Add every plausible name an AI assistant might use. See Brands → Aliases.
- Alias list was expanded after the run. The matcher uses the alias list at run time; old responses are not retroactively re-scored. An admin can trigger a rescore via /api/admin/rescore, which re-runs the matcher across every stored response using the current alias set (see the sketch below).
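A hedged example of triggering that rescore: the /api/admin/rescore path is as documented above, but the POST method, bearer-token header, and brand_id payload are assumptions for illustration.

```ts
// /api/admin/rescore is the documented path; method, auth header, and
// payload shape are illustrative assumptions.
async function triggerRescore(brandId: string, adminToken: string): Promise<void> {
  const res = await fetch("/api/admin/rescore", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${adminToken}`,
    },
    // Re-scores every stored response against the current alias set
    body: JSON.stringify({ brand_id: brandId }),
  });
  if (!res.ok) throw new Error(`Rescore failed: ${res.status}`);
}
```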
My recommend rate is 0%
There are two possibilities. First, your brand really isn't being recommended yet, which is the expected starting point for most new brands. Build up coverage via prompt-set runs first, then work through the Recommendations list.
Second, the dashboard is filtering on a stale signal. As of the latest release we union the legacy regex-classified “recommends” column with the v2 entity-extract tracked_brand_recommended flag, which catches structured rank lists and soft-recommendation phrasing the regex misses. If you're on an older deploy, that fix may not be live yet.
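A minimal sketch of that union, assuming a per-response record that carries both signals. Only the recommends column and the tracked_brand_recommended flag are named above; everything else here is illustrative.

```ts
// A response counts as a recommendation if either the legacy regex
// classifier or the v2 entity-extract flag says so.
interface ScoredResponse {
  recommends: boolean;                // legacy regex-classified column
  tracked_brand_recommended: boolean; // v2 entity-extract flag
}

function recommendRate(responses: ScoredResponse[]): number {
  if (responses.length === 0) return 0;
  const hits = responses.filter(
    (r) => r.recommends || r.tracked_brand_recommended // union of both signals
  ).length;
  return (hits / responses.length) * 100; // percentage of responses
}
```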
Crawl shows “success” but 0 pages
Firecrawl could reach the domain but couldn't find any pages to scrape. Causes:
- The site has no sitemap and no discoverable internal links.
- The site uses aggressive bot protection (Cloudflare, Akamai, CAPTCHA on first page) that blocks Firecrawl.
- The site is a JS-only SPA where the homepage renders nothing without client-side hydration.
Workarounds: provide a custom sitemap_url when adding the domain, switch to full audit scope to bypass the topical filter, or work with us on a Firecrawl pro tier that handles harder targets.
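A hedged sketch of the first two workarounds: sitemap_url and the full audit scope are mentioned above, but the /api/domains endpoint and the payload shape are assumptions for illustration.

```ts
// Endpoint and field names beyond sitemap_url are illustrative assumptions.
async function addDomainWithSitemap(domain: string, sitemapUrl: string): Promise<void> {
  const res = await fetch("/api/domains", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      domain,
      sitemap_url: sitemapUrl, // point the crawler at pages it can't discover on its own
      scope: "full",           // full audit scope bypasses the topical filter
    }),
  });
  if (!res.ok) throw new Error(`Failed to add domain: ${res.status}`);
}
```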
I see duplicate brands in my list
This most often happens when the AI alias-enrich was re-run and created a second brand. Archive the duplicates; their runs/prompts stay in the DB but stop appearing in the picker. If you want a full reset, contact us and we'll wipe + reseed.
An AI platform's responses look weird
Each platform has quirks:
- Anthropic doesn't natively cite sources, so its citation count will be lower than on other platforms. That's expected.
- Google AI Overview doesn't fire for every query; about 30% of prompts come back with no AI Overview block at all. Those count as no_content and don't deflate your visibility score (see the sketch after this list).
- Perplexity sometimes appends every citation to the bottom of its answer rather than inline; we parse both shapes.
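A minimal sketch of why no_content responses don't hurt the score: they are dropped from the denominator before the rate is computed. The field names are assumptions for illustration.

```ts
// Illustrative response shape; only the no_content status is named above.
type ResponseStatus = "ok" | "no_content";

interface PlatformResponse {
  status: ResponseStatus;
  brandMentioned: boolean;
}

function visibilityScore(responses: PlatformResponse[]): number {
  // Exclude empty AI Overview blocks from the denominator
  const scoreable = responses.filter((r) => r.status !== "no_content");
  if (scoreable.length === 0) return 0;
  const mentions = scoreable.filter((r) => r.brandMentioned).length;
  return (mentions / scoreable.length) * 100; // percentage of scoreable responses
}
```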
The dashboard says “data is stale”
Dashboard aggregates are computed from your last 90 days of runs. If you've had no runs in 90 days, the score becomes stale and the “Last refreshed” pill in the header turns warning-colored. Fix it by running a prompt set — the dashboard recomputes the moment the run completes.
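A small sketch of the 90-day staleness rule described above, assuming the check runs against the completion timestamp of your most recent run.

```ts
// The 90-day window comes from the description above; the timestamp
// argument is an illustrative assumption.
const STALE_AFTER_DAYS = 90;

function isDashboardStale(lastRunCompletedAt: Date | null, now: Date = new Date()): boolean {
  if (!lastRunCompletedAt) return true; // no runs at all -> stale
  const ageDays = (now.getTime() - lastRunCompletedAt.getTime()) / (1000 * 60 * 60 * 24);
  return ageDays > STALE_AFTER_DAYS; // older than the 90-day window -> warning pill
}
```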