
Glossary

Plain-English definitions for every metric and term used in Lumear’s dashboard, reports, and recommendations.

Core metrics

AI Visibility Score

A 0–100 composite that summarizes how visible your brand is across AI answers. Formula: 0.4 · Mention rate + 0.3 · Citation rate + 0.2 · Recommend rate + 0.1 · Sentiment. Banded as Just starting (<25), Building presence (25–49), Strong (50–74), Excellent (75+).
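The formula and bands above can be sketched as follows. This is a minimal illustration, not Lumear's actual implementation; the function names are made up, and it assumes all four inputs are normalized to a 0–100 scale before weighting.

```python
def visibility_score(mention_rate, citation_rate, recommend_rate, sentiment):
    """Weighted 0-100 composite per the formula above.

    All four inputs are assumed to already be on a 0-100 scale.
    """
    return (0.4 * mention_rate + 0.3 * citation_rate
            + 0.2 * recommend_rate + 0.1 * sentiment)

def band(score):
    """Map a composite score to its display band."""
    if score < 25:
        return "Just starting"
    if score < 50:
        return "Building presence"
    if score < 75:
        return "Strong"
    return "Excellent"
```

For example, a brand with mention rate 60, citation rate 40, recommend rate 30, and sentiment 55 scores 24 + 12 + 6 + 5.5 = 47.5, landing in Building presence.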

Mention rate

Share of AI responses that mention your brand by name (or any alias). Computed as responses_with_mention ÷ total_responses. The most basic discoverability signal.

Citation rate

Share of AI responses where the assistant cited a source URL alongside its answer. Computed as responses_with_citation ÷ total_responses. Tells you how often the AI grounds its claim in a linkable source — your own pages or someone else’s.
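Both rates are simple shares over the same denominator. A minimal sketch (the response fields `mentioned` and `cited` are illustrative, not Lumear's schema):

```python
def rate(responses, flag):
    """Share of responses where the given boolean field is truthy."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r.get(flag)) / len(responses)

responses = [
    {"mentioned": True,  "cited": True},
    {"mentioned": True,  "cited": False},
    {"mentioned": False, "cited": False},
    {"mentioned": True,  "cited": False},
]
mention_rate = rate(responses, "mentioned")   # 3 of 4 responses
citation_rate = rate(responses, "cited")      # 1 of 4 responses
```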

Recommend rate

Share of responses classified as actively recommending your brand for the buyer’s question. Higher bar than mention: “BMW is one of many luxury sedans” counts as a mention but not a recommendation; “The BMW M3 is the best track car under $80k” counts as both.

Coverage

Share of probed AI platforms where your brand was mentioned at least once. If Lumear probes ChatGPT, Claude, Gemini, and Perplexity, and your brand surfaced on 3 of the 4, Coverage is 75%.
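Coverage only asks whether each platform produced at least one mention, not how many. A sketch (the mapping's shape is an assumption for illustration):

```python
def coverage(mentions_by_platform):
    """Share of probed platforms with at least one brand mention.

    `mentions_by_platform` maps platform name -> mention count in the window.
    """
    if not mentions_by_platform:
        return 0.0
    hit = sum(1 for count in mentions_by_platform.values() if count > 0)
    return hit / len(mentions_by_platform)

coverage({"ChatGPT": 12, "Claude": 3, "Gemini": 0, "Perplexity": 5})  # 0.75
```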

Share of voice

Each brand’s share of total brand-mention volume on the dashboard. Sum across your brand + every tracked competitor = 100%. The tracked brand pins first in the list with a You badge.
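The normalization is straightforward: each brand's mention count divided by the total across all tracked brands, so the shares sum to 100%. A sketch with made-up counts:

```python
def share_of_voice(mention_counts):
    """Each brand's fraction of total mention volume; fractions sum to 1.0."""
    total = sum(mention_counts.values())
    if total == 0:
        return {brand: 0.0 for brand in mention_counts}
    return {brand: n / total for brand, n in mention_counts.items()}

share_of_voice({"You": 50, "Rival A": 30, "Rival B": 20})
# {"You": 0.5, "Rival A": 0.3, "Rival B": 0.2}
```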

Sentiment score

Average tone of replies that mentioned your brand, scaled 0–100 (0 = very negative, 50 = neutral, 100 = very positive). Per-platform variants are surfaced under Sentiment by AI Platform.

Buyer-journey terms

Intent

What kind of question the prompt is. Five values: awareness, comparison, evaluation, recommendation, support. Used to bucket prompts and to slice metrics on the prompts page.

Funnel stage

Where in the buyer journey the prompt lives: awareness, consideration, conversion, retention, mixed. Helps you design balanced prompt sets — too many comparison prompts and you’ll miss what AI says to top-of-funnel buyers.

Branded vs. unbranded prompt

A branded prompt explicitly names your brand (“is BMW worth it?”). An unbranded prompt asks a category question your brand could plausibly surface for (“best luxury sedans 2026”). Mix matters: branded tracks direct sentiment; unbranded tracks organic discoverability.

Citations + grounding

Citation

A source URL the AI assistant linked to alongside its answer. Lumear stores the URL, title, and snippet for every citation across every run.

Citation buckets

Each cited domain gets bucketed: owned (your domain), editorial (curated review/news outlets), reddit, youtube, social, dealer (automotive marketplaces), other. Helps you see where AI grounds its claims about you.
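Bucketing like this typically keys off the citation's domain. A hedged sketch, assuming a simple first-match classifier; the domain lists here are illustrative placeholders, not Lumear's real taxonomy:

```python
from urllib.parse import urlparse

# Illustrative domain lists -- placeholders, not Lumear's actual mappings.
OWNED = {"example.com"}
EDITORIAL = {"caranddriver.com", "motortrend.com"}
DEALER = {"cars.com", "autotrader.com"}
SOCIAL = {"x.com", "twitter.com", "facebook.com", "instagram.com"}

def bucket(url):
    """Classify a cited URL into one of the glossary's buckets."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in OWNED:
        return "owned"
    if host == "reddit.com" or host.endswith(".reddit.com"):
        return "reddit"
    if host in {"youtube.com", "youtu.be"}:
        return "youtube"
    if host in EDITORIAL:
        return "editorial"
    if host in SOCIAL:
        return "social"
    if host in DEALER:
        return "dealer"
    return "other"

bucket("https://www.reddit.com/r/cars/comments/abc123")  # "reddit"
```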

Pipeline terms

Prompt set

A named bundle of prompts you run together. Most teams have one weekly monitoring set and one monthly deep-audit set per brand. See Prompt sets.

Run

One execution of a prompt set across every selected AI platform. Each run produces a platform response per (prompt, platform) pair. The dashboard’s “last 30 days” view aggregates across all runs in that window.

Insight

An auto-generated finding about your brand’s AI visibility — e.g. “Blind-spot prompts identified”, “Mentioned, not recommended”. Some insights are clickable and deep-link to the specific prompts they reference.

Recommendation

A specific suggested action — “publish a comparison page”, “add an FAQ block to /pricing”. Generated from the gap between AI behavior and your crawled owned-domain content. See Recommendations.

Industry shorthand

AEO (Answer Engine Optimization)

The discipline of making your content the source AI assistants cite when answering buyer questions. Lumear is fundamentally an AEO tool.

AI Overview

Google’s AI-generated summary block that appears at the top of some search results pages. Treated as a first-class AI surface alongside ChatGPT, Claude, etc.

Grounding

The source(s) an AI assistant cites to back up its answer. Strong grounding from your own domain > strong grounding from editorial > weak grounding from social.
