Citations
A citation is any URL an AI assistant includes when answering one of your prompts. Citations are the most direct measure of which sources are shaping the answers your buyers are reading.
Where citations come from
Each AI platform exposes citations slightly differently:
- Perplexity returns a top-level `citations` array.
- Gemini returns `groundingMetadata.groundingChunks[].web.uri`.
- Google AI Overview / Bing embed citations in structured `text_blocks` via SerpAPI.
- OpenAI (with web_search tool) returns inline URLs we extract from the response text.
- Anthropic doesn't natively cite — we fall back to regex extraction of any URL in the response.
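The regex fallback for platforms without native citations can be sketched roughly like this (the pattern below is illustrative, not the exact one used):

```python
import re

# Match http(s) URLs in free-form response text. Stops at whitespace and
# common closing delimiters; trailing sentence punctuation is stripped after.
URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def extract_urls(text: str) -> list[str]:
    """Return every URL found in a response, with trailing punctuation removed."""
    return [u.rstrip(".,;:") for u in URL_RE.findall(text)]
```

Regex extraction is inherently noisier than structured citations: it can't distinguish a source the model actually relied on from a URL it merely mentioned.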
We normalize every citation to a common shape (`url`, `domain`, `title`, `position`) and store it in the `citations` table.
The citation mix donut
Every citation gets bucketed into one of six categories: owned, editorial, social, reddit, youtube, other. The donut on Overview shows the share of each. See Reading the dashboard for what a healthy mix looks like.
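The bucketing can be sketched as a host lookup. This is illustrative only: the document names the six categories but not the rules, and `OWNED_DOMAINS` / `EDITORIAL_DOMAINS` below are invented example sets.

```python
from urllib.parse import urlparse

OWNED_DOMAINS = {"yourcompany.com"}                      # hypothetical: your own properties
EDITORIAL_DOMAINS = {"techcrunch.com", "forbes.com"}     # hypothetical publication list
SOCIAL_DOMAINS = {"x.com", "twitter.com", "linkedin.com", "facebook.com"}

def bucket(url: str) -> str:
    """Assign a citation URL to one of the six donut categories."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in OWNED_DOMAINS:
        return "owned"
    if host == "reddit.com" or host.endswith(".reddit.com"):
        return "reddit"
    if host in {"youtube.com", "youtu.be"}:
        return "youtube"
    if host in SOCIAL_DOMAINS:
        return "social"
    if host in EDITORIAL_DOMAINS:
        return "editorial"
    return "other"
```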
Cited domains
The Citations page also surfaces the top cited domains across all your prompts — typically a mix of industry publications, review sites, and a handful of competitors. If a domain is cited 9 times across 25 prompts and you've never heard of it, that's a content-marketing opportunity (or a competitor you should be tracking).
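The domain rollup behind that table is a straightforward count; a sketch (the example URLs are invented):

```python
from collections import Counter
from urllib.parse import urlparse

def top_domains(urls: list[str], n: int = 10) -> list[tuple[str, int]]:
    """Count citations per domain across all responses, most-cited first."""
    domains = [
        (urlparse(u).hostname or "").lower().removeprefix("www.")
        for u in urls
    ]
    return Counter(domains).most_common(n)
```

Scanning this list for unfamiliar but frequently cited domains is the quickest way to find the opportunity (or competitor) mentioned above.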
What “was_cited” means in matches
On the prompt detail page, each candidate page (yours and competitors') has a was_cited badge. That's a simple URL match: we check whether the page's URL (normalized — lowercase host, strip trailing slash) appears in the citation set for any response to this prompt. Useful for spotting “we have the right page but it isn't being cited” situations.
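The check described above can be sketched as: normalize both sides (lowercase host, trailing slash stripped), then test membership. The exact normalization in the product may handle more edge cases (query strings, fragments):

```python
from urllib.parse import urlparse

def _norm(url: str) -> str:
    """Lowercase the host and strip the trailing slash from the path."""
    p = urlparse(url)
    host = (p.hostname or "").lower()
    path = p.path.rstrip("/") or "/"
    return f"{host}{path}"

def was_cited(page_url: str, citation_urls: set[str]) -> bool:
    """True if the page's normalized URL appears in the citation set."""
    return _norm(page_url) in {_norm(u) for u in citation_urls}
```

Because this is an exact URL match, a page cited via a shortened, redirected, or UTM-tagged URL would not get the badge.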
Exporting
The Exports page can export every citation row to CSV (one row per citation), or a flattened view that joins each citation to its cited domain's rollup count. Use either for ad-hoc analysis in a spreadsheet or as input to your own BI tool.