Integrations

Lumear stitches together the LLM providers, the search-engine layer, and the crawl pipeline you need to track AI visibility — without you operating any of them.

LLM providers

Every prompt run fans out across six AI surfaces. Lumear calls each one directly using first-party APIs (no scraping) and normalizes the responses into a single schema, so the dashboard treats them identically (sketched after the list below).

  • OpenAI ChatGPT — chat completions API.
  • Anthropic Claude — messages API.
  • Google Gemini — generateContent API.
  • Perplexity — Sonar API (web-grounded model).
  • Microsoft Copilot — Azure-hosted endpoint.
  • Google AI Overviews — fetched via the SERP layer (see below) since Google doesn’t expose AI Overviews via a first-party API.
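
As a rough illustration of that shared schema, the TypeScript shape below shows the kind of record each surface's raw response could be mapped into. The field names here are assumptions for the sketch, not Lumear's actual schema.

```ts
// Hypothetical normalized shape: field names are illustrative, not
// Lumear's actual schema. Every surface's raw response maps into this.
type Surface =
  | "chatgpt"
  | "claude"
  | "gemini"
  | "perplexity"
  | "copilot"
  | "ai_overviews";

interface NormalizedResponse {
  surface: Surface;          // which of the six AI surfaces answered
  promptId: string;          // the prompt that produced this response
  text: string;              // full answer text, provider markup stripped
  citations: { url: string; title?: string }[]; // extracted source links
  fetchedAt: string;         // ISO-8601 timestamp of the run
}
```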

SERP layer (SerpAPI)

Google AI Overviews and Bing’s Copilot summaries surface inside search results rather than through an API. Lumear fetches them through SerpAPI: a Google search engine call returns a parsed JSON document where ai_overview is a first-class field with text_blocks and references. We normalize those into the same response schema used by the LLM-direct providers, so the matcher and citation extractor treat all six surfaces the same.
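
A minimal sketch of that fetch, assuming SerpAPI's search.json endpoint and its documented ai_overview structure; the exact sub-field names on text_blocks and references are best-effort and worth checking against SerpAPI's current schema.

```ts
// Fetch a Google SERP via SerpAPI and pull out the AI Overview block.
const params = new URLSearchParams({
  engine: "google",
  q: "best ai visibility tools", // the prompt, phrased as a search query
  api_key: process.env.SERPAPI_KEY ?? "",
});

const serp = await (
  await fetch(`https://serpapi.com/search.json?${params}`)
).json();

if (serp.ai_overview) {
  const { text_blocks, references } = serp.ai_overview;
  // Fold the overview into the same normalized schema used for the
  // LLM-direct providers (see the sketch above).
  const normalized = {
    surface: "ai_overviews",
    text: (text_blocks ?? [])
      .map((b: { snippet?: string }) => b.snippet ?? "")
      .join("\n"),
    citations: (references ?? []).map(
      (r: { link: string; title?: string }) => ({ url: r.link, title: r.title })
    ),
  };
}
```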

Site crawling (Firecrawl)

When you add a domain to a brand, Lumear uses Firecrawl to enumerate pages, fetch HTML, and return cleaned content. Crawls run on three scopes:

  • Topical — Firecrawl /map plus a filtered batch scrape of the ~50–200 pages most likely to match prompts. The default re-crawl mode; sketched after this list.
  • Cited only — fetches the small set of URLs AI assistants cited in recent runs. Used for snap-refreshes.
  • Full audit — every page Firecrawl can find up to your plan’s page cap.
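
A sketch of the Topical scope, assuming Firecrawl's v1 HTTP API (/v1/map to enumerate URLs, /v1/batch/scrape to fetch content); the relevance scorer is a hypothetical stand-in, not Lumear's actual ranking.

```ts
const FIRECRAWL = "https://api.firecrawl.dev/v1";
const headers = {
  Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
  "Content-Type": "application/json",
};

function scoreUrlAgainstPrompts(url: string): number {
  // Placeholder: real scoring would compare URL/slug terms against the
  // brand's prompt set. Here, shallower paths simply rank higher.
  return -new URL(url).pathname.split("/").length;
}

// 1. Enumerate every URL Firecrawl can discover on the domain.
const { links } = await (
  await fetch(`${FIRECRAWL}/map`, {
    method: "POST",
    headers,
    body: JSON.stringify({ url: "https://example.com" }),
  })
).json();

// 2. Keep the ~200 pages most likely to match prompts.
const shortlist = (links as string[])
  .sort((a, b) => scoreUrlAgainstPrompts(b) - scoreUrlAgainstPrompts(a))
  .slice(0, 200);

// 3. Batch-scrape the shortlist for cleaned markdown content.
await fetch(`${FIRECRAWL}/batch/scrape`, {
  method: "POST",
  headers,
  body: JSON.stringify({ urls: shortlist, formats: ["markdown"] }),
});
```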

Embeddings + vector search (OpenAI + pgvector)

Crawled pages are segmented by heading, embedded with text-embedding-3-small, and stored as content blocks in Postgres via Supabase’s pgvector extension. When a prompt runs, we embed the prompt and use cosine similarity to find the top target pages + competitor pages, then re-rank with a GPT-4o-mini call.
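
A minimal sketch of the retrieval step using the OpenAI and Supabase JS clients; match_content_blocks is a hypothetical Postgres function standing in for Lumear's actual RPC.

```ts
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI();
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

// Embed the incoming prompt with the same model used for content blocks.
const { data } = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: "best ai visibility tools",
});

// match_content_blocks is a hypothetical Postgres function wrapping a
// pgvector cosine-distance query, e.g.
//   ORDER BY embedding <=> query_embedding LIMIT match_count
const { data: blocks } = await supabase.rpc("match_content_blocks", {
  query_embedding: data[0].embedding,
  match_count: 20,
});
// `blocks` would then be re-ranked with a GPT-4o-mini call.
```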

Orchestration (Inngest)

Long-running pipelines — first-time crawls, recommendation generation, large prompt set runs — execute as Inngest functions hosted on Vercel. Each major step is wrapped in step.run(), so a transient Supabase blip retries only the failed step rather than redoing a billable LLM call. For infrastructure-level debugging, admins can open the Inngest dashboard directly.
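
A sketch of how such a function could be laid out with Inngest's TypeScript SDK; the event name and helper functions are hypothetical placeholders, not Lumear's actual pipeline.

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "lumear" });

// Hypothetical helpers standing in for the real pipeline stages.
async function queryAllProviders(promptSetId: string): Promise<unknown[]> {
  return []; // placeholder: would fan out across the six AI surfaces
}
async function saveResponses(responses: unknown[]): Promise<void> {
  // placeholder: would write results to Supabase
}

export const runPromptSet = inngest.createFunction(
  { id: "run-prompt-set" },
  { event: "prompts/run.requested" },
  async ({ event, step }) => {
    // Billable LLM fan-out. step.run memoizes the result, so a later
    // retry replays the cached output instead of re-calling the LLMs.
    const responses = await step.run("query-llm-providers", () =>
      queryAllProviders(event.data.promptSetId)
    );

    // If this write hits a transient Supabase blip and throws, Inngest
    // retries from here; "query-llm-providers" is not re-executed.
    await step.run("persist-responses", () => saveResponses(responses));
  }
);
```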

Data store (Supabase)

Postgres + pgvector + Supabase Auth. Row-level security policies scope every table to the active organization. Brand, prompt-set, and run data live in dedicated tables; PII is limited to the user’s email and is never sent to any external LLM.

Outbound integrations

Today Lumear is read-only: it never pushes data back to your stack. CSV/JSON exports + shareable report links cover the “get this to my team” workflow. A public API and Slack notifications are on the roadmap.
