Recommendations

The Recommendations page is where Lumear earns its keep. Each card tells you three things: which page on your site to edit, what to put on it, and why the change should move your citation share.

How recommendations are produced

For every prompt where you weren't cited:

  1. We find the page on your site whose content most closely matches the prompt (vector cosine similarity over content-block embeddings; sketched in code after this list).
  2. We find the page on the competitor's site that was cited (also vector similarity, scoped to that competitor's scraped pages).
  3. A rules engine compares the two pages on six axes: direct-answer score, clarity, authority, extractability, schema markup, and overall similarity.
  4. GPT-4o-mini takes the structured signals plus the actual snippet text and writes 1–3 concrete recommendations.
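Steps 1 and 2 are the same nearest-neighbor lookup run over two different page sets. A minimal sketch in Python, assuming embeddings are already computed; the function names and toy vectors are illustrative, not Lumear's internals:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matching_page(prompt_emb, pages):
    """pages maps URL -> one embedding per page; with per-block
    embeddings you would take the max over a page's blocks instead."""
    return max(pages.items(), key=lambda kv: cosine_similarity(prompt_emb, kv[1]))

# Toy 3-dim vectors stand in for real embedding-model output.
prompt_emb = np.array([0.2, 0.9, 0.1])
your_pages = {
    "/services/cardiology": np.array([0.1, 0.8, 0.3]),
    "/about":               np.array([0.9, 0.1, 0.2]),
}

# Step 1: your closest page. Step 2 is the same call, scoped to the
# competitor's scraped pages.
url, emb = best_matching_page(prompt_emb, your_pages)
print(url)  # -> /services/cardiology
```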

Anatomy of a recommendation card

  • Title — a short imperative. “Add a 2-sentence direct answer block to /services/cardiology.”
  • Detail — 2–4 sentences explaining the change with specifics from the actual page content.
  • Why competitor won — what their page is doing that yours isn't.
  • Priority — high / medium / low. Driven by the rules engine's impact heuristic.
  • Type — one of: add_direct_answer, add_faq, add_comparison_table, rewrite_headings, add_schema, add_proof_points, rewrite_brand_language, create_new_page, improve_internal_linking, add_summary_block.
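If you pull cards out of Lumear (via export or your own tooling), they map naturally onto a record like this hypothetical Python shape; the field names mirror the list above but are not a documented schema:

```python
from dataclasses import dataclass
from typing import Literal

RecType = Literal[
    "add_direct_answer", "add_faq", "add_comparison_table", "rewrite_headings",
    "add_schema", "add_proof_points", "rewrite_brand_language",
    "create_new_page", "improve_internal_linking", "add_summary_block",
]

@dataclass
class Recommendation:
    title: str                                  # short imperative
    detail: str                                 # 2-4 sentences with page specifics
    why_competitor_won: str                     # what their page does that yours doesn't
    priority: Literal["high", "medium", "low"]  # from the impact heuristic
    type: RecType
```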

How to prioritize

The default sort is by impact (descending). For most teams the right read is:

  • High priority + create_new_page — you have no page that even tries to answer this prompt. Fix this first; create a dedicated page.
  • High priority + competitor cited — you have a page, but theirs is winning. These are the cleanest wins because the AI is already willing to surface a category answer; you just need to be the better answer.
  • Medium priority — incremental wins. Bundle these with related work.
  • Low priority — defer. Often these are minor signals (schema markup, internal linking) that compound but don't move the needle in isolation.
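That reading order is easy to automate over exported cards. A small triage sketch, assuming the card fields above; plain dicts stand in for whatever format you export:

```python
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def triage_key(rec: dict) -> tuple[int, int]:
    """Within each priority band, surface create_new_page gaps first."""
    return (PRIORITY_RANK[rec["priority"]],
            0 if rec["type"] == "create_new_page" else 1)

recs = [
    {"priority": "medium", "type": "add_faq"},
    {"priority": "high",   "type": "add_direct_answer"},
    {"priority": "high",   "type": "create_new_page"},
]
for rec in sorted(recs, key=triage_key):
    print(rec["priority"], rec["type"])
# high create_new_page / high add_direct_answer / medium add_faq
```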

Implementing + verifying

Once you implement a recommendation, mark it “in progress” or “done”. On the next scheduled run, Lumear re-evaluates the prompt and updates the citation status. If the AI now cites you, the recommendation is auto-closed.
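In pseudocode terms, the lifecycle looks roughly like this; `citation_check` is a stand-in for whatever returns a prompt's current citation status, not a Lumear API:

```python
def reevaluate(rec: dict, citation_check) -> dict:
    """Run on each scheduled crawl: if a recommendation marked
    'in_progress' or 'done' now earns a citation, auto-close it."""
    if rec["status"] in ("in_progress", "done"):
        if citation_check(rec["prompt"]) == "cited":
            rec["status"] = "auto_closed"
    return rec
```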

If you implemented the change and still don't see citation improvement after 2–3 runs, that's a signal the recommendation was directionally right but the execution fell short: the new content may need to be more direct, more specific, or carry clearer schema markup. Iterate.

The “create new page” tier

Sometimes the right answer isn't to edit an existing page — it's to create a new one. Examples we see often:

  • A category-level page (“Lease deals 2026”) doesn't exist on your site.
  • An FAQ-style page for a specific high-intent question (“Does X accept Aetna PPO?”) has never been created.
  • A comparison-style page (“X vs Y vs Z”) doesn't exist, even though that format is the only thing AI assistants reliably surface for comparative prompts.

In those cases the recommendation explicitly says create_new_page and tells you what URL pattern + content structure to use.
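For the insurance example above, such a card might look like this; the content and the url_pattern field are illustrative, not generated output:

```python
example_card = {
    "type": "create_new_page",
    "priority": "high",
    "title": "Create /faq/insurance answering 'Does X accept Aetna PPO?'",
    "detail": (
        "No page on your site addresses insurance acceptance directly. "
        "Create a dedicated FAQ page that opens with a 2-sentence direct "
        "answer, followed by a plan-by-plan list."
    ),
    "why_competitor_won": (
        "Their /insurance page opens with an explicit yes/no answer and "
        "marks it up with FAQPage schema."
    ),
    "url_pattern": "/faq/<question-slug>",  # hypothetical field
}
```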
