Running prompts
A run executes every prompt in a prompt set against every selected AI platform, captures the responses, and parses them for visibility signals.
Starting a run
On the Runs page, click + New run. Pick:
- Brand — the brand whose visibility you're tracking.
- Prompt set — the set of questions to send.
- Platforms — ChatGPT, Claude, Gemini, Perplexity, Google AI Overview, Bing. Defaults to ChatGPT + Google AI Overview, the two platforms with the highest user reach.
- Track competitors — leave on. The competitor matcher runs in the same pass so it's effectively free.
- Schedule — leave blank for a one-shot run, or pick weekly/biweekly/monthly to track drift over time.
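The options above can be pictured as a single run configuration. This is a hypothetical sketch of that shape; the field names and values are illustrative, not the product's real API schema:

```python
# Illustrative run configuration mirroring the new-run form.
# Field names here are assumptions, not a documented schema.
new_run = {
    "brand": "acme",                        # the brand whose visibility is tracked
    "prompt_set": "buyer-questions",        # the set of questions to send
    "platforms": ["chatgpt", "google-aio"], # the defaults noted above
    "track_competitors": True,              # competitor matcher runs in the same pass
    "schedule": None,                       # None = one-shot; or "weekly" / "biweekly" / "monthly"
}
```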
What happens during a run
- A `prompt_runs` row is created with status `queued`.
- One `prompt_run_items` row is created per (prompt × platform) — so a 25-prompt × 5-platform run = 125 items.
- Items execute with a concurrency of 8. Each item calls its provider, stores the raw response, and parses it for brand mentions, competitor mentions, citations, sentiment, and recommendation framing.
- When all items complete, the run status flips to `completed`, `partial` (some items failed), or `failed`.
- The completion event auto-cascades into the matching pipeline: prompt-match runs per prompt → gap-analysis runs per prompt → recommendations populate.
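The fan-out described above (one item per prompt × platform pair, at most 8 in flight) can be sketched with `asyncio`. This is a minimal illustration, not the product's actual worker code; `call_provider` and `run_item` are placeholder names:

```python
# Sketch of the run fan-out: one work item per (prompt x platform) pair,
# executed under a concurrency limit of 8 via a semaphore.
import asyncio
import itertools

CONCURRENCY = 8

async def call_provider(prompt: str, platform: str) -> str:
    # Placeholder for the real provider API call; here we just echo.
    await asyncio.sleep(0)
    return f"{platform} response to: {prompt}"

async def run_item(sem: asyncio.Semaphore, prompt: str, platform: str) -> dict:
    async with sem:  # at most CONCURRENCY items in flight at once
        raw = await call_provider(prompt, platform)
        # The real pipeline would also parse `raw` for brand mentions,
        # competitor mentions, citations, sentiment, and framing.
        return {"prompt": prompt, "platform": platform, "raw": raw}

async def execute_run(prompts: list[str], platforms: list[str]) -> list[dict]:
    sem = asyncio.Semaphore(CONCURRENCY)
    tasks = [run_item(sem, p, pl) for p, pl in itertools.product(prompts, platforms)]
    return await asyncio.gather(*tasks)

results = asyncio.run(
    execute_run([f"q{i}" for i in range(25)], ["a", "b", "c", "d", "e"])
)
print(len(results))  # 125 items for a 25-prompt x 5-platform run
```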
Duplicate, test, archive, delete
Each row has four actions:
- Duplicate — opens a new-run form pre-filled with this run's settings (brand, set, platforms). Edit before starting.
- Test — mark/unmark as a test run. Test runs are excluded from dashboard aggregates so you can experiment without polluting your trend.
- Archive — hide from dashboards but keep the responses on disk.
- Delete — permanently remove the run and all child responses. Irreversible.
Costs
A 25-prompt × 5-platform run typically costs $0.50–$1.50 in LLM tokens, depending on which platforms are selected. Gemini Flash and Perplexity Sonar are the cheapest; GPT-4o and Claude 3.5 Sonnet are the most expensive. You can monitor live spend on /admin/usage.
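As a back-of-envelope check: $0.50–$1.50 across 125 items works out to roughly $0.004–$0.012 per item, so which platforms you pick dominates the total. The per-item prices in this sketch are illustrative assumptions, not published rates; actual spend is what /admin/usage shows:

```python
# Rough cost estimate for a run. Per-item prices are made-up placeholders
# for illustration only -- check /admin/usage for real spend.
PER_ITEM_USD = {
    "gemini-flash": 0.002,      # assumed: cheapest tier
    "perplexity-sonar": 0.003,  # assumed: cheap tier
    "gpt-4o": 0.010,            # assumed: expensive tier
    "claude-3.5-sonnet": 0.012, # assumed: most expensive tier
}

def estimate_cost(n_prompts: int, platforms: list[str]) -> float:
    # One item per (prompt x platform); cost scales linearly with both.
    return n_prompts * sum(PER_ITEM_USD[p] for p in platforms)

cost = estimate_cost(25, list(PER_ITEM_USD))
print(f"${cost:.2f}")  # $0.68 under these assumed prices
```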
Run status reference
- `queued` — accepted, not yet started. Should flip to `running` within a few seconds.
- `running` — at least one item is in flight. The progress bar fills as items complete.
- `completed` — all items succeeded.
- `partial` — some items succeeded, some failed (usually rate limits or provider 5xxs). Failed items are visible on the run detail page.
- `failed` — every item failed. Almost always a configuration issue (missing API key, exhausted quota).
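The terminal statuses follow mechanically from item outcomes. A minimal sketch of that derivation (the function name is ours, not the product's):

```python
# Derive a run's terminal status from its item outcomes:
# all succeeded -> completed, none succeeded -> failed, mixed -> partial.
def run_status(item_statuses: list[str]) -> str:
    succeeded = sum(1 for s in item_statuses if s == "succeeded")
    if succeeded == len(item_statuses):
        return "completed"
    if succeeded == 0:
        return "failed"
    return "partial"

print(run_status(["succeeded"] * 125))                   # completed
print(run_status(["succeeded"] * 120 + ["failed"] * 5))  # partial
print(run_status(["failed"] * 125))                      # failed
```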