RIPStartup — Rank In Prompt
Open methodology

How the AI Clout Score works.

No black box. Here's exactly how we ask the engines, count the answers, and turn them into a single number out of 100.

1. The prompts

For every domain we scan, we generate 20–40 buyer-intent prompts — the kinds of questions a real customer types into ChatGPT or Perplexity right before buying. Categories include:

  • Direct discovery ("best tools for X")
  • Comparison ("X vs Y")
  • Problem-led ("how do I fix Z")
  • Brand-named ("is <brand> any good")

Prompts are stored on your project — you can read every one on the AI Visibility page.
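As a rough sketch, template-driven generation could look like the following, with one template per category above. The template wording and helper names are illustrative, not our stored prompt set.

```python
# Illustrative prompt templates, one per category; real scans use 20-40
# prompts per domain with more varied wording.
TEMPLATES = {
    "direct_discovery": "best tools for {category}",
    "comparison":       "{brand} vs {competitor}",
    "problem_led":      "how do I fix {problem}",
    "brand_named":      "is {brand} any good",
}

def build_prompts(brand, category, competitor, problem):
    """Fill every template with one project's context."""
    ctx = {"brand": brand, "category": category,
           "competitor": competitor, "problem": problem}
    return [t.format(**ctx) for t in TEMPLATES.values()]

prompts = build_prompts("Acme CRM", "sales pipeline tracking",
                        "HubSpot", "leads going stale")
# e.g. "best tools for sales pipeline tracking", "Acme CRM vs HubSpot", ...
```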

2. The engines

Each prompt is sent to a panel of large language models. Currently in rotation:

  • Gemini 2.5 Flash (Google)
  • GPT-5 Mini & Nano (OpenAI)
  • Optional: Claude (Anthropic) on Multi-Model Comparison

We don't query Perplexity's hosted product directly — instead we run the same buyer-intent prompts against the underlying frontier LLMs, since that's what powers the answer layer of every modern AI search product.
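Conceptually, each prompt fans out to the whole panel. In this sketch the `ask` callables stand in for real API clients; the lambdas are placeholders, not actual model calls.

```python
def run_panel(prompt, engines):
    """engines maps a model name to a callable(prompt) -> answer text."""
    return {name: ask(prompt) for name, ask in engines.items()}

# Placeholder "clients" for illustration only; a real scan would wrap the
# providers' SDKs here.
panel = {
    "gemini-2.5-flash": lambda p: f"[gemini answer to: {p}]",
    "gpt-5-mini":       lambda p: f"[gpt answer to: {p}]",
}
answers = run_panel("best tools for sales pipeline tracking", panel)
```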

3. The status of each prompt

Every prompt result lands in one of three buckets:

  • Strong: Brand named clearly, in the first half of the answer, with positive or neutral framing.
  • Weak: Brand mentioned, but late, vague, or alongside many alternatives.
  • Invisible: Brand not mentioned at all. Competitors got the airtime.
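A minimal sketch of that bucketing might look like this. The exact cutoffs ("first half of the answer", how many alternatives count as "many") are assumptions, not our production thresholds, and the sentiment check for Strong is omitted.

```python
def classify(answer: str, brand: str, other_brands_mentioned: int) -> str:
    """Bucket one prompt result as strong, weak, or invisible."""
    text = answer.lower()
    pos = text.find(brand.lower())
    if pos == -1:
        return "invisible"                 # brand never appears
    early = pos < len(text) / 2            # named in the first half
    crowded = other_brands_mentioned >= 3  # lost among many alternatives
    return "strong" if early and not crowded else "weak"

classify("Acme is the best pick for small teams.", "Acme", 0)  # → "strong"
classify("...you could also try Acme.", "Acme", 4)             # → "weak"
```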

4. The score

Your AI Clout Score is a weighted percentage:

score = round(
  (strong × 1.0 + weak × 0.4 + invisible × 0.0)
  / total_prompts × 100
)
  • 0–39: Mostly invisible. AI engines are recommending other people.
  • 40–69: Showing up sometimes. Inconsistent across engines.
  • 70–100: Consistently cited. AI knows what you do and recommends it.
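The formula and the bands above, as runnable Python. The weights (strong 1.0, weak 0.4, invisible 0.0) come straight from the formula; the function names are just for illustration.

```python
def clout_score(strong: int, weak: int, invisible: int) -> int:
    """Weighted percentage of prompts where the brand showed up."""
    total_prompts = strong + weak + invisible
    return round((strong * 1.0 + weak * 0.4 + invisible * 0.0)
                 / total_prompts * 100)

def tier(score: int) -> str:
    """Map a score to the bands listed above."""
    if score <= 39:
        return "Mostly invisible"
    if score <= 69:
        return "Showing up sometimes"
    return "Consistently cited"

clout_score(12, 10, 8)   # → 53
tier(53)                 # → "Showing up sometimes"
```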

We never call you "Dominating" or "#1". We don't make up tiers we can't defend.

5. Competitors

For each prompt we extract every named brand or product in the response and surface them on your Competitors page. The scrape is a real fetch via Firecrawl — not a synthetic guess.
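As a simplification, brand extraction can be sketched as normalized string matching against a candidate list. Our real pipeline extracts every named brand from the response, so this fixed-list version is only illustrative; it does show the normalization mentioned in section 7.

```python
import re

def normalize(s: str) -> str:
    # Lowercase and strip punctuation/whitespace so "HubSpot," matches "hubspot".
    return re.sub(r"[^a-z0-9]", "", s.lower())

def extract_brands(answer: str, known_brands: list[str]) -> list[str]:
    """Return every candidate brand whose normalized form appears in the answer."""
    norm_answer = normalize(answer)
    return [b for b in known_brands if normalize(b) in norm_answer]

extract_brands("Try HubSpot or Zoho CRM first.",
               ["HubSpot", "Zoho CRM", "Acme"])
# → ["HubSpot", "Zoho CRM"]
```

Note the trade-off this makes explicit: very short or generic names match too easily, which is exactly the false-positive risk flagged in section 7.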

6. Recommendations

The Action Recommendations engine reads each invisible/weak prompt and produces 4–5 specific fixes (content, schema, backlinks, social, comparison). Effort is estimated in hours so you can prioritize.
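One plausible shape for a recommendation record, with effort attached for sorting. The field names are assumptions for illustration, not our actual schema.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    prompt: str         # the invisible/weak prompt it addresses
    channel: str        # content | schema | backlinks | social | comparison
    fix: str            # the specific action to take
    effort_hours: float # estimated effort, used for prioritization

recs = [
    Recommendation("best tools for X", "content",
                   "Publish a comparison page targeting X", 6.0),
    Recommendation("X vs Y", "schema",
                   "Add Product and FAQ structured data", 2.0),
]
quick_wins = sorted(recs, key=lambda r: r.effort_hours)  # cheapest fixes first
```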

7. Limits & honesty

  • Confidence band. Because LLMs are non-deterministic, your score is shown as a range (e.g. 47 ±4). The midpoint is our best estimate; the band reflects how mixed your prompt results are. A "Stable" badge appears when the band is tight and the sample is large enough to trust.
  • The same prompt can produce a slightly different answer twice in a row. We sample at scan time — your score is a snapshot, not gospel.
  • Brand-name detection is string-based with normalization. Very generic brand names may inflate counts. You can flag false positives in Settings.
  • We don't crawl Google. We don't track keyword rankings. This is an AEO tool, not an SEO tool.
  • We never publish your scan. Public report links are opt-in per project.
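One plausible way to derive the ± band from mixed prompt results is to bootstrap-resample them and take the spread of the resampled scores. This is an illustrative assumption about the method, not our exact implementation.

```python
import random

WEIGHTS = {"strong": 1.0, "weak": 0.4, "invisible": 0.0}

def score(results):
    """Weighted percentage over a list of per-prompt statuses."""
    return round(sum(WEIGHTS[r] for r in results) / len(results) * 100)

def confidence_band(results, n_boot=1000, seed=0):
    """Midpoint plus half-width from a simple bootstrap over prompt results."""
    rng = random.Random(seed)
    samples = sorted(
        score([rng.choice(results) for _ in results]) for _ in range(n_boot)
    )
    lo = samples[int(0.025 * n_boot)]   # 2.5th percentile
    hi = samples[int(0.975 * n_boot)]   # 97.5th percentile
    mid = score(results)
    return mid, max(mid - lo, hi - mid)

results = ["strong"] * 12 + ["weak"] * 10 + ["invisible"] * 8
confidence_band(results)   # mixed results → a nonzero band around 53
```

Uniform results collapse the band to zero, which is when a "Stable" badge would make sense.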

Questions about the score?

We'd rather show you the math than hide behind a "proprietary algorithm". Email us and we'll walk you through your project's numbers.