CMO briefing deck
Enso Insights
GEO analytics for the answer-engine era — scorecards and narratives your leadership team can defend.
Dual-engine consensus · Live web evidence (~six months, recency-ranked) · Numbered citations [S1]…[Sn]
At a glance
Why marketing leadership needs a GEO scorecard they can defend
GEO analytics with citation-grade live research — measure how ChatGPT and Gemini cite, summarize, and recommend your brand. Evidence is scoped to a rolling ~six-month window of live web sources, ranked by recency, then screened before anything reaches the Board. Built so marketing leadership can defend the numbers.
The shift
Buyers are asking AI before they open your site
- Category questions are answered inside ChatGPT, Gemini, and AI Overviews — often without a click to your domain.
- Traditional SEO still matters for the SERP, but it does not tell you what the answer engines say about you this week.
- The new job: measure AI citation voice, narrative, and risk — with evidence the CFO and Legal won’t poke holes in.
Why now
Pipeline is moving upstream of your website
67%
of B2B buyers consult an AI assistant before opening a vendor site
Gartner Digital Buyer Survey, 2026 (as cited on ensoinsights.us)
0
legacy SEO suites that measure your share of AI-citation voice the way Enso does
Positioning claim — complementary to SEO, not a replacement for rank tracking
If your Board asks “what is AI saying about us?” — you need a specialist signal, not a keyword pivot.
Definition
GEO = Generative Engine Optimization
- Optimizing and measuring how your brand shows up in AI-generated answers (ChatGPT, Gemini, Perplexity, Claude, AI Overviews, …).
- Success looks like: inclusion in category answers, favorable and accurate narrative, credible citations, and stable story across engines.
- Enso ships a quantified scorecard on five GEO dimensions — not a DIY prompt playground.
Product
What Enso Insights does
- Runs a structured 44-prompt audit suite on Gemini 2.5 Pro (grounded web search), plus a GPT-4-class scorer over the same live evidence pack.
- Merges Brave Search LLM context (three parallel intel streams: negatives, competitive poaching, recent news). Every Brave call uses a rolling ~six-month freshness window; we then drop URLs with provably stale publish dates (parsed from title, snippet, or common URL paths) and strip low-credibility hosts so nothing embarrassing becomes an [Sn] row.
- Recency ordering: surviving sources sort newest publication first — so [S1] is your sharpest live signal, not a decade-old hit. The dashboard lists date hints and author bylines when the text allows. (A sketch of this filter-and-sort pass follows this list.)
- Returns one executive-ready artifact: scores, competitor deltas, risk & opportunity signals, numbered sources [S1]…[Sn], 30/60/90 plan, PDF + CSV.
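For the technically minded, here is a minimal TypeScript sketch of that filter-and-sort pass. The types, helper names, and exact cutoff are illustrative assumptions, not Enso's production code; only the behavior mirrors the bullets above.

```ts
// Illustrative sketch of the evidence-pack pass described above.
// RawHit, extractPublishDate, and the cutoff are hypothetical names;
// the behavior follows the bullets: drop provably stale hits,
// sort survivors newest-first, then number them [S1]…[Sn].

interface RawHit {
  url: string;
  title: string;
  snippet: string;
}

interface EvidenceRow extends RawHit {
  id: string;          // "S1", "S2", …
  publishedAt?: Date;  // only set when a date could be parsed
}

const WINDOW_MS = 183 * 24 * 60 * 60 * 1000; // ~six months

// Best-effort date extraction from title, snippet, or /YYYY/MM/ URL paths.
function extractPublishDate(hit: RawHit): Date | undefined {
  const urlMatch = hit.url.match(/\/(20\d{2})\/(\d{1,2})\//);
  if (urlMatch) return new Date(Number(urlMatch[1]), Number(urlMatch[2]) - 1);
  const textMatch = `${hit.title} ${hit.snippet}`.match(/\b20\d{2}-\d{2}-\d{2}\b/);
  if (textMatch) return new Date(textMatch[0]);
  return undefined; // undated hits are kept, not assumed stale
}

function buildEvidencePack(hits: RawHit[], now = new Date()): EvidenceRow[] {
  const cutoff = now.getTime() - WINDOW_MS;
  return hits
    .map(hit => ({ hit, publishedAt: extractPublishDate(hit) }))
    // Remove only hits whose parsed date *proves* they are out of window.
    .filter(({ publishedAt }) => !publishedAt || publishedAt.getTime() >= cutoff)
    // Newest publication first; undated hits sink below dated ones.
    .sort((a, b) => (b.publishedAt?.getTime() ?? 0) - (a.publishedAt?.getTime() ?? 0))
    .map(({ hit, publishedAt }, i) => ({ ...hit, publishedAt, id: `S${i + 1}` }));
}
```

Undated hits survive the window check (only a provably stale date is grounds for removal) but sink below dated ones in the newest-first ordering.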
Scorecard
Five GEO dimensions → one consensus view
- Awareness — does AI mention you in unbranded category prompts?
- Authority — are claims tied to credible, traceable evidence?
- Sentiment — how positively or skeptically does language read?
- Consistency — do the two engines tell the same story? (Disagreement is a finding.)
- Defensibility — moat / competitive pressure narrative from the suite.
Differentiator ①
Dual-engine consensus — not a single-model guess
- A GPT-4-class engine and Gemini can diverge on the same evidence; averaging alone would hide the signal.
- Enso surfaces consensus and dispersion — when engines disagree, that hits Consistency so you see narrative drift early (see the sketch after this list).
- If OpenAI is unavailable, the product degrades gracefully to Gemini-only (clearly labeled).
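A minimal sketch of how a consensus-plus-dispersion view could be computed from two engines' dimension scores; the shapes, threshold, and function names below are hypothetical, not Enso's actual rubric.

```ts
// Hypothetical shapes: one score per GEO dimension, per engine (0–100).
type Dimension = "awareness" | "authority" | "sentiment" | "consistency" | "defensibility";
type EngineScores = Record<Dimension, number>;

interface ConsensusRow {
  dimension: Dimension;
  consensus: number;   // midpoint of the two engines
  dispersion: number;  // absolute gap, reported alongside the midpoint
  flagged: boolean;    // large gaps surface as a Consistency finding
}

const DISPERSION_FLAG = 15; // illustrative threshold, not Enso's real value

function consensusView(gemini: EngineScores, gpt: EngineScores): ConsensusRow[] {
  return (Object.keys(gemini) as Dimension[]).map(dimension => {
    const gap = Math.abs(gemini[dimension] - gpt[dimension]);
    return {
      dimension,
      consensus: (gemini[dimension] + gpt[dimension]) / 2,
      dispersion: gap, // never hidden inside the average
      flagged: gap >= DISPERSION_FLAG,
    };
  });
}
```

The design point: a midpoint alone would erase disagreement, so the gap is carried forward as its own signal and large gaps become a Consistency finding rather than a rounding error.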
Differentiator ②
Citation-grade sources — richness without embarrassment
- Every grounded signal can point to [Sn] rows that map to real URLs + snippets your team can open.
- Time box + weighting: Brave research is constrained to roughly the last six months; anything with a dateline that proves it is older is removed. What remains is ordered newest-first so fresher articles naturally outrank older in-window pieces — no fake “AI score” layered on top of the web.
- We drop employer gossip mills, vote-heavy forums, B2C star-rating farms, and ephemeral social — before they become citations (a host-screen sketch follows this list).
- We keep B2B buyer venues (G2, Capterra / GetApp / Software Advice, PeerSpot) where CMOs already expect category signal.
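A sketch of that host screen under stated assumptions: the kept B2B venues are the ones named above, while the denied hosts are illustrative stand-ins for the categories listed, not a real Enso blocklist.

```ts
// Illustrative host screen for citation candidates. DENIED_HOSTS entries
// are example stand-ins for the categories named above; KEPT_B2B_VENUES
// are the venues the deck names explicitly.

const DENIED_HOSTS = [
  "glassdoor.com",  // example: employer gossip mill
  "reddit.com",     // example: vote-heavy forum
  "trustpilot.com", // example: B2C star-rating farm
  "x.com",          // example: ephemeral social
];

const KEPT_B2B_VENUES = [
  "g2.com", "capterra.com", "getapp.com",
  "softwareadvice.com", "peerspot.com",
];

function isCitable(url: string): boolean {
  const host = new URL(url).hostname.replace(/^www\./, "");
  if (KEPT_B2B_VENUES.some(v => host === v || host.endsWith(`.${v}`))) return true;
  return !DENIED_HOSTS.some(v => host === v || host.endsWith(`.${v}`));
}
```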
Deliverables
What your team walks away with
- Live dashboard — scorecard, confidence, competitor head-to-head, Market sources (rolling ~six months, newest-first, date + author hints where the text allows), historical trends (paid tiers).
- AI executive summary — tight narrative for Monday standups; trend line for the Board.
- PDF + CSV — partner-ready exports; PDF rendered from the live report.
- 30 / 60 / 90 plan — actions tagged Engineering / Marketing / Strategy with impact levels.
Positioning
Specialist GEO — not a bolt-on SEO tab
| Topic | Legacy SEO suite | “Ask ChatGPT yourself” | Enso Insights |
| --- | --- | --- | --- |
| AI citation & narrative | Not core | Ad hoc, not reproducible | Yes — rubric + engines |
| Dual-engine check | — | No | Yes |
| Curated research URLs | — | No | Yes — filtered pack |
| ~6-month web window + recency order | — | No | Yes — Brave freshness + date QA |
| Trends & exports | Rank-focused | Manual | Yes (paid) |
Honest carve-out: Enso does not replace keyword research, technical SEO crawls, or rank tracking — it answers the AI surface question.
Commercial
Pricing snapshot (self-serve where enabled)
| Tier | Price | Best for |
| --- | --- | --- |
| Free audit | $0 | One brand, one full-depth run — same engines & rubric as Pro; no reruns / no portfolio |
| Single Audit | $99 one-time | Board deck or competitive snapshot on an extra brand — no subscription |
| Pro | $199 / mo | One brand, locked — unlimited reruns & trend history |
| Team | $499 / mo | Up to 5 brands with swaps — agencies & multi-brand operators |
Trust
Security & data handling (headlines)
- Supabase with row-level security — your audits stay in your tenant.
- No training on your data for model fine-tuning — product + legal stance (see Security & DPA pages).
- Gates on the API — cache, in-flight lock, circuit breaker, and a (paid-tier) daily fresh-rerun cap so costs stay predictable (sketched below).
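A minimal sketch of those gates, assuming in-memory stores and illustrative thresholds; a real deployment would back them with shared state (e.g. the database) and a daily reset job.

```ts
// Minimal sketch of the request gates named above: cache, in-flight
// lock, circuit breaker, and a daily fresh-rerun cap. All names,
// thresholds, and the in-memory Maps are illustrative assumptions.

const cache = new Map<string, unknown>();             // audit results by brand key
const inFlight = new Map<string, Promise<unknown>>(); // dedupe concurrent runs
const rerunsToday = new Map<string, number>();        // per-key count; reset daily by a scheduler (omitted)

let consecutiveFailures = 0;
let breakerOpenUntil = 0;
const BREAKER_THRESHOLD = 3;        // failures before the breaker opens
const BREAKER_COOLDOWN_MS = 60_000; // illustrative cooldown
const DAILY_RERUN_CAP = 10;         // illustrative cap, not a published limit

async function gatedAudit(key: string, run: () => Promise<unknown>): Promise<unknown> {
  if (cache.has(key)) return cache.get(key);           // 1. cache hit
  const pending = inFlight.get(key);
  if (pending) return pending;                         // 2. in-flight lock
  if (Date.now() < breakerOpenUntil)
    throw new Error("circuit open; try again later");  // 3. circuit breaker
  const used = rerunsToday.get(key) ?? 0;
  if (used >= DAILY_RERUN_CAP)
    throw new Error("daily fresh-rerun cap reached");  // 4. cost cap

  const promise = run()
    .then(result => {
      consecutiveFailures = 0;
      cache.set(key, result);
      rerunsToday.set(key, used + 1);
      return result;
    })
    .catch(err => {
      if (++consecutiveFailures >= BREAKER_THRESHOLD)
        breakerOpenUntil = Date.now() + BREAKER_COOLDOWN_MS;
      throw err;
    })
    .finally(() => inFlight.delete(key));

  inFlight.set(key, promise);
  return promise;
}
```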
Roadmap
Engines & surface area
- Today: Gemini 2.5 Pro (grounded) + GPT-4 class scorer + Brave LLM context — ~six-month freshness scope on Brave fetches, with newest-first ordering in the evidence pack.
- On the roadmap (as published on /methodology): Perplexity Sonar and Anthropic Claude — leading U.S. answer engines — targeted for Q3 2026.
- Why it matters: your buyers won’t standardize on one assistant — measurement has to follow the ensemble, not a single vendor chatbot.