Enso Insights
Product philosophy · April 19, 2026 · 8 min read

What we won't ship to your dashboard, on purpose

Most product roadmaps are a long list of what's coming. Ours is shorter on purpose. More useful, we think, is the list of things we've explicitly decided not to build, with the reasoning. Here are seven of them.

Most company blogs in this category publish roadmaps. The roadmap post is reassuring: it tells the buyer that whatever they wish the product did, it’s probably on the way. The roadmap is also, in our experience, the most misleading artifact a SaaS company ships. It implies that every imaginable feature is desirable, that the only question is sequencing, and that “more” is the direction the product should always be moving.

We disagree, hard. The decisions that have shaped Enso most have not been the decisions about what to build — they’ve been the decisions about what to refuse to build. This post is the deliberate version of that list: seven things we’ve explicitly chosen not to ship, why we made the call, and what would have to change for us to revisit each one. The point is not to virtue-signal restraint. The point is to be on the record about the edges of the product so that buyers know what they’re actually getting.

1. Multi-brand tracking on a single subscription

We won’t let one subscription monitor more than one brand at the depth we promise. The reflexive reaction from agencies is “but we manage twelve brands.” We know. The depth-by-design tier exists because every time we’ve seen a tool offer multi-brand at a single-brand price point, the depth collapses: the prompt sets become generic, the citation analysis becomes shallow, and the executive artifact becomes a templated report that doesn’t actually reflect any individual brand’s position. Agencies who need real multi-brand depth need real multi-brand pricing. We have a tier for that. What we won’t do is fake it.

2. An “AI keyword tool”

The single most-requested feature we’ve heard from SEO-trained marketers is some version of “give me the list of AI prompts I should optimize for, ranked by volume.” The implicit ask is that GEO should be keyword research, but for AI. We won’t build it, for one structural reason: there is no “volume” signal for AI prompts the way there is for search keywords. Search engines tell you exactly how many people searched a term. AI assistants don’t and likely never will, because they treat conversations as private. Anyone shipping an “AI keyword volume” tool is making the numbers up — usually by inferring them from search-engine keyword volumes, which defeats the whole point of the GEO discipline.

3. An AI-content generator inside the product

Diagnostic tools that bolt on a content generator do it because content-generation usage is sticky and inflates seat-time metrics. We have no plans to ship one. Our product’s job is to tell you what to fix and where the gap is — the writing of the actual content is its own craft, done by your team or your agency, with whatever tools they prefer. Bolting on a generic generator would let us claim a fuller workflow, but it would also let every customer ship lower-quality content under the badge of our methodology. Not a trade we want to make.

4. Real-time alerts for every citation event

Citation events are noisy. A single ChatGPT answer can cite you on Tuesday and not on Thursday for reasons that have nothing to do with your brand — different prompt phrasing, different grounding-source freshness, different random seed. Shipping a real-time alert for every citation event would train customers to react to noise. We send periodic digest summaries instead, and we surface anomalies that survive a smoothing window. The product philosophy here is that good measurement protects the customer from over-reacting, not the other way around.

A measurement system that fires an alert every time a number wiggles isn’t a measurement system. It’s a slot machine that keeps your team logged in.
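To make the "survives a smoothing window" idea concrete, here is a minimal sketch of that kind of filter. This is illustrative only, not Enso's actual implementation: the window size, persistence requirement, and threshold are invented numbers, and `surviving_anomalies` is a hypothetical helper name.

```python
def surviving_anomalies(daily_rates, window=7, persist=3, threshold=0.15):
    """Flag the start of any deviation from the trailing moving average
    that persists for `persist` consecutive days.

    Illustrative sketch only: window, persist, and threshold are
    invented parameters, not production values.

    daily_rates: per-day citation rates in [0, 1].
    Returns the indices of days where a durable shift began.
    """
    flagged = []
    streak = 0  # consecutive days outside the tolerance band
    for day in range(window, len(daily_rates)):
        baseline = sum(daily_rates[day - window:day]) / window
        if abs(daily_rates[day] - baseline) > threshold:
            streak += 1
            if streak == persist:
                # Report the day the deviation started, once.
                flagged.append(day - persist + 1)
        else:
            streak = 0  # a one-day wiggle never fires an alert
    return flagged
```

A single-day spike resets the streak and produces no alert; only a shift that holds up against the trailing baseline for several days surfaces. That is the difference between a digest and a slot machine.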

5. A public competitor leaderboard

Customers occasionally ask if we’ll publish public rankings of which brands are winning AI search in a category. We won’t. Public leaderboards in this space have three predictable consequences: they invite gaming (vendors specifically optimizing to show well in the leaderboard methodology rather than for actual buyers), they inflame defensive responses from brands that score poorly (and we don’t want to be in the public-shaming business), and they shift our product from being a measurement tool the customer trusts to being a press-release engine that customers tolerate. We give competitive comparisons inside the customer’s own dashboard. We don’t publish them.

6. “Every engine” coverage

We’ve written about this separately, but it deserves a slot here for symmetry. We will not add an AI engine to our coverage matrix just because a competitor announced coverage of it. We will add an engine when buyer-side usage of that engine, in the categories our customers operate in, crosses a real threshold — meaningful and sustained share of category prompts, not press-release share. The current engines that meet that bar are ChatGPT and Gemini. The third engine that earns the slot will earn it on usage data, not on the marketing calendar.

7. Unlimited “bring your own prompts”

Customers occasionally ask if they can run the system on an arbitrary list of their own prompts, at unlimited volume. We allow custom prompts within the scoped tier; we don’t allow unbounded usage. This is not us being stingy — it’s a methodology guardrail. Self-curated prompt lists tend to drift toward prompts the brand wants to score well on, rather than prompts the brand’s buyers actually ask. Letting that drift run unchecked produces dashboards that show steady progress on a fictional category. We let customers add prompts, we let them weight the ones that matter to them, and we cap the size so the curated category set we built remains the spine of the score. Without that spine, the score loses its meaning.
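The "cap the custom set so the curated spine survives" guardrail can be sketched as a bounded blend. Again, this is an illustrative sketch, not Enso's scoring code: the 20% cap and the `blended_score` helper are invented for the example.

```python
def blended_score(curated, custom, custom_cap=0.2):
    """Blend curated-category prompt scores with customer-added ones,
    capping the custom set's total influence at `custom_cap`.

    Illustrative only: the 20% cap is an invented number, not a
    production weighting. `curated` and `custom` are lists of
    per-prompt visibility scores in [0, 1].
    """
    if not curated:
        # The curated set is the spine; without it the score is meaningless.
        raise ValueError("curated prompt set must be non-empty")
    curated_avg = sum(curated) / len(curated)
    if not custom:
        return curated_avg
    custom_avg = sum(custom) / len(custom)
    # Custom prompts can tilt the score, but never replace the spine.
    return (1 - custom_cap) * curated_avg + custom_cap * custom_avg
```

Even a custom list stacked entirely with prompts the brand aces can move the blended score only within the capped band, which is exactly the drift protection described above.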

What this list implies about the product roadmap

Read the seven items above as a single negative space and you can see the actual shape of what Enso is going to be for the next few years. We’re going to keep deepening the methodology layer (better prompt construction, better citation analysis, better cross-engine consensus). We’re going to keep improving the executive artifact (clearer narrative, better defensibility, easier to forward). We’re not going to widen into adjacent categories that would dilute the focus.

The best buyers we’ve worked with are the ones who find this clarifying rather than disappointing. They already have an SEO tool. They already have a content platform. What they don’t have is a deep, defensible read on AI visibility that’s opinionated about methodology. That’s the slot we’re designed for. Everything on this carve-out list is a slot we’re explicitly leaving for someone else.

When we’d revisit any of these

These aren’t religious commitments. They’re operating decisions based on what the category looks like in early 2026. Three things would cause us to revisit individual items on the list:

  • Buyer behavior shifts at scale. If a third AI engine genuinely becomes a meaningful share of category prompts in our customers’ markets, we’ll add it. The bar is sustained usage, not a launch event.
  • Methodology innovation closes a gap. If we figure out how to do, say, real-time alerting in a way that doesn’t train customers to react to noise, we’d ship it. The current refusal is a refusal to ship a known-bad implementation, not a refusal of the underlying need.
  • The buyer’s job changes. If the job a CMO is actually doing shifts substantially — and the shape of the executive artifact needs to change with it — we’ll change with it. We don’t expect this in 2026 or 2027.

If you’re evaluating us against a tool whose pitch is “we have all of the above plus more,” the question to ask isn’t which product has more features. It’s which set of carve-outs you find more honest. That’s the real product comparison.


Written by The Enso team. Have a question or correction? Email us.

Stop guessing how AI describes your brand.

Run your first audit in 45 seconds. No credit card. No sales call. Just a scorecard, a delta, and a 30/60/90-day plan.

Read the methodology