Enso Insights

Security & Trust

Built like the platforms your CISO already trusts.

Enso Insights runs on established cloud infrastructure with strong, independently attested security controls — and adds product-level guardrails on top. The full list of providers we rely on is published on our Subprocessors page.

GDPR aligned · CCPA aligned · DPA published · Sub-processors disclosed · Uniform terms (no negotiation)

How we operate

One set of terms. The same for every customer.

Enso Insights serves every customer on a single set of published terms — the same MSA, the same DPA, the same Privacy Policy, the same security posture, the same price. This is intentional. It lets us keep prices low, ship product weekly, and treat every customer equally — no most-favored-nation games, no quiet side-letters, no surprises in procurement.

If you can work with our standard terms, you can be running an audit in five minutes. If your procurement process requires bespoke contractual terms, formal third-party security attestations (SOC 2 Type II, ISO 27001), or business-associate agreements, we’re probably not the right fit yet — and that’s okay.

Reference: MSA §1.1 — Uniform Terms

Encryption everywhere

TLS 1.3 in transit. AES-256 at rest. Database backups encrypted with rotated keys managed by our infrastructure providers.
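As a concrete illustration of what "TLS 1.3 in transit" means at the client level, the sketch below configures Python's standard `ssl` module to refuse any connection older than TLS 1.3. This is an illustrative example only, not Enso Insights' actual server configuration.

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.3.
# A handshake with a server that only offers TLS 1.2 or below would fail.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

print(context.minimum_version)  # TLSVersion.TLSv1_3
```

The same floor can be enforced server-side; most managed platforms expose it as a configuration switch rather than code.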

Row-level security by default

Every query against your audit data is scoped by your user_id at the database layer. A bug in our application code cannot leak your data to another tenant.
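For readers who want to see what database-layer scoping looks like, here is a sketch of a Postgres Row-Level Security policy of the kind described above. The table name, column name, and the Supabase `auth.uid()` helper are illustrative assumptions, not Enso Insights' actual schema.

```sql
-- Illustrative RLS sketch (assumed schema, not production code).
ALTER TABLE audits ENABLE ROW LEVEL SECURITY;

CREATE POLICY audits_owner_only ON audits
  USING (user_id = auth.uid())        -- reads: only the caller's own rows
  WITH CHECK (user_id = auth.uid());  -- writes: cannot touch another tenant's rows
```

Because the policy is evaluated by the database on every query, an application-layer bug that forgets a `WHERE` clause still cannot return another tenant's rows.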

Least-privilege access

Customer accounts authenticate with email or Google OAuth via Supabase. Administrative access to production is limited to the Enso Insights operator, uses short-lived sessions, and is MFA-protected. There are no shared role accounts.

We never train on your data

Your prompts, audit outputs, and exports are not included in any model fine-tune, prompt-tuning corpus, or shared dataset. The no-training commitment is contractually enforced with each upstream LLM provider at the tier we use.

Incident response

We maintain a written 72-hour breach-response playbook. If Customer Personal Data is affected by a confirmed incident, affected customers receive written notice within the windows required by our DPA and by applicable breach-notification law.

Delete on request

Email us from your account address and we will remove your data from production systems, with backup copies expiring on the retention schedules of our underlying infrastructure providers. We confirm completion in writing.

Data flow

What happens when you run an audit.

In plain English, so you can skip the questionnaire.

  1. Your prompt leaves your browser over TLS 1.3

    Audit inputs (brand name, competitor set, market context, any free-text you add) are sent from your browser to the Enso Insights API over TLS 1.3. They are never logged in plaintext in any intermediary caching layer.

  2. We store it in a row-scoped table

    The prompt, the resulting audit job, and any derived scores are written to your row-scoped tables in Supabase (Postgres), protected by Row-Level Security policies that match every read and write to your user_id. We do not operate a shared table of customer prompts.

  3. We send it to the upstream AI engine under an enterprise/API tier

    The prompt is forwarded to the selected upstream engine (OpenAI, Google Gemini, etc.) under that provider’s enterprise or API data-handling tier. At the tiers we use, upstream providers are contractually prohibited from training on Customer Data and retain data only as necessary to fulfill the request and comply with their legal obligations. Specific retention windows per provider are listed on our Subprocessors page.

  4. The response comes back, gets scored, and is stored with your audit

    The engine response is scored against our rubric and stored alongside the audit job — again row-scoped to your account. Exports and executive summaries are generated on demand from that stored data.

  5. Nothing you submit is used to train a model

    Not by us, and not by our upstream LLM providers at the tiers we use. This is a contractual commitment in the MSA (§4.2) and in the Data Processing Agreement, not a best-effort promise.

References: MSA §4 (Data) · DPA · Subprocessors
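The row-scoping in steps 2 and 4 can be sketched as a toy model: every read is filtered by `user_id` before any other logic runs, so a caller can never see another tenant's rows. All names below are illustrative assumptions, not Enso Insights' actual schema or code.

```python
from dataclasses import dataclass, field

@dataclass
class AuditStore:
    """Toy stand-in for a row-scoped table; each row is (user_id, audit)."""
    rows: list = field(default_factory=list)

    def write(self, user_id: str, audit: dict) -> None:
        self.rows.append((user_id, audit))

    def read(self, user_id: str) -> list:
        # The scope filter is applied unconditionally, mirroring an RLS
        # policy enforced by the database rather than application code.
        return [audit for uid, audit in self.rows if uid == user_id]

store = AuditStore()
store.write("alice", {"prompt": "audit brand X", "score": 87})
store.write("bob", {"prompt": "audit brand Y", "score": 62})

print(store.read("alice"))  # only Alice's audit is visible
```

In the real system the filter lives in the database's RLS layer, which is what makes the guarantee hold even if application code has a bug.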

Subprocessors

Every vendor that touches your data.

We list them publicly. We notify you in writing 30 days before adding a new one.

Vendor         Purpose                          Region
Supabase       Postgres database, auth, RLS     US-West
Vercel         Hosting and edge network         Global
OpenAI         GPT-4 class scoring engine       US (no-training, low-retention API tier)
Google Cloud   Gemini 2.5 Pro scoring engine    US-Central (no-training Vertex AI tier)
Brave          LLM context grounding            US

See the full subprocessors page for compliance certifications, processing details, and change-notification subscription.

Read everything before you sign up.

Our MSA, DPA, Privacy Policy, and Subprocessors list are public. The same terms apply to every customer — no negotiation, no exceptions, no surprises in procurement.