## TL;DR
Omni positions itself as "the AI analytics platform" — BI and embedded analytics with a governed semantic model, where an agentic AI generates semantic queries against the model and Omni compiles them to SQL. Mitzu is an agentic product analytics platform. The Analytics Agent assembles typed analysis specifications (funnel, retention, segmentation, journey, cohort) and a deterministic query engine turns them into SQL. Both are warehouse-native — data stays in Snowflake, BigQuery, Databricks, Redshift, ClickHouse, Postgres, and similar.
Use this comparison if you are evaluating warehouse-native AI analytics platforms and want a clear read on where Omni and Mitzu actually overlap, where they diverge, and which one fits the questions your team asks most often.
Omni recently raised a Series C at a $1.5B valuation and is positioning aggressively as "the AI analytics platform for the enterprise." That puts it in front of teams who already use — or are evaluating — Mitzu for product analytics on the warehouse. Both tools are warehouse-native, both ground their AI in a semantic model, both surface reviewable SQL. They are not, however, the same shape of tool. Omni is BI-shaped: dashboards, reporting, spreadsheets, embedded analytics. Mitzu is product-analytics-shaped: funnels, retention, journeys, cohorts, segmentation, deep-dive investigations. The comparison is fair, and the right answer for many teams is "both, at different layers."
## What is Omni?
Omni is an AI analytics platform built by ex-Looker co-founders. The public product page positions it as a unified surface for self-service BI, embedded analytics, and AI-driven analysis — all anchored by a governed semantic model that Omni calls "the semantic foundation for native and external AI agents." Data stays in the customer's warehouse (Snowflake, BigQuery, Databricks, Redshift, and similar); Omni reads it in place.
Omni's semantic layer follows the LookML lineage of its founders: metrics, dimensions, joins, relationships, business logic, row-level filters, and access controls — declared in code and version-controlled in Git. The AI is positioned as another consumer of that model. Per Omni's own materials, the agent has a coordinator that plans actions, selects tools, executes queries, evaluates results, and decides what to do next. Critically, the AI generates semantic queries against the model rather than raw SQL, and Omni then compiles those semantic queries to SQL.
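To make "an AI-generated semantic query compiled to SQL" concrete, here is a deliberately toy sketch in Python — the field names and the compiler are invented for illustration; Omni's actual semantic query format is not public in this form:

```python
# Toy semantic model: metric and dimension SQL fragments an analyst authored.
MODEL = {
    "metrics": {"signups": "count(DISTINCT user_id)"},
    "dimensions": {"acquisition_channel": "users.acquisition_channel"},
}

def compile_semantic_query(q: dict) -> str:
    """Compile a semantic query (what the AI emits) into SQL.

    The AI chooses WHICH metrics, dimensions, and filters to request;
    the compiler only knows how to render what the model defines.
    """
    select = [f"{MODEL['dimensions'][d]} AS {d}" for d in q["dimensions"]]
    select += [f"{MODEL['metrics'][m]} AS {m}" for m in q["metrics"]]
    sql = f"SELECT {', '.join(select)} FROM events JOIN users USING (user_id)"
    if q.get("filters"):
        sql += " WHERE " + " AND ".join(q["filters"])
    return sql + " GROUP BY 1"

query = {"metrics": ["signups"],
         "dimensions": ["acquisition_channel"],
         "filters": ["event_name = 'signup'"]}
print(compile_semantic_query(query))
```

An unknown metric or dimension raises a `KeyError` here, which is the governance point: the AI can only ask for what the model defines — but it is still the AI deciding what to ask for.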
Omni's main surfaces:
- Workbooks — a hybrid surface combining point-and-click exploration, SQL, and a spreadsheet-style interface. The flagship analyst experience.
- Dashboards and reports — the classic BI consumption layer, built from the semantic model.
- AI chat — natural-language interface with maintained conversation context; can build calculations, filter, summarise, explain charts, compare time periods.
- Embedded analytics — Omni dashboards and workbooks embedded inside customer-facing products with multi-tenant permissions.
- MCP server — exposes governed Omni queries to external AI tools (Claude, ChatGPT, Cursor, VS Code) so they inherit the same metrics and permissions.
- Git version control — the semantic model lives in Git; pull requests, branches, and review apply to metric definitions.
Omni's strength is BI breadth on a governed semantic foundation. It handles cross-domain analytical questions — revenue, marketing performance, operational reporting, product KPIs, finance — as long as someone has modelled them into the semantic layer.
## What is Mitzu?
Mitzu is an agentic product analytics platform that runs on your data warehouse and answers behavioural questions through natural-language conversation, without writing SQL. The category is narrower than agentic BI on purpose: Mitzu is specialised for product, growth, and marketing behavioural questions on event data.
Mitzu meets users in three places: the in-app Analytics Agent, the Slack Agent in any public or private channel, and a remote MCP server that exposes Mitzu's capabilities to any MCP-compatible agent (Claude, Cursor, ChatGPT, custom). Setup is handled by a Configuration Agent that scans the warehouse, recognises common event schemas (Segment, Snowplow, Firebase, GA4, custom), maps user and group identifiers, picks up dimension tables, and builds the semantic layer automatically. Analysts review and adjust; nobody hand-writes YAML.
The trust differentiator: Mitzu's agent does not write SQL — and does not write semantic queries either. It assembles structured analysis specifications: funnel steps with a conversion window, retention cohorts and return events, segmentation filters with sampled property values, journey definitions with starting events and depth. A deterministic query engine — the same code that has been generating SQL for Mitzu's UI for years — turns those specifications into SQL. The same specification produces the same SQL every time. Methodology errors that LLMs reliably make (a funnel without a conversion window, a retention chart that double-counts, a cohort defined off the wrong join) are guard-railed by the engine, not by prompt engineering.
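A minimal sketch of what "typed specification plus deterministic compiler" means, with hypothetical names (`FunnelSpec`, `compile_funnel` — Mitzu's real engine is not public, and this renders only the first funnel step):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunnelSpec:
    """Typed funnel specification: the agent fills fields, never writes SQL."""
    first_event: str
    next_event: str
    conversion_window_days: int
    breakdown: str
    lookback_days: int

    def __post_init__(self):
        # Guard-rail in the type, not in a prompt: a funnel without a
        # positive conversion window is rejected outright.
        if self.conversion_window_days <= 0:
            raise ValueError("a funnel requires a positive conversion window")

def compile_funnel(spec: FunnelSpec) -> str:
    """Pure function of the spec: same spec, byte-identical SQL."""
    return (
        f"-- 2-step funnel, {spec.conversion_window_days}-day window "
        f"(step_2 join elided for brevity)\n"
        f"SELECT {spec.breakdown}, count(DISTINCT user_id) AS step_1_users\n"
        f"FROM events\n"
        f"WHERE event_name = '{spec.first_event}'\n"
        f"  AND event_time >= now() - INTERVAL {spec.lookback_days} DAY\n"
        f"GROUP BY {spec.breakdown};"
    )

spec = FunnelSpec("signup", "activation", 7, "acquisition_channel", 30)
assert compile_funnel(spec) == compile_funnel(spec)  # deterministic by construction
```

The LLM's only job in this design is to choose field values; a spec that violates the methodology never reaches the compiler.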
Mitzu's semantic layer is event-centric and product-analytics-specialised: events, event properties, entities (users, sessions, accounts, teams), dimension properties on those entities, and — critically — sampled property values. Filters are suggested from real values in the warehouse, not invented by the AI. Saved insights, dashboards, and named cohorts feed back into the semantic layer as additional context for future questions.
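Sampling filter values is conceptually a small warehouse query. A sketch under stated assumptions — an in-memory SQLite table standing in for the warehouse, with illustrative table and column names:

```python
import sqlite3

# Stand-in for an events table in the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_name TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("checkout", "DE"), ("checkout", "AT"), ("checkout", "CH"),
     ("checkout", "DE"), ("signup", "US")],
)

def sample_property_values(conn, column: str, limit: int = 100) -> list[str]:
    """Suggest filter values from real rows, so none are invented.

    A real implementation would validate `column` against the catalog
    before interpolating it into SQL.
    """
    rows = conn.execute(
        f"SELECT DISTINCT {column} FROM events ORDER BY 1 LIMIT ?", (limit,)
    )
    return [r[0] for r in rows]

print(sample_property_values(conn, "country"))  # → ['AT', 'CH', 'DE', 'US']
```

Because suggestions come from real rows, a filter on "DACH countries" resolves to values that actually exist in the data rather than region codes the model guessed.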
## Omni vs Mitzu: side-by-side
| | Omni | Mitzu |
|---|---|---|
| Category | AI analytics platform — BI, reporting, embedded analytics, AI chat | Agentic product analytics on the warehouse |
| Who composes the query | AI generates a semantic query against the semantic model; Omni compiles it to SQL | Agent assembles a typed analysis specification; deterministic engine compiles it to SQL |
| Semantic layer shape | BI-shaped — metrics, dimensions, joins, business logic; LookML-style, Git-versioned | Product-analytics-shaped — events, event properties, entities, dimension properties, sampled values |
| Semantic layer authoring | Hand-authored by analysts / data engineers (with AI assistance in-app) | Auto-built by the Configuration Agent from warehouse introspection; analyst reviews |
| Methodology primitives | Charts built from metrics and dimensions; funnels, retention, journeys assembled per question | Funnel, retention, segmentation, journey, cohort as first-class primitives in the engine |
| Filter values | User-typed or model-defined | Sampled from real warehouse values — no invented region codes or feature names |
| Warehouses supported | Snowflake, BigQuery, Databricks, Redshift, Postgres, and others | Snowflake, BigQuery, Databricks, ClickHouse, Redshift, Athena, Trino/Presto, Postgres, Firebolt, Starburst, MS Fabric |
| Surfaces | Workbooks, dashboards, AI chat, embedded analytics, MCP server, APIs | In-app Analytics Agent, Slack Agent, remote MCP server |
| Reviewable SQL | ✅ shown for inspection | ✅ shown for inspection |
| Embedded analytics | ✅ first-class — multi-tenant, embedded in customer products | ❌ not the focus |
| Slack agent | ❌ | ✅ @mitzu in any channel; thread context shared |
| Best for | BI breadth on a governed semantic foundation — dashboards, reporting, embedded, cross-domain analysis | Product, growth, and marketing behavioural questions where methodology must be right |
## SQL examples: the same question, two architectures
Take a typical product analytics question Mitzu is built for: "What is our 7-day signup-to-activation conversion rate, broken down by acquisition channel, for the last 30 days?"
### Omni: SQL compiled from an AI-generated semantic query
In Omni, the AI generates a semantic query against the semantic model — assuming an analyst has previously modelled signup and activation as events, defined acquisition_channel as a dimension on the user entity, and authored a conversion-window measure or the join logic to support one. Omni then compiles the semantic query to SQL. Plausible output:
```sql
-- Plausible compiled SQL from an Omni semantic query.
-- The exact shape depends on how the semantic model was authored
-- (was a 7-day window measure defined? is channel on the user
-- or on the signup event? is "activation" a single event or a model?).
WITH signups AS (
    SELECT u.user_id,
           u.acquisition_channel,
           min(e.event_at) AS signup_at
    FROM events e
    JOIN users u USING (user_id)
    WHERE e.event_name = 'signup'
      AND e.event_at >= current_date - INTERVAL '30 days'
    GROUP BY 1, 2
),
activations AS (
    SELECT user_id, min(event_at) AS activated_at
    FROM events
    WHERE event_name = 'activation'
    GROUP BY user_id
)
SELECT s.acquisition_channel,
       count(DISTINCT s.user_id) AS signups,
       count(DISTINCT CASE
                WHEN a.activated_at BETWEEN s.signup_at
                                        AND s.signup_at + INTERVAL '7 days'
                THEN s.user_id END) AS activated_in_7d,
       round(100.0 * count(DISTINCT CASE
                WHEN a.activated_at BETWEEN s.signup_at
                                        AND s.signup_at + INTERVAL '7 days'
                THEN s.user_id END)
             / nullif(count(DISTINCT s.user_id), 0), 1) AS conv_pct
FROM signups s
LEFT JOIN activations a USING (user_id)
GROUP BY 1
ORDER BY signups DESC;
```

The semantic model carries real constraints, so it is harder for the AI to invent a channel value or join the wrong table. The methodology question — what counts as a 7-day window, whether activation events before the signup count, whether a user with multiple signups is double-counted — is still resolved by the model definitions and by what the AI chose to ask for. If a strict funnel-window measure was not authored into the model, the AI has to compose one inside the semantic query; how reliably it does that across phrasings is an empirical question.
### Mitzu: SQL from a deterministic engine
The Mitzu agent does not write the SQL and does not author a semantic query. It assembles a funnel specification — roughly: { first_event: "signup", subsequent_events: ["activation"], conversion_window: "7d", breakdown: "acquisition_channel", date_range: "last_30_days" } — and the deterministic engine emits the same SQL every time:
```sql
-- Engine output for a 2-step funnel with a 7-day conversion window,
-- broken down by acquisition channel, for the last 30 days.
-- Same spec → same SQL.
WITH step_1 AS (
    SELECT user_id,
           min(event_time) AS step_1_at,
           any(properties['channel']) AS channel
    FROM events
    WHERE event_name = 'signup'
      AND event_time >= now() - INTERVAL 30 DAY
      AND event_time < now()
    GROUP BY user_id
),
step_2 AS (
    SELECT s1.user_id,
           s1.channel,
           min(e.event_time) AS step_2_at
    FROM step_1 s1
    INNER JOIN events e
        ON e.user_id = s1.user_id
       AND e.event_name = 'activation'
       AND e.event_time > s1.step_1_at
       AND e.event_time <= s1.step_1_at + INTERVAL 7 DAY
    GROUP BY s1.user_id, s1.channel
)
SELECT s1.channel AS channel,
       count(DISTINCT s1.user_id) AS step_1_users,
       count(DISTINCT s2.user_id) AS step_2_users,
       round(count(DISTINCT s2.user_id)
             / nullIf(count(DISTINCT s1.user_id), 0) * 100, 1) AS conv_pct
FROM step_1 s1
LEFT JOIN step_2 s2 USING (user_id)
GROUP BY channel
ORDER BY step_1_users DESC;
```

The conversion window is enforced strictly (activation must come after the signup and within 7 days). Distinct users prevent double-counting. The channel value comes from the signup row, so attribution is consistent. The engine has been generating this shape of SQL in production for years; the agent's job is to assemble the specification, not to author the query.
The SQL is shown to the analyst as a verification artifact, not the agent's authored work. No matter how the question is phrased — "signup to activated within a week, by channel," "7-day activation funnel split by source" — the same specification compiles to the same SQL.
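Because the engine is a pure function of the specification, phrasing-invariance reduces to spec equality. A small sketch with illustrative field names matching the funnel spec shown earlier: one canonical serialisation per spec means one SQL output per spec.

```python
import json

def canonical_key(spec: dict) -> str:
    """Canonical serialisation: field order and phrasing do not matter."""
    return json.dumps(spec, sort_keys=True)

# "signup to activated within a week, by channel"
a = {"first_event": "signup", "subsequent_events": ["activation"],
     "conversion_window": "7d", "breakdown": "acquisition_channel",
     "date_range": "last_30_days"}
# "7-day activation funnel split by source" — same spec, assembled
# in a different order, as a different phrasing might produce it.
b = {"date_range": "last_30_days", "breakdown": "acquisition_channel",
     "conversion_window": "7d", "subsequent_events": ["activation"],
     "first_event": "signup"}

assert canonical_key(a) == canonical_key(b)  # same spec → same SQL downstream
```

Once two phrasings resolve to the same canonical spec, identical SQL follows mechanically; the only place phrasing can matter is in which spec the agent assembles.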
## Other product analytics questions, side by side
The shape difference shows up across the methodology surface area Mitzu is built for. Each row is a question Mitzu's engine handles as a first-class primitive; in Omni each is a chart an analyst assembles from metrics, dimensions, and the relationships the semantic model exposes.
| Question type | Omni | Mitzu |
|---|---|---|
| "Show retention by weekly cohort, signups vs. activated users, last 12 weeks" | Compose a cohort table from a retention measure if modelled — accuracy depends on how the cohort logic was authored | Retention primitive: cohort = signup, return event = any event (or specified), bucket = weekly. Engine handles cohort isolation and bucketing. |
| "Top 10 most-trodden user journeys from app open over the next 5 events" | Bespoke SQL or notebook work — journeys are not a native BI primitive | Journey primitive: starting event = app_open, depth = 5. Engine emits the Sankey / tree query. |
| "Users in DACH who completed checkout and have NPS < 7" | Filter on country dimension + measure on NPS — needs both modelled into the semantic layer | Segmentation with sampled country values from the warehouse + NPS join surfaced in the data catalog |
| "Why did week-2 retention drop in November?" | Returns a chart; root-cause investigation is an analyst job | Agent investigates from multiple angles — breakdowns, segment comparisons, correlated events — and returns a synthesised report |
| "Build a tenant-isolated dashboard our customers see in our app" | First-class embedded analytics with multi-tenant permissions | Not Mitzu's focus — point to Omni or an embedded BI tool |
| "Cross-functional dashboard with finance, sales, and product KPIs" | Native BI strength — once metrics are modelled, dashboards combine them | Product KPIs are first-class; finance and sales KPIs depend on what's in the warehouse and how Mitzu's catalog exposes them |
## UI differences in one line each
- Omni workbook — a hybrid grid that combines a spreadsheet, a SQL editor, and a point-and-click query builder; the analyst's primary workspace.
- Omni dashboard — composed tiles backed by the semantic model; consumers filter and drill; analysts edit upstream metrics in Git.
- Omni AI chat — a side panel or full chat surface; conversation context maintained; calculations, filters, and charts created from prompts.
- Mitzu Explore — direct point-and-click funnel, retention, segmentation, and journey builders; outputs are charts plus the SQL behind them.
- Mitzu Analytics Agent — a chat surface where the agent assembles funnel / retention / segmentation specifications, runs them through the deterministic engine, and replies with chart + summary + reviewable SQL.
- Mitzu Slack Agent — @mitzu in any channel; the agent reads thread context, runs the same engine, and posts answers as cards.
## Strengths and trade-offs

### Omni
| Strengths | Trade-offs |
|---|---|
| BI breadth — dashboards, reporting, spreadsheet-style modelling, embedded analytics in one platform. | Semantic model is BI-shaped (metrics + dimensions + joins). Product-analytics concepts like funnels with conversion windows or journey trees are not native primitives. |
| Governed semantic foundation — same model powers dashboards, AI chat, embedded, and external agents. | Semantic layer is hand-authored; setting it up and keeping it current is data engineering work. |
| First-class embedded analytics — multi-tenant, secure, designed to ship inside customer-facing products. | AI still composes queries (against the semantic model rather than raw schema, but still composing). Methodology choices around funnels, retention, and attribution depend on what the model exposes. |
| Git-versioned model — pull requests and review for metric definitions. | Slack agent and out-of-the-box product analytics methodology are not the focus. |
| Strong fit for analytics teams that need one tool spanning BI, reporting, and embedded use cases. | Per the public materials, Omni does not currently provide a built-in way to test response accuracy — analyst review is the gate. |
### Mitzu
| Strengths | Trade-offs |
|---|---|
| The agent does not write SQL and does not author a semantic query. A deterministic engine compiles a typed specification into SQL — same input, same SQL, same answer. | Narrower scope — Mitzu is built for product, growth, and marketing behavioural questions, not classic BI dashboarding or financial reporting. |
| Auto-built semantic layer specialised for product analytics — events, event properties, entities, dimension properties, and sampled filter values. No hand-authored YAML. | Requires event data already in the warehouse. Companies without a warehouse, or with events trapped in a third-party tool that will not export, are not the fit. |
| Funnel, retention, segmentation, journey, and cohort are first-class primitives — methodology errors LLMs reliably make are guard-railed by the engine. | Open-ended statistical exploration belongs in a notebook (Hex, Deepnote, Jupyter), not in Mitzu. |
| Warehouse-agnostic — Snowflake, BigQuery, Databricks, ClickHouse, Redshift, Athena, Trino/Presto, Postgres, Firebolt, Starburst, and MS Fabric. | Embedded analytics for customer-facing products is not the focus. |
| Three surfaces share one semantic layer: in-app Analytics Agent, Slack Agent, and a remote MCP server for any external agent. | — |
| Per-editor seat pricing with unlimited events; warehouse compute stays under the customer's control. | — |
## Capability scorecard
Where each tool stands on the capabilities that matter most when product analytics and AI BI are both on the table.
| Capability | Omni | Mitzu |
|---|---|---|
| Runs on the customer's warehouse | ✅ | ✅ |
| Multi-warehouse support (Snowflake, BigQuery, Databricks, Redshift, ClickHouse, Postgres, Trino…) | ✅ | ✅ |
| Auto-built semantic layer (no hand-authored model required) | ❌ hand-authored | ✅ Configuration Agent builds it |
| Semantic layer specialised for product analytics (events, properties, entities, sampled values) | ❌ BI-shaped | ✅ |
| Deterministic SQL engine (agent does not compose the query) | ❌ AI composes semantic queries | ✅ |
| Native funnel methodology with conversion window | ❌ | ✅ |
| Native retention methodology with cohort bucketing | ❌ | ✅ |
| Native segmentation and journey primitives | ❌ | ✅ |
| Sampled property values for filters | ❌ | ✅ |
| Reviewable SQL surfaced for every answer | ✅ | ✅ |
| AI chat with natural-language questions | ✅ | ✅ |
| MCP server for external agents | ✅ | ✅ |
| Slack agent | ❌ | ✅ |
| First-class embedded analytics (multi-tenant) | ✅ | ❌ |
| Workbook / spreadsheet-style modelling | ✅ | ❌ |
| Git version control on the semantic model | ✅ | ❌ |
| Diagnostic deep-dives (root-cause, impact analysis, why-did-X-change investigations) | ❌ chart-first, analyst-driven | ✅ agent investigates across angles |
| Best fit for embedded BI in a customer-facing product | ✅ | ❌ |
| Best fit for product, growth, and marketing behavioural analytics | ❌ | ✅ |
## When to choose Omni, Mitzu, or both?
These are different layers, not substitutes. Omni is an AI BI platform with a governed semantic foundation. Mitzu is an agentic product analytics platform with an auto-built, event-centric semantic layer and a deterministic query engine. The right choice depends on which questions dominate your team's analytics workload.
- Choose Omni when the workload is BI-shaped — dashboards across finance, sales, marketing, and product; embedded analytics inside a customer-facing product; spreadsheet-style modelling on top of warehouse data; a single governed semantic model that powers everything.
- Choose Mitzu when product, growth, or marketing teams need to ask diagnostic behavioural questions — why did week-2 retention drop, did the new pricing page move trial-to-paid, which onboarding step has the highest drop-off — and you want methodology guard-rails the LLM cannot break, plus a Slack agent that brings answers to where the team already works.
- Run both when BI and product analytics are both first-class concerns. Let Omni own the BI surface and embedded analytics; let Mitzu own the product analytics methodology layer. Both stay warehouse-native, both surface reviewable SQL, and there is no data duplication — the warehouse is the single source of truth.
## FAQ

### Is Omni a product analytics tool?
Omni is an AI analytics platform with strong BI, embedded, and reporting capabilities on a governed semantic layer. Funnels, retention, journeys, and cohorts are not native primitives in the way they are in a dedicated product analytics tool — they are charts the analyst assembles from the semantic model. Teams who need first-class product analytics methodology typically pair Omni with a product analytics layer or use a dedicated product analytics platform.
### Does Mitzu replace Omni for BI and dashboards?
No. Mitzu is built for product analytics — behavioural questions on event data, diagnostic deep-dives, impact analysis. It is not a BI tool and is not pitched as one. Dashboards and saved insights exist in Mitzu, but cross-domain BI breadth, embedded analytics, and spreadsheet-style modelling are not the focus.
### Both tools have a semantic layer — what is the difference?
Shape and authoring. Omni's semantic layer is BI-shaped — metrics, dimensions, joins, relationships, business logic — hand-authored in code and Git-versioned, in the LookML lineage. It expresses what a BI tool needs to know. Mitzu's semantic layer is product-analytics-shaped — events, event properties, entities, dimension properties on those entities, and sampled property values — and auto-built by a Configuration Agent that scans the warehouse. It expresses what a product analytics agent needs to know, including the real values that show up in user-facing filters.
### Does Omni's AI write SQL? Does Mitzu's agent write SQL?
Omni's AI generates a semantic query against the semantic model; Omni compiles it to SQL. The AI is still composing the query, just at a higher level than raw schema. Mitzu's agent does not compose a query at any level — it assembles a typed analysis specification (funnel, retention, segmentation, journey, cohort) and a deterministic engine compiles the specification into SQL. The same specification always produces the same SQL.
### Can I use both together?
Yes. Both keep data in the warehouse; there is no duplication. A common pattern: Omni handles BI, reporting, and embedded analytics on top of dbt-modelled tables; Mitzu handles product analytics on the raw event tables and feeds back into the same warehouse for downstream modelling. Each tool owns the questions it is best at.
### Where does the data live in either tool?
In your warehouse. Both architectures are warehouse-native. Neither moves data into a vendor silo. Compliance, data residency, and cost control all stay on your side of the line.
## Related reading
- ClickHouse AI vs Mitzu: Agentic SQL vs Agentic Product Analytics
- Agentic Analytics Platforms Compared
- BI vs Product Analytics
- Warehouse-Native vs First-Generation Product Analytics
- The Semantic Layer for Agentic Analytics
- AI Analytics, Hallucinations, and SQL Transparency
- Mitzu Product Analytics




