
PostHog Agentic Analytics vs Mitzu: Vendor-Silo Agent vs Warehouse-Native Agentic Product Analytics

Same category, two architectures — and which questions each one can actually answer.

PostHog's Max AI is a chat agent on PostHog's event store; Mitzu is agentic product analytics on your data warehouse with a deterministic SQL engine. Compare architecture, methodology, SQL, and when to use each — or both.

István Mészáros

Co-founder & CEO

May 14, 2026
10 min read

TL;DR

PostHog's agentic offering (Max AI) is a single-loop agent that lives inside PostHog and reasons in HogQL — PostHog's SQL dialect — over PostHog's event store and Data Warehouse. Mitzu is an agentic product analytics platform that runs on your existing warehouse (Snowflake, BigQuery, Databricks, Redshift, ClickHouse, Postgres, Trino, MS Fabric and more). The Analytics Agent assembles funnel, retention, segmentation, journey and cohort specifications; a deterministic engine turns them into SQL. The architectural split: an LLM authors HogQL inside PostHog's silo (Max AI) versus a deterministic engine authoring SQL inside your warehouse from a typed specification (Mitzu).

Use this comparison to evaluate tools through an agentic product analytics lens: which platform answers behavioural questions on your full business data with methodology you can trust, not just a chat box bolted onto an event silo.

PostHog has shipped one of the more ambitious agentic offerings in product analytics. Max AI is "an AI agent that works within the PostHog platform" — single-loop architecture, agent modes for SQL / Analytics / CDP, and an MCP server exposing HogQL to external IDE agents. Mitzu sits in the same category — agentic product analytics — but with a different architecture: it runs on the customer's data warehouse, and the agent doesn't write SQL. A deterministic query engine does, from a typed analysis specification. This piece walks through the two architectures side by side, with SQL examples for a question both tools can answer.

What is PostHog's agentic offering?

PostHog's agent is Max AI — chat that lives inside the PostHog app and can edit filters, create insights, build dashboards, write HogQL, summarise session replays, and reason about experiments. The architecture handbook describes a single-loop pattern (one agent, full conversation context, dynamic mode-switching) inspired by Claude Code, with agent modes — SQL, Analytics, CDP — that load domain-specific tools and prompts on demand. PostHog's engineering team has written candidly about rebuilding it twice before landing on this shape.

The SQL story is central. PostHog's blog and docs are explicit that SQL is the semantic layer LLMs reason in best — so the agent is built around authoring HogQL (PostHog's SQL dialect), grounded by tools that introspect the taxonomy, event and property definitions, and existing insights. The MCP server exposes the same surface to external agents: run HogQL queries, manage Data Warehouse views, sync sources, explore schema.

  • Single-loop agent — one agent with mode-switching (SQL, Analytics, CDP, Product), no black-box sub-agents.
  • HogQL authoring — the LLM writes queries against PostHog's event store, Data Warehouse, and linked external sources (Postgres, Stripe, HubSpot via PostHog connections).
  • Grounding tools — read_taxonomy, read_data, and search over events, properties, schema, billing context and existing insights.
  • In-app surfaces — chat is present across filter editing, insight building, SQL editor, replay summarisation, experiment readouts.
  • MCP server — same agent surface for external IDE agents (Cursor, Claude Code, etc.) running HogQL inside the PostHog silo.

The data scope is PostHog. Events land in PostHog's ingestion pipeline, sit in PostHog's event store, and the agent reasons over that store (plus whatever's been linked into PostHog's Data Warehouse). External sources can be synced in via PostHog connectors, but the agent's centre of gravity is the PostHog environment — that's where the schema, taxonomy and definitions live.

What is Mitzu?

Mitzu is an agentic product analytics platform that runs on your data warehouse and answers behavioural questions through natural-language conversation, without writing SQL. The category is the same as PostHog's agentic offering — product, growth and marketing behavioural questions on event data — but the architecture is warehouse-native by design.

Mitzu meets users in three places: the in-app Analytics Agent, the Slack Agent in any public or private channel, and a remote MCP server that exposes Mitzu's capabilities to any MCP-compatible agent (Claude, Cursor, ChatGPT, custom). Setup is handled by a Configuration Agent that scans the warehouse, recognises common event schemas (Segment, Snowplow, Firebase, GA4, custom), maps user and group identifiers, and builds the semantic layer automatically.

The trust differentiator: Mitzu's agent does not write SQL. It assembles structured analysis specifications — funnel steps with a conversion window, retention cohorts and return events, segmentation filters with sampled property values, journey definitions — and a deterministic query engine turns those specifications into SQL. The same specification produces the same SQL every time. Methodology errors that LLMs reliably make (a funnel without a window, a retention chart that double-counts, a cohort defined wrong) are guard-railed by the engine, not by prompt engineering.
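
The "typed specification in, deterministic SQL out" pattern is easy to sketch. The dataclass fields below echo the funnel specification discussed in this article, but the names and the SQL shape are illustrative only — this is a toy, not Mitzu's engine or API:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical sketch: a typed funnel specification rendered to SQL by a
# pure function. Same spec in, same SQL out — no LLM in the query path.
@dataclass(frozen=True)
class FunnelSpec:
    first_event: str
    subsequent_events: Tuple[str, ...]
    conversion_window_days: int
    breakdown: str
    lookback_days: int

def render_sql(spec: FunnelSpec) -> str:
    # Deterministic rendering: the methodology (window, anchor, breakdown)
    # lives in the engine, not in a prompt.
    return (
        f"-- funnel: {spec.first_event} -> {', '.join(spec.subsequent_events)}\n"
        f"-- window: {spec.conversion_window_days}d, breakdown: {spec.breakdown}\n"
        f"SELECT ... WHERE event_name = '{spec.first_event}' "
        f"AND event_time >= now() - INTERVAL {spec.lookback_days} DAY"
    )

spec = FunnelSpec("signup", ("activated",), 7, "channel", 30)
assert render_sql(spec) == render_sql(spec)  # same spec, same SQL, every time
```

The point of the frozen dataclass is the guarantee: a specification is a value, and rendering is a function of that value, so reproducibility is structural rather than a property of prompt engineering.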

PostHog vs Mitzu: side-by-side

  • Category — PostHog (Max AI): agentic product analytics inside a SaaS suite. Mitzu: agentic product analytics on the customer's data warehouse.
  • Who writes the SQL — PostHog: the LLM authors HogQL (single-loop agent, mode-switched). Mitzu: a deterministic query engine, from a typed analysis specification.
  • Where data lives — PostHog: PostHog event store + PostHog Data Warehouse (vendor storage). Mitzu: the customer's warehouse (Snowflake, BigQuery, Databricks, Redshift, ClickHouse, Postgres, Trino, Athena, Starburst, Firebolt, MS Fabric).
  • Joins to billing / CRM / support data — PostHog: only what's linked into PostHog Data Warehouse via connectors. Mitzu: native; events join to existing warehouse tables and dbt models.
  • Methodology primitives — PostHog: funnels, retention, paths, cohorts (HogQL the LLM has to compose). Mitzu: funnel, retention, segmentation, journey and cohort as first-class typed specifications.
  • Ingestion — PostHog: PostHog SDKs / connectors / ETL into PostHog. Mitzu: whatever already lands events in the warehouse (Segment, Snowplow, RudderStack, Firebase, GA4, custom).
  • Pricing model — PostHog: usage-based (events, replays, MTUs). Mitzu: per editor seat, independent of event volume; customer's warehouse compute.
  • Beyond product analytics — PostHog: session replay, feature flags, experiments, surveys. Mitzu: out of scope; pair with the relevant tool (PostHog, LaunchDarkly, etc.).
  • Surfaces — PostHog: in-app chat + MCP server for IDE agents. Mitzu: in-app Analytics Agent, Slack Agent, remote MCP server.
  • Best for — PostHog: teams that want product analytics, replay and flags in one developer-first SaaS. Mitzu: teams whose events are already in the warehouse and whose product analytics has to join business data.

SQL examples: the same question, two paths

Take a typical product analytics question: "What is our 7-day signup-to-activation conversion rate, broken down by acquisition channel, for the last 30 days?"

PostHog: HogQL the LLM might generate

-- Plausible Max AI output against PostHog's events table.
-- HogQL — PostHog's SQL dialect. Methodology depends on prompt + grounding.
WITH signups AS (
  SELECT person_id,
         min(timestamp)                                       AS signup_at,
         argMin(properties.channel, timestamp)                AS channel
  FROM events
  WHERE event = 'signup'
    AND timestamp >= now() - INTERVAL 30 DAY
  GROUP BY person_id
),
activations AS (
  SELECT person_id, min(timestamp) AS activated_at
  FROM events
  WHERE event = 'activated'
    AND timestamp >= now() - INTERVAL 37 DAY
  GROUP BY person_id
)
SELECT s.channel                                              AS channel,
       count()                                                AS signups,
       countIf(a.activated_at <= s.signup_at + INTERVAL 7 DAY) AS activated_in_7d,
       round(activated_in_7d / signups * 100, 1)              AS conv_pct
FROM signups s
LEFT JOIN activations a ON a.person_id = s.person_id
GROUP BY s.channel
ORDER BY signups DESC;

Reads cleanly, but the methodology is doing a lot of work in the prompt. A different session, or a slightly different schema, can yield: a window measured against the wrong anchor, an activation that pre-dates the signup counted as a conversion, channel attribution joined off the wrong row when a user has multiple signups, or a window that quietly slips to 30 days because the LLM conflated the lookback with the conversion window. None of these are SQL bugs — they are methodology choices an LLM is making implicitly, every time. PostHog's grounding (taxonomy lookups, query examples loaded at session start) reduces the failure rate; it doesn't eliminate the class of error.
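
To make one of those failure modes concrete — conflating the lookback with the conversion window — here is a toy Python illustration. The user IDs and dates are invented; this is not PostHog code, just arithmetic on two users:

```python
from datetime import datetime, timedelta

# Two users: u2 activates 3 days after signup, u1 activates 20 days after.
signups = {"u1": datetime(2026, 4, 1), "u2": datetime(2026, 4, 2)}
activations = {"u1": datetime(2026, 4, 21), "u2": datetime(2026, 4, 5)}

WINDOW = timedelta(days=7)
lookback_start = datetime(2026, 4, 1)

# Correct methodology: activation must fall after the signup and within
# the 7-day window anchored to that user's signup.
strict = sum(
    1 for u, s in signups.items()
    if u in activations and s < activations[u] <= s + WINDOW
)

# Sloppy variant an LLM can emit: any activation inside the 30-day
# lookback counts, so the 7-day window quietly becomes "ever in range".
sloppy = sum(
    1 for u in signups
    if u in activations and activations[u] >= lookback_start
)

print(strict, sloppy)  # 1 2 — u1's day-20 activation only counts in the sloppy version
```

Both queries "run fine" and both return a plausible percentage; only the anchor differs, which is exactly why this class of error survives code review of the SQL text alone.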

Mitzu: SQL from a deterministic engine

The Mitzu agent does not write the SQL. It assembles a funnel specification — roughly: { first_event: "signup", subsequent_events: ["activated"], conversion_window: "7d", breakdown: "channel", date_range: "last_30_days" } — and the deterministic engine emits the same SQL every time, against whichever warehouse the customer connected:

-- Engine output for a 2-step funnel with a 7-day conversion window,
-- broken down by channel, for the last 30 days. Same spec → same SQL.
WITH step_1 AS (
  SELECT user_id,
         min(event_time)             AS step_1_at,
         any(properties['channel'])  AS channel
  FROM events
  WHERE event_name = 'signup'
    AND event_time >= now() - INTERVAL 30 DAY
    AND event_time <  now()
  GROUP BY user_id
),
step_2 AS (
  SELECT s1.user_id,
         s1.channel,
         min(e.event_time) AS step_2_at
  FROM step_1 s1
  INNER JOIN events e
    ON e.user_id = s1.user_id
   AND e.event_name = 'activated'
   AND e.event_time >  s1.step_1_at
   AND e.event_time <= s1.step_1_at + INTERVAL 7 DAY
  GROUP BY s1.user_id, s1.channel
)
SELECT s1.channel                                AS channel,
       count(DISTINCT s1.user_id)                AS step_1_users,
       count(DISTINCT s2.user_id)                AS step_2_users,
       round(count(DISTINCT s2.user_id)
             / nullIf(count(DISTINCT s1.user_id), 0) * 100, 1) AS conv_pct
FROM step_1 s1
LEFT JOIN step_2 s2 USING (user_id)
GROUP BY channel
ORDER BY step_1_users DESC;

The conversion window is enforced strictly (activation must be after signup and within 7 days). Distinct users prevent double-counting. Channel comes from the signup row, so attribution is consistent. The engine has been generating this shape of SQL in production for years across every supported warehouse; the agent's job is to assemble the specification, not to author the query.

The SQL is shown to the analyst as a verification artifact — not the agent's authored work.

Other product analytics shapes Mitzu produces deterministically

The same pattern — typed specification in, engine-generated SQL out — covers the rest of the product analytics surface. Sketches below; real outputs are dialect-specific to the warehouse.

-- Retention: weekly cohorts of "signup" users returning via any "active" event,
-- 8-week window, segmented by acquisition channel.
WITH cohort AS (
  SELECT user_id,
         date_trunc('week', min(event_time)) AS cohort_week,
         any(properties['channel'])          AS channel
  FROM events
  WHERE event_name = 'signup'
    AND event_time >= now() - INTERVAL 70 DAY
  GROUP BY user_id
),
returns AS (
  SELECT c.user_id,
         c.channel,
         c.cohort_week,
         floor(date_diff('day', c.cohort_week, e.event_time) / 7) AS week_index
  FROM cohort c
  INNER JOIN events e
    ON e.user_id = c.user_id
   AND e.event_name = 'active'
   AND e.event_time >= c.cohort_week
   AND e.event_time <  c.cohort_week + INTERVAL 56 DAY
)
SELECT channel,
       cohort_week,
       week_index,
       count(DISTINCT user_id) AS retained_users
FROM returns
GROUP BY channel, cohort_week, week_index
ORDER BY cohort_week, week_index;

-- Cohort + warehouse join: users who completed checkout this month,
-- enriched with subscription tier from the billing fact table — no ETL.
SELECT e.user_id,
       max(e.event_time)              AS last_checkout_at,
       b.subscription_tier,
       b.mrr_usd
FROM events e
INNER JOIN dim_billing_subscriptions b
  ON b.user_id = e.user_id
 AND b.is_current
WHERE e.event_name = 'checkout_completed'
  AND e.event_time >= date_trunc('month', now())
GROUP BY e.user_id, b.subscription_tier, b.mrr_usd
ORDER BY mrr_usd DESC;

Retention specifications carry the cohort definition, return event, granularity and window. Cohorts can pull dimension data from warehouse tables modelled in dbt (billing, CRM, support) without exporting events into a vendor — that's the warehouse-native side of the architecture. PostHog's Max AI can reach external sources after they're linked into PostHog's Data Warehouse; Mitzu reads them in place.
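
The week bucketing in the retention SQL above — floor(date_diff('day', …) / 7) — is worth a sanity check, since off-by-one bucketing is another classic retention error. In plain Python:

```python
from datetime import date

# Mirrors floor(date_diff('day', cohort_week, event_time) / 7) from the
# retention query: week 0 is the cohort week itself.
def week_index(cohort_week_start: date, event_day: date) -> int:
    return (event_day - cohort_week_start).days // 7

assert week_index(date(2026, 4, 6), date(2026, 4, 6)) == 0   # same day -> week 0
assert week_index(date(2026, 4, 6), date(2026, 4, 13)) == 1  # exactly one week later
assert week_index(date(2026, 4, 6), date(2026, 4, 27)) == 3  # 21 days later
```

A user returning on day 7 lands in week 1, not week 0 — the half-open bucket boundaries are part of the specification, so every chart built from it buckets the same way.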

UI differences

Both products surface chat front-and-centre, but the operating model around the chat is different. PostHog is a SaaS application — events live in PostHog, the chat helps you build and edit PostHog insights, dashboards, replays, flags and experiments. The output of an answer is usually a PostHog artifact (an Insight, a Dashboard) anchored to PostHog's data model.

Mitzu's app surfaces an Analytics Agent chat alongside the explorer — funnels, retention, segmentation, journeys, cohorts and dashboards built from typed specifications. Every answer surfaces the engine-generated SQL for verification, and the semantic layer (events, properties, entities, sampled property values) is browsable so analysts can see the agent's grounding rather than guess at it. The Slack Agent shares the same semantic layer and engine, so a PM asking a question in #growth gets the same answer they'd get in-app. An external agent (Claude, Cursor, ChatGPT) reaching Mitzu over MCP gets the same primitives.

Advantages and trade-offs

PostHog (Max AI)

  • Strength: one developer-first SaaS for product analytics, session replay, feature flags, experiments and surveys — chat threads everything together. Trade-off: data lives in PostHog's storage; billing, CRM and support tables aren't there unless explicitly linked into PostHog Data Warehouse.
  • Strength: the single-loop agent with mode-switching gives a coherent conversation across SQL, analytics and CDP tasks. Trade-off: the LLM authors HogQL, so methodology errors on funnels, retention and cohorts are still possible, even with strong grounding.
  • Strength: open-source under the PostHog license; self-hosting available for teams that need it. Trade-off: pricing scales with events, replays and MTUs, so growth in event volume shows up directly in the bill.
  • Strength: the MCP server exposes HogQL to IDE agents — Cursor, Claude Code, etc. can run queries inside the PostHog silo. Trade-off: MCP scope is HogQL-on-PostHog; an external agent doesn't gain access to the full warehouse through it.
  • Strength: fast time-to-value for greenfield teams without an existing warehouse stack. Trade-off: less natural fit when events are already landing in a warehouse and the data team owns the modelling layer.

Mitzu

  • Strength: the agent does not write SQL — a deterministic query engine does, from a typed specification; same input, same SQL, same answer. Trade-off: narrower scope; Mitzu is built for product, growth and marketing behavioural questions, not session replay, feature flags or experiment platforms.
  • Strength: auto-built semantic layer specialised for product analytics — events, event properties, entities, dimension properties and sampled filter values, with no hand-authored YAML. Trade-off: requires event data already in the warehouse; greenfield teams without a warehouse should land events there first (Segment, Snowplow, RudderStack, Firebase, GA4).
  • Strength: funnel, retention, segmentation, journey and cohort are first-class primitives across in-app, Slack and external MCP agents. Trade-off: open-ended statistical exploration belongs in a notebook (Hex, Deepnote, Jupyter), not in Mitzu.
  • Strength: warehouse-native — events join in place to billing, CRM, support and dbt models, with no data egress and no per-event pricing. Trade-off: self-hosted deployment is available on the Enterprise tier; lower tiers are SaaS.
  • Strength: per-editor seat pricing with unlimited events; warehouse compute stays under the customer's control.

Capability scorecard

Where each tool stands on the capabilities that matter for agentic product analytics work. ✅ supported and idiomatic. ❌ not supported, or possible only via workaround.

  • Runs on the customer's data warehouse — PostHog: ❌ (data in PostHog). Mitzu: ✅
  • Multi-warehouse support (Snowflake, BigQuery, Databricks, Redshift, ClickHouse, Postgres, Trino…) — PostHog: ❌. Mitzu: ✅
  • Deterministic SQL engine (agent does not write SQL) — PostHog: ❌. Mitzu: ✅
  • Auto-built semantic layer specialised for product analytics — PostHog: ❌ (HogQL + taxonomy). Mitzu: ✅
  • Native funnel methodology — PostHog: ✅. Mitzu: ✅
  • Native retention methodology — PostHog: ✅. Mitzu: ✅
  • Native segmentation, journey and cohort primitives — PostHog: ✅. Mitzu: ✅
  • Sampled property values for filters — PostHog: ❌. Mitzu: ✅
  • Reviewable SQL surfaced for every answer — PostHog: ✅ (HogQL). Mitzu: ✅
  • Joins to billing / CRM / support data without ETL — PostHog: ❌ (requires PostHog connectors and Data Warehouse setup). Mitzu: ✅ (joins to existing warehouse and dbt tables in place)
  • Per-event / MTU pricing — PostHog: ✅ (usage-based). Mitzu: ❌ (per-editor seat, unlimited events)
  • Slack agent on the same semantic layer — PostHog: ❌. Mitzu: ✅
  • MCP server for external agents — PostHog: ✅ (HogQL within PostHog). Mitzu: ✅ (product-analytics primitives over the warehouse)
  • Session replay — PostHog: ✅. Mitzu: ❌ (out of scope)
  • Feature flags / experiments — PostHog: ✅. Mitzu: ❌ (out of scope)
  • Open-source / self-hosted — PostHog: ✅. Mitzu: ✅ (Enterprise tier)

When to choose PostHog, Mitzu, or both?

They share a category — agentic product analytics — but the architectures point at different teams. The choice usually falls out of one question: where are your events today, and where do you want analytics methodology to live?

  • Choose PostHog when you want a single SaaS that covers product analytics, session replay, feature flags and experiments; when the engineering team is comfortable with PostHog as the centre of gravity for product data; when greenfield speed matters more than warehouse-native joins.
  • Choose Mitzu when events are already in a warehouse (or will be) alongside billing, CRM and support data; when the data team owns the modelling layer (dbt, semantic models); when product analytics methodology must be deterministic and reproducible across in-app, Slack and external MCP agents.
  • Run both when PostHog Cloud earns its place for session replay and feature flags but the heavy product-analytics questions need to land on warehouse-native data — let PostHog handle replay and experimentation while Mitzu handles the behavioural analysis joined to the rest of the business.

FAQ

Is Max AI a text-to-SQL tool?

Effectively yes for the SQL mode — Max AI writes HogQL against PostHog's event store and Data Warehouse, grounded by taxonomy and schema introspection. PostHog's own engineering writing makes this explicit: SQL is the semantic layer the LLM is best at, so the agent is built around authoring queries. Mitzu's architecture is the opposite: the agent does not write SQL. It assembles analysis specifications; a deterministic engine emits the SQL.

Does Mitzu replace PostHog?

Only for the product analytics layer, and only when events are warehouse-native. PostHog's session replay, feature flags, surveys and experiments aren't Mitzu's scope. Many teams keep PostHog Cloud for those surfaces and run Mitzu on the warehouse for behavioural analysis. See the broader Mitzu vs PostHog comparison for the platform-level view.

Can Max AI answer questions across billing, CRM and support data?

Only for the sources explicitly linked into PostHog's Data Warehouse (Postgres, Stripe, HubSpot, and similar connectors). The agent doesn't reach into a customer's full warehouse the way a warehouse-native tool does. Mitzu reads the warehouse in place, so behavioural events join naturally to billing, CRM, support and any dbt model already there.

How is Mitzu's MCP server different from PostHog's MCP server?

PostHog's MCP server exposes HogQL and Data Warehouse management to IDE agents — the surface is SQL over PostHog. Mitzu's MCP server exposes product-analytics primitives (funnel, retention, segmentation, journey, cohort) backed by the deterministic engine and warehouse — the surface is typed analyses over the customer's data. An external agent reaching Mitzu over MCP gets methodology guard-rails for free; reaching PostHog over MCP gets HogQL on the PostHog silo.
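
As a concrete sketch of that surface difference, the two servers might receive request payloads shaped like the following. Tool names and field names here are invented for illustration — they are not either vendor's actual MCP schema:

```python
# Hypothetical MCP tool-call payloads (names invented for illustration).
# PostHog-style surface: the agent hands over a SQL string it authored.
posthog_style = {
    "tool": "run_hogql_query",
    "arguments": {"query": "SELECT event, count() FROM events GROUP BY event"},
}

# Mitzu-style surface: the agent hands over a typed analysis specification;
# the SQL is generated downstream by the deterministic engine.
mitzu_style = {
    "tool": "run_funnel",
    "arguments": {
        "first_event": "signup",
        "subsequent_events": ["activated"],
        "conversion_window": "7d",
        "breakdown": "channel",
    },
}

# In the first payload the methodology lives in the SQL string the agent
# wrote; in the second it lives behind the typed fields.
```

The practical consequence: an external agent can mangle the HogQL string in the first shape, but in the second shape the worst it can do is pick the wrong field values — the window semantics are not its to get wrong.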

Where does the data live in either tool?

PostHog: in PostHog's event store and Data Warehouse, plus whatever's linked in via connectors. Mitzu: in the customer's warehouse, full stop. The architectural split has direct knock-on effects on pricing (per-event vs per-seat), compliance (vendor storage vs customer-controlled), and join richness (PostHog-linked sources vs everything already modelled in the warehouse).

Key Takeaways

  • Max AI is an LLM authoring HogQL; Mitzu's agent doesn't write SQL — a deterministic engine emits it from a typed analysis specification.
  • PostHog data lives in PostHog's storage; Mitzu data stays in the customer's warehouse. That's the difference between a vendor-silo agent and a warehouse-native agent.
  • Funnels, retention, segmentation, journeys and cohorts are first-class primitives in Mitzu, not SQL the LLM has to compose correctly each session.
  • PostHog's MCP server exposes HogQL inside the PostHog silo; Mitzu's MCP server exposes product-analytics primitives to any external agent (Claude, Cursor, ChatGPT) with the deterministic engine underneath.

About the Author

István Mészáros

Co-founder & CEO

LinkedIn: https://www.linkedin.com/in/imeszaros/

Co-founder and CEO of Mitzu. Passionate about product analytics and helping companies make data-driven decisions.


How to get started with Mitzu

Start analyzing your product data in three simple steps

Connect your data warehouse

Securely connect Mitzu to your existing data warehouse in minutes.

Define your events

Map your product events and user properties with our intuitive interface.

Start analyzing

Create funnels, retention charts, and user journeys without writing SQL.