
BigQuery AI vs Mitzu: Agentic SQL on Google's Warehouse vs Agentic Product Analytics on the Warehouse

Gemini-authored SQL on BigQuery vs a deterministic engine specialised for product analytics methodology.

BigQuery AI brings Gemini-powered Conversational Analytics, Data Agents, and BigQuery Graph to the warehouse. Mitzu adds an agentic product analytics layer with a deterministic query engine on top — and runs on BigQuery natively. Compare architecture, methodology, SQL examples, and where to use each.

István Mészáros

Co-founder & CEO

May 14, 2026
10 min read

TL;DR

BigQuery AI is Google Cloud's umbrella for agentic capabilities on BigQuery — Conversational Analytics (GA), Data Agents, BigQuery Graph (preview semantic layer), proactive workflows, and the Google Cloud Data Agent Kit. Gemini writes SQL against your warehouse. Mitzu is an agentic product analytics platform. The Analytics Agent assembles funnel, retention, segmentation, journey, and cohort specifications; a deterministic query engine turns them into SQL. Both run on the same warehouse — Mitzu connects to BigQuery natively, alongside Snowflake, Databricks, ClickHouse, Redshift, Postgres, Trino and others.

Use this comparison to evaluate tools through an agentic product analytics lens: which platform enables a trusted AI data analyst workflow on BigQuery — with reliable methodology and a grounded semantic layer — not just faster dashboarding.

Google has been repositioning BigQuery as an autonomous data-to-AI platform for the agentic era, anchored by BigQuery AI, Conversational Analytics, and Gemini. That makes "BigQuery AI vs Mitzu" a fair question to ask: both run on the warehouse, both promise an agentic analytics workflow, and BigQuery-using teams sit squarely in Mitzu's ICP. The honest framing is that they sit at different layers. BigQuery AI is general-purpose agentic SQL on Google's warehouse. Mitzu is agentic product analytics on the warehouse — narrower category, deterministic engine, semantic layer specialised for funnels, retention, journeys and cohorts. They are complementary, and Mitzu connects to BigQuery as a first-class warehouse.

What is BigQuery AI?

BigQuery AI is the umbrella name for Google Cloud's agentic offering on top of BigQuery. The public surface gathers several capabilities together: Conversational Analytics for chatting with your data in natural language, Data Agents for science and engineering workflows, BigQuery Graph as a semantic-layer preview, proactive agentic workflows that watch for metric shifts, and the Google Cloud Data Agent Kit — a portable suite of MCP tools and skills.

The clearest reference implementation is Conversational Analytics, generally available inside BigQuery Studio and surfaceable through Looker Studio Pro, Gemini Enterprise, and the Conversational Analytics API. Gemini writes SQL against BigQuery grounded in schema metadata, custom business instructions, verified queries, and user-defined functions. According to Google's documentation, agents can be customised with knowledge sources (tables, views, graphs, UDFs), table and field annotations, processing instructions (synonyms, defaults, filters), and verified queries — patterns the LLM is told to follow for recurring business logic.

  • Conversational Analytics (GA) — Gemini-authored SQL with executive summaries, follow-up suggestions, and surfaced "thinking" steps for verification.
  • Data Agents — a Data Science Agent for loading, cleaning and visualising in BigQuery ML/DataFrames/Spark, plus a Data Engineering Agent for pipeline work.
  • BigQuery Graph (preview) — a semantic layer that maps entities, relationships, and native measures, navigable by Conversational Analytics for higher-accuracy answers.
  • Proactive agentic workflows (preview) — detect metric shifts, run root-cause investigations, deliver scheduled briefings.
  • Google Cloud Data Agent Kit (preview) — MCP tools and skills that work in VS Code, Gemini CLI, Claude Code, and any MCP-compatible client.

The architecture is general-purpose by design. Conversational Analytics is positioned for ad-hoc data exploration, forecasting (via AI.FORECAST), anomaly detection (AI.DETECT_ANOMALIES), text generation, and graph-based relationship queries. It is not pitched specifically at product analytics, and the platform does not ship native funnel, retention, segmentation, or cohort primitives. Methodology lives in whatever SQL Gemini writes — or in the LookML / BigQuery Graph model the data team builds and keeps current.

What is Mitzu?

Mitzu is an agentic product analytics platform that runs on your data warehouse and answers behavioural questions through natural-language conversation, without writing SQL. The category is narrower than general agentic analytics — Mitzu is specialised for product, growth, and marketing behavioural questions on event data.

Mitzu meets users in three places: the in-app Analytics Agent, the Slack Agent in any public or private channel, and a remote MCP server that exposes Mitzu's capabilities to any MCP-compatible agent (Claude, Cursor, ChatGPT, custom). Setup is handled by a Configuration Agent that scans the warehouse, recognises common event schemas (Segment, Snowplow, Firebase, GA4, custom), maps user and group identifiers, and builds the semantic layer automatically. BigQuery is one of the supported warehouses — see Product Analytics with BigQuery and Mitzu.

The trust differentiator: Mitzu's agent does not write SQL. It assembles structured analysis specifications — funnel steps with a conversion window, retention cohorts and return events, segmentation filters with sampled property values, journey definitions — and a deterministic query engine turns those specifications into SQL. The same specification produces the same SQL every time. Methodology errors that LLMs reliably make (a funnel without a window, a retention chart that double-counts, a cohort defined wrong) are guard-railed by the engine, not by prompt engineering.
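The determinism claim can be made concrete with a toy sketch. The Python below is not Mitzu's actual engine or specification format (the dataclass fields and the render function are invented for illustration); it only shows the architectural property: when SQL is a pure function of a typed specification, the same input yields byte-identical output on every run.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunnelSpec:
    """Toy stand-in for a typed funnel specification; fields are illustrative."""
    first_event: str
    next_event: str
    conversion_window_days: int
    breakdown: str
    lookback_days: int

def render_sql(spec: FunnelSpec) -> str:
    # A pure function of the spec: no model call, no randomness, so the
    # same spec always renders byte-identical SQL.
    return (
        f"-- funnel: {spec.first_event} -> {spec.next_event}, "
        f"{spec.conversion_window_days}d window\n"
        f"SELECT {spec.breakdown},\n"
        f"       COUNT(DISTINCT s1.user_id) AS step_1_users,\n"
        f"       COUNT(DISTINCT s2.user_id) AS step_2_users\n"
        f"FROM step_1 s1 LEFT JOIN step_2 s2 USING (user_id)\n"
        f"GROUP BY {spec.breakdown}"
        # A real engine would also expand the step CTEs from the spec;
        # elided here to keep the sketch short.
    )

spec = FunnelSpec("signup", "activated", 7, "channel", 30)
assert render_sql(spec) == render_sql(spec)  # deterministic by construction
```

The LLM sits in front of a function like this to fill in the specification from conversation; it never touches the SQL string itself.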

BigQuery AI vs Mitzu: side-by-side

Dimension | BigQuery AI | Mitzu
Category | Agentic SQL / general analytics on BigQuery | Agentic product analytics on the warehouse
Who writes the SQL | LLM (Gemini, via Conversational Analytics) | Deterministic query engine, from a typed analysis specification
Grounding | Schema, custom instructions, verified queries, UDFs, BigQuery Graph (preview) | Auto-built product-analytics semantic layer (events, properties, entities, sampled values)
Methodology primitives | None native — LLM composes ad-hoc SQL per question; recurring patterns captured as verified queries | Funnel, retention, segmentation, journey, cohort as first-class primitives
Where it runs | BigQuery only | BigQuery, Snowflake, Databricks, ClickHouse, Redshift, Athena, Trino/Presto, Postgres, Firebolt, Starburst, MS Fabric
Surfaces | BigQuery Studio, Looker Studio Pro, Gemini Enterprise, Conversational Analytics API | In-app Analytics Agent, Slack Agent, remote MCP server
Semantic layer modelling | BigQuery Graph (preview) or LookML in Looker — typically hand-authored by data engineers | Auto-built by the Configuration Agent; reviewed and adjusted in the Mitzu app, no YAML
Source model | Proprietary Google Cloud service; Gemini-only LLM | Commercial SaaS; self-hosted deployment available on Enterprise
Best for | General-purpose analytics across any domain on BigQuery, with deep Google Cloud integration | Product, growth and marketing behavioural questions where methodology must be right

SQL examples: the same question, two paths

Take a typical product analytics question: "What is our 7-day signup-to-activation conversion rate, broken down by acquisition channel, for the last 30 days?"

BigQuery AI: SQL Gemini might generate

-- Plausible Gemini output via Conversational Analytics, BigQuery Standard SQL.
-- Looks reasonable; methodology depends on the prompt + grounding.
WITH signups AS (
  SELECT user_id,
         MIN(event_time)                                AS signup_at,
         ANY_VALUE(properties.channel)                  AS channel
  FROM `my_project.analytics.events`
  WHERE event_name = 'signup'
    AND event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  GROUP BY user_id
),
activations AS (
  SELECT user_id,
         MIN(event_time) AS activated_at
  FROM `my_project.analytics.events`
  WHERE event_name = 'activated'
    AND event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 37 DAY)
  GROUP BY user_id
)
SELECT s.channel,
       COUNT(*)                                                                     AS signups,
       COUNTIF(a.activated_at <= TIMESTAMP_ADD(s.signup_at, INTERVAL 7 DAY))         AS activated_in_7d,
       ROUND(SAFE_DIVIDE(
         COUNTIF(a.activated_at <= TIMESTAMP_ADD(s.signup_at, INTERVAL 7 DAY)),
         COUNT(*)
       ) * 100, 1)                                                                   AS conv_pct
FROM signups s
LEFT JOIN activations a USING (user_id)
GROUP BY s.channel
ORDER BY signups DESC;

Reads cleanly, but the methodology is doing a lot of work in the prompt. A different prompt run, or a slightly different schema, can yield: a window measured against the wrong anchor, an activation that pre-dates the signup counted as a conversion, channel attribution joined off the wrong row when a user has multiple signups, or a window that quietly slips to 30 days because Gemini conflated the lookback with the conversion window. None of these are SQL bugs — they are methodology choices an LLM is making implicitly, every time. Verified queries and BigQuery Graph can constrain the surface, but every new variant of the question is a fresh authoring problem.
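One of those failure modes is easy to demonstrate on synthetic data. The toy Python below (hypothetical users and timestamps, not from any real schema) replays the join logic of the query above: checking only the upper bound of the window counts an activation that happened before the signup as a conversion, while the strict check a funnel engine enforces does not.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)

# Synthetic timestamps: u1 activated BEFORE a later (re-)signup; u2 is a
# genuine signup-then-activation within the window.
signup_at = {"u1": datetime(2026, 4, 1), "u2": datetime(2026, 4, 2)}
activated_at = {"u1": datetime(2026, 3, 30), "u2": datetime(2026, 4, 5)}

def converted_loose(u: str) -> bool:
    # Mirrors the Gemini-style query above: only the upper bound is checked.
    return activated_at[u] <= signup_at[u] + WINDOW

def converted_strict(u: str) -> bool:
    # What a funnel engine enforces: strictly after signup AND within window.
    return signup_at[u] < activated_at[u] <= signup_at[u] + WINDOW

loose = sum(converted_loose(u) for u in signup_at)    # counts u1 and u2
strict = sum(converted_strict(u) for u in signup_at)  # counts only u2
assert (loose, strict) == (2, 1)
```

Both queries run without error; only one of them answers the question that was asked.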

Mitzu: SQL from a deterministic engine

The Mitzu agent does not write the SQL. It assembles a funnel specification — roughly: { first_event: "signup", subsequent_events: ["activated"], conversion_window: "7d", breakdown: "channel", date_range: "last_30_days" } — and the deterministic engine emits the same SQL every time:

-- Engine output for a 2-step funnel with a 7-day conversion window,
-- broken down by channel, for the last 30 days. Same spec -> same SQL.
WITH step_1 AS (
  SELECT user_id,
         MIN(event_time)                AS step_1_at,
         ANY_VALUE(properties.channel)  AS channel
  FROM `my_project.analytics.events`
  WHERE event_name = 'signup'
    AND event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    AND event_time <  CURRENT_TIMESTAMP()
  GROUP BY user_id
),
step_2 AS (
  SELECT s1.user_id,
         s1.channel,
         MIN(e.event_time) AS step_2_at
  FROM step_1 s1
  INNER JOIN `my_project.analytics.events` e
    ON e.user_id = s1.user_id
   AND e.event_name = 'activated'
   AND e.event_time >  s1.step_1_at
   AND e.event_time <= TIMESTAMP_ADD(s1.step_1_at, INTERVAL 7 DAY)
  GROUP BY s1.user_id, s1.channel
)
SELECT s1.channel                                AS channel,
       COUNT(DISTINCT s1.user_id)                AS step_1_users,
       COUNT(DISTINCT s2.user_id)                AS step_2_users,
       ROUND(SAFE_DIVIDE(
         COUNT(DISTINCT s2.user_id),
         COUNT(DISTINCT s1.user_id)
       ) * 100, 1)                               AS conv_pct
FROM step_1 s1
LEFT JOIN step_2 s2 USING (user_id)
GROUP BY channel
ORDER BY step_1_users DESC;

The conversion window is enforced strictly (activation must be after signup and within 7 days). Distinct users prevent double-counting. Channel comes from the signup row, so attribution is consistent. The engine has been generating this shape of SQL in production for years; the agent's job is to assemble the specification, not to author the query.

The SQL is shown to the analyst as a verification artifact — not the agent's authored work.

More product analytics examples Mitzu turns into deterministic SQL

The same pattern — typed specification in, deterministic SQL out — covers the rest of the product analytics surface area. Three quick examples.

Weekly retention for a cohort

Spec: { cohort_event: "signup", return_event: "session_start", time_unit: "week", windows: 8, segment: "country = 'US'" }. The engine bins users by signup week, counts who came back in each subsequent week, and avoids the classic double-count when a user has multiple session_start events in the same bucket. The agent in BigQuery AI has to compose this from scratch every time — and there are at least three plausible weekly bucketings that would each look right at a glance.
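The bucketing ambiguity is worth seeing on a concrete date. The toy Python below (invented dates; not Mitzu's implementation, which runs as SQL) computes the retention week of the same return event under three plausible definitions of "week". They disagree, and each would look right in a code review.

```python
from datetime import date, timedelta

signup = date(2026, 4, 1)    # a Wednesday
returned = date(2026, 4, 6)  # the following Monday

# 1) Signup-anchored weeks: whole 7-day bins since each user's signup.
week_anchored = (returned - signup).days // 7

# 2) ISO calendar weeks (Monday start): difference of ISO week indices.
def iso_index(d: date) -> int:
    year, week, _ = d.isocalendar()
    return year * 53 + week

week_iso = iso_index(returned) - iso_index(signup)

# 3) Calendar weeks starting on Sunday.
def sunday_start(d: date) -> date:
    return d - timedelta(days=(d.weekday() + 1) % 7)

week_sunday = (sunday_start(returned) - sunday_start(signup)).days // 7

# Same return event lands in week 0 or week 1 depending on the definition.
assert (week_anchored, week_iso, week_sunday) == (0, 1, 1)
```

A retention chart built on the wrong one of these is not wrong SQL; it is a wrong definition, which is exactly what a deterministic engine pins down once.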

Journey analysis from a starting event

Spec: { starting_event: "added_to_cart", depth: 5, time_window: "1d" }. The engine returns the top paths users take after adding to cart within 24 hours, with counts at each step. Journey methodology — ordered event sequences with branching — is the most error-prone shape for an LLM to author; a deterministic engine that has the methodology baked in side-steps the whole class of mistakes.
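A minimal sketch of that methodology in Python (synthetic event streams and invented names; Mitzu's engine does this in SQL on the warehouse): for each user, take the ordered events that follow the starting event within the window, cap the path at the requested depth, and count identical paths.

```python
from collections import Counter
from datetime import datetime, timedelta

# Synthetic per-user event streams, sorted by time: (timestamp, event_name).
streams = {
    "u1": [(datetime(2026, 5, 1, 10), "added_to_cart"),
           (datetime(2026, 5, 1, 11), "view_shipping"),
           (datetime(2026, 5, 1, 12), "checkout")],
    "u2": [(datetime(2026, 5, 1, 9), "added_to_cart"),
           (datetime(2026, 5, 1, 10), "view_shipping"),
           (datetime(2026, 5, 3, 10), "checkout")],  # outside the 1-day window
}

def paths_after(streams, start_event, depth, window):
    """Count ordered paths of up to `depth` events after `start_event`."""
    counts = Counter()
    for events in streams.values():
        for i, (t0, name) in enumerate(events):
            if name == start_event:
                # Ordered follow-up events inside the window, depth-capped.
                path = tuple(n for t, n in events[i + 1:] if t - t0 <= window)
                counts[path[:depth]] += 1
                break  # first qualifying start event per user
    return counts

paths = paths_after(streams, "added_to_cart", depth=5,
                    window=timedelta(days=1))
# u1 -> ('view_shipping', 'checkout'); u2 -> ('view_shipping',)
```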

Cohort impact analysis

Spec for a deep dive: "Did users exposed to the new pricing page convert better in the next 14 days?" Mitzu defines two cohorts (saw_new_pricing, did_not_see_new_pricing), measures trial-to-paid conversion in the same 14-day window relative to exposure, and surfaces the delta — same specification, same SQL, every run. The Analytics Agent stitches this into a deep-dive narrative with multiple tool calls: cohort definitions, segmentations, time-window comparisons, churn deltas. Heatmaps, cohort retention curves, and impact deltas come out of the same primitives.
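Schematically (toy Python with invented numbers; the real analysis runs as deterministic SQL on the warehouse), the comparison anchors each user to a date, measures conversion within the same 14-day window relative to that anchor, and reports the delta between the cohorts:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)

# (anchor_at, converted_at or None) per user. The anchor is the pricing-page
# exposure for the treated cohort; for the control we use a comparable anchor
# (first session), a simplification of real cohort matching.
saw_new_pricing = [
    (datetime(2026, 5, 1), datetime(2026, 5, 10)),  # converted in window
    (datetime(2026, 5, 2), None),
]
did_not_see_new_pricing = [
    (datetime(2026, 5, 1), datetime(2026, 5, 6)),   # converted in window
    (datetime(2026, 5, 3), None),
    (datetime(2026, 5, 4), None),
    (datetime(2026, 5, 5), None),
]

def conv_rate(cohort):
    hits = sum(1 for anchor, conv in cohort
               if conv is not None and anchor < conv <= anchor + WINDOW)
    return hits / len(cohort)

delta = conv_rate(saw_new_pricing) - conv_rate(did_not_see_new_pricing)
assert abs(delta - 0.25) < 1e-9  # 0.5 vs 0.25 on this synthetic data
```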

UI differences

BigQuery AI primarily lives inside BigQuery Studio, with the Conversational Analytics panel attached to a notebook or dataset. Answers come back as executive summaries, raw data, and Gemini-generated charts, with the SQL surfaced underneath for verification. The same agent can also be embedded in Looker Studio Pro reports, in Gemini Enterprise, or called via API. Configuration of data agents — knowledge sources, verified queries, business instructions — happens in BigQuery Studio.

Mitzu's Analytics Agent lives in a chat panel inside the Mitzu app, alongside a visual insight builder, dashboards, cohort lists, and the semantic-layer view. Adjacent users — PMs, growth leads, marketers — reach the same agent through the Slack Agent by mentioning @mitzu in any channel and following up in thread. External agents reach the same backend through the remote MCP server. Every surface answers from the same specification model and the same deterministic engine — answers are consistent regardless of where the question came from.

Advantages and trade-offs

BigQuery AI

Strengths:

  • Deeply integrated with the Google Cloud stack — BigQuery, Looker, Gemini, Vertex AI, Cloud Storage — all under one identity and billing model.
  • Gemini authors SQL across the full BigQuery surface — billing, support, finance, ML, product events — with one chat interface.
  • BigQuery Graph (preview) lets data teams encode entities, relationships, and native measures the agent navigates instead of raw tables.
  • Proactive workflows (preview) — metric-shift detection, scheduled investigations, briefings to inbox — built into the platform.
  • Strong fit when BigQuery is already the centre of gravity and the team wants one Google-managed agent across the entire warehouse.

Trade-offs:

  • Locked to BigQuery as the warehouse and to Gemini as the model. Teams on Snowflake, Databricks, ClickHouse, or Redshift cannot use it.
  • The LLM authors SQL. Methodology errors on funnels, retention, cohorts and journeys are easy to make and hard to spot in a chat reply, even with verified queries in place.
  • BigQuery Graph and verified queries are data-engineering work the team has to staff and maintain — weeks of upfront effort for the kinds of patterns Mitzu auto-builds.
  • Charts and dashboards from Conversational Analytics are LLM-generated rather than driven by a typed methodology layer; consistency across questions is not guaranteed.
  • Closed-source, proprietary, Google Cloud only. No self-hosting; no choice of model.

Mitzu

Strengths:

  • The agent does not write SQL — a deterministic query engine does, from a typed specification. Same input, same SQL, same answer.
  • Auto-built semantic layer specialised for product analytics — events, event properties, entities, dimension properties and sampled filter values. No hand-authored LookML or BigQuery Graph modelling.
  • Funnel, retention, segmentation, journey and cohort are first-class primitives.
  • Warehouse-agnostic — runs on BigQuery, Snowflake, Databricks, ClickHouse, Redshift, Athena, Trino/Presto, Postgres, Firebolt, Starburst and MS Fabric.
  • Three surfaces share one semantic layer: in-app Analytics Agent, Slack Agent, and a remote MCP server for any external agent.
  • Per-editor seat pricing with unlimited events; warehouse compute stays under the customer's control.

Trade-offs:

  • Narrower scope — Mitzu is built for product, growth and marketing behavioural questions, not classic BI dashboarding or financial reporting.
  • Requires event data already in the warehouse. Companies without a warehouse, or with events trapped in a third-party tool that will not export, are not the fit.
  • Open-ended statistical exploration belongs in a notebook (Hex, Deepnote, Jupyter), not in Mitzu.
  • Self-hosted deployment is available on the Enterprise tier; the lower tiers are SaaS.

Capability scorecard

Where each tool stands on the capabilities that matter for product analytics work on BigQuery.

Capability | BigQuery AI | Mitzu
Runs on BigQuery | ✅ | ✅
Multi-warehouse support (Snowflake, Databricks, ClickHouse, Redshift, Trino, Postgres…) | ❌ BigQuery only | ✅
Open-source or self-hosted option | ❌ | ✅ Enterprise tier (self-hosted)
Deterministic SQL engine (agent does not write SQL) | ❌ | ✅
Auto-built semantic layer specialised for product analytics | ❌ requires BigQuery Graph / LookML modelling | ✅ Configuration Agent builds it
Native funnel methodology | ❌ | ✅
Native retention methodology | ❌ | ✅
Native segmentation, journey, and cohort primitives | ❌ | ✅
Native impact and deep-dive investigations | ❌ | ✅
Sampled property values for filters | ❌ | ✅
Reviewable SQL surfaced for every answer | ✅ | ✅
Proactive metric-shift detection and briefings | ✅ preview | ⚠️ on Mitzu's roadmap
MCP server for external agents | ✅ Data Agent Kit (preview) | ✅ Remote MCP
Slack agent | ❌ | ✅
Model choice | ❌ Gemini only | ✅
General-purpose across any analytics domain | ✅ | ❌ Product analytics only

When to choose BigQuery AI, Mitzu, or both?

These are layers, not substitutes. BigQuery AI gives you a general agentic interface to the warehouse, deeply integrated with Google Cloud. Mitzu gives you a product-analytics-specialised agent on top of the same warehouse. The right choice depends on what shape of question dominates your team's analytics workload.

  • Choose BigQuery AI when your analytics surface is broad and cross-domain, BigQuery is already the centre of gravity, you want a Google-managed agent end-to-end, and you have the data-engineering cycles to staff BigQuery Graph or LookML modelling and verified-query libraries.
  • Choose Mitzu when product, growth, or marketing teams need to ask diagnostic behavioural questions (why did week-2 retention drop, did the new pricing page move trial-to-paid, which onboarding step has the highest drop-off) and you want methodology guard-rails the LLM cannot break — without weeks of semantic-layer engineering.
  • Run both when BigQuery is the system of record for a wide analytics surface and product analytics is one of several question types — let BigQuery AI handle the long tail across finance, support, and operations, and let Mitzu specialise in the behavioural layer.

FAQ

Does Mitzu work with BigQuery?

Yes. BigQuery is a first-class supported warehouse. Mitzu reads event tables and dbt-modelled tables in place — no data movement, no per-event pricing. See Product Analytics with BigQuery and Mitzu for a walk-through, and Top 5 Product Analytics tools for BigQuery for the broader landscape.

Does BigQuery AI replace Mixpanel, Amplitude, or other product analytics tools?

Not by itself. BigQuery AI is general-purpose agentic analytics on warehouse data. For product analytics methodology specifically — funnels with conversion windows, retention cohorts, journey trees, segmentations with sampled filter values — you either add a layer like Mitzu, or build that methodology yourself in BigQuery Graph and verified queries and rely on Gemini to compose it correctly each time.

Can BigQuery AI do funnels and retention?

Gemini can absolutely write a funnel or retention query against BigQuery. Whether the methodology is right depends on the prompt, the grounding, and the day. The risk is not that the SQL fails to run — it usually runs — but that it answers the wrong question (window measured wrong, double-counted users, attribution joined off the wrong row). A deterministic engine that owns the methodology removes that class of error.

What is BigQuery Graph and how does it compare to Mitzu's semantic layer?

BigQuery Graph (preview) is a semantic layer where data engineers model entities, relationships, and native measures so Conversational Analytics can reason over a curated business map instead of raw tables. It is BI-shaped — metrics, dimensions, joins, relationships — and it is hand-authored. Mitzu's semantic layer is product-analytics-shaped (events, event properties, entities, dimension properties, sampled values) and it is built automatically by the Configuration Agent. Different shape, different operational cost.
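To make "different shape" concrete, here is a purely illustrative sketch of the two kinds of entries (field names invented; neither BigQuery Graph nor Mitzu exposes exactly this format):

```python
# BI-shaped entry, in the spirit of BigQuery Graph / LookML: measures,
# dimensions, and join relationships over modelled tables.
bi_shaped = {
    "entity": "orders",
    "measures": {"revenue": "SUM(amount)"},
    "dimensions": ["order_date", "region"],
    "relationships": [{"to": "customers", "on": "customer_id"}],
}

# Product-analytics-shaped entry, in the spirit of Mitzu's auto-built layer:
# events, typed event properties, and sampled filter values the agent can
# ground natural-language filters against.
pa_shaped = {
    "event": "added_to_cart",
    "entity": "user",
    "properties": {"channel": "STRING", "cart_value": "FLOAT"},
    "sampled_values": {"channel": ["organic", "paid_search", "referral"]},
}
```

The first shape answers "what was revenue by region"; the second answers "what do users do after adding to cart", which is why the two layers are complements rather than substitutes.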

Is Mitzu open source?

Mitzu is a commercial SaaS product. The Enterprise tier supports self-hosted deployment in the customer's own infrastructure. Pricing is per editor seat with unlimited events on every tier — see the pricing page for current details.

Where does the data live in either tool?

In BigQuery. Both architectures are warehouse-native: BigQuery AI by construction (Gemini queries BigQuery directly), Mitzu by design (the agent reads BigQuery via the deterministic engine). Neither moves data into a vendor silo. Compliance, data residency, and cost control all stay on the customer's side of the line.


Key Takeaways

  • The architectural distinction is who writes the SQL: an LLM (BigQuery Conversational Analytics) or a deterministic engine driven by a typed analysis specification (Mitzu).
  • Mitzu's semantic layer is auto-built by a Configuration Agent that scans the warehouse — no hand-authored LookML, no BigQuery Graph modelling, no weeks of metric work.
  • Funnels, retention, segmentations, journeys, and cohorts are first-class primitives in Mitzu, not SQL Gemini has to compose correctly each time.
  • Both architectures keep the data in BigQuery. Neither requires data egress.

About the Author

István Mészáros

Co-founder & CEO

LinkedIn: https://www.linkedin.com/in/imeszaros/

Co-founder and CEO of Mitzu. Passionate about product analytics and helping companies make data-driven decisions.


How to get started with Mitzu

Start analyzing your product data in three simple steps

Connect your data warehouse

Securely connect Mitzu to your existing data warehouse in minutes.

Define your events

Map your product events and user properties with our intuitive interface.

Start analyzing

Create funnels, retention charts, and user journeys without writing SQL.