How I AI
“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data
At a glance
WHAT IT’S REALLY ABOUT
Faire’s “vibe analysis” stack: Cursor, MCPs, enterprise search, and custom agent workflows
- The episode reframes analytics as mostly context gathering and interpretation—not just crunching numbers—and shows how AI tools drastically shorten the “figure out what happened” phase.
- Tim demos enterprise AI search (via Notion) to quickly surface hypotheses for a conversion drop, then uses ChatGPT Deep Research and Cursor to forensically trace code changes tied to a checkout friction source (EORI).
- Alexa walks through an end-to-end feature performance analysis: pulling implementation context from the codebase, generating and executing Snowflake SQL via MCP, reviewing results in a Mode dashboard via MCP, and drafting a structured Notion doc via MCP.
- They close with operational automation: a custom Cursor agent that writes standardized experiment readouts from Eppo results into Notion (and a Slack summary), plus a fast workflow for designing and analyzing customer surveys using ChatGPT Projects and structured outputs from Qualtrics.
IDEAS WORTH REMEMBERING
Most analytics time is spent on context, not calculations.
They argue the hardest part is knowing what to ask, where data lives, and what changed—AI meaningfully improves speed and quality by making context discovery self-serve.
Enterprise AI search turns “What happened?” into a hypothesis list in minutes.
By querying Notion AI over time-bounded sources (PRDs, experiment docs, launch announcements) across Slack, Notion, and Jira, Tim quickly narrows the likely causes of a conversion drop without manual document spelunking.
Code history is a high-fidelity source of truth for product reality.
PRDs can drift from implementation; querying GitHub via Deep Research/Cursor produces an accurate timeline of what shipped, when, and who was impacted—critical for incident-style investigations.
Cursor acts as a “context engine” when paired with MCPs.
Instead of copy/pasting across tools, Cursor can pull repo context and directly invoke connected systems (e.g., Snowflake, Mode, Notion, Eppo), reducing context switching and enabling iterative analysis loops.
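As a concrete illustration, MCP servers are typically wired into Cursor through a `.cursor/mcp.json` file. The server names, commands, and packages below are assumptions for illustration, not Faire's actual configuration:

```json
{
  "mcpServers": {
    "snowflake": {
      "command": "uvx",
      "args": ["mcp-server-snowflake"],
      "env": { "SNOWFLAKE_ACCOUNT": "..." }
    },
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": { "NOTION_TOKEN": "..." }
    }
  }
}
```

Once configured, Cursor's agent can call these servers' tools directly (run a query, fetch a page) without the user leaving the editor, which is what collapses the copy/paste loop described above.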
Semantic layers dramatically improve AI’s zero-shot SQL accuracy.
Faire’s structured semantic definitions (business terms, joins, metrics) help LLMs map natural language to the right tables/fields, enabling faster, more reliable query generation and democratized self-serve questions.
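A minimal sketch of the idea: render semantic-layer definitions (business terms, tables, metric formulas, join conditions) into the prompt so the model maps natural language onto the right schema. The table names, metric expressions, and prompt format here are illustrative assumptions, not Faire's actual semantic layer:

```python
# Hypothetical semantic layer: business terms mapped to warehouse
# tables, canonical metric SQL, and join conditions. All names are
# illustrative, not Faire's real schema.
SEMANTIC_LAYER = {
    "orders": {
        "table": "analytics.fct_orders",
        "metrics": {"conversion_rate": "SUM(is_converted) / COUNT(*)"},
        "joins": {"retailers": "orders.retailer_id = retailers.id"},
    },
    "retailers": {
        "table": "analytics.dim_retailers",
        "metrics": {},
        "joins": {},
    },
}

def build_sql_prompt(question: str, layer: dict) -> str:
    """Render semantic definitions into a prompt prefix for an LLM."""
    lines = []
    for term, spec in layer.items():
        lines.append(f"- '{term}' -> table {spec['table']}")
        for name, expr in spec["metrics"].items():
            lines.append(f"  metric {name} = {expr}")
        for other, cond in spec["joins"].items():
            lines.append(f"  join to {other} on {cond}")
    context = "\n".join(lines)
    return (
        "You write Snowflake SQL. Use only these definitions:\n"
        f"{context}\n\nQuestion: {question}\nSQL:"
    )

prompt = build_sql_prompt("What was last week's conversion rate?", SEMANTIC_LAYER)
```

Because the canonical metric SQL travels with the prompt, the model does not have to guess which of several candidate tables or formulas defines "conversion rate," which is where zero-shot text-to-SQL usually goes wrong.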
WORDS WORTH SAVING
Everyone's talking about vibe coding, but no one's really talking about vibe analysis.
— Tim Trueman
The most important, often the most difficult thing, is actually just getting the right context in the first place.
— Tim Trueman
Cursor is the ultimate context engine.
— Tim Trueman
It's not the AI's name on this analysis, it's mine.
— Alexa Cerf
Don't do it in your head.
— Claire Vo
High-quality AI-generated summary created from a speaker-labeled transcript.