
“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data
Claire Vo (host), Tim Trueman (guest), Alexa Cerf (guest)
In this episode of How I AI, host Claire Vo talks with Tim Trueman and Alexa Cerf about Faire's "vibe analysis" stack: Cursor, MCPs, enterprise search, and custom agent workflows.
Faire's "vibe analysis" stack: Cursor, MCPs, enterprise search, and agent workflows
The episode reframes analytics as mostly context gathering and interpretation—not just crunching numbers—and shows how AI tools drastically shorten the “figure out what happened” phase.
Tim demos enterprise AI search (via Notion) to quickly surface hypotheses for a conversion drop, then uses ChatGPT Deep Research and Cursor to forensically trace code changes tied to a checkout friction source (EORI).
Alexa walks through an end-to-end feature performance analysis: pulling implementation context from the codebase, generating and executing Snowflake SQL via MCP, reviewing results in a Mode dashboard via MCP, and drafting a structured Notion doc via MCP.
They close with operational automation: a custom Cursor agent that writes standardized experiment readouts from Eppo results into Notion (and a Slack summary), plus a fast workflow for designing and analyzing customer surveys using ChatGPT Projects and structured outputs from Qualtrics.
Key Takeaways
Most analytics time is spent on context, not calculations.
They argue the hardest part is knowing what to ask, where data lives, and what changed—AI meaningfully improves speed and quality by making context discovery self-serve.
Enterprise AI search turns “What happened?” into a hypothesis list in minutes.
By querying Notion AI over time-bounded sources (PRDs, XP docs, launch announcements) across Slack/Notion/Jira, Tim quickly narrows likely causes of a conversion drop without manual document spelunking.
Code history is a high-fidelity source of truth for product reality.
PRDs can drift from implementation; querying GitHub via Deep Research/Cursor produces an accurate timeline of what shipped, when, and who was impacted—critical for incident-style investigations.
Cursor acts as a “context engine” when paired with MCPs.
Instead of copy/pasting across tools, Cursor can pull repo context and directly invoke connected systems (e.g., Snowflake, Mode, and Notion via MCPs) from a single conversation.
Semantic layers dramatically improve AI’s zero-shot SQL accuracy.
Faire’s structured semantic definitions (business terms, joins, metrics) help LLMs map natural language to the right tables/fields, enabling faster, more reliable query generation and democratized self-serve questions.
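The episode doesn't show Faire's actual semantic-layer format, so the sketch below is purely illustrative: the idea is that each business term carries its description, tables, join logic, and SQL definition, and that structured entry is rendered into prompt context so an LLM can map natural language to the right tables and fields. All names here are hypothetical.

```python
# Hypothetical semantic-layer entry: the episode does not show Faire's
# real schema, so every table/field name below is illustrative only.
SEMANTIC_LAYER = {
    "checkout_conversion_rate": {
        "description": "Share of checkout sessions that end in a placed order.",
        "tables": ["analytics.checkout_sessions", "analytics.orders"],
        "join": "checkout_sessions.session_id = orders.session_id",
        "sql": (
            "COUNT(DISTINCT orders.order_id) * 1.0 "
            "/ COUNT(DISTINCT checkout_sessions.session_id)"
        ),
    },
}

def render_context(term: str) -> str:
    """Turn a semantic-layer entry into prompt context for an LLM."""
    entry = SEMANTIC_LAYER[term]
    return (
        f"Metric: {term}\n"
        f"Definition: {entry['description']}\n"
        f"Tables: {', '.join(entry['tables'])}\n"
        f"Join: {entry['join']}\n"
        f"SQL expression: {entry['sql']}"
    )

print(render_context("checkout_conversion_rate"))
```

Injecting definitions like this into the prompt is what lets the model write correct joins zero-shot instead of guessing at table names.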
AI-generated SQL still needs human QA and interpretability aids.
Alexa validates funnel outputs (e.g., by reviewing results against the Mode dashboard) before drawing conclusions or publishing a writeup.
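As one illustration of this kind of QA (not Faire's actual tooling): a generated funnel query can be sanity-checked by asserting that each step's count never exceeds the previous step's — a cheap invariant that catches the wrong joins, wrong grain, and silent filters mentioned later in the episode's Q&A.

```python
# Illustrative sanity check for AI-generated funnel SQL: in a well-formed
# funnel, each step's user count should never exceed the previous step's.
def validate_funnel(steps: list[tuple[str, int]]) -> list[str]:
    """Return warnings for any funnel step that gains users over the prior step."""
    warnings = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        if n > prev_n:
            warnings.append(
                f"'{name}' ({n}) exceeds '{prev_name}' ({prev_n}): "
                "check joins, grain, or silent filters in the generated SQL."
            )
    return warnings

# Example: the last step gaining users signals a fan-out join or grain bug.
funnel = [("viewed_cart", 1000), ("started_checkout", 640), ("placed_order", 700)]
for warning in validate_funnel(funnel):
    print(warning)
```

A check this simple won't prove the SQL is right, but it flags the most common fan-out and filtering mistakes before a human reviews the numbers.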
Automation wins come from standardizing outputs, not just generating text.
Their experiment-writeup agent encodes a consistent template (metrics, confidence intervals, roll-out recommendation) and publishes to Notion/Slack—saving analyst time on repetitive formatting and documentation chores.
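The agent's real template and its Eppo/Notion integrations aren't shown in the episode, but the core idea — encode one consistent output format rather than letting the model free-write — can be sketched as follows, with hypothetical field names:

```python
# Hypothetical sketch of a standardized experiment readout; the actual
# template, metrics, and Eppo/Notion API calls are not shown in the episode.
from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str
    lift_pct: float   # observed lift, in percent
    ci_low: float     # 95% confidence interval lower bound, in percent
    ci_high: float    # 95% confidence interval upper bound, in percent

def render_readout(experiment: str, results: list[MetricResult],
                   recommendation: str) -> str:
    """Render experiment results into one consistent markdown template."""
    lines = [
        f"# Experiment readout: {experiment}",
        "",
        "| Metric | Lift | 95% CI |",
        "|---|---|---|",
    ]
    for r in results:
        lines.append(
            f"| {r.name} | {r.lift_pct:+.1f}% | "
            f"[{r.ci_low:+.1f}%, {r.ci_high:+.1f}%] |"
        )
    lines += ["", f"**Recommendation:** {recommendation}"]
    return "\n".join(lines)
```

Because the template is code rather than prompt text, every readout carries the same sections (metrics, confidence intervals, rollout recommendation), which is what makes publishing to Notion and Slack safe to automate.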
Notable Quotes
“Everyone's talking about vibe coding, but no one's really talking about vibe analysis.”
— Tim Trueman
“The most important, often the most difficult thing, is actually just getting the right context in the first place.”
— Tim Trueman
“Cursor is the ultimate context engine.”
— Tim Trueman
“It's not the AI's name on this analysis, it's mine.”
— Alexa Cerf
“Don't do it in your head.”
— Claire Vo
Questions Answered in This Episode
In the conversion-drop forensic workflow, how do you validate that an AI-surfaced “smoking gun” PR actually caused the metric movement (vs correlation)?
What permissions and governance model does Faire use to give PMs/designers read access to GitHub and data tools safely from day one?
How is the semantic layer maintained over time—who owns it, how do you version it, and how do you prevent metric definition drift?
What are the biggest failure modes you’ve seen with Snowflake/Mode MCP executions (wrong joins, wrong grain, silent filters), and what QA checklist do you enforce?
For experiment writeups, where do you draw the line between “agent can ship to Notion” vs “needs analyst review,” especially for heterogeneous metrics or multiple treatments?
Transcript Preview
How do we start at the very beginning of analyzing a product and its quality and its usage through analyzing conversion rates?
The new AI tools have just absolutely transformed the process of just getting all that context. You can go as broad as you like, self-serve, into an unfamiliar topic just incredibly quickly, and that means you can not only deliver quicker analysis, you can just deliver much better analysis, too. I'm gonna start just by doing an enterprise AI search. So I'm just gonna start very simply by asking Notion: What experiments or new features launched between September to December twenty twenty-four that could have added friction to the checkout process for new retailers in Europe or North America? And I've just said, "Focus on XP docs, PRDs, and launch announcements." I've got, straight away, a really interesting list of hypotheses to dig into with no work. And you can see it searched across Slack, Notion, Jira, and everything else very, very quickly.
So, Alexa, how do we do actual analysis of data when we've identified a problem or an opportunity we wanna go after?
Without AI, especially the context gathering, would mean hours spent digging through all the specs and PRDs, writing SQL queries from scratch, and then, you know, spending a lot of time writing and editing a doc. Using Cursor to actually create, edit, write SQL has been pretty game-changing.
[upbeat music] Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today, I have a great episode with Tim and Alexa from the data team at Faire. They're gonna show us how you can use Cursor, MCPs, ChatGPT, and even write your own agents to do data analysis. We're gonna see everything from decomposing that scary question, "What went wrong in September?" to doing detailed funnel analysis on experiments and surveys. Let's get to it. AI is supposed to make work easier, but I've been there: weeks of setup, endless back and forth with engineering, and yet another tool the team never really adopts. That's why I use Zapier's AI orchestration platform. It connects with nearly eight thousand apps, so I can finally put AI to work without the drama, without the delays, and without pulling engineering in every time I wanna automate something. With Zapier, you can roll out AI-powered workflows that do real work across your whole company in days, not weeks. I use Zapier every single day. It automatically responds to leads with enriched, personalized data, it checks my calendar weekly and offers smarter ways to manage my time, and it even drafts emails for every new request that lands in my inbox. All of that running quietly in the background, so I can focus on the work that matters. And Zapier's built for scale. With enterprise-grade security, compliance, and governance, it's trusted by teams at Dropbox, Airbnb, Opendoor, and thousands more. Go to try.zapier.com/howiai to learn more about how Zapier can bring the power of AI orchestration to your entire org. Alexa, Tim, thank you for joining How I AI.