How I AI
“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data
CHAPTERS
- 0:00 – 2:53
Why “vibe analysis” matters: shipping faster vs. knowing what’s good
Claire frames the core product problem: AI accelerates building and shipping, but it also increases the risk of shipping features that don’t improve outcomes. The episode will focus on using modern AI tools to analyze product quality and conversion, not just build software.
- 2:53 – 4:14
Analytics isn’t just SQL: context gathering as the hardest step
Tim explains that most analytics effort is not data crunching—it’s building the context to ask the right questions, find the right data, and interpret results correctly. AI’s biggest impact is collapsing the time required to get oriented and generate hypotheses.
- 4:14 – 5:46
Enterprise AI search to generate hypotheses for a conversion drop
Tim demonstrates using Notion’s enterprise AI search to investigate a real conversion funnel drop that began in September, with a second drop in December, seemingly concentrated at checkout. Instead of manual document/Slack/Jira digging, he prompts the system to list launches and experiments that may have added checkout friction.
- 5:46 – 9:05
Drilling into a hypothesis (EORI) and why PRDs aren’t enough
After spotting EORI-related changes, Tim uses AI search to quickly understand the term and its implications. He emphasizes that documentation can diverge from implementation, so the next layer is querying the codebase to see what actually shipped and when.
- 9:05 – 12:40
Codebase forensics with ChatGPT Deep Research + GitHub connection
Tim shows a “forensic investigation” prompt that asks ChatGPT Deep Research to scan GitHub and produce a time-sequenced report of changes to the EORI checkout flow. Claire notes this pattern is useful not only for product analysis but also for incident/sev investigations and PM support during outages.
- 12:40 – 18:55
Doing it faster in Cursor: “context engineering” with MCPs
Tim runs the same forensic prompt in Cursor and highlights why Cursor can outperform: it’s a “context engine” that can connect to many internal systems via MCPs. Claire argues PM/designer onboarding should include GitHub read access and local setup because code is now a universal data source.
- 18:55 – 26:27
Finding the smoking gun: mapping PR history to funnel drops
The Cursor-generated report includes a timeline of PRs affecting EORI collection, including an experiment launched in mid-September and a later wider rollout, aligning with the observed conversion drops. This enables targeted conversations with the right owners instead of broad stakeholder hunting.
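The core move in this chapter — lining up PR merge dates against the dates a metric dropped — can be sketched in a few lines of Python. This is an illustration only; the PR identifiers and window size are hypothetical, not from the episode:

```python
from datetime import date

def prs_near_drop(prs, drop_start, window_days=7):
    """prs: list of (pr_id, merge_date) tuples. Returns the PRs merged
    within window_days before the observed metric drop, i.e. the
    candidates worth raising with their owners first."""
    return [pr for pr, merged in prs
            if 0 <= (drop_start - merged).days <= window_days]
```

With a mid-September drop, a PR merged a few days earlier surfaces while an unrelated July PR is filtered out — the same narrowing step that let Tim target the right owners instead of canvassing every stakeholder.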
- 26:27 – 30:00
End-to-end feature analysis in Cursor: from code context to funnel SQL
Alexa walks through analyzing a redesigned signup flow for a new payment method without an A/B test (pre/post analysis). She uses Cursor to extract implementation details from the codebase (who sees the flow, step order, success definition, emitted events) and then generate SQL for funnel measurement.
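The funnel measurement Alexa generates as SQL can be sketched in plain Python to show the underlying logic. The step and event names below are hypothetical stand-ins, not Faire’s actual instrumentation:

```python
from collections import defaultdict

# Hypothetical signup-flow steps, in order. In practice these event
# names come from the instrumentation Cursor extracts from the codebase.
FUNNEL_STEPS = ["signup_viewed", "payment_method_selected",
                "details_submitted", "signup_completed"]

def funnel_conversion(events):
    """events: list of (user_id, event_name) pairs. Returns an ordered
    list of (step, user_count), counting a user at a step only if they
    also reached every prior step."""
    users_by_step = defaultdict(set)
    for user, name in events:
        if name in FUNNEL_STEPS:
            users_by_step[name].add(user)
    counts, reached = [], None
    for step in FUNNEL_STEPS:
        reached = users_by_step[step] if reached is None else reached & users_by_step[step]
        counts.append((step, len(reached)))
    return counts
```

Intersecting user sets step by step guarantees monotonically non-increasing counts, which is exactly the shape a pre/post funnel comparison needs.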
- 30:00 – 34:17
Event tracking and QA: making AI-generated SQL trustworthy
Claire and Alexa discuss ensuring features are properly instrumented and using AI to validate tracking plans. Alexa explains her QA habits: sanity-check drop-offs, avoid impossible sequences, and enforce commented CTEs via Cursor rules to make AI SQL reviewable and maintainable.
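One of the sanity checks Alexa describes — flagging “impossible sequences” where a later funnel step shows more users than an earlier one — can be automated with a small helper. A minimal sketch, with assumed step names:

```python
def check_funnel_sanity(step_counts):
    """step_counts: ordered list of (step, user_count). Flags any
    impossible sequence where a later step exceeds an earlier one,
    a common sign of broken tracking or a bad join in generated SQL."""
    issues = []
    for (prev, p), (curr, c) in zip(step_counts, step_counts[1:]):
        if c > p:
            issues.append(f"{curr} ({c}) exceeds {prev} ({p})")
    return issues
```

Run against AI-generated query output, an empty list means the funnel at least passes the monotonicity check; any entry points at a step whose instrumentation or SQL deserves a closer look.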
- 34:17 – 37:04
Semantic layers: the hidden enabler for reliable ‘zero-shot’ analytics
Alexa explains Faire’s semantic layer work: structured definitions of business terms, tables, joins, metrics, and common queries so LLMs can query correctly. A general semantic layer powers company-wide self-serve Q&A, while a specialized per-scope layer improves deeper analysis in Cursor.
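A semantic layer entry of the kind Alexa describes can be sketched as structured data: tables, join keys, and canonical metric expressions an LLM should reuse rather than reinvent. All names below (tables, columns, the metric) are illustrative assumptions, not Faire’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class MetricDef:
    name: str
    sql: str          # canonical expression the LLM should use verbatim
    description: str

@dataclass
class SemanticLayerEntry:
    table: str
    description: str
    join_keys: dict = field(default_factory=dict)   # other_table -> key
    metrics: list = field(default_factory=list)

# Hypothetical entry for a checkout fact table.
checkout = SemanticLayerEntry(
    table="fct_checkouts",
    description="One row per checkout attempt.",
    join_keys={"dim_users": "user_id"},
    metrics=[MetricDef("checkout_conversion",
                       "COUNT_IF(completed) / COUNT(*)",
                       "Share of checkout attempts that complete.")],
)
```

Serialized into a prompt or an MCP resource, entries like this are what make “zero-shot” question answering reliable: the model no longer has to guess joins or metric definitions.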
- 37:04 – 44:39
From SQL to dashboards to writeups: Mode MCP + Notion MCP workflow
After refining queries and building Mode dashboards (legacy vs. new funnel), Alexa uses a Mode MCP to have Cursor read the dashboard and generate takeaways and next steps. She then uses a Notion MCP to draft a structured, exec-ready doc aligned to Faire’s writing principles and templates.
- 44:39 – 53:10
Custom agent for experiment writeups: automating the Eppo → Notion pipeline
Tim demonstrates a lightweight agent driven by a Cursor rules (MDC) file that automates A/B test documentation. The agent pulls experiment results from Eppo, gathers context from Notion, drafts a standardized writeup (metrics, CIs, interpretation), creates a Notion doc, and even generates a Slack summary.
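The agent’s pipeline shape — fetch results, draft a standardized writeup, publish — can be sketched as three small functions. These are stand-ins, not real Eppo or Notion client calls, and the numbers are canned for illustration:

```python
def fetch_results(experiment_id):
    # Stand-in for an Eppo results fetch; returns a canned payload.
    return {"name": experiment_id, "lift": 0.021, "ci": (0.004, 0.038)}

def draft_writeup(results):
    """Turn raw results into the standardized summary line: point
    estimate, confidence interval, and an interpretation."""
    lo, hi = results["ci"]
    verdict = "significant" if lo > 0 or hi < 0 else "inconclusive"
    return (f"Experiment {results['name']}: lift {results['lift']:+.1%} "
            f"(95% CI {lo:+.1%} to {hi:+.1%}) -> {verdict}.")

def run_agent(experiment_id, publish):
    """publish is whatever sink the rules file wires up, e.g. creating
    a Notion doc or posting a Slack summary."""
    writeup = draft_writeup(fetch_results(experiment_id))
    publish(writeup)
    return writeup
```

The value of the MDC-driven version Tim shows is that the drafting step is held to a fixed template, so every experiment writeup reports metrics and confidence intervals the same way.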
- 53:10 – 59:40
Bonus: survey design + unstructured survey analysis with ChatGPT Projects
Tim speed-runs an end-to-end survey workflow: provide product context + hypotheses, generate a 10-minute customer survey, export a Qualtrics-ready coding file, and produce an analysis plan. He also shows using AI to process messy Qualtrics exports and assess hypotheses as proved/neutral/disproved with confidence scores.
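The final assessment step — labeling each hypothesis proved, neutral, or disproved with a confidence score — can be sketched as a simple aggregation. The scoring scheme and threshold here are assumptions for illustration, not Tim’s actual rubric:

```python
def assess_hypotheses(scores, threshold=0.6):
    """scores: hypothesis -> list of per-response agreement scores in
    [-1, 1] (positive supports, negative contradicts). Returns
    hypothesis -> (label, confidence)."""
    out = {}
    for hyp, vals in scores.items():
        mean = sum(vals) / len(vals)
        if mean >= threshold:
            label = "proved"
        elif mean <= -threshold:
            label = "disproved"
        else:
            label = "neutral"
        out[hyp] = (label, round(abs(mean), 2))
    return out
```

In the workflow shown, the per-response scores would themselves come from an LLM reading the messy Qualtrics export; this function is just the deterministic rollup on top.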
- 59:40 – 1:03:28
Lightning round: what to do when the model isn’t cooperating + closing
Alexa shares her go-to fix: summarize the long conversation and restart to recover from context drift. Tim runs parallel attempts with multiple models and compares outputs, treating prompting like an A/B test; the episode closes with hiring plugs and where to find them.
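Tim’s parallel-attempt habit amounts to fanning the same prompt out to several models and comparing the results side by side. A minimal sketch, where each model is just a callable (real provider clients would be swapped in):

```python
from concurrent.futures import ThreadPoolExecutor

def compare_models(prompt, models):
    """models: name -> callable taking a prompt and returning text.
    Runs all calls in parallel and returns outputs keyed by model
    name, ready for a side-by-side comparison."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt)
                   for name, fn in models.items()}
        return {name: f.result() for name, f in futures.items()}
```

Treating prompting like an A/B test in this way costs one extra dict of callables and makes model disagreements visible immediately.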