How I AI

“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data

Tim Trueman and Alexa Cerf from Faire’s data team demonstrate how AI tools are revolutionizing data analysis workflows. They show how data teams, product managers, and engineers can use tools like Cursor, ChatGPT, and custom agents to investigate business metrics, analyze experiment results, and extract insights from user surveys, all while dramatically reducing the time and technical expertise required.

*What you’ll learn:*

1. How to use AI to investigate sudden drops in business metrics by searching documentation and codebases
2. Techniques for creating a semantic layer that helps AI understand your business data
3. How to build end-to-end analytics workflows using Cursor and the Model Context Protocol (MCP)
4. Ways to automate experiment analysis and create standardized reports
5. How AI can help design and analyze customer surveys
6. Strategies for creating executive-ready documents from raw data analysis
7. Why every team member should have access to code repositories, not just engineers

*Brought to you by:*

• Zapier, the most connected AI orchestration platform: https://try.zapier.com/howiai
• Brex, the intelligent finance platform built for founders: https://brex.com/howiai

*Where to find Tim Trueman:*

• LinkedIn: https://www.linkedin.com/in/tim-trueman-99788592/

*Where to find Alexa Cerf:*

• LinkedIn: https://www.linkedin.com/in/alexandra-cerf/

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Tim and Alexa from Faire
(02:53) The challenge of analyzing product quality and usage
(04:14) Breaking down what analytics actually involves beyond data manipulation
(05:46) Demo: Investigating a conversion rate drop using enterprise AI search
(09:05) Using ChatGPT Deep Research to analyze code changes
(12:40) Leveraging Cursor as the ultimate context engine for code analysis
(18:55) Analyzing a new product feature’s performance with Cursor
(26:27) How semantic layers make AI tools more effective for data analysis
(30:00) Using the Model Context Protocol (MCP) to connect AI with data tools
(34:17) Creating visualizations and dashboards with Mode integration
(37:04) Generating structured analysis documents with Notion integration
(44:39) Building custom agents to automate experiment result documentation
(53:10) Designing and analyzing customer surveys
(59:40) Lightning round and final thoughts

*Tools referenced:*

• Cursor: https://cursor.com/
• ChatGPT: https://chat.openai.com/
• Notion: https://www.notion.so/
• Snowflake: https://www.snowflake.com/
• Mode: https://mode.com
• Qualtrics: https://www.qualtrics.com/
• GitHub: https://github.com/

*Other references:*

• Model Context Protocol (MCP): https://www.anthropic.com/news/model-context-protocol
• Faire Careers: https://www.faire.com/careers

_Production and marketing by https://penname.co/._

_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Tim Trueman (guest) · Alexa Cerf (guest)
Nov 3, 2025 · 1h 3m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 2:53

    Why “vibe analysis” matters: shipping faster vs. knowing what’s good

    Claire frames the core product problem: AI accelerates building and shipping, but it also increases the risk of shipping features that don’t improve outcomes. The episode will focus on using modern AI tools to analyze product quality and conversion, not just build software.

  2. 2:53 – 4:14

    Analytics isn’t just SQL: context gathering as the hardest step

    Tim explains that most analytics effort is not data crunching—it’s building the context to ask the right questions, find the right data, and interpret results correctly. AI’s biggest impact is collapsing the time required to get oriented and generate hypotheses.

  3. 4:14 – 5:46

    Enterprise AI search to generate hypotheses for a conversion drop

    Tim demonstrates using Notion’s enterprise AI search to investigate a real conversion funnel drop that began in September, followed by a second drop in December, seemingly concentrated at checkout. Instead of manual document/Slack/Jira digging, he prompts the system to list launches and experiments that may have added checkout friction.

  4. 5:46 – 9:05

    Drilling into a hypothesis (EORI) and why PRDs aren’t enough

    After spotting EORI-related changes, Tim uses AI search to quickly understand the term (EORI is the EU’s Economic Operators Registration and Identification number, used in customs) and its implications. He emphasizes that documentation can diverge from implementation, so the next layer is querying the codebase to see what actually shipped and when.

  5. 9:05 – 12:40

    Codebase forensics with ChatGPT Deep Research + GitHub connection

    Tim shows a ‘forensic investigation’ prompt that asks ChatGPT Deep Research to scan GitHub and produce a time-sequenced report of changes to the EORI checkout flow. Claire notes this pattern is useful not only for product analysis but also for incident/sev investigations and PM support during outages.

  6. 12:40 – 18:55

    Doing it faster in Cursor: “context engineering” with MCPs

    Tim runs the same forensic prompt in Cursor and highlights why Cursor can outperform: it’s a ‘context engine’ that can connect to many internal systems via MCPs. Claire argues PM/designer onboarding should include GitHub read access and local setup because code is now a universal data source.

  7. 18:55 – 26:27

    Finding the smoking gun: mapping PR history to funnel drops

    The Cursor-generated report lays out a timeline of PRs affecting EORI collection, including an experiment launched in mid-September and a later wider rollout, aligning with the observed conversion drops. This enables targeted conversations with the right owners instead of broad stakeholder hunting.

  8. 26:27 – 30:00

    End-to-end feature analysis in Cursor: from code context to funnel SQL

    Alexa walks through analyzing a redesigned signup flow for a new payment method without an A/B test (pre/post analysis). She uses Cursor to extract implementation details from the codebase (who sees the flow, step order, success definition, emitted events) and then generate SQL for funnel measurement; a hedged sketch of such a funnel query appears after the chapter list.

  9. 30:00 – 34:17

    Event tracking and QA: making AI-generated SQL trustworthy

    Claire and Alexa discuss ensuring features are properly instrumented and using AI to validate tracking plans. Alexa explains her QA habits: sanity-check drop-offs, avoid impossible sequences, and enforce commented CTEs via Cursor rules to make AI SQL reviewable and maintainable; the sanity checks are sketched in code after the chapter list.

  10. 34:17 – 37:04

    Semantic layers: the hidden enabler for reliable ‘zero-shot’ analytics

    Alexa explains Faire’s semantic layer work: structured definitions of business terms, tables, joins, metrics, and common queries so LLMs can query correctly. A general semantic layer powers company-wide self-serve Q&A, while a specialized per-scope layer improves deeper analysis in Cursor; one possible encoding is sketched after the chapter list.

  11. 37:04 – 44:39

    From SQL to dashboards to writeups: Mode MCP + Notion MCP workflow

    After refining queries and building Mode dashboards (legacy vs. new funnel), Alexa uses a Mode MCP to have Cursor read the dashboard and generate takeaways and next steps. She then uses a Notion MCP to draft a structured, exec-ready doc aligned to Faire’s writing principles and templates.

  12. 44:39 – 53:10

    Custom agent for experiment writeups: automating the Eppo → Notion pipeline

    Tim demonstrates a lightweight agent driven by a Cursor rules (MDC) file that automates A/B test documentation. The agent pulls experiment results from Eppo, gathers context from Notion, drafts a standardized writeup (metrics, CIs, interpretation), creates a Notion doc, and even generates a Slack summary; a skeleton of that pipeline appears after the chapter list.

  13. 53:10 – 59:40

    Bonus: survey design + unstructured survey analysis with ChatGPT Projects

    Tim speed-runs an end-to-end survey workflow: provide product context + hypotheses, generate a 10-minute customer survey, export a Qualtrics-ready coding file, and produce an analysis plan. He also shows using AI to process messy Qualtrics exports and assess hypotheses as proved/neutral/disproved with confidence scores; a minimal version of that scoring step is sketched after the chapter list.

  14. 59:40 – 1:03:28

    Lightning round: what to do when the model isn’t cooperating + closing

    Alexa shares her go-to fix: summarize the long conversation and restart to recover from context drift. Tim runs parallel attempts with multiple models and compares outputs, treating prompting like an A/B test; the episode closes with hiring plugs and where to find them.
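
For chapter 8’s funnel measurement, here is a minimal sketch of what the generated SQL might look like when run from Python against Snowflake. Every table, column, and event name (signup_events, payment_signup_started, and so on) is hypothetical; Faire’s actual schema isn’t shown in the episode.

```python
# Hedged sketch: pre/post funnel for a redesigned signup flow,
# assuming a hypothetical signup_events table with one row per event.
import snowflake.connector

FUNNEL_SQL = """
-- Each CTE is one funnel step, derived from the events the code emits.
with started as (
    select user_id, min(event_time) as started_at
    from signup_events
    where event_name = 'payment_signup_started'
    group by user_id
),
completed as (
    select user_id, min(event_time) as completed_at
    from signup_events
    where event_name = 'payment_signup_completed'
    group by user_id
)
select
    date_trunc('week', s.started_at)       as week,
    count(*)                               as started,
    count(c.user_id)                       as completed,
    count(c.user_id) / nullif(count(*), 0) as completion_rate
from started s
left join completed c
  on c.user_id = s.user_id
 and c.completed_at >= s.started_at  -- completion can't precede the start
group by 1
order by 1
"""

# Connection parameters are placeholders for your own Snowflake account.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="analytics", database="prod", schema="events",
)
try:
    for week, started, completed, rate in conn.cursor().execute(FUNNEL_SQL):
        print(week, started, completed, rate)
finally:
    conn.close()
```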
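
Chapter 9’s QA habits (monotonic drop-offs, no impossible sequences) are mechanical enough to automate. A small, self-contained sketch; the 90% drop-off threshold is an illustrative choice, not Faire’s rule.

```python
# Sanity checks for an ordered funnel: counts must not increase from one
# step to the next, and extreme drop-offs deserve a human look before the
# numbers go into a writeup.

def check_funnel(steps: list[tuple[str, int]]) -> list[str]:
    """steps: ordered (step_name, user_count) pairs from a funnel query."""
    problems = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        if n > prev_n:
            # Impossible sequence: a later step can't outnumber an earlier one.
            problems.append(f"{name} ({n}) exceeds {prev_name} ({prev_n})")
        elif prev_n and n / prev_n < 0.1:
            # Suspicious drop-off: often a bad join or wrong event filter.
            problems.append(f"{name} loses {1 - n / prev_n:.0%} of users from {prev_name}")
    return problems

print(check_funnel([("started", 1000), ("details_entered", 1200), ("completed", 40)]))
```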
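
Chapter 10 describes the semantic layer’s contents (terms, tables, joins, metrics, common queries) but not its format, so the encoding below is one guess at a shape that can be flattened into LLM context; all names and definitions are invented for illustration.

```python
# Invented semantic-layer entries: the structure matters more than the values.
SEMANTIC_LAYER = {
    "orders": {
        "description": "One row per order placed by a retailer.",
        "grain": "order_id",
        "joins": {"retailers": "orders.retailer_id = retailers.id"},
        "columns": {
            "order_id": "Primary key.",
            "retailer_id": "FK to retailers; the buyer.",
            "gmv_usd": "Order value in USD at checkout.",
        },
    },
}

METRICS = {
    "checkout_conversion": {
        "definition": "completed checkouts / checkout sessions",
        "caveats": "Exclude internal test accounts (is_test = false).",
    },
}

def to_prompt_context(layer: dict, metrics: dict) -> str:
    """Flatten the definitions into text an LLM can use when writing SQL."""
    lines = []
    for table, spec in layer.items():
        lines.append(f"Table {table} (grain: {spec['grain']}): {spec['description']}")
        for col, desc in spec["columns"].items():
            lines.append(f"  - {col}: {desc}")
        for other, cond in spec["joins"].items():
            lines.append(f"  join {other} on {cond}")
    for name, m in metrics.items():
        lines.append(f"Metric {name} = {m['definition']} ({m['caveats']})")
    return "\n".join(lines)

print(to_prompt_context(SEMANTIC_LAYER, METRICS))
```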
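
Chapter 12’s agent actually lives in a Cursor rules (MDC) file and runs inside Cursor; the skeleton below only restates the pipeline’s stages in plain Python. The Eppo fetch is a stub (Eppo’s API isn’t shown in the episode), while the Notion call uses the official notion-client library; the database ID and the "Name" title property are placeholders you’d adapt.

```python
# Pipeline skeleton: Eppo results -> standardized writeup -> Notion page.
from notion_client import Client

def fetch_eppo_results(experiment_id: str) -> dict:
    """Hypothetical stand-in for pulling experiment results out of Eppo."""
    return {
        "name": "checkout_eori_test",
        "metrics": [{"metric": "conversion", "lift": -0.012, "ci": (-0.020, -0.004)}],
    }

def draft_writeup(results: dict) -> str:
    """Standardized writeup: metric, lift, confidence interval, read."""
    lines = [f"Experiment: {results['name']}"]
    for m in results["metrics"]:
        lo, hi = m["ci"]
        verdict = "significant" if lo * hi > 0 else "inconclusive"  # CI excludes zero?
        lines.append(f"- {m['metric']}: {m['lift']:+.1%} (95% CI {lo:+.1%} to {hi:+.1%}, {verdict})")
    return "\n".join(lines)

writeup = draft_writeup(fetch_eppo_results("exp_123"))
notion = Client(auth="secret_...")  # Notion integration token
notion.pages.create(
    parent={"database_id": "your-database-id"},
    # "Name" must match the title property of your target database.
    properties={"Name": {"title": [{"type": "text", "text": {"content": writeup.splitlines()[0]}}]}},
    children=[{
        "object": "block", "type": "paragraph",
        "paragraph": {"rich_text": [{"type": "text", "text": {"content": writeup}}]},
    }],
)
```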
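
Chapter 13’s hypothesis grading happens inside ChatGPT Projects; a rough equivalent via the OpenAI API is sketched below. The prompt wording, JSON schema, and the two sample responses are all assumptions, and any current chat model would do in place of gpt-4o.

```python
# Grade survey hypotheses as proved / neutral / disproved with confidence.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

hypotheses = ["Buyers abandon checkout because the EORI fields feel risky."]
responses = [  # illustrative free-text answers, not real survey data
    "I stopped when it asked for a customs number I didn't have.",
    "Checkout was fine, shipping cost was my only concern.",
]

prompt = (
    "Judge each hypothesis against the survey responses. Return a JSON "
    'object: {"results": [{"hypothesis": ..., "verdict": '
    '"proved"|"neutral"|"disproved", "confidence": 0-1, '
    '"supporting_quotes": [...]}]}.\n\n'
    f"Hypotheses: {json.dumps(hypotheses)}\n"
    f"Responses: {json.dumps(responses)}"
)
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # force parseable output
)
print(json.loads(completion.choices[0].message.content))
```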
