How I AI

“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data

Tim Trueman and Alexa Cerf from Faire’s data team demonstrate how AI tools are revolutionizing data analysis workflows. They show how data teams, product managers, and engineers can use tools like Cursor, ChatGPT, and custom agents to investigate business metrics, analyze experiment results, and extract insights from user surveys—all while dramatically reducing the time and technical expertise required.

*What you’ll learn:*

1. How to use AI to investigate sudden drops in business metrics by searching documentation and codebases
2. Techniques for creating a semantic layer that helps AI understand your business data
3. How to build end-to-end analytics workflows using Cursor and Model Context Protocols (MCPs)
4. Ways to automate experiment analysis and create standardized reports
5. How AI can help design and analyze customer surveys
6. Strategies for creating executive-ready documents from raw data analysis
7. Why every team member should have access to code repositories—not just engineers

*Brought to you by:*

• Zapier—The most connected AI orchestration platform: https://try.zapier.com/howiai
• Brex—The intelligent finance platform built for founders: https://brex.com/howiai

*Where to find Tim Trueman:*

• LinkedIn: https://www.linkedin.com/in/tim-trueman-99788592/

*Where to find Alexa Cerf:*

• LinkedIn: https://www.linkedin.com/in/alexandra-cerf/

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Tim and Alexa from Faire
(02:53) The challenge of analyzing product quality and usage
(04:14) Breaking down what analytics actually involves beyond data manipulation
(05:46) Demo: Investigating a conversion rate drop using enterprise AI search
(09:05) Using ChatGPT Deep Research to analyze code changes
(12:40) Leveraging Cursor as the ultimate context engine for code analysis
(18:55) Analyzing a new product feature’s performance with Cursor
(26:27) How semantic layers make AI tools more effective for data analysis
(30:00) Using Model Context Protocols (MCPs) to connect AI with data tools
(34:17) Creating visualizations and dashboards with Mode integration
(37:04) Generating structured analysis documents with Notion integration
(44:39) Building custom agents to automate experiment result documentation
(53:10) Designing and analyzing customer surveys
(59:40) Lightning round and final thoughts

*Tools referenced:*

• Cursor: https://cursor.com/
• ChatGPT: https://chat.openai.com/
• Notion: https://www.notion.so/
• Snowflake: https://www.snowflake.com/
• Mode: https://mode.com
• Qualtrics: https://www.qualtrics.com/
• GitHub: https://github.com/

*Other references:*

• Model Context Protocol (MCP): https://www.anthropic.com/news/model-context-protocol
• Faire Careers: https://www.faire.com/careers

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Tim Trueman (guest) · Alexa Cerf (guest)
Nov 3, 2025 · 1h 3m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–2:53

    Introduction to Tim and Alexa from Faire

    1. CV

      How do we start at the very beginning of analyzing a product and its quality and its usage through analyzing conversion rates?

    2. TT

      The new AI tools have just absolutely transformed the process of just getting all that context. You can go as broad as you like, self-serve, into an unfamiliar topic just incredibly quickly, and that means you can not only deliver quicker analysis, you can just deliver much better analysis, too. I'm gonna start just by doing an enterprise AI search. So I'm just gonna start very simply by asking Notion: What experiments or new features launched between September to December twenty twenty-four that could have added friction to the checkout process for new retailers in Europe or North America? And I've just said, "Focus on XP docs, PRDs, and launch announcements." I've got, straight away, a really interesting list of hypotheses to dig into with no work. And you can see it searched across Slack, Notion, Jira, and everything else very, very quickly.

    3. CV

      So, Alexa, how do we do actual analysis of data when we've identified a problem or an opportunity we wanna go after?

    4. AC

      Without AI, especially the context gathering, would mean hours spent digging through all the specs and PRDs, writing SQL queries from scratch, and then, you know, spending a lot of time writing and editing a doc. Using Cursor to actually create, edit, write SQL has been pretty game-changing.

    5. CV

      [upbeat music] Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today, I have a great episode with Tim and Alexa from the data team at Faire. They're gonna show us how you can use Cursor, MCPs, ChatGPT, and even write your own agents to do data analysis. We're gonna see everything from decomposing that scary question, "What went wrong in September?" to doing detailed funnel analysis on experiments and surveys. Let's get to it. AI is supposed to make work easier, but I've been there: weeks of setup, endless back and forth with engineering, and yet another tool the team never really adopts. That's why I use Zapier's AI orchestration platform. It connects with nearly eight thousand apps, so I can finally put AI to work without the drama, without the delays, and without pulling engineering in every time I wanna automate something. With Zapier, you can roll out AI-powered workflows that do real work across your whole company in days, not weeks. I use Zapier every single day. It automatically responds to leads with enriched, personalized data, it checks my calendar weekly and offers smarter ways to manage my time, and it even drafts emails for every new request that lands in my inbox. All of that running quietly in the background, so I can focus on the work that matters. And Zapier's built for scale. With enterprise-grade security, compliance, and governance, it's trusted by teams at Dropbox, Airbnb, Opendoor, and thousands more. Go to try.zapier.com/howiai to learn more about how Zapier can bring the power of AI orchestration

  2. 2:53–4:14

    The challenge of analyzing product quality and usage

    1. CV

      to your entire org. Alexa, Tim, thank you for joining How I AI.

    2. TT

      Well, great to be here. Thanks for having us.

    3. AC

      Thank you so much.

    4. CV

      One of the things that we can do now that I am probably personally causing in the, in the internet world is we can just build a lot of, a lot of product. I am always out there, like... I was thinking the other day, I was like, "I'm gonna tweet something where I tell PMs that they should just spend a, a month saying yes instead of saying no. Like, let's ship some features." And I think AI has really accelerated product development, software engineering, getting innovation to the hands of customers, but the problem it has created is we don't know if those products are any, any good, [chuckles] any good. So the, the perennial, uh, product problem, which is you can ship things, and they can not make the difference that you hoped they would make. And so I'm really excited about this conversation because you are gonna show us how to use AI, and even some of these tools that software engineers or product managers might be familiar with, to do really deep, meaningful product analysis. And I spent a lot of time in experimentation, and so I love a good conversion rate optimization. So Tim, we're gonna kick it to you to start with, how do we start at the very beginning of analyzing kind of a product and its quality and its usage through analyzing

  3. 4:14–5:46

    Breaking down what analytics actually involves beyond data manipulation

    1. CV

      conversion rates?

    2. TT

      Yeah, I love this. I think everyone's talking about vibe coding, but no one's really talking about vibe analysis, and we're heading in that direction very quickly. So, uh, let's get into it. Um, so before we do anything too technical, I think we wanna share a really broad range of examples here, from the really complicated to the, like, actually incredibly simple. I think everyone knows PMs are gonna have to become engineers, and then we've got a lot of issues where all of you guys are gonna have to become anal- analysts as well. Um, so I think there's a lot we can show here. So we wanna start off with just a really simple use case that should be familiar to, I think, everyone listening. Uh, but I think it illustrates the point that it's often the most simple AI tools that can actually have the biggest impact here. Um, I think before we get into the actual demo, I think it's useful just to pause very quickly for a second on, on the question of what analytics actually is. So I think once you break that down, you get a much clearer view of where these current tools can be most valuable. Um, I think most people jump straight to the nuts and bolts of actually manipulating and crunching data, but actually, it's really just a small part of the overall process. And the most important, often the most difficult thing, is actually just getting the right context in the first place, 'cause that's what separates good analysis from bad. Like, you need to know to ask the right questions, to come up with the right hypotheses, to know what analysis is even worth doing in the first place. You need to know where the data lives, and you need to be able to interpret it all very, um, very well. And the new AI tools have just absolutely transformed the process of just getting all that context. 
You can go as broad as you like, self-serve, uh, into an unfamiliar topic just incredibly quickly, and that means you can not only deliver quicker analysis, you can just deliver much better analysis, too. Um,

  4. 5:46–9:05

    Demo: Investigating a conversion rate drop using enterprise AI search

    1. TT

      so to illustrate the point, I wanna talk through what sadly I'm, I'm guessing is a very familiar situation, where a business metric suddenly drops off a cliff, uh, and no one's got a clue what to do with it. Um, so I'm actually- I'm gonna use a real example from Faire for this. Um, and this happened to our new customer conversion funnel at the end of last year.... So if you've ever worked in growth, everyone's gonna know new customers, they're just extremely sensitive to even the tiniest little friction. So almost anything anyone does in the business anywhere can affect these kind of things, whether it's a sign-up flow, a search algorithm, a shipping policy, like, this all can affect these things. Um, and if you're not careful, you're gonna have to decomp the entire business. So let me show you how these things can just be done so much quicker. And so imagine this problem lands on my desk. Um, I might look at a couple of just existing dashboards that exist to say, "Uh, what's going on here?" And you can see, uh, very quickly the issues started in September, and there was another drop in December, and it seems to be concentrated in the checkout stage. But beyond that, I've really got no idea what could have actually caused that. So let, let's start Reboard. I'm just gonna share my screen. I'm gonna start just by doing an enterprise AI search. You know, we use Notion, but frankly, every document system now is gonna have an AI system. If they haven't got one yet, it's coming, and they are just game changers. So I'm just gonna start very simply by asking Notion, "What happened?" Okay, so the only thing I'm gonna do, I'm gonna just make this more realistic. I'm gonna filter the date range. I don't want it cheating and looking at the answer. It's only gonna have access to the things I had access to when I actually did this. So I'm gonna put it up to the end of April last year, which is when I ran it, okay? And then we're just gonna get that running. 
So if you read this, all I've asked is, "What experiments or new, new features launched between September to December twenty twenty-four that could have added friction to the checkout process for new retailers in Europe or North America?" And I've just said, um, "Focus on XP docs, uh, PRDs, and launch announcements." Okay? So if you think about what I'd have done in the past, I'd be, like, crawling through a million documents, doing a load of searches, going through a ton of, uh, different Slack channels, trying to work out what's going on, and instead, look, I've got straight away a really interesting list of hypotheses to dig into with no work. And you can see it's searched across Slack, Notion, Jira, and everything else very, very quickly. And, uh, if you... Let's just pull out a couple of these. So what's happening? So let's go. So you've got, uh, clearly we launched some kind of, um, checkout experiment around this time. That's definitely worth looking in. Uh, we've done something with a checkout blocker in Europe. Okay, lots of interesting things to dig into. Now, with a couple of clicks, I've got a good long list, but I don't really know what these things are. So I've got all the links of the extra documents I could go click into, but let's just ask, as a starting point, uh, "What is EORI?" Let's pick one of them. What is EORI? So we just asked that. It's gonna run another little search and give us more things. Now, um, you've got a little bit here, but it's gonna start bring up a little bit more information, uh, to just get a bit more, um, a bit more detail on this thing. So let's see where that goes. Okay, so very quickly, it's saying, "Give me the term of what it is," and you can kind of see it's... Okay, it's, uh, um, a regulation that's involved Europe, and someone's done something to, uh, start asking for more details, clearly trying to improve checkout and conversion rates once, uh, and they're trying to bring that one

  5. 9:05–12:40

    Using ChatGPT Deep Research to analyze code changes

    1. TT

      in. But I think this is a great starting point. I've got some detail, but I think what's really interesting here is everyone knows, like, PRD is one part of the story, but between a PRD being written and something going into the code base, a lot can happen. So to actually understand what's going on, you usually need to go one layer deeper into the actual technical implementation, and I wanna show you, like, a quick trick, uh, of how I do that. Um, so I think one of the best things about these AI tools is just the ability of someone who's, like, non-technical to access things that they couldn't previously access, and a great example of that is just being able to talk to the product code base. I'm not an engineer. I can't write Kotlin or Swift. I used to be a lawyer, for God's sake. Um, instead, I can run a deep research against our code base to find out exactly what got implemented for a particular feature and when. Now, I'm gonna do this in two different ways. I'm gonna do it on ChatGPT, which I think is very simple, and anyone can replicate incredibly quickly. Everyone's familiar with it. And I'm gonna do it on Cursor, which is a bit more specialized but just incredibly powerful. Um, so I'm gonna open up a new chat, and I'm gonna put it into Deep Research mode and make sure my GitHub is connected. So all you do, it's not technical to do that, you just need to say yes a few times to get your GitHub connected. Um, the only reason you can do it on, on Deep Research is just because it's the only way you can actually access it. It's gonna search our code base now, um, in exactly the same way it would normally search the web on a deep research. So I'm just gonna put in a prompt. Here we go. Let's just copy that in. Now, let me talk a little bit about what this prompt is doing. So I've given it a role. I've said, "You're a senior staff engineer, and you've got expertise in all these different code bases, Kotlin, Swift, TypeScript, and you are working at Faire." 
And I've given it a task to say, "Please conduct a forensic investigation of the code base to produce a comprehensive time sequence report of all changes to the EORI collection process at checkout between June twenty-four and February twenty-five." So just making sure we don't miss anything, and the rest is just a bit of detail as to what I want this to look like. So I've said, "I want an exec sum. I want a table with all the different PRs and commits, what they've gone into, and I really want it to focus in on the actual impact these commits had on the retailer experience." Like, explain it to me in layman's terms. Um, and then I've just put a few requirements in here just to give it a bit more context, so be precise, simple, clear language, only use GitHub sources.

    2. CV

      I wanna call out here, um, you're, you're using this prompt in the context of sort of a, what I would call, like, a business incident, right? New user signups just dropped. But this is a prompt that I want the engineers watching or listening to the podcast to really pay attention to because if you're in the middle of a, you know, sev one incident, and you need to trace who did what... I know so many of our engineering teams are looking, either manually looking through code, looking at these specialized kind of code gen tools to do this, but probably aren't reaching for something like ChatGPT-... Deep Research to just go ahead and do this for you. And if you're a product manager looking to be helpful during an incident, this is maybe a task you can take on, on behalf of your engineering team, just to provide some additional context in the background.

    3. TT

      Hundred percent, I think this is great for engineers. I think it's great for just getting people to talk better to engineers. I think there's just so much you can do here. So as always, Deep Research is asking a few questions, so, uh, "use discretion." We'll just answer a few of those to make sure we get it. Uh, "use discretion" and "yes, please." So that'll get it going, but now-

    4. CV

      You, you prompt just like I do. I just say, "You pick, you decide. You go. I don't care." [chuckles]

    5. TT

      I think the fact that Pro doesn't ask you these questions

  6. 12:40–18:55

    Leveraging Cursor as the ultimate context engine for code analysis

    1. TT

      makes me think it's more to, like, make you feel like it's doing it, rather than anything else.

    2. CV

      Yeah.

    3. TT

      So that's gonna take a bit of time. So while that's running, I wanna show you how to do this in Cursor. 'Cause I think Cursor is one of those tools that everyone thinks of for vibe coders, they think of it for engineers. They're not really thinking, uh, about what else it can do. And I think for both analysts and non-analysts alike, it's an incredible tool. So, um, I think more and more people are talking about the phrase context engineering, rather than prompt engineering. I love that. Um, it sort of actually explains what we're trying to do here, and for me, just Cursor is the ultimate context engine. You can hook it up to MCPs. Um, so basically, I can hook it up to every single system in our business to get all the data I need, and that just makes it such an incredibly good accelerator for getting context and doing analysis. So I actually find, increasingly, this is getting better results than Deep Research on ChatGPT. So both are good, both are game changers, but I think this is just a little bit quicker and better. So I'm just gonna make sure my, uh, MCPs are all hooked up, and then all I'm gonna do is I'm gonna drop exactly the same prompt into Cursor, and we'll see the two running. So exactly the same prompt. So just for context, we are not even started on our, uh... [chuckles] It hasn't even got off to the races at all on, on the ChatGPT. And straight away in, uh, in Cursor, we're going and finding it's got a nice to-do list. It's saying it's gonna search all the right things in GitHub. It's gonna then forensically analyze it, uh, and we'll just let this run for a little bit. You can see it's already starting to pull in the code and the pull request that we want.

    4. CV

      One of the things that I think is interesting to call out is, you know, I've run a lot of product engineering data orgs before. Engineering, certainly, day one, what are you doing? You're getting access to all the repos. You're getting set up with GitHub. You're pulling your, your local environment together. I know that data teams often have a similar onboarding because they're working so closely with production data. One of the things I think is gonna change, or if it hasn't already, should change right now, is, I think product managers and designer onboarding, first seven days has to include access, re- at least read access to GitHub, getting your local repository pulled down, getting all your MCPs set up because it just... Code has become now a data source for anybody doing work, not just people writing code. So I look at this, and I think leaders out there need to pay attention and rethink basically their onboarding process. Because you don't wanna be in a situation like this and go like: "Can somebody get me GitHub? Like, can I-

    5. TT

      Uh

    6. CV

      ... can I get access?"

    7. TT

      It goes even beyond that. Like, everyone should have access to every system-

    8. CV

      Yeah

    9. TT

      ... and it should be from day one. Like, these tools are just the best onboarding accelerators. We've seen it for analysts, we've seen it for engineers. Suddenly, people get the context very quickly. Okay, so we're already off. It's summarized everything, it's written a nice report, and we're actually starting to write things out here. So straight away, you can see I've got a nice exec summary. It's given me a few things, but this, this is what I was most interested in. Okay, so I'm getting a table here, for those who can't see my screen. I'm getting a table with every single PR that affected this part of the flow from, like, it starts in July '24, all the way to still going, uh, but it'll probably go to somewhere like December or February, depending on where it's gonna go, with all of these things. Now, let's just call out what this is doing. So it's given me an exact link to the specific PR that actually pushed this into, uh, the, the codebase. It's given me the name of it, and it's given me a summary of what it did. It's saying who was affected, and it's saying what was the impact on the retailer experience. Now, if anyone's done this kind of thing-

    10. CV

      [chuckles]

    11. TT

      ... it's, it's so difficult to do and actually, uh, like, pick through all the codes and actually understand what's going on on this, and it can just be incredibly tricky. And so very quickly, knowing nothing about this feature, I can already start to get really smart on what happened, and I can see, if I dive down here, yeah, you can see there was an experiment launched in mid-September, right in the sweet spot of when this, uh, drop first happened. And if I scroll through, getting through to looking at December, uh, yeah, you can see it launched, uh, all treatment, uh, uh, all users that bandwidth went live. So this now looks like a really interesting, potentially smoking gun that we can deep dive into. And so instead of spending days talking to people about all the potential hypotheses, uh, I can now speak to exactly the right colleagues and have a really targeted conversation and informed conversation right from the off with them, uh, to crunch through this problem in a matter of, like, hours, rather than weeks here. So even before we've done any data crunching, this can just be absolutely game-changing for us.

    12. CV

      Yeah, and it allows you to go a lot deeper than, you know, I've been able to do historically on these kinds of analyses. You know, when you're running these high-velocity experimentation programs, you have so many concurrent experiments. You have experiments colliding with rollouts, colliding with just plain launches, and just trying to decompose what was the state of your app on any single day is really challenging. And even if you can do the manual research to get this at a feature level, like, yeah, today we launched the one, one, one-page checkout, I think the real challenge is, well, did we implement it well? Is there anything in there that we should, like, worry about? Did we exclude any users from that, like... A- and so I do think the ability to use code as a, a detailed source of truth when doing these kinds of forensic analyses really makes the difference in figuring out what's going on with your business.

    13. TT

      And then getting smart enough to go one level deeper as well. You can ask follow-up questions to say, uh, "How did it differ for different segments? Are there other ones of interest?" Like, you can get so much detail just by asking questions on these kind of things without speaking to any engineers, uh, it's great.

    14. CV

      ... And this gives me a little bit of some inspiration on other use cases for querying your codebase and GitHub history for events. One of the things that I do very frequently is I do a very similar analysis to this, but I say, "What is everything that shipped in the last week from the context of a customer?" And then I use it to write my newsletter. So again, like, I'm starting to use our codebase as a source of truth for our marketing materials. I don't have to proxy through, like, what was in the PRD, or what did a PM write, or any of that stuff. I'm just like, "Just tell me what was in the code [chuckles] in the code commits," 'cause that's what I know went live. It can interpret what the customer-facing experience and intention would be, and then you can create these really interesting business and market-facing assets out of that. So I just think the ability to query your codebase and your GitHub history for any use case, including this one, is really useful.

    15. TT

      Yeah, I love that.

    16. CV

      Great.

  7. 18:55–26:27

    Analyzing a new product feature’s performance with Cursor

    1. CV

      Now, what, what do we do after this? So you've identified you have a conversion rate problem. You've identified maybe a couple sources of the issue. You're gonna go talk to your colleagues. You're gonna look at the code. Um, how do we actually do some analysis? Or I know we said we were gonna do some vibe analysis, and we have seen very few numbers. So Alexa, how do we do actual analysis of data when we've id- identified a problem or an opportunity we wanna go after?

    2. AC

      Yeah. So obviously, like, a quite classic analytics task, I'm going to take us through... You know, we launched a new product feature, and we actually wanna understand how it did. So I'll take us end to end from understanding how the feature was built, analyzing its performance, and then producing a summary that could eventually go to our exec team. Um, like Tim kind of touched on, without AI, especially the context gathering, would mean hours spent digging through all the specs and PRDs, writing SQL queries from scratch, and then, you know, spending a lot of time writing and editing a doc. So with AI, I can pull context similar to what Tim just did directly from the codebase, I can generate queries, and I can draft a s- draft a synthesized doc. Um, and so I am going to start sharing my screen.

    3. CV

      And while you pull that up, I have to say, people think that why I got into AI in a deep way was because I thought it was so fun to code, and it was actually, it made my SQL so much less ugly- [chuckles] ... than it used to be. It was, like, my number one use case however many years ago. I was like, "Thank God! Now I don't have to bother my colleague with my disgusting SQL." [chuckles] I can bother, uh, AI with my horrifying SQL, and it can make it a little bit more, uh, efficient.

    4. AC

      Yeah, I mean, uh, even just ChatGPT for the last-

    5. CV

      Yeah

    6. AC

      ... couple months has been a game, game-changer for SQL queries. The problem with ChatGPT is you had to spend a good amount of time giving context, like the exact table names-

    7. CV

      Yeah

    8. AC

      ... the exact field names. And so using- I mean, it's not, it's sort of most marketed use case, but using Cursor-

    9. CV

      Mm

    10. AC

      ... which is what I'm gonna show today, to actually create, edit, write SQL, has been pretty game-changing, um, especially because it's so context-aware, and I will talk about that. So Cursor can take, like, three to four minutes to run some queries, so I'm gonna just kick off this prompt, and then I'll explain the context and what I have done. So while that's running, I will set the stage. Last month, in July, we redesigned the signup flow for a new payment method that we have been piloting, and this process of signup is successful when a customer links their bank account, uh, for the payments. And our old flow had been live for a few months. We had a hypothesis that we could improve it, so we redesigned the flow. Because this is a pilot, we actually, like, didn't have enough retailers or, or users, um, to run an A/B test, so I just needed to do a pretty straightforward, you know, how is this performing before?

    11. CV

      Mm-hmm.

    12. AC

      How is it performing after? Um, historically, again, that would've meant a lot of digging through documentation or [chuckles] more realistically, just pinging an engineer to ask questions like, "Okay, what did we build? Who sees it, and why? What front-end events are emitted that I can use to analyze this?" Um, and while I do work closely with our engineers during the end of spec phase, like, figure this out, those details are easy to lose track of, and especially, like, we're often coming back to analyze things, you know, weeks or even months after the feature launched. I will say that I probably would start with Notion AI context building, similar to Tim, but we already showed that, so I'm skipping straight to the codebase. And if we go up to this prompt, uh, my prompts are way less pretty than Tim's. I don't [chuckles] like, spend a lot of time on them. Like, I feel like with Cursor, you can always iterate. And so I wanted to understand the setup wizard, which is what we called this new flow. I told it to research our codebase, and I essentially asked who, what, where, when, why. And so if we go to this answer, we can see, okay, it is, you know, looking into the codebase, and, you know, I'm not an engineer, I don't really know what this means, but it... You know, we called this in our code the first-run user experience, and it tells me about some flags, cannot be sub-users. There's, like, a lot of detail here, um, and it's telling me when users see this flow, what happens during the flow, the order of steps that happen. That's, like, pretty important. If I'm gonna analyze a funnel, I need to know, like, in what order did things happen, and then if there is a success event, like, when the setup is complete. And then it gives me a bunch of events that I can use to analyze it. So this is already such a game-changer. Like, in the past, I would've leaned on secondhand sources like Notion, uh, to piece together how it was built. 
With Cursor, like you were saying, I can go straight to the source and have it translated into natural language, and that just gives me a lot more confidence because it reflects what's actually [chuckles] live and not what someone remembered to write down.
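The before-and-after comparison Alexa describes (a pilot too small for an A/B test, so just conversion before versus after the July redesign, with "customer links their bank account" as the success event) reduces to a small computation. A minimal sketch, where the event names, dates, and launch cutoff are all made up for illustration, not Faire's actual schema:

```python
from datetime import date

# Illustrative events: (user_id, event_name, event_date).
# "wizard_started" / "bank_linked" and the July 1 cutoff are hypothetical.
events = [
    (1, "wizard_started", date(2025, 6, 3)),  (1, "bank_linked", date(2025, 6, 3)),
    (2, "wizard_started", date(2025, 6, 10)),
    (3, "wizard_started", date(2025, 7, 5)),  (3, "bank_linked", date(2025, 7, 5)),
    (4, "wizard_started", date(2025, 7, 9)),  (4, "bank_linked", date(2025, 7, 10)),
]

REDESIGN_LAUNCH = date(2025, 7, 1)

def conversion(events, in_period):
    """Share of users who entered the flow during the period (per the
    predicate) and went on to link a bank account, the pilot's success event."""
    started = {u for u, e, d in events if e == "wizard_started" and in_period(d)}
    linked = {u for u, e, d in events if e == "bank_linked"}
    return len(started & linked) / len(started) if started else 0.0

before = conversion(events, lambda d: d < REDESIGN_LAUNCH)
after = conversion(events, lambda d: d >= REDESIGN_LAUNCH)
print(f"before: {before:.0%}, after: {after:.0%}")  # before: 50%, after: 100%
```

In practice this is the shape of the SQL that Cursor generates and runs against Snowflake via the MCP connection; the point is that the analysis itself is simple once the context (events, ordering, success condition) has been pulled from the codebase.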

    13. CV

      One thing I wanna call out while you're going to your next step is, one of the steps that I see-... skipped by engineering teams is good event tracking when they release a feature. Because, you know, you, you start up front in the PRD, and you, like, define a tracking plan, and then it gets to implementation, and people forget, "Should be a front-end event, should it be a back-end event?" And one of my favorite follow-up AI tasks after something has been released or it's in code review, is I do a quick prompt, and I go: "Is this- is everything appropriately tracked in this feature?"

    14. AC

      Yeah.

    15. CV

      And I get either Cursor or Devin to go in and put in all the right events and make sure that the schemas are normalized. So for all the data analysts out there, be annoying and do a PR for your own, uh, events on new features, so you're not, you know, stuck with what the engineers built for you.

    16. AC

      That inspires me to... I can take the eng spec and just put it into any AI tool and say, "What front-end events do I-

    17. CV

      Yep

    18. AC

      ... or what events do I need to ask for-

    19. CV

      Yep

    20. AC

      ... to be able to measure the success of this effectively?"

    21. CV

      Yep.

    22. AC

      Um, 'cause right now I'm just doing that in my head. That is not something that I have yet.

    23. CV

      Yeah, don't do it in your head. That's the-

    24. AC

      Look to the high for you

    25. CV

      ... the subtitle of How I AI. How I AI.

    26. AC

      Yes.

    27. CV

      Don't do it in your head. [laughs]

    28. AC

      So, uh, with this next prompt, um, I, again, not the most, like, sophisticated prompt. I'm just saying, "I wanna understand at a high level how this feature has been performing," and I give the quick context of, you know, our goal is to make it better. That's pretty obvious, but I just want to spell that out. And I, like Tim, I'm giving a fair amount of discretion to the Cursor agent. I'm saying, "Okay, come up with the ideal output fields. I have some ideas, but like, you know, it's up to you." And then, two, I do find that telling it explicitly to create a file, it sometimes forgets to [chuckles] do that and just writes the SQL directly in the, um, conversation sidebar. Uh, use the MCP connection, like, I went through all this trouble to set it up. Uh, I want it to use the Snowflake MCP connection, and then actually QA the file. And that's what's so powerful about this Cursor agent and the Snowflake MCP, is not only is it writing the SQL, which is what ChatGPT has been doing for me for the last year, it is running it, looking at the output, and then making, like, its own sniff test, sense check decisions, which is just so cool. Okay, and then another thing I wanna call out as

  8. 26:27–30:00

    How semantic layers make AI tools more effective for data analysis

    1. AC

      we are running this, the reason why I have a fair amount of confidence that this is gonna work relatively quickly is because I and our data team have done a fair amount of work to create what's called a semantic layer. And so, uh, first, our amazing data engineering team, like six months ago, decided we were gonna create, like, a general company semantic layer. And a semantic layer is essentially just a translation for an LLM of, like, our business terms, tables, fields, filters, metrics, et cetera. And AI can look at those files and understand, like, what our tables mean. This general one covered, like, our most used generic tables, orders, items, users, et cetera. Um, and so they connected it to a custom GPT, and anyone in the company can go ask pretty basic questions like, "What was the average order size in Europe last year?" And get an answer really quickly. And so that's been a huge unlock to save our analytics [chuckles] team time of, like, we're not answering these questions for people. They can self-serve. It's just democratizing data and, you know, saving us a lot of time so that we can focus on more deep analysis. And for deeper analysis, like, we needed something more than just these basic tables. And so I, with a lot of help from one of our data engineers, she's built a specialized semantic layer just for, like, my scope as a test. So I was... You know, we're the... I was the first one in the company to do this, but we're planning on kind of rolling it out to all of the areas of scope. And, you know, basically, this semantic layer just defines the tables that I use the most, the joins, the filters, the metrics, and because it lives in our codebase, it's, like, in our data science repo, Cursor can just tap into it, and it just makes the zero shot ability, like, insane of running SQL.

    2. CV

      I've seen a couple of these, and, yeah, I don't know what yours looks like, but they really just look like defined terms, tables. This table means this, this field means that. If you're trying to query average order value, this is how you do it, and it's almost your documentation in a little bit more of a structured form around common queries. And what I think is nice about this is its ability to be managed by code. You can change it, you can update it-

    3. AC

      Mm-hmm

    4. CV

      ... you can add new things. I also think, for the data engineers out there, it reduces a little bit of needed complexity on the data warehouse setup, because previously, you were creating these, like, aggregate tables-

    5. AC

      Mm-hmm

    6. CV

      ... and these, like, defined metrics, and you're hoping people were writing queries the right way. And now you can desi- d- define these canonical queries and know that no matter kind of like what your tables look like, they're gonna get to, to the right answer, which I think is quite nice on the data engineering side.

    7. AC

      Yeah, so this is an example of, like, what you were talking about. It's just a very structured JSON file.
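As a concrete sketch, a semantic-layer entry of the general shape being described might look like this. All table, field, and metric names here are invented for illustration; this is not Faire's actual file:

```json
{
  "tables": {
    "orders": {
      "description": "One row per retailer order",
      "fields": {
        "order_id": "Primary key for the order",
        "retailer_id": "Joins to users.user_id",
        "order_value_usd": "Order value in USD"
      },
      "default_filters": ["is_test_order = FALSE"]
    }
  },
  "metrics": {
    "average_order_value": {
      "question": "What was the average order size?",
      "sql": "SUM(orders.order_value_usd) / COUNT(DISTINCT orders.order_id)"
    }
  }
}
```

Because a file like this lives in the data science repo, Cursor can read it like any other source file when it writes SQL.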

    8. CV

      Mm-hmm.

    9. AC

      And from what I understand, I did not do this, but I had the engineer explain the process to me, and honestly, LLMs helped a lot with creating this. You know-

    10. CV

      Yeah

    11. AC

      ... he fed in details about our data warehouse and just a million queries that I had previously written, and it kind of helped spit out this type of thing. He also used LangChain to, like, change the names of a bunch of the reports that we had into question form, because obviously, when I'm querying this, whether it's through a custom GPT or Cursor, I'm often asking a question, and so tran- I thought that was pretty cool. Like, translating it to a question makes the semantic layer work so much better.

    12. CV

      Oh, this is gonna be my next project. This is so fun. [laughs]

    13. AC

      Oh, amazing! Glad to

  9. 30:00–34:17

    Using Model Context Protocols (MCPs) to connect AI with data tools

    1. AC

      inspire. So to go back to the actual SQL that was run-

    2. CV

      Mm.

    3. AC

      - and I will actually just run this.... let's see, hopefully this.

    4. CV

      And just in case people missed this, you did call out the Snowflake MCP, which was what we're-

    5. AC

      Yes

    6. CV

      ... seeing right now, which is a programmatic way to hook into running queries in your Snowflake data warehouse. So you can not only generate the SQL here, but instead of, like, copying and pasting it, and going into, like, Snowflake cloud and running it, or whatever your visualization tool is, you can just run it right here. You're getting your tables right here. So again, like, you're, you're eliminating that context switching, you're eliminating the copy and paste, and you're getting your data right here.
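For context on what "setting up" an MCP looks like: Cursor discovers MCP servers from a JSON config (`.cursor/mcp.json`), where each server is just a command to launch plus its environment. The server package and variable names below are placeholders, not Faire's actual setup:

```json
{
  "mcpServers": {
    "snowflake": {
      "command": "uvx",
      "args": ["your-snowflake-mcp-server"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "your-account",
        "SNOWFLAKE_WAREHOUSE": "ANALYTICS_WH"
      }
    }
  }
}
```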

    7. AC

      Yep, exactly. And so I am... Oh, this is interesting. This actually, I am looking at this, and it's- I think it showed a mistake. Um, but, you know, I asked it to queue it- QA itself. Normally, this has done, does a very good job, but one of the quick QAs that I do-

    8. CV

      Mm-hmm

    9. AC

      ... for something like this, is I wanna see no skip steps. Oh, actually, you know what? I remember from the context, this is a temporary, um, this is a step that only some people see.

    10. CV

      Okay.

    11. AC

      But usually when I'm looking through this, you know, in a, in- if we were not doing this demo, I would spend probably a lot longer QA-ing this, but I just wanna see drop-off that makes sense, right? Like, I don't wanna see zero, zero, and then one, or then zero. And so that's just a quick QA that I can do. You know, it's not the AI's name on this analysis, it's mine. So [chuckles] I do that. The other thing that I have done to really make sure that I can QA this effectively is I, in my cursor rules, I tell it to comment every single CTE, so that I know what the... And sorry, CTEs are, like, sections of SQL that often are created when you're writing SQL. And I just wanna know each step of what is happening, so that as I'm looking at the SQL, I can say, "Okay, the agent said it's doing this, and, like, looking at this code, I can actually tell that it's doing this."
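The "drop-off that makes sense" sniff test Alexa describes can itself be expressed in a few lines. A minimal sketch, assuming the funnel output arrives as ordered (step, user_count) pairs; the step names and counts are invented:

```python
def qa_funnel(step_counts):
    """Sanity-check an ordered funnel: each step should have at most
    as many users as the step before it, and a zero-count step should
    not be followed by a step that somehow has users (a 'skip')."""
    issues = []
    for i in range(1, len(step_counts)):
        prev_name, prev_count = step_counts[i - 1]
        name, count = step_counts[i]
        if count > prev_count:
            issues.append(f"{name} ({count}) exceeds {prev_name} ({prev_count})")
        if prev_count == 0 and count > 0:
            issues.append(f"{name} has users but {prev_name} had none")
    return issues

# Hypothetical setup-wizard funnel with drop-off that "makes sense":
funnel = [("started", 1000), ("step_1", 800), ("step_2", 650), ("completed", 500)]
print(qa_funnel(funnel))  # prints: []
```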

    12. CV

      So engineers, cover your ears, because engineers hate, hate, hate, hate, hate when I say this. They hate it! I love over-commented AI code, and let me tell you why. Because when you are not writing this code, you really need to understand the thought process behind how the code was designed. And having AI comment the code that it writes, gives you a natural language way to understand if your understanding of the implementation matches the actual technical im- implementation of the code itself, based on the AI's own reasoning. Fine, delete it if you want to, I don't care.

    13. AC

      Okay.

    14. CV

      I know all the arguments against over-commented code, and I think there's a lot of benefits for human review, and it's also great context for AI when they go back and work on it. So engineers, you can now uncover your ears. You can yell at me on Twitter if you want to, or on X if you want to, but I do the same thing, where I say, "Go ahead and, like, comment in the code, so I can understand how you've decomposed these step by step."

    15. AC

      Yeah. It's pretty, pretty awesome. It's also... I even have a custom GPT in ChatGPT to comment code I've written before. I just insert code, and then, you know, if I'm ever handing off dashboards to someone, I really don't want anyone to be so confused that they have to bother me. You know, my goal is to have it be quite self-serve.

    16. CV

      Look, those lines of code are not gonna expand themselves. Let's get some commentary. [laughing]

    17. AC

      Exactly. [laughing]

    18. CV

      This episode is brought to you by Brex. If you're listening to this show, you already know AI is changing how we work in real, practical ways. Brex is bringing that same power to finance. Brex is the intelligent finance platform built for founders. With autonomous agents running in the background, your finance stack basically runs itself. Cards are issued, expenses are filed, and fraud is stopped in real time, without you having to think about it. Add Brex's banking solution with a high-yield treasury account, and you've got a system that helps you spend smarter, move faster, and scale with confidence. One in three startups in the US already runs on Brex. You can, too, at brex.com/howiai.

  10. 34:17–37:04

    Creating visualizations and dashboards with Mode integration

    1. AC

      So I'm gonna kick off my next, my next prompt. Uh, but basically, like, we're gonna skip ahead a couple hours here, because, um, up until this point, like, my goal was to get this kind of clean base query that I could use for dashboards in Mode, which is Faire's BI tool. You know, a lot of what we are doing as the strategy and analytics team is creating, creating tables that then can be used for t- pretty charts to tell a story. And so let's pretend that I spent a few hours with Cursor, like, refining queries. I actually did one for the old flow and the new flow. I actually did do this. This is also a real use case, like Tim's. Um, and then I built some visualizations in Mode. What's really cool is that there is actually a Mode MCP, and I can tell it to view a dashboard directly. Um, for those who are listening, here we have on the old- on the left-hand side, our legacy flow, and on the right-hand side, our new flow. Um, you'll see that there's one step that is only present in some of the, uh, some of the entry points. This is split by entry point, and basically it's just showing, you know, like, what is the overall success rate, and success rate by step for each of these flows? And so this is what I have pointed the Mode MCP towards, um, in this, in this prompt. So if we go back to this prompt, and I'm just gonna tell it to run this tool. Okay, so I'm telling it again, like, "Hey, go look at this Mode dashboard and use this MCP." I also give it the direct SQL that, um, that I wrote with Cursor, uh, that's powering that dashboard. I'm just asking it for some detailed takeaways and next steps. I give it a little bit of context, um, and I tell it to ask clarifying questions and use the MCPs if necessary. The MCPs, I think, I'm not sure if we've defined it yet, but Model Context Protocol, I believe, is what it [chuckles] stands for, are, like, so powerful. I think that that's when this has felt like magic-... the most. 
Like, at first, I assumed that they were similar to APIs, where everything needs to be defined. Like, some engineer on, you know, both sides needs to go define endpoints, that there's a very specific structure. It seemed like a lot of work. These models just, like, know what to do. It's just wild to me [chuckles] . Um, I will say that there's a lot of work on our data engineering side to get some of these MCPs set up, so I think Ben on our analytics platform team has just spent a lot of time on this. Like, I, I don't wanna minimize that step, but as the end user of them, it is... Like, it just feels magical every time it can just

  11. 37:04–44:39

    Generating structured analysis documents with Notion integration

    1. AC

      access something. And so if we go into the results over here, um, next, key takeaways and next steps. Cool. So, uh, we- looks like we did a good job. Yay, Faire. Um, and it gives, like, a pretty detailed, um, list of, you know, the funnel analysis, insights and concerns, actionable next steps, et cetera. Like, this is already a pretty good sort of output to start with. Um, but at the end of the day, like, analysis like this only matters if you can communicate it clearly, right? Like, you need to sort of convince people of whatever you are trying to communicate. So we also have a Notion MCP, and I'm gonna ask Cursor to create a doc that captures our findings in a structured way.

    2. CV

      And I wanna pause really quickly, because we have done this in maybe 15 minutes, where you have taken a problem, kind of like a pre- and post-analysis of a feature change. You have written SQL. You have not used a WYSIWYG analytics tool.

    3. AC

      Nope.

    4. CV

      You have written straight-up good SQL, traceable SQL, to do a funnel analysis of that on a daily basis. Very interesting. You have made a dashboard for it so that your business users can use it. You have then done a meta-analysis of that dashboard using, um, the MCP to actually read the dashboard, do a first-pass analysis, create a summary, not only of the results, but of recommended next steps, and then you are going to publish that to your business using Notion. Now, I have to say, I have worked with a lot of data teams, and most of them spend their time saying, "What is the priority of this analysis? We have a backlog. I need data engineering, and fine, here's the dashboard."

    5. AC

      [chuckles]

    6. CV

      Like, it's like the ones that, like, get promoted three times in a year, that [chuckles] go the extra step, where they're like, "And here's the analysis, and here are my recommended next steps, and I made it pretty so you can share it with your boss." And I just think, like... I was watching this, and I was like, "Oh, man, I'm gonna promote this data analyst. Like, they're pretty, they're pretty, they're pretty good." And so I just think the ability to level up the quality of your work, and think through the interesting things. The interesting thing isn't like, "Did I write this SQL join correctly?"

    7. AC

      Mm-hmm.

    8. CV

      The interesting thing is like, "Have I thought through all the edge cases? Do I have any creative ideas on what we could do next? Can we improve this analysis for the future?" And so I really like this end-to-end flow, because it just shows how you are leveling up into higher strategic tasks-

    9. AC

      Mm-hmm

    10. CV

      ... um, as opposed to spending your time sort of in the tactics.

    11. AC

      Yeah. I mean, it's... I totally agree, and we are almost done, but, um, like you said, you know, we need to, we need to communicate this. And so one thing that we have done on strategy and analytics is, um, our chief strategy officer, Dan, like, he really cares about synthesized writing, and all the leaders on his team care about synthesized writing. And so we worked with him a couple months ago to actually create some guidance on how to write at Faire. Like, Faire is very much a vertical doc culture, you know, pre-read culture. We're not creating a lot of slides, we are writing a lot of docs. And so we have this sort of, like, "use answer-first structure" key principles doc, and then we also have a template for what docs should look like. And so actually, in this prompt, you'll see, like, I tell it to follow these, um, to follow these rules that are in these docs, and that's, like, another thing that I love about SQL, or sorry, about Cursor, is you can just tell it what rules to follow in a variety of ways. Um-

    12. CV

      Okay, Alexa, I'm gonna give you an upgrade here, which is you should reference these files in your Cursor rules so you don't actually have to answer-

    13. AC

      I actually-

    14. CV

      Uh, oh.

    15. AC

      That's what... That's a great... I should. I mean, I wanted to, you know, show the full flow, but-

    16. CV

      [chuckles]

    17. AC

      ... um, the reason I don't is 'cause it would've actually done it in the previous step.

    18. CV

      Oh, yeah.

    19. AC

      'Cause it would, it would've, it would've known, and then I wouldn't have gotten to talk about it.

    20. CV

      Oh, okay.

    21. AC

      But yes, I will, I will do that once we are done.

    22. CV

      It's showbiz, folks.

    23. AC

      [chuckles]

    24. CV

      That's what this is.

    25. AC

      Um, and so the last thing is, I am going to pull over the doc. Uh, this is one I created from a previous time I did this, just because I wanted to highlight in yellow. Um, I gave instructions in this prompt to tell me what to add. I think one thing I wanna get across is this... I don't think that Cursor yet, or AI, can zero shot, like, an executive-ready doc yet. Like, there's... That is where I think that we still need to do three to four revs of, um, of sort of editing, adding analysis, making sure this makes sense. Like, these tools have so much context, but we have... We still have some context that is just this, like, je ne sais quoi. Like, humans are still valuable. And so this is, like, a pretty good start, and I think what's cool about Cursor is, like, I cut out some of the middle men. I got to this point, like, really, really quickly. But we're not just creating, like, AI slop docs all over the place. We are, you know, just accelerating how fast analysts can do things like this. We, you know... And the other thing that's really helpful about, um... I would run this through that guidance three or four times. Um, it can be hard when you've been so in the weeds of an analysis to, like, take a step back and make sure your story makes sense, and so that's what LLMs are really good for. Um, so it can, like, cover my blind spots.

    26. CV

      ... Well, you know what's more painful than running this three times through your guidance? Is sitting three times with your SVP of strategy and having them tell you, "This makes no sense, and you need to go [chuckles] back and edit stuff." So again, I think, uh, what a, what a nicer way to get to a higher quality output than, uh-

    27. AC

      Yes

    28. CV

      ... than having to-

    29. AC

      It saves me time-

    30. CV

      Mm-hmm

  12. 44:39–53:10

    Building custom agents to automate experiment result documentation

    1. TT

      I agree.

    2. CV

      Awesome. Okay, Alexa, so we just saw how Cursor can do end-to-end funnel analysis all the way to the proverbial front door of your SVP of strategy. Tim, let's talk about another kind of analysis, which is experimentation analysis. My favorite.

    3. TT

      Yeah, you should hold it close to your heart. So, look, we've talked about the big picture, we've talked about, like, a really detailed sort of actual analyst and how they do their day job. But I think one of the other things these AI tools are just so good at is just accelerating process, like automating away some of those routine, lower-impact steps in the analytics journey. Um, so as a good example, we wanna show you a quick agent we built, which automates the process of writing up experiment results. So across Faire, we might be running, I don't know, hundreds of A/B tests on the product a month, and each of those experiments needs to be monitored, assessed, documented, and that just takes up so much time for our analysts. So if we don't stay on top of this, very quickly, it's our team that can become the bottleneck and slow down our launch velocity, which is the last thing anyone wants. And I know this is something that's happening up and down the country around every single tech company, um, so we thought it'd be a good example just to, to demonstrate. So, um, let me show you how I built this. Uh, one thing I wanna m- really, really stress here is just how straightforward these things are to build. Like, once you've gone through the pain of setting up Cursor, getting your MCPs in place, actually spinning up any new agent you can think about is just so quick and so non-technical for anyone to do. So it all runs off a Cursor rules file. So for... If you don't know what these are, they're literally just a type of file, uh, an MDC file, that these agents know to look for and know they're likely to contain instructions. Um, they're really easy to set up. It's basically plain English. So you just write, uh, a simple, uh, one line, uh, interesting description of what it is. So, "Format for writing experiment result using Eppo data." Eppo is just the, uh, experiment tool that we use.
It basically takes our data, does a bit of analysis, slaps a UI around it, uh, and, and writes it out for us. Um, uh, so you then select when you want to apply. I've just selected Apply Intelligent. I trust the model to work out when it needs to use it, and they do a pretty good job. And then other than that, it literally is just writing out what you want the agent to do. Now, this might look a bit complicated, and I'll generally write this in a few minutes in plain text, what I wanted to write. I'll ask Cursor to then tear the thing down, and I'll rewrite it a couple of times and just get it right in the format I want. But ultimately, it's just a step-by-step guide of what I want this thing to do. So I've just said, for those who are listening, I've said, "If you're asked to write up experiment results, do the following things." So, "Ask the experiment name, if you haven't already got it, and then go collect the data you're gonna need." Uh, so use the Eppo MCP we've set up. So go talk to our experiment space, pull in the actual results of the experiment, and then use our Notion MCP, that we've already talked about, to go pull in all the other context that you might need. So any other documentation that's gonna help it interpret that data and write up this report. And then I've got a little bit down here, you can see, telling it exactly what kinds of da- um, sort of documents to look for. So PRDs, experiment docs, technical specifications, that's, that's what it's gonna help it look for. And then I ask it to basically write out those results in the format I give it, and then I'm pretty prescriptive about the format I want, 'cause I want this to do it really consistently in the format we want, with really tight, um, tight takeaways. So actually, I've asked it to create it in just a local file on my Cursor, on my computer, and that just means I can actually look at it before it goes create to the Notion doc. 
I can take a peek, refine the prompt if I need to, but that's just a fallback. And then ultimately, it's gonna turn it into another Notion doc, so everyone else in the business can see it. And it's gonna do all this incredibly quickly. And let's actually just see what this thing looks like in, in reality. So let's just run it on, uh, an experiment result. So I've just said, "Please write up the experiment results for..." And I've given it the name of the experiment, which is Vertical Product Tile Images. Uh, and straight off, it's gone off, and it's found, uh, it's written itself a nice to-do list. It's found the Eppo results, so it's just called the results, and it's found its results. Great, it's found the, the rules, and now it's gonna start writing this all out for me, which is great to see.... and then while it's doing all that, we'll just have a look. So the format we've gone through, uh, we can just show here. So basically, the rest of this is all just showing exactly what the format of this thing's gonna look like. So I've asked it to give me the document links, exactly, uh, what I want. So if I click into more context, a brief summary of the experiment, uh, and then the key bit, the actual metrics that it's got from Eppo. So it's gonna show me the actual results, the confidence intervals. It's gonna pull out the most important ones, and it'll give me a nice little color coding for it. Uh, and then I just want the actual answer from this. So I actually want it to do the work of interpreting what we should do next, and so it's written the takeaway section. So I want a clear, should we roll this out? Should we roll it back? What should we do? And give me the reasons why. Like, why are we doing this, and are there any other interesting insights that you found, uh, that we should call out from this? So let's see. Right, so it's look- let's have a look at what it's doing here. It has found everything we need.
It's starting to write out the doc, which is nice to see, uh, in this little thing. I'm just gonna go ahead and queue up, so turn this into a Notion. So as soon as I've read it, while we look at the actual results, uh, it will start writing the Notion doc. And let's have a look. So straight away, in a second, while it's running that, I have got a st- write-up with all the right context I need. So it's got the links I needed. It's got the context. It's pulled the right data, good. The nice thing is this result, so this was just literally sharing, uh, vertical images rather than square images, like a really standard growth experiment, like which one performs better. And you can see a nice stat sig lift, uh, of about three and a half percent, uh, for the treatment. Uh, and then it's pulled out some other interesting business metrics, and let's have a look at these takeaways. So it's saying, uh, "Great, roll it out," the right answer, uh, because of that lift, and it's also pulled out some interesting things. So it said, uh, oh, "Data science prediction models are also actually positive." So it's saying not only have we got more retailers, they're actually higher quality retailers, the ones we've got. So this looks good. As a first pass, this looks great. Um-
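Pulling together the steps Tim describes, the Cursor rule (an `.mdc` file with a short metadata header followed by plain-English instructions) would be shaped something like this. The wording is paraphrased from the episode, not Faire's actual file:

```markdown
---
description: Format for writing experiment results using Eppo data
---

When asked to write up experiment results:

1. Ask for the experiment name if you don't already have it.
2. Use the Eppo MCP to pull the experiment's metrics and confidence intervals.
3. Use the Notion MCP to pull supporting context: PRDs, experiment docs,
   technical specifications.
4. Write the results to a local markdown file first, using this format:
   - Document links and a brief summary of the experiment
   - Key metrics with confidence intervals, color-coded
   - Takeaways: a clear roll-out / roll-back recommendation, the reasons
     why, and any other interesting insights
5. On request, publish the file as a Notion doc and draft a short Slack summary.
```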

    4. AC

      And just to call out one thing here personally, like, we, we have a standard format for doing these, where you have to type the confidence interval and type the emojis, and that is, like, work that is not valuable for our team. And so it's pretty awesome that, like, it came up with takeaways, but it also saved us five minutes of, like, fiddling around with emojis and decimal points. [chuckles]
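For readers curious what sits behind a "stat sig lift" and its confidence interval, here is a sketch of the standard two-proportion computation an experiment tool reports. All numbers are invented, not Faire's results:

```python
import math

def lift_with_ci(control_conv, control_n, treatment_conv, treatment_n, z=1.96):
    """Absolute lift between two conversion rates with an approximate
    95% confidence interval (normal approximation for two proportions)."""
    p_c = control_conv / control_n
    p_t = treatment_conv / treatment_n
    lift = p_t - p_c
    se = math.sqrt(p_c * (1 - p_c) / control_n + p_t * (1 - p_t) / treatment_n)
    return lift, (lift - z * se, lift + z * se)

# Invented counts; a lift whose interval excludes zero is "stat sig"
lift, (lo, hi) = lift_with_ci(2000, 10000, 2350, 10000)
print(f"lift: {lift:+.1%}, 95% CI: ({lo:+.1%}, {hi:+.1%})")
# prints: lift: +3.5%, 95% CI: (+2.4%, +4.6%)
```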

    5. CV

      Yeah, I mean, AI as a translation layer between a SaaS interface or a SQL query into natural language in the format that you like, that your boss likes, that's just a time-saver in and of it, of itself. So I, I love using AI as, like, the universal format translator.

    6. TT

      So as you can see, I've just asked for the Notion link. It should produce the Notion doc. So let's just open that up, and let's put it on screen. And look, straight away I've got a nice, uh, document I can share around with everyone with all the right color codes, the takeaways, and even as a little bonus, let's see if it's done it. Always has trouble getting things in a little toggle, but right in the bottom here, I've even asked it to spit out a Slack with an even more summarized version.

    7. CV

      Mm-hmm.

    8. TT

      So I can just drop this into the right review channels, and straight away, this can go and get approved. Now, are we gonna do this for every complicated experiment? Probably not. There might need to be a bit of analysis, but for the simple ones, straight one-shot. Even the complicated ones, this accelerates you. But also, anyone in the business can start doing this, which means we can pass more and more of these things down to engineers, PMs, other people to write this kind of stuff and do the analysis for them, which again, can just massively accelerate our launch velocity at Faire, which we're really excited for.

    9. CV

      Yeah, I, I'm sorry, and I know this is my brand, but I feel like AI is just accruing to every task. Sorry, PM, it's your job now. [laughing] So, uh, I, I do like that little trend that's happening. This is amazing. Um, love it. Have done these kinds of analyses before. They have not been this easy to read, and they certainly haven't been generated in ninety seconds. Really useful tool for experimentation analysis. A call-out to the experimentation tools out there that I know and love. Um, if you have not made an MCP for access to your data, you are limiting your customers, and so I do think sort of AI integration of SaaS tools is going to be a way that teams start to evaluate the quality of tools that they're working with. So just something to think about if you're out there building data analysis tools. Okay,

  13. 53:10–59:40

    Designing and analyzing customer surveys

    1. CV

      we are gonna wrap up very quickly with a final... We're gonna do a bonus. We usually only do three use cases, but your, yours are all so good, we're gonna do a speed run through a bonus use case, which is actually designing and analyzing kind of unstructured data in a user survey. So Tim, you're gonna whip us through how you could use AI to make surveys and survey analysis a lot better.

    2. TT

      Yeah, I'm, I'm gonna do this really quickly. We're not gonna spend time on this, but, um, let's just show. I, I think it's just another one of those incredibly common analytics use cases everyone has to do, and they are just so time-consuming. You've got to design the survey correctly, code it into a survey platform, then analyze all those results. It's really time-consuming, but end-to-end, AI can just, like, transform the whole process. Let's show another one. I'm just gonna stop that. I'm not gonna run these. I'm just gonna go straight to my backup. So let's just start on design. So what I love doing this, I think you can do it in Cursor, you can do it in many things. I think ChatGPT Projects is really good for this, and again, incredibly accessible. Everyone knows how these work. Uh, it's just a great way of giving context. So if we switch over to this one, which ChatGPT, it's lovely and taking a bit of time to load, you can see in Files, all I did was give it a bit of background information. So what is our bit of business? So this was a survey we want to design on Faire Direct tools, so that's our tools that we give all our brands to help them accelerate their sales with their own customers. And so I've given a, a ton of information to the model that just says, like, "What actually is Faire Direct? What are these tools? What's the strategy?" And then I-- whenever I do a survey like this, um, whether I'm doing AI or not, I'll start with a hypothesis. That- that's ultimately what you want to test, and so this is a nice way. If I just open up those hypotheses, so this is what I fed it into. I just gave it a list of simple hypotheses on what, um-... what we want to learn. Uh, we do aligned, we've got everyone aligned on some hypotheses. There's 14 in here, and then really simple, I'll just call out one, like, um, higher sales on Faire leads to, uh, more usage of these tools, um, things like that, that we ask. 
Now, I've given all that to it, and if we look at the prompt we ran, it was simple. I dropped it in saying, "You're a specialist at doing these customer insight surveys. Design me a 10-minute survey for a thousand brands to test those hypotheses." I said, "These are the inputs I've given you, here are the design requirements we want," and I asked for three things: turn those hypotheses into a full questionnaire that we can go ask our customers; but don't just do that, also give me the coding file that turns that questionnaire into the actual survey, in this case in Qualtrics, the platform we use to run these things, so it can be built in one click; and give me an analysis plan for what to do with the results.
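The "coding file" Tim asks for is, presumably, something like Qualtrics's TXT import format, which turns plain text with `[[Question:MC]]`-style markers into a ready-to-run survey. A minimal sketch of generating that format from a question list might look like this; the markers follow Qualtrics's documented Advanced Format syntax, but the questions themselves are made-up placeholders, not Faire's actual survey:

```python
# Hypothetical question list; in Tim's workflow the model derives this
# from the 14 hypotheses rather than a hand-written dict.
questions = [
    {"text": "How often do you use Faire Direct tools?",
     "choices": ["Daily", "Weekly", "Monthly", "Rarely"]},
    {"text": "Which tools have you tried?",
     "choices": ["Email campaigns", "Direct links", "Storefront"]},
]

def to_advanced_format(questions):
    """Emit a Qualtrics-style Advanced Format TXT import string."""
    lines = ["[[AdvancedFormat]]"]
    for i, q in enumerate(questions, start=1):
        lines.append("[[Question:MC]]")   # multiple-choice question block
        lines.append(f"[[ID:Q{i}]]")      # stable question ID for analysis
        lines.append(q["text"])
        lines.append("[[Choices]]")
        lines.extend(q["choices"])
        lines.append("")                  # blank line separates questions
    return "\n".join(lines)

print(to_advanced_format(questions))
```

The point of having the model emit this file rather than prose is exactly what Tim describes: the questionnaire becomes importable "in one click" instead of being re-keyed by hand into the platform.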

    3. CV

      I have to pause you really quickly, 'cause this whole episode has been Tim saying, "I just did this really simple prompt," and then you see this, like, 1,000-word, hyper-structured, very organized [chuckles] prompt. And Alexa's like, "Oh, man, I would just go in there and be like, 'Make me a nice survey, please.'" [laughing]

    4. TT

      [chuckles] I love it. So I'm a big believer that 99% of my prompts are gonna be one line, and then if I'm gonna send a model, a big model to go do work for 15 minutes, I'll probably ask another model just to turn my one line into something more, more detailed. Uh-

    5. CV

      I want, I want the A/B test of Alexa-

    6. TT

      Yeah

    7. CV

      ... you run this exact same GPT with a tinier prompt, and you tell me if you get the same quality. [chuckles]

    8. TT

      See what happens. Maybe I just don't trust it quite as much as Alexa does yet. Okay, so what did we get from that? Very quickly, from a list of hypotheses, I've got a really nice first pass of a survey straightaway. It asks a load of questions, it's about the right length, and this can just massively accelerate the process. And once we've got that right, it's also given me that coding file, which I'll scroll through on screen. These things are painful to write.

    9. CV

      Yeah.

    10. TT

      So just having this, from a one-liner telling the system exactly what to write out, saves hours of time for our research operations team, and it even translates that into an analysis plan that says what the outputs are gonna look like. So straightaway, this whole thing can go from a list of hypotheses to something we could probably get out to our customers by the end of the day. That shortens the process enormously. But what happens when you get the results back? That's the other thing this can do, and again, I'll go incredibly quickly and just show you the final result. I used a very similar prompt. I'm gonna show you the file I dropped in, just to show you how painful this is. I gave it the same hypotheses, and look how bad this is: it's the raw output from Qualtrics, and these usually take a lot of cleaning. It's one line for every respondent, and then one column, not just for every question, but for every possible answer to every question. So these files are incredibly dense for anyone to work with, and they take time and a bit of playing with. The only other thing I gave it was a helper file, basically the coding file I just showed you: what's the question ID, what's the question language, what are the answers? And I add two extra columns: is it a demographic question or an answer, and is it single choice or multiple choice? That's all I gave it, and then I wrote another one of my fun and simple prompts.
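The shape Tim describes, one row per respondent and one column per possible answer, is the classic wide survey export, and his helper file is what lets anything (AI or otherwise) make sense of it. A small pandas sketch of that reshape, with entirely hypothetical column names and answers standing in for the real Faire Direct export:

```python
import pandas as pd

# Hypothetical raw export: one row per respondent; the multiple-choice
# question Q2 gets one column per possible answer (1 = selected).
raw = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "Q1": ["Monthly", "Weekly", "Monthly"],   # single choice
    "Q2_1": [1, None, 1],                     # Q2: "Email campaigns"
    "Q2_2": [None, 1, 1],                     # Q2: "Direct links"
})

# Hypothetical coding/helper file: maps each raw column to its question
# and answer text, plus the single-vs-multiple-choice flag Tim mentions.
coding = pd.DataFrame({
    "column": ["Q1", "Q2_1", "Q2_2"],
    "question_id": ["Q1", "Q2", "Q2"],
    "answer_text": [None, "Email campaigns", "Direct links"],
    "choice_type": ["single", "multiple", "multiple"],
})

# Reshape wide -> long: one row per (respondent, answered column).
long = raw.melt(id_vars="respondent_id", var_name="column", value_name="value")
long = long.dropna(subset=["value"]).merge(coding, on="column")

# For single-choice columns the cell value IS the answer; for
# multiple-choice columns the answer text comes from the coding file.
long["answer"] = long["answer_text"].fillna(long["value"])
tidy = long[["respondent_id", "question_id", "answer"]]
print(tidy.sort_values(["respondent_id", "question_id"]).to_string(index=False))
```

Once the data is in this tidy long form, cross-tabs per hypothesis are one `groupby` away, which is why handing the model the coding file alongside the raw export is the step that makes the analysis possible.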

    11. CV

      [chuckles]

    12. TT

      ... role, task: analyze the survey results, find the most interesting things in the data, and then judge the predefined hypotheses. I want a table that says, for each of those hypotheses, was it right or was it wrong? And then, again, I always end on a little ChatPRD; I don't want it to go away for 15 minutes and come back with something that isn't very useful. And let's have a look at this very quickly. So I've got a nice little summary up front, and then there are my 14 hypotheses.

    13. CV

      Oh!

    14. TT

      And it's got a nice table that says "proved, neutral, disproved" for each of them, and because I asked it to, it's even giving me a confidence score. I said, "One means it's really confident; five means it's not very confident at all," and you can see the different levels throughout. And then beneath it, for each of these, I've got the specific analysis I asked it to do, all the insights it found to back up those findings. Is this the only analysis we're gonna do on this survey? Almost certainly not, but on day one, I've got the results, I've thrown them into this, and within a matter of minutes I've got a much, much better intuition of what all that data is showing. So while I might still go do some analysis on this, I can be so much more targeted on exactly what we want to look into and where I want to spend my time. And straightaway, we can start sharing some of these findings out with people very, very quickly.

  14. 59:40-1:03:28

    Lightning round and final thoughts

    1. CV

      Oh, no. I'm reflecting now, after this episode: okay, I've told everybody to ship a bunch of features, and now I'm gonna be like, "Do a bunch of analysis." [chuckles] In my mind, I'm like, "Oh, my gosh, I'm underusing AI to actually understand my business, and it's so accessible, and if I can just write 17-point prompts like Tim, I can get really high-quality insights." But I do wanna call out, reflecting on this whole episode and your four workflows, what I love about what you're showing us: so many people think of AI as an input to producing a thing, but haven't done that full circle back to analyzing the thing, sharing the thing, communicating about the thing. You're showing both sides. You can create with AI, and you can analyze and communicate with AI, and looking at both sides of that coin is really useful. Okay, we are going to do the one and only lightning-round question, because we have gone long on this episode and I wanna get you both back to all of your agents and MCPs and analysis. We're gonna go back to prompts one last time and figure out your personality around prompts. Alexa, Tim: when AI is not listening, when your MCP will not call the tool, what is your prompting technique? Alexa, what do you do?

    2. AC

      I think mine's pretty straightforward: the problem I run into most frequently is that I'm clearly running out of context.

    3. CV

      Mm-hmm.

    4. AC

      Like, a conversation has gone so long that it's starting to get wonky, and while level one is just starting over, what AI is best at is summarizing. So I'll say, "Hey, summarize what we've done so far in this 30-turn conversation," and then use that to start over. I've heard people on other episodes say you wanna figure out where it got off track, but clearly I'm a pretty efficient person. I'm not Tim; I'm not writing out an entire prompt for 20 minutes. I don't have time for that. I just wanna say, "Hey, summarize what happened, we're gonna start over," and give it that summary so at least the new conversation gets some context from the old.

    5. CV

      Great, and Tim, what about you?

    6. TT

      I don't do much with my prompts myself. It's all AI. It's all AI. [chuckles]

    7. CV

      [laughs]

    8. TT

      So I generally go and open up three windows in Cursor, run three chats with three different models with the same prompt, then go get myself a cup of tea and see what comes back. [chuckles] That's the British stereotype in me, getting my cup of tea while it runs, but-

    9. CV

      Yeah, you run the A/B test, is what you do.

    10. TT

      Yeah, exactly that.

    11. CV

      Okay. Uh, I, I love this. Tim, Alexa, where can we find you, and what can we be helpful with?

    12. AC

      You can find me on LinkedIn. My full name is Alexandra. And, uh, ways to be helpful, our strategy and analytics team is hiring across the board. Our team partners super closely with PMs and our go-to-market team. We make strategic, data-driven decisions. Super fun. We have tons of open roles, so if you like experimenting with AI, we are very AI forward. So you can learn more at faire.com/careers.

    13. TT

      And you can find me on LinkedIn as well, and I'd echo that: come join us. If you love AI, come join us and show us how we can do it even more here.

    14. CV

      Okay, we will link to your Careers page in the show notes. Alexa, Tim, this has been so fun. Thank you for joining How I AI.

    15. TT

      Thank you for having us.

    16. AC

      Thanks for having us.

    17. CV

      [upbeat music] Thanks so much for watching. If you enjoyed the show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time.

Episode duration: 1:03:28

Transcript of episode KOr-xQuNK4A

Get more out of YouTube videos.

High quality summaries for YouTube videos. Accurate transcripts to search & find moments. Powered by ChatGPT & Claude AI.

Add to Chrome