How I AI

The secret to better AI prototypes: Why Tinder's CPO starts with JSON, not design | Ravi Mehta

Ravi Mehta, now a product advisor, has built and scaled products used by millions. His past roles include Chief Product Officer at Tinder, Entrepreneur in Residence at Reforge, and senior product leadership positions at Facebook, TripAdvisor, and Xbox. In this episode, Ravi demonstrates his data-driven approach to AI prototyping that produces dramatically better results than traditional "vibe prototyping." He also shares his structured framework for generating images in Midjourney that look like they were shot by a professional photographer.

*What you'll learn:*

1. Why most product managers and designers are "vibe prototyping" with AI and getting mediocre results
2. How to use JSON data models instead of design systems as the foundation for better AI prototypes
3. A simple three-part framework for structuring Midjourney prompts to get professional-quality photos
4. How to use Claude and Unsplash's MCP server to generate realistic data and images for your prototypes
5. Why real data (not Lorem Ipsum) is critical for getting meaningful feedback from stakeholders
6. The film stock "cheat code" that instantly elevates your AI-generated photos

*Brought to you by:*

• Google Gemini—Your everyday AI assistant: https://ai.dev/
• Persona—Trusted identity verification for any use case: https://withpersona.com/lp/howiai

*Where to find Ravi Mehta:*

• Website: https://www.ravi-mehta.com/
• Reforge: https://www.reforge.com/profiles/ravi-mehta
• LinkedIn: https://www.linkedin.com/in/ravimehta/
• X: https://x.com/ravi_mehta

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Ravi and data-driven prototyping
(02:31) The problem with "vibe prototyping" in product development
(04:18) Spec-driven prototyping vs. data-driven prototyping
(05:27) Demo: Spec-driven approach to prototyping
(08:26) Limitations of the basic AI prototype approach
(11:24) The data-driven prototyping approach explained
(12:08) Demo: Data-driven prototyping
(17:45) Creating a prototype with the generated JSON data
(23:33) Comparing the quality difference between approaches
(26:44) Modifying the prototype
(28:53) Benefits of this approach
(34:40) Structured Midjourney prompting
(36:20) The subject-setting-style framework for better image prompts
(44:27) Using camera metadata to refine your results
(48:54) Lightning round and final thoughts

*Tools referenced:*

• Claude: https://claude.ai/
• Reforge Build: https://www.reforge.com/build
• Midjourney: https://www.midjourney.com/
• Unsplash MCP: https://github.com/okooo5km/unsplash-mcp-server-go

*Other references:*

• Reforge AI Strategy Course: https://www.reforge.com/courses/ai-strategy

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Ravi Mehta (guest)
Sep 29, 2025 · 54m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–2:31

    Introduction to Ravi and data-driven prototyping

    1. CV

      PMs and designers are prompting prototyping systems that they don't quite understand how to get the best outcomes from. I'm always impressed that a prototype gets generated, but sometimes it's just, like, not quite what I need for the product I'm building or the experience I'm trying to craft. And so I know you have come up with a system called data-driven prototyping, which you're gonna show us.

    2. RM

      The thing that we can do is we can help the LLM by starting to separate out the idea of not just generating the UI, but also by helping it with the data. So I've got a prompt here. It says, "Using JSON," because we want it to be structured data, "generate a sample itinerary that I can use to prototype a shared trip itinerary feature. The destination is Paris."

    3. CV

      I just think about the human parallel to this, which is searching through stock photos, trying to find which one is representative. It just takes so much time, and because an MCP now can, like, programmatically go through the tasks to be done using these external tools, it just makes it a lot faster to get higher quality media into your prototypes.

    4. RM

      So this is the finished prototype based on that prompt. We can see it generated 22 different files. It's a really nice componentization. It's got a little bit of sample data in there, and it generated mock data, so we can see what day one looks like. We've got some photos in there. We can see what day two looks like.

    5. CV

      [chuckles] This will be you teaching me how to actually bring some data and structure to my vibe designing and prototyping. This is genius. I'm really excited. [upbeat music] Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today, I am giving you elite prompting strategies from Ravi Mehta, who was CPO at Tinder and a product leader at places like Facebook and TripAdvisor. Ravi's gonna show us how design systems and UX descriptions are not the foundation of great prototyping. In fact, JSON and data models should be. He'll also walk us through how to use structured prompting in Midjourney to get high-quality photos and images for your prototypes. Let's get to it.

    6. SP

      This podcast is supported by Google. Hey, everyone, Shresta here from Google DeepMind. The Gemini 2.5 family of models is now generally available. 2.5 Pro, our most advanced model, is great for reasoning over complex tasks. 2.5 Flash finds the sweet spot between performance and price, and 2.5 Flash Lite is ideal for low-latency, high-volume tasks. Start building in Google AI Studio at ai.dev.

  2. 2:31–4:18

    The problem with “vibe prototyping” in product development

    1. CV

      Hey, Ravi, thanks for coming on How I AI. I'm excited to see some of these workflows that are gonna be really useful for me.

    2. RM

      Thanks so much for having me. Uh, I'm excited to go through it, too. I've been having a ton of fun playing with these things.

    3. CV

      Yeah, so we've seen, seen a lot of engineers lean into vibe coding and some of the pros and cons of that. And what I am also seeing, which I think you are probably seeing, is product managers and designers doing, like, vibe prototyping-

    4. RM

      Yep

    5. CV

      ... where, you know, where if we're saying people are writing code they don't understand, [chuckles] I might, I might argue that PMs and designers are prompting prototyping systems that they don't quite understand how to get the best outcomes from. And I think these are such cool tools for product managers and designers and other folks to get their ideas across, but a lot of times, I've been personally dissatisfied with the outcomes of my prototypes. I'm always impressed that a prototype gets generated, but sometimes it's just, like, not quite what I need for the product I'm building or the experience I'm trying to craft. And so I know you have come up with a system called data-driven prototyping, which you're gonna show us, that's gonna help us close that gap between using sort of like vibe-based price-- or vibe-based prompting into these prototyping tools, into something a little bit more structured that you think gets better quality.

    6. RM

      Yeah, absolutely. And I've been playing a lot, a lot with prototyping, both for people that are doing zero to one, as well as people that are using prototyping to understand established products. And when you're using prototyping to understand and to advance established products, the game is a little bit different because you have existing UI, um, that you have to adhere to, and then you have existing data and functionality that you have to adhere to. And oftentimes, the thing that's really important is, how do you provide the right context to the vibe

  3. 4:18–5:27

    Spec-driven prototyping vs. data-driven prototyping

    1. RM

      coding tool? And I think that there's two common ways that people do that. The first is, like, spec-driven prototyping, where you write a really detailed prompt. You try to make that as specific as possible to give the tool enough context to create the thing that you need. The other way that people, you know, create, uh, prototypes is with design-driven prototyping, where you actually start with images. You might start with wireframes. You might start with Figma designs. You upload them in, and then the prototyping tool takes those, and they bring those to life. Um, but it occurred to me, when we're building products, a common thing in the product life cycle between design and specification is when engineering starts to take a look at what you want to build, one of the first things that they do is they say, "Here's the data schema that's actually gonna drive the front end." And by doing that, they take, um, some of the things that are a little bit ambiguous around designs or specs, and they codify them in a really concrete way, and so I started to play around with, can you do that from a prototyping standpoint? And I wanna show, um, how you can use this technique to create prototypes that are, um, a lot more functional, um, and then a lot more flexible, so that you can, um, change them to test different data sets for different purposes, so you can get really good, accurate user feedback.

  4. 5:27–8:26

    Demo: Spec-driven approach to prototyping

    1. RM

      So why don't we start? We'll, um, take a look at how someone might, uh, typically use more of a spec-driven, um, type of approach to prototyping.

    2. CV

      Mm-hmm.

    3. RM

      I'm using, uh, Reforge Build, which is a new prototyping tool specifically designed for product teams that are working with established products. Um, it's been really good. I think one of the things I've noticed about it is it just generates very clean code that's much more usable, um, in terms of taking things into production. So here, I've got a short, uh, prompt. Uh, "So make a website for planning a Paris trip with multiple people. Include some activities, hotels, and restaurants over three days. Add user profiles, and let people comment on things. Make it look nice." So I was thinking about when I was at TripAdvisor, we wanted to create a trip itinerary planning feature, but what we figured out as we were doing the spec for that is trip planning is often a multiplayer activity, not a single-player activity, so we wanted to understand, like, how do you create a trip itinerary-... feature that is really multiplayer from the start, and I thought, you know, prototyping is a great way to explore that idea. So here's how you might typically start to build a prototype around that idea and explore it.

    4. CV

      Yeah, and I have to say, make it look nice is a commonly used Claire Vo prompt when going into- [chuckles] ... these AI tools. Like, make it good. [chuckles] Very sophisticated prompting, prompting techniques. So I think this is, you know, ripped from the headlines sort of, uh, vibe prototyping prompting right here.

    5. RM

      It's funny 'cause it does work. I find that AI is often very responsive to a little bit of pushing and prodding.

    6. CV

      Yeah. Okay, great. So you're using this sort of very common, um, experience-based description of what you wanna see, right? Very functional. It's a website. It has a certain set of, you know, pieces of data in it. There's user profiles, and people can comment. It's, like, very functionally descriptive.

    7. RM

      Definitely. And so, like, you know, what's pretty amazing is that this will work. You know, it's not-

    8. CV

      Mm

    9. RM

      ... a lot of context, but if I hit create, um, we'll go in, and the tool will create a plan. The, the Reforge Build tool does a really nice job of asking you, um, some follow-up questions, and you're actually asking them to do multiple things all at the same time.

    10. CV

      Mm-hmm.

    11. RM

      You're asking them to think about the UX design. You're asking them to think about the underlying data structure. You're asking them to figure out the code architecture, so there's a lot that needs to be done, and, you know, at the end of the day, uh, it's, you know, it's capable of doing it, but because you're asking it to do so many different things, the output, um, is kind of an average across those things, rather than really spiking each of those areas, which you want it to be. You know, as a product team, you have someone who's great at design, you have someone who's great at product, you have someone who's great at engineering, thinking about all of those things. One of the things that I like a lot about the Reforge tool is it asks you follow-up questions to help make your prompt better, which is important for making sure that the system has as much context as it needs. I'll just skip that for now, so it gets into the code generation,

  5. 8:26–11:24

    Limitations of the basic AI prototype approach

    1. RM

      and right now it's going in, and it's starting to write the code. It's starting to come up with a componentized plan, so, you know, it's gonna use reusable UI primitives. It knows sort of overall what it's trying to do, which is create a full prototype with mock in-memory state. And so it's actually doing the work to say, "Okay, we need a data model here," and as part of doing this, it will create that underlying data model as well. And you can see here, it's creating a file. It's called lib_mock.ts. But at the end of the day, because it's trying to do so many different things at once, the end result typically is not as high quality as the approach that I'm gonna show. So, um, let me actually cut over to this particular prompt, "Make a website for planning a Paris trip," already built, so we can kinda see what that looks like.

    2. CV

      Great, and one of the things that I wanna reflect on while you're pulling that up is, I think the exposure of the reasoning is really interesting. 'Cause I was reflecting on your prompt, and I was like, "What if I Slacked this to a designer?" We're kinda like, "Make a website-

    3. RM

      Yeah. [chuckles]

    4. CV

      ... with a three-day itinerary for Paris with multiple people and comments." I would get... I probably wouldn't even get back a list of questions. I would get, "Yeah, find time on my calendar, so we can talk about this."

    5. RM

      Totally. [chuckles]

    6. CV

      And it's so funny that you can do that iterative process of taking a very high-level idea, actually getting back some structured questions to help the design side and the product side be fleshed out, and then it can go into, "Okay, what would an engineer think about implementing this?" in, like, two or three responses. And so I just... I, I reflect on it as an accelerated version of the product development process we all know and love. Um, and it's just so interesting that it follows almost the same pattern but much, much faster, [chuckles] much more efficient.

    7. RM

      I mean, it's pretty amazing with that little context that you can get to what we have-

    8. CV

      Yeah

    9. RM

      ... here, but we can definitely do, do better. So this is the finished prototype based on that prompt.

    10. CV

      Mm-hmm.

    11. RM

      We can see it generated 22 different files-

    12. CV

      Yep

    13. RM

      ... so really nice componentization. Um, it's got a little bit of sample data in there, and it generated mock data, so we can see what day one looks like. Um, we've got some photos in there. We can see what day two looks like, but it's also got some problems. So, for example, Seine River Cruise, it tried to get a photo, but now that's failing, and that is actually a hallucination problem. A lot of times, when these tools are trying to create media, they do know some URLs that are out there, but they'll hallucinate other URLs, and then you'll get broken links like this.

    14. CV

      I wanna call out-

    15. RM

      Um-

    16. CV

      ... another one on-

    17. RM

      Yeah, go ahead

    18. CV

      ... day one. If you go to day one, this looks like a hotel in French Polynesia, but not in Paris. [chuckles]

    19. RM

      Yeah, [chuckles] it's a really good point. Yeah, this is definitely not the right photo. I hadn't even noticed that.

    20. CV

      Yeah, so I'm like-

    21. RM

      Uh-

    22. CV

      ... that, I mean, I wanna go there. It looks like a great hotel, but it does not look like a Parisian, Parisian hotel.

    23. RM

      This is also not the best-

    24. CV

      Yeah

    25. RM

      ... uh, photo of the Eiffel Tower. [chuckles]

    26. CV

      No. I mean, it is the, one of the more important parts of the Eiffel Tower, [chuckles] but-

    27. RM

      It is an important part, yes. [chuckles]

  6. 11:24–12:08

    The data-driven prototyping approach explained

    1. CV

      Okay, so we're, we're looking at this. It's, I mean, in 30 seconds, very impressive, right? As I said, like, we're impressed that these prototypes can be generated at all, and what I think you're calling out is it's good, not great. Like, if a designer brought this to you, you'd be like, "Mm, let's, let's just go back to the, to the Figma, uh, board and, and try something else."

    2. RM

      Absolutely, and it's gonna take a lot of back and forth to get it to the level that you want. But the thing that we can do is we can help the LLM by starting to separate out the idea of not just generating the UI, but also by helping it with the data. And so the idea behind this approach is that rather than starting with a prompt like we have here-

    3. CV

      Mm-hmm

    4. RM

      ... let's start with a data set.

  7. 12:08–17:45

    Demo: Data-driven prototyping

    1. RM

      Um, and so I'll go over to Claude, and I'll ask it to generate some data. So I've got a prompt here. It says, "Using JSON," because we want it to be structured data, "generate a sample itinerary that I can use to prototype a shared trip itinerary feature."... The destination is Paris. The itinerary should include an itinerary name, cover photo, and date range covering three days. There should be three to four travelers associated with the itinerary. Each traveler should include a first name, last name, avatar photo, and preferred travel style, like foodie or history buff. For each day, include a collection of things to see on that day. There should be twelve to fifteen items in total. The items should, uh, be a hotel for day one, popular things to see on each subsequent day. Each item should include a name, a start time, a duration, a star rating, number of reviews, tags to describe them, a photo, and a short description. Some items should have notes for one or more travelers. The notes should be in chronological order and respond to each other like a message thread for each itinerary item. And so here we have, like, essentially a data schema specification. Um, and it's a lot more detailed than what we put in upfront, so we're cheating a little bit. But oftentimes, teams that are working on existing products will already know what their data schema is, and so you'll have a head start, not just with the schema, but also with the, with the data.
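The JSON that prompt asks for would come back shaped something like this. This is a minimal hand-written sketch; the field names, values, and URLs are illustrative assumptions, not Claude's literal output:

```python
import json

# A hand-written sample of the itinerary data the prompt describes.
# Every field name and URL here is an illustrative assumption.
itinerary = {
    "name": "Paris Getaway",
    "cover_photo": "https://images.unsplash.com/photo-paris-cover",
    "date_range": {"start": "2025-06-01", "end": "2025-06-03"},
    "travelers": [
        {
            "first_name": "Emma",
            "last_name": "Laurent",
            "avatar_photo": "https://images.unsplash.com/photo-avatar-1",
            "travel_style": "foodie",
        }
    ],
    "days": [
        {
            "date": "2025-06-01",
            "items": [
                {
                    "name": "Hôtel Saint-Germain",
                    "start_time": "15:00",
                    "duration_minutes": 60,
                    "star_rating": 4.6,
                    "review_count": 1280,
                    "tags": ["hotel", "boutique"],
                    "photo": "https://images.unsplash.com/photo-hotel",
                    "description": "Check-in at a boutique hotel on the Left Bank.",
                    # Notes read like a message thread between travelers.
                    "notes": [
                        {"traveler": "Emma", "text": "Can we get early check-in?"},
                        {"traveler": "Oliver", "text": "I'll email the hotel."},
                    ],
                }
            ],
        }
    ],
}

# Serialize the way you'd paste it into a prototyping tool.
print(json.dumps(itinerary, indent=2))
```

Keeping a concrete sample like this on hand is a quick way to sanity-check that the schema covers everything the prototype needs before pasting it into a build tool.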

    2. CV

      Well, and I'll also say that the cheat code here, even though you really are describing a pretty detailed data schema, you're not talking about relationships, you're not talking about, you know, parent-child relationships, foreign keys, none of that stuff. You're really just describing the, the components of the data structure in natural language, which probably took you a minute or two to type out, or if you're using voice, you know, even less than that. And so it does allow you to get a lot more detail that'll eventually be more structured without having to force yourself to write a data model.

    3. RM

      Absolutely. And this is a nice way to kinda think about the feature. Like, what are the key things I'd wanna see in the itinerary? What are some of the key things about the travelers? And so just thinking about it from a data-first perspective helped me understand the feature-

    4. CV

      Mm-hmm

    5. RM

      ... a little bit better. Another really important part of the data is media, so avatar photos and photos that are, you know, actually of the Eiffel Tower or actually of the hotel that you're planning to stay at. And so, um, for this particular prompt, I've added in the Unsplash MCP server. Uh, and so, uh, MCP servers are a great way for Claude to be able to access external services. The Unsplash MCP server can take in a particular query, like Eiffel Tower or a particular hotel name, and then pass back a photo that matches that query, so that when we generate the mock data, we're actually getting real URLs from Unsplash. And so if I go in here, I look at my tools, one of the tools that's active here is the Unsplash MCP server. It's pretty easy to install. There's a tool called Smithery, which makes it, um, easy to get up and running, and this was a key unlock for me. It's like, how do you actually pull in actual photo data versus some of the hallucinated URLs that you'll often get?

    6. CV

      Or do what I do, which is go sit on the Unsplash site and search through and find ones that I like and all that kind of stuff. So I did not know this MCP existed, and it will definitely shortcut a lot of my kind of prototyping workflow, so I'm excited to see how this works. Okay, so you've created this prompt. Um, I must call out for the, the, the power users of Claude, I bet you could create a Claude project to create prompts like this for data models. [chuckles] So there's probably a whole meta cycle you can do here to make even this a little bit more efficient, but let's show what this, this generates. So you put this into Claude, you connect the MCP, and then you generate something?

    7. RM

      Yeah, absolutely. So let's start it up.

    8. CV

      Okay. And is there any reason you chose Claude over any other tool? Any particular affection for the model, the app, any of that stuff, or just it's the one you reach for?

    9. RM

      I do love Claude. I find it's pretty consistent. I use ChatGPT and Claude-

    10. CV

      Mm-hmm

    11. RM

      ... um, sort of a fifty-fifty split-

    12. CV

      Yep

    13. RM

      ... uh, during the day. But anytime I wanna generate data that feels like human and authentic-

    14. CV

      Mm

    15. RM

      ... I find myself going to Claude.

    16. CV

      Yeah.

    17. RM

      And I wanted to go to Claude here because, um, particularly because of the conversations between the travelers. I thought-

    18. CV

      Yeah

    19. RM

      ... uh, Claude would do a nicer job generating those.

    20. CV

      Great, and what I wanna call out here for folks that are maybe not watching on video and are listening is, Claude is starting to generate this comprehensive JSON and then calling the search photo tools in the, um, Unsplash MCP over and over again to generate a Paris cover photo, avatar photos, attractions, I'm sure hotels and restaurants. And so it's super... I just think about, again, I think about the human parallel to this, which is searching through stock photos-

    21. RM

      Yeah

    22. CV

      ... trying to find which one is representative. It just takes so much time, and because an MCP now can, like, programmatically go through the tasks to be done using these external tools, um, it just makes it a lot faster to get higher quality media into your prototypes.

    23. RM

      Absolutely. And, you know, this will take a long time. You know, the- it, it's interesting with AI, I think certain things are moving a lot faster with AI, other things aren't. Um, you know, it's much faster for me to create a document these days than it is to create a presentation in a lot of cases. Um, and here, just, you know, manually going through the search, the photos is really challenging. But it went, it got all those photos, and now it's generating the JSON. So it's took- it's taken that, um, natural language prompt that we had, said, "Okay, here's the da- data schema, and now I'm gonna fill out the data schema with pretty authentic information."

  8. 17:45–23:33

    Creating a prototype with the generated JSON data

    1. CV

      And I wanna do a call back to your first prototype, which is, I'm sure this or a version of this is what's populating your first prototype. But it was one, as you said, one of many jobs that the prototyping tool had to do. It not only had to think through user experience, technical implementation, writing the code, it also had to go, "Okay, and what data goes into this code and what images?" And I do think the idea of taking sort of critical parts of that workflow and giving a dedicated sort of prompt and tool to those critical parts and taking that job off the sort of general, um, building can ultimately end up in higher quality, or at least I'm guessing that's our, our hypothesis here.

    2. RM

      ... I think so, and I think that's the fundamental concept behind agents, is you want, you know, individual agents with individual contexts working together in sequence-

    3. CV

      Yeah

    4. RM

      ... to get to an output, rather than-

    5. CV

      Yeah

    6. RM

      ... trying to do it all in one go. Um, and here, Claude has done a really good- really nice job with this prompt. We now have a incredibly detailed data set. We've got, um, the travelers that are here, the messages that they're sharing back and forth, the items that they wanna see, like Montmartre, you know, the type of item. It's tagged with, um, different relevant tags that are fun and interesting. There's URLs for images. Um, and now we have actually a much more detailed spec, you know, in the form of data for this trip itinerary platform. And so now we can actually just copy this JSON, go back to our build tool, and now our prompt doesn't even need to be very detailed, and so we could do something like, "Generate a trip itinerary feature based on the sample data below." Paste it in, so we have all that sample data here, and then c- hit Create. And now there's a ton of really interesting, very specific context that's available to the prototype. And what I found was interesting about this pro- approach is when you provide data in this way, um, the AI doesn't get, uh, fuzzy with it. Actually, we'll just take the data and use it as is, and then build the rest of the experience around the data.

    7. CV

      You just gave me an idea, so this is an impromptu How I AI idea, but there are so many, um, SQL generation and data schema exploration MCPs, and I was just thinking, as I'm prototyping apps, I should just hit our production database and come up with sample... like, example, um, JSON that represents actually the real data that somebody would, you know, use in some of our features, and then use some of that to prototype it. And so there is-- you know, we're showing a completely fictionalized set of, of data, but you could-- I could imagine a world in which you can actually pull a representative set from your production data or production-like data to really give your prototypes a real feel for how your users are using them. And as a product leader, um, I've done this a lot in, in product and design reviews, where I say: Yeah, but what happens when the user's profile is, you know, a thousand words long? Or, what happens when, um-

    8. RM

      Yeah

    9. CV

      ... the Eiffel Tower photo is vertical and we crop it horizontal? Like, how we thought about all these things, and actually putting that real data in helps you stress-test the user experience in a way that I think, um, is really important, where designers are never gonna put a vertic- a vertically inappropriately cropped photo in their beautiful Figma designs.
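Claire's idea of pulling a representative sample from production data into prototype JSON could be sketched roughly like this, assuming a local SQLite copy of the data; the table and column names here are hypothetical:

```python
import json
import sqlite3

# Build a small throwaway database standing in for production-like data.
# The table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE itinerary_items (name TEXT, rating REAL, reviews INTEGER)")
conn.executemany(
    "INSERT INTO itinerary_items VALUES (?, ?, ?)",
    [
        ("Louvre Museum", 4.7, 98234),
        ("Seine River Cruise", 4.5, 41210),
        ("Le Comptoir du Relais", 4.4, 8654),
    ],
)

# Sample a few representative rows and emit prototype-ready JSON.
rows = conn.execute(
    "SELECT name, rating, reviews FROM itinerary_items ORDER BY RANDOM() LIMIT 2"
).fetchall()
sample = [{"name": n, "star_rating": r, "review_count": c} for n, r, c in rows]
print(json.dumps(sample, indent=2))
```

Sampling with `ORDER BY RANDOM()` (rather than hand-picking rows) is what surfaces the awkward real-world cases, such as the thousand-word profile or the oddly cropped photo, that polished Figma mocks never contain.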

    10. RM

      Yeah, absolutely.

    11. CV

      You're never gonna get that, like, accidental broken experience, but AI will do it for you and help you, help you test some stuff. [chuckles]

    12. RM

      This is really true for, um, UGC experiences, right?

    13. CV

      Yeah.

    14. RM

      'Cause the, the content that users provide is never as beautiful-

    15. CV

      Yeah

    16. RM

      ... as what we put into, into Figma, and it's nice to see how it's actually gonna look to users. The other interesting thing is, like, if you have a set of data that you can pull out, um-

    17. CV

      Mm-hmm

    18. RM

      ... but you wanna augment it in some way, so let's say we had a trip itinerary, but we didn't necessarily have conversations, you could throw that JSON into Claude-

    19. CV

      Yeah

    20. RM

      ... and say to Claude, "Augment this JSON with, um, information about the travelers and their conversations," and it'll go in. It'll start with the JSON that you have, and it'll flow in, um, the data that you need, and so you can iterate on the data that you already have to get to something that you need for your prototype.
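The augmentation step Ravi describes amounts to merging model-generated fields into the JSON you already have. A minimal sketch, with the model's response stood in for by a hand-written placeholder:

```python
import json

# Existing prototype data, missing traveler conversations.
existing = {"items": [{"name": "Montmartre Walk", "notes": []}]}

# Stand-in for what a model like Claude might return when asked to
# "augment this JSON with traveler conversations" (hypothetical output).
augmentation = {
    "Montmartre Walk": [
        {"traveler": "Emma", "text": "Let's go at sunset."},
        {"traveler": "Marcus", "text": "Sunset works for me."},
    ]
}

# Merge the new notes into the existing structure, leaving other fields untouched.
for item in existing["items"]:
    item["notes"].extend(augmentation.get(item["name"], []))

print(json.dumps(existing, indent=2))
```

In practice you would paste the real JSON into Claude and paste its augmented JSON back out; the point of the merge pattern is that the original fields pass through unchanged, so you can iterate on one dimension of the data at a time.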

    21. CV

      Well, and I'm not gonna presume your age. We're both twenty-one years old, but this is also-

    22. RM

      I am, yes. [chuckles] Good guess.

    23. CV

      This is also making me think back to how much Lorem Ipsum I put in mock-ups for, for, for very young people.

    24. RM

      Totally.

    25. CV

      You used to have to, like, put placeholder text and placeholder images [chuckles] in your designs, and there was actually a cottage industry of, like, funny Lorem Ipsum generators, um, on the internet, where you, like, copy and paste paragraphs of code or of, of text. And I'm just thinking, just the fact that you can put pseudo-realistic content at scale in your designs... I even think about going past, you know, fake data. I've seen so many designs where designers just, like, grab the component and duplicate it down the page, so it looks all the same-

    26. RM

      Yeah, and then it's-

    27. CV

      ... and the number of comments are the same and all that kind of stuff. And I, I do think it's really helpful to be able to generate the sort of like, full surface area of the data model and the design experience without so much, um, manual burden on a designer or a content designer, a product manager, an engineer trying to figure out what goes in each component and what are the versions of, of each of those components.

    28. RM

      Absolutely. Um, and it, it's one of these things where it work, it works much better with stakeholders and with users-

    29. CV

      Yeah

    30. RM

      ... if you have authentic data.

  9. 23:33–26:44

    Comparing the quality difference between approaches

    1. RM

      Yeah, so we got a prototype. Uh, much cleaner than the other prototype.

    2. CV

      Mm-hmm.

    3. RM

      We have a list of all of the travelers. We have what's happening on each of the days. We have beautiful photos for the different things in the itinerary. Um, so I can click on day two, day three, and-

    4. CV

      Oh, look, we got a full Eiffel Tower. [chuckles]

    5. RM

      Um, we've got... Absolutely, and it's a beautiful shot, right?

    6. CV

      Mm-hmm.

    7. RM

      It, it's pulling the stuff off o- on Unsplash, and it's working on pulling out their most popular images, so they look really good-

    8. CV

      Mm-hmm

    9. RM

      ... which is especially important when you're prototyping for a consumer, 'cause people are very sensitive to this. We've got tags, we've got comments, um, avatar photos, so all of this feels much richer than, um, than it does if you just have it generate both the data and the functionality at the same time.

    10. CV

Yeah, this is-- I mean, you know, we're doing a before and after comparison, but this is just a lot richer. It's the accuracy of the data that significantly improves the perceived quality of the design. I mean, honestly, there are some components of this that are similar to the old design, not exactly, but it's really interesting to see how just having the right photo, the Eiffel Tower, the right data, some of these metadata components, like how long you might spend at an attraction, the accurate avatars, which I like. I think the old avatars were just little initials avatars, but these are actually like real people. They're friends. Emma, Oliver, and Marcus-

    11. RM

      Yeah

    12. CV

      ... are apparently going on this trip. Uh, it really, it really does look like a much higher quality experience here.

    13. RM

      And we can take a look at the, the before.

    14. CV

      Mm-hmm.

    15. RM

      Um, let's see. And yeah, yeah.

    16. CV

      Oh, yeah.

    17. RM

      This is a good start, but it's not sort of the level of quality that we want and that we need.

    18. CV

Yeah, and it's also a lot cleaner. I was noticing in the old prototype, there are a lot of little emojis and things that are filled in here and there that you as a product person or designer might not want. But when you say, like, "Here's my clean data schema, here's the media I want and the media I don't want," it gives you a much more modern look and feel to this experience.

    19. RM

And I think that's because of the separation of concerns. The tool has been able to focus on what the right UX is around this data set, rather than simultaneously figuring out the UX and the data set.

    20. CV

      Yeah, and what I wanna call out is so many people that I've spoken to are really worried about getting design systems into these prototyping tools, but have really underinvested in what you're showing, which is like the data models. And I was actually talking to somebody yesterday, and they said, "What context do I need to make sure I always give my PRDs and my prototyping tools to generate?" And I said, "Get your engineering to give your definition of your data schema [chuckles] and just copy and paste that in."

    21. RM

      Yeah.

    22. CV

      That is like one of the first things I think you should do because it's the right level of constraints around the experience, and you're just showing sort of the next level of this, which is populating that, extending it, and then putting it into a prototyping tool.
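As a concrete illustration of the kind of schema Claire suggests pasting in, here's a hypothetical sketch of a trip-itinerary data model using Python TypedDicts. The field names and URLs are invented for illustration and are not the actual schema from the demo; only the traveler names and destination come from the episode:

```python
from typing import List, TypedDict

class Traveler(TypedDict):
    name: str
    avatar_url: str

class Activity(TypedDict):
    title: str
    start_time: str        # "HH:MM", e.g. "10:00"
    duration_minutes: int
    photo_url: str
    tags: List[str]

class Day(TypedDict):
    date: str
    activities: List[Activity]

class Itinerary(TypedDict):
    destination: str
    cover_photo_url: str
    travelers: List[Traveler]
    days: List[Day]

# A minimal sample record in that shape (placeholder URLs):
sample: Itinerary = {
    "destination": "Paris",
    "cover_photo_url": "https://example.com/paris-cover.jpg",
    "travelers": [
        {"name": "Emma", "avatar_url": "https://example.com/emma.jpg"},
        {"name": "Oliver", "avatar_url": "https://example.com/oliver.jpg"},
        {"name": "Marcus", "avatar_url": "https://example.com/marcus.jpg"},
    ],
    "days": [
        {
            "date": "Day 1",
            "activities": [
                {
                    "title": "Eiffel Tower",
                    "start_time": "10:00",
                    "duration_minutes": 120,
                    "photo_url": "https://example.com/eiffel.jpg",
                    "tags": ["landmark", "must-see"],
                }
            ],
        }
    ],
}

print(sample["travelers"][1]["name"])  # -> Oliver
```

Pasting a definition like this into a prototyping tool gives it the same kind of constraint Claire describes: the generated UI has to fit the shape of the data rather than the other way around.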

10. 26:44–28:53

    Modifying the prototype

    1. RM

And then what's really nice about this-- and we just generated it on the fly, so I'm gonna have to see where the code is-- but if we go into the files, we have a nice breakdown, and if we go into lib, which is often where the data ends up, we have a sample data file. And so we can go in, and we can change this. Let's say, you know, actually we want Marcus to be called Mark rather than Marcus. We can go in here, let me see where Marcus is. Change his name to Mark. Just need to reload.

    2. CV

      Ah, there we go, Mark.

    3. RM

And now we've changed Mark. Same thing with the photo. Like, you know, this is kind of a good Paris photo, but we can probably find something better from Unsplash. So let's just search for Paris. This one's a great one. Copy the image address. Come back here. If we look, we've got the cover photo. We can just replace that, and then again, reload. Boom.

    4. CV

      Yeah! Oh.

    5. RM

      New photo.

    6. CV

      It looks so nice.

    7. RM

      It does look really nice. It's coming together.

    8. CV

      [upbeat music] This episode is brought to you by Persona, the B2B identity platform, helping product, fraud, and trust and safety teams protect what they're building in an AI-first world. In 2024, bot traffic officially surpassed human activity online, and with AI agents projected to drive nearly 90% of all traffic by the end of the decade, it's clear that most of the internet won't be human for much longer. That's why trust and safety matters more than ever. Whether you're building a next-gen AI product or launching a new digital platform, Persona helps ensure it's real humans, not bots or bad actors, accessing your tools. With Persona's building blocks, you can verify users, fight fraud, and meet compliance requirements, all through identity flows tailored to your product and risk needs. You may have already seen Persona in action if you've verified your LinkedIn profile or signed up for an Etsy account. It powers identity for the internet's most trusted platforms, and now it can power yours, too. Visit withpersona.com/howIAI to learn more.

11. 28:53–34:40

    Benefits of this approach

    1. CV

      You're replacing these sort of piecemeal, but if you wanted to stamp out a bunch of different versions of this completely, you're working with just the, the data file, right?

    2. RM

      Yeah, so we can actually just go back into Claude.

    3. CV

      Mm-hmm.

    4. RM

      And now we can just say something like, "Now generate an itinerary for the same travelers going to Thailand."

    5. CV

      Yep.

    6. RM

      So I'll get that. This, um, chat already has all the context. It knows what the schema is. It's gonna go back out to Unsplash to grab all those photos, and it's gonna generate a new itinerary. Same people-

    7. CV

      Yep

    8. RM

      ... different trip. And then once we have that JSON file, we can actually take that and apply it directly to, um, the prototype.

    9. CV

      It, it just-- I was, I was just thinking about, again, going back to when we had to, like, walk uphill both ways in Photoshop for our designs. Like, the speed at which you can create versions of your design is really helpful. And, you know, one of the things that I'm thinking about here is, "Great, go ahead and localize this in Spanish," or, "Go ahead and localize this in another-

    10. RM

      Yeah

    11. CV

... language. Let me see what that looks like." Um, or even when you wanna extend the design-- and maybe this is just my engineering brain liking this-- going back to, well, let's update the data model first and then let the design cascade out of the data model, as opposed to putting buttons on the front end and then working our way back into the data model, I think is just a really nice primitive on which to standardize your prototyping efforts.

    12. RM

Absolutely. Um, and then it allows you to be much more flexible in terms of what you're doing, and allows you to work on the functionality separate from the data model. So for example, here, let's say I wanted to add a feature where I wanna be able to see blank cards if people have time in between activities, so we can kinda see where the free time is in the day. It'll go in, and it'll implement this functionality using that dataset. So if we wanna put a-

    13. CV

      Got it

    14. RM

... new dataset in here or change anything, the functionality is totally dynamic rather than baked into the prototype, which it often is.
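The free-time-cards feature Ravi describes boils down to computing gaps between consecutive activities in a day. A minimal sketch of that logic, assuming each activity carries a start time and a duration (the field names and times are illustrative, not the actual prototype's schema):

```python
from datetime import datetime, timedelta

def free_time_blocks(activities, min_gap_minutes=60):
    """Return (start, end) gaps between consecutive activities in a day.

    Each activity is a dict with "start_time" ("HH:MM") and
    "duration_minutes". Gaps shorter than min_gap_minutes are ignored.
    """
    fmt = "%H:%M"
    # Sort by start time so gaps are computed in day order.
    acts = sorted(activities, key=lambda a: a["start_time"])
    blocks = []
    for prev, nxt in zip(acts, acts[1:]):
        prev_end = (datetime.strptime(prev["start_time"], fmt)
                    + timedelta(minutes=prev["duration_minutes"]))
        nxt_start = datetime.strptime(nxt["start_time"], fmt)
        gap = (nxt_start - prev_end).total_seconds() / 60
        if gap >= min_gap_minutes:
            blocks.append((prev_end.strftime(fmt), nxt["start_time"]))
    return blocks

# Illustrative day: hotel check-in at 15:00, dinner at 20:00,
# roughly matching the "four hours between check-in and dinner" demo.
day = [
    {"start_time": "15:00", "duration_minutes": 60},
    {"start_time": "20:00", "duration_minutes": 120},
]
print(free_time_blocks(day))  # -> [('16:00', '20:00')]
```

Because the function only reads the dataset, swapping in a Thailand itinerary with the same schema would produce new free-time cards with no code changes, which is exactly the "functionality separate from data" point.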

    15. CV

Awesome. I really like this. And, you know, again, we're looking at the data modeling, the design of things, but I'd be remiss not to mention how helpful it is to have the content researcher, especially on a consumer experience, of, "What hotel should I put in? What attractions should I [chuckles] put in?"

    16. RM

      Yeah.

    17. CV

"What do they actually look like?" And maybe your designer's been to Paris, maybe they have not. Um, and you certainly don't want them spending time googling, like, the top hotels in Paris for people in their 20s and 30s. This, in addition to doing the scaffolding, actually does the right research on the content and what to put in here, and feels pretty realistic.

    18. RM

And you might actually have two different itineraries: one for someone older who's going to Paris for the third or fourth time, one for someone younger who's going to Paris for the first time, and then you could test this feature with itineraries that really make sense for the user who's using the tool. So now we have the free time cards added in. We've got four hours, four and a half hours between checking into our hotel and our dinner. We can look at day two and see that we've got a couple of other time blocks in here. So now Claude is completely done generating our Thailand itinerary, and so we can actually just swap out the itinerary. If we go into the code, we can see, okay, we've got the data here. Let me copy that over, and then I can just replace this and then reload.

    19. CV

      Hey!

    20. RM

      And now we have a Thailand trip.

    21. CV

      That was thrilling to watch. [laughing]

    22. RM

      [chuckles] All super easy.

    23. CV

And those free time blocks stayed. We have great photos here. Sophia and Emma. I bet he's still named Marcus, though, 'cause we made that edit in a different tool.

    24. RM

      I think so, yeah, if we were-

    25. CV

      Yeah, yeah, yeah.

    26. RM

We overwrote that-

    27. CV

      Yep, he's Marcus

    28. RM

      ... but that's easy enough to change.

    29. CV

      Okay. Yeah, that's easy to change.

    30. RM

      We can go back here, uh, and update it.

12. 34:40–36:20

    Structured Midjourney prompting

    1. CV

We used the Unsplash MCP to get these real, kind of like free stock images here into your prototypes, but I know you've also been working on generating great photos yourself. So you wanna show us a little bit about how to use Midjourney-- unlike how Claire does it, which is just, like, floating in the Midjourney model till something cool comes out-- with a little bit more structure than that?

    2. RM

      Yeah, let's do it. Um, so I was playing around with Midjourney a lot-

    3. CV

      Mm

    4. RM

      ... trying to get, um, mock data for a project I was working on. Um, and I was working with a designer named Finn Sturdy, uh, who has done just a brilliant job of kind of figuring out how to get stellar results out of Midjourney-

    5. CV

      Mm-hmm

    6. RM

... that feel really curated and feel like a creative director has helped to design them. And as we looked at how he was prompting things, we discovered a few things about how you can use really specific wording within your prompts to elevate the images. Uh, so let's just start out with a very simple prompt. And I think what's great about these tools is even if you're not very descriptive, you're still [chuckles] gonna get pretty good results. I mean, you can see the content on Midjourney is already beautiful, no matter what you're looking at.

    7. CV

      Yeah. Yeah, I-

    8. RM

      But let's say we want to just have a stock image of an office chair. You might just type in "office chair." I know you're probably gonna type in more than that, uh, but let's see what it generates. So it's gonna go through. It's gonna look at references. These are still pretty nice office chairs. Um, the photo is nice.

    9. CV

      Mm-hmm.

    10. RM

But is it really usable? Like, is it the sort of thing that you would drop into a catalog or something like that? It's probably not there yet.

13. 36:20–44:27

    The subject-setting-style framework for better image prompts

    1. CV

      Yeah.

    2. RM

And the way that we can get to a much better end result is to think about three things: the subject, the setting, and the style. So I'll use a new prompt, which will actually generate much better results. We're very clear about the subject that we want, which is an empty, stylish office chair... and then really clear about the setting, and the setting includes both the placement in the room and the lighting. Lighting is a really key part of setting that photographers think a lot about, and if you talk about the lighting in the prompt, you're gonna get much better results. And then the last thing we want is a particular style. There's a couple of things that can help with defining style. The sort of thing that a lot of people do is they try to describe it, but that's generally not how photos are tagged. And so the idea here is to think about, well, how would a photographer actually describe a particular photo? They might use cultural references or location references, and then oftentimes they'll use camera metadata or other information about the shoot. And so in this particular case, we've added in a keyword for the film stock that we want to emulate, Fujicolor C200, which is a very warm film stock that generates really beautiful, kind of golden-hour type of results. So I've started that generation, and now we're gonna get something much more usable than the initial prompt.
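The subject-setting-style framework can be expressed as a trivial prompt template. The example strings below are an illustrative guess at the kind of prompt Ravi describes, not the exact one from the episode:

```python
def build_prompt(subject: str, setting: str, style: str) -> str:
    """Join the three parts Ravi names (subject, setting including
    lighting, style including film stock) into one prompt string."""
    return ", ".join([subject, setting, style])

prompt = build_prompt(
    subject="an empty, stylish office chair",
    setting="in a sunlit corner of a minimalist room, warm golden-hour light",
    style="Italian design magazine aesthetic, shot on Fujicolor C200",
)
print(prompt)
```

The template itself is nothing; the value is that it forces you to fill in all three slots instead of stopping at "office chair."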

    3. CV

Yeah, and as somebody who was an early Midjourney user, I think Midjourney is like the gateway drug to consumer-

    4. RM

      It is, yeah [laughs]

    5. CV

... AI. If you have, like, a parent that has not yet bought into AI or does not understand what it can do, get them into Midjourney. You're gonna get some weird Facebook posts, but they're gonna [laughs] be unlocked on something really, really special. But what I would say is I'm just shocked at how fast it's gotten. It used to be-

    6. RM

      It is so fast

    7. CV

      ... so treacherously slow-

    8. RM

      Yeah

    9. CV

      ... and it's fast now. Okay, we got a pretty- I mean, I want that to be in my office. Who cares about the chair?

    10. RM

      Sure.

    11. CV

      [laughs]

    12. RM

      So beautiful. We've got the Italian-

    13. CV

      Mm-hmm

    14. RM

      ... kind of cultural cues.

    15. CV

      Mm-hmm.

    16. RM

We've got the beautiful lighting. It's a great chair. This is definitely a usable photo. You know, I think one of the things Midjourney excels at is giving you these different variations. So here's another one, another one, another one-

    17. CV

      Yep

    18. RM

... all really beautiful and very usable if that's something that you're looking for. And what was key here is thinking about the prompt in terms of how a photographer might describe things, rather than telling the AI what you want. So, for example, you know, we could change the setting, and a photographer's probably not gonna say, "I want really soft lighting."

    19. CV

      Mm-hmm.

    20. RM

Instead, they're gonna describe the setting, which is an autumn rainy morning. So we're gonna take this exact same scene and move it into a different lighting setting, which will change the mood quite a bit.

    21. CV

And as one of the four people in technology that have a liberal arts degree, [laughs] I have to call out: I do think in this moment where we're using natural language to generate assets across the board, especially media assets, art literacy is really important. The ability to describe design is really important. I don't think people spend enough time articulating what taste means, articulating what elegance means-

    22. RM

      100%

    23. CV

... what quality, what style means, and even going through this practice of understanding reference art styles, reference devices like digital cameras or film, locations. I think language is now such a foundation for technology that if you're not investing in your linguistic skills, you're gonna miss out on your ability to create these high-quality assets, at least at this point.

    24. RM

I totally agree. And I think the two fundamental inputs into creating something are taste and craft. Taste is the ability to know what's good, or what you want. Craft is the ability to actually achieve that vision.

    25. CV

      Mm-hmm.

    26. RM

And with AI, the craft has been completely 10x'd for everyone. Anyone now can create photos or music or other things-

    27. CV

      Yep

    28. RM

      ... but the taste is really important.

    29. CV

      Yep.

    30. RM

      How do we take that incredible power, um, and use it to create something that meets the needs that we have, whether creatively or professionally?

14. 44:27–48:54

    Using camera metadata to refine your results

    1. RM

      And I think what's interesting here is there's a lot of meaning that comes with the film stocks. So not only are we getting a different style, we're also getting interesting compositions, and I think that goes to how these models were trained. Because, you know, they're trained on data that likely has descriptions or metadata around the images, and they're trying to create a mapping between the language and the image. And so when you use photographic language, it looks at the higher-quality, uh, photos in the data set.

    2. CV

Well, and, you know, there's a lot of anxiety, and rightfully so, in the arts and the creative professions around some of these tools, and when I hear you speak about how to get higher-quality assets out of here, I think what a head start folks with a creative, with a photography, with an arts background actually have in this kind of world, where it still is really anchored in the technical aspects of the media. And so if you know photography terms, then you can actually prompt... I've seen a lot of, like, f-stop terminology-

    3. RM

      Yeah

    4. CV

... in prompts, all those sorts of things. You can actually prompt it significantly better than someone off the street who's like, "I know how to write code, but I don't understand what Kodak film stocks are out there." And so, you know, I do think for folks in the arts, I hope you can look at some of these opportunities and see where you actually have a leap ahead of folks. And bringing something like this together with your own creative vision, with your own photography or art, I think is gonna be a really interesting way people build even more amazing things in the future. So I think it's awesome, and I wanna see more people in the arts actually in here-

    5. RM

      Absolutely

    6. CV

      ... and stuff.

    7. RM

      This lowers the barrier. Um-

    8. CV

      Yeah

    9. RM

      ... you know, and there's good things about that and bad things about that.

    10. CV

      Yeah.

    11. RM

      But I think it's better. You know, the more people in the world that feel empowered to create, uh-

    12. CV

      Yeah

    13. RM

      ... the better we'll all feel, the better we'll all be.

    14. CV

      And not everybody can afford these fancy cameras [chuckles] either.

    15. RM

      No. [chuckles]

    16. CV

You know? Um, oh, amazing. Okay, so you have a person now that we're generating.

    17. RM

So I'm generating a person: a young man with brown hair, uh, and eyes at golden hour, and I've added in some of that camera metadata that you were talking about. So Leica, that's like an $8,000 camera, right? So that's [chuckles] not accessible to most people. But by mentioning it here, it puts the image generation model into the space that it's learned from around those cameras, which makes for more beautiful and more aesthetic images. A 50-millimeter lens is a very common focal length for portraits. f/1.2 says I want a really blurry background, like an incredibly blurry background, so it's kind of ethereal-looking. And then Fujifilm Provia is a good portrait film stock that people use. And so here we've got a great image that embodies that, and we can go through and see the other ones. And all of these have sort of an aesthetic quality that's sometimes hard to get out of AI. They're not in that uncanny valley that we often see with images that are generated of people. And I can kind of show you, you know, if we actually generate an image but don't include any of that camera information, sometimes the results are more in that uncanny valley.
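The camera-metadata trick is just the style slot taken further: append keywords like the ones Ravi mentions (Leica, a 50mm lens, f/1.2, Fujifilm Provia) to a subject/setting prompt. A hedged sketch; the helper name and exact wording are illustrative, not the episode's literal prompt:

```python
def add_camera_metadata(base_prompt: str,
                        camera: str = "Leica",
                        lens: str = "50mm lens",
                        aperture: str = "f/1.2",
                        film: str = "Fujifilm Provia") -> str:
    """Append camera-metadata keywords to an existing prompt.

    Defaults echo the terms mentioned in the episode; swap in other
    gear or film stocks to shift the aesthetic."""
    return f"{base_prompt}, shot on {camera}, {lens}, {aperture}, {film}"

p = add_camera_metadata(
    "portrait of a young man with brown hair and eyes at golden hour"
)
print(p)
```

Dropping the metadata arguments (or the whole helper) reproduces the bare-prompt comparison Ravi runs next, where results drift toward the uncanny valley.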

    18. CV

      Now, what I do have to call out here is all of these generated images have a quite mournful aesthetic [chuckles] to them. There's rain. These men are looking very-

    19. RM

      Yeah

    20. CV

      ... concerned through these windows.

    21. RM

      [chuckles]

    22. CV

      And so I'm gonna challenge you after we get off the podcast, I want you to email me a happy young man-

    23. RM

      Okay, all right

    24. CV

      ... in the bright morning sun.

    25. RM

      That sounds good. [chuckles]

    26. CV

      In the bright morning sun.

    27. RM

      We will do that. [chuckles]

    28. CV

      [chuckles] Okay. Oh, so you did do a portrait of a young man with brown hair and eyes, so same original subject prompt without all the location, lighting, film, camera metadata, and we got, like, sketches.

    29. RM

      Yeah, we got-

    30. CV

      Very interesting

15. 48:54–54:38

    Lightning round and final thoughts

    1. CV

      tools. Let's do a couple lightning round questions and get back-

    2. RM

      Yeah, sounds good

    3. CV

... get you back to your very broad work, from generating JSON for AI prototypes [chuckles] to Midjourney photography prompting. So, um, you know, as I said, enterprise girl, B2B, square boxes and forms. What I love about what you're showing us is there is a lot of work in consumer that can be really accelerated by AI. So there are two questions on this point. One is, what do you think AI PMs and product teams in consumer products really need? How are the skills they need to develop different than maybe ones that are working in B2B? And what do you think the opportunities are for consumer product teams with AI?

    4. RM

I think AI for consumer is incredibly exciting, and there's a whole lot of consumers that are using the big tools like Claude and ChatGPT. I think one of the nice things about B2B is the ROI of AI is usually pretty clear. Like, if we can accelerate a workflow, we can make someone faster, we can make someone more capable, the reason why is very clear, and so businesses are adopting this stuff very quickly. For consumers, it's not always really clear what the consumer value proposition is and what problem you're solving for them, and not every problem is worth solving for consumers. So just 'cause you can do it with the technology doesn't mean that people want to actually do it. And so I think really good consumer AI is grounded in an understanding of consumer psychology and consumer needs, and then maps in: well, how does AI fit with that psychology and those needs? Rather than starting from a technology-first solution and saying, "Okay, you know, we can do all these really cool things. Let's create a consumer app around that, and hope that we're solving a need." And so there's a little bit of magic, I think, that has to happen with consumer, and a way to de-risk that is by focusing on those needs and really understanding the underlying psychology.

    5. CV

Well, and I would just say, as you were saying that, some of the kind of psychological needs that I think are underserved, simply by the limitations of technology, time, and space on teams, are, like, extreme levels of delight, which is, how can you create really rich, engaging, delightful experiences? Those, like, beautiful parts of the app that tend to get shaved off in scope reduction exercises. I think that's a real opportunity. And then the other thing is making products feel really personalized, either to the place you're at, the people that you're with, or what we know about you. And so it doesn't have to look like a chatbot, but if you can think, "What could I do today that I couldn't do yesterday for this user?" I think there are a lot of answers where AI really unlocks your ability to deliver something very special, even if it looks like a tag or a comment or a photo. Um, and so it's, you know, what is the tool behind the scenes versus what is the expression of the product? I think you can differentiate a little bit.

    6. RM

      And I think that delight piece is so important. A lot of times as PMs, we prioritize things as must-have, nice to have, won't do. And I used to tell my teams, "If we cut all our nice to haves, our product is not gonna be nice to have, and we have to reserve some of our time for the delightful things that make the product stand out."

    7. CV

      Yep. Oh, I love it. Okay, and then my last question is, other than giving it reference locations where you fantasize yourself to be, uh, when AI is not doing what you want, what is your prompting strategy? How do you get it back on track? Do you have any tricks?

    8. RM

      I try to be very encouraging, and I've been using the word "elite" a lot.

    9. CV

      [chuckles]

    10. RM

      So you are an elite sales coach, or you are an elite photographer, and so just elevate its expectations of itself-

    11. CV

      Okay

    12. RM

... and sometimes that will help it generate better results. And I think what it's doing is... A lot of prompting is like, what space of the training data set do you wanna be in to get a result? And when you use those encouraging words, it's not that you're actually encouraging the AI; it's that those words are associated with really high-quality output, and it puts it in a different training space.

    13. CV

      Okay, I love it. And again, I think this is the prompting strategy of choice of parents who are always telling their kids, like: "You can do... I know, I know that you're a capable kid. I know that you can do... I know you can do your homework." Okay, well, where can we find you, and how can we be helpful to you?

    14. RM

Yeah, um, so I've got a Substack, Ravi on Product, and you can find me at ravi-mehta.com. You can also find me on LinkedIn, so please follow me. I've got a class that Brian Balfour and I launched with Reforge. It's on AI strategy. So the question that we were answering with that class, which is really important for us as product builders, is not only how do you understand the technology and how do you integrate it into your product, but what does this mean for you competitively? What do you need to do to create a product that's going to win in the market, in what I think is the most intense environment that we've seen in the history of tech? We launched that in April, had a really great first cohort, and we're launching the next cohort in October. So check that out if you're interested in learning more about AI strategy. That's available through Reforge.

    15. CV

      Awesome. Well, thank you so much for showing us all your amazing workflows. They're very useful.

    16. RM

      Awesome. Thank you so much for having me. This has been, uh, really fun.

    17. CV

      [upbeat music] Thanks so much for watching. If you enjoyed the show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time! [upbeat music]

Episode duration: 54:38


Transcript of episode _yQMGHHl49g
