An exclusive inside look at GPT-5

How I AI · Aug 7, 2025 · 40m

Claire Vo (host)

- Engineer-first tone and behavior
- ChatPRD PRD side-by-side: GPT-5 vs GPT-4.1
- Business discovery vs implementation focus (who/why vs what/how)
- Functional requirements and technical considerations quality
- Prototype generation differences (v0)
- Tool-calling intensity and token/performance concerns
- Canvas prototyping and image-generation spatial awareness

In this episode of How I AI, host Claire Vo gives an exclusive inside look at GPT-5, sharing her honest, early-access take on where OpenAI's new model fits in a product and engineering workflow.

GPT-5 review: engineer-first model excels at code and specs

Claire Vo shares an early-access, workflow-driven evaluation of OpenAI’s GPT-5, arguing it feels “built by engineers for engineers” with standout strength in coding, technical writing, and functional requirements detail.

In side-by-side tests within ChatPRD, GPT-5 tends to jump quickly to implementation (“what/how”) versus GPT-4.1’s more business/discovery framing (“who/why”), which can be a mismatch for stakeholder-facing artifacts.

She finds GPT-5’s verbosity and specificity can produce stronger downstream prototyping outcomes (more components/ideas), even if the raw PRD can feel overly dense for alignment.

Beyond developer use, she highlights improvements in ChatGPT Canvas/front-end taste and notably stronger image-generation spatial awareness (tested via a “bathroom remodel” benchmark), while flagging tradeoffs like heavy tool-calling and bullet-pointy style.

Key Takeaways

GPT-5 is optimized for execution, not discovery.

Across PRD brainstorming and feature ideation, GPT-5 rapidly converges on concrete features and implementation details, while GPT-4.1 stays in discovery mode, asking about business goals, target personas, and the metrics to move.

Get the full analysis with uListen AI

For functional requirements and tech specs, GPT-5 clearly outclasses GPT-4.1.

Vo highlights GPT-5’s unusually detailed, engineer-friendly requirements (edge cases, warnings, prioritized tables) and technical considerations, making it well-suited for engineering handoff and spec writing.

GPT-5’s “developer artifacts” leak into non-dev documents.

Even when asked for a prose PRD, GPT-5 adds code-like elements (e.g., a code-block comment at the top of the document), a clear artifact of its code-oriented training.

Verbosity is a tradeoff: better build fidelity, worse readability for stakeholders.

More detail can help engineers and coding agents implement accurately, but can dilute the core narrative for executives or cross-functional partners who need concise alignment and decision-ready summaries.

More detailed PRDs can yield richer prototypes—even if uglier by default.

In the v0 prototype comparison, GPT-4.1's output was simpler, more colorful, and more visually polished, while GPT-5's denser PRD produced far more components and upsell ideas to choose from, making it the richer ideation surface.

In coding tools, GPT-5 is fast and high-quality but aggressively tool-hungry.

Vo reports strong real-world shipping performance (refactors, large changes) yet frequent hits to tool-call limits, raising questions about efficiency, token usage, and how IDEs will optimize around GPT-5 behavior.
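
One practical pattern for curbing the tool overuse described above, roughly in the spirit of the 25-call limit Vo keeps hitting in Cursor, is a hard tool-call budget in the agent loop. The sketch below is purely illustrative: the model and tool interfaces are invented stand-ins, not Cursor's or OpenAI's actual implementation.

```python
# Hypothetical sketch: cap how many tool calls a model may make per request.
# The model/tool interfaces here are invented for illustration only.

def run_agent(model, tools, prompt, max_tool_calls=25):
    """Loop: ask the model; execute any tool it requests; once the
    budget is exhausted, force a final answer with no more tools."""
    messages = [{"role": "user", "content": prompt}]
    calls_made = 0
    while calls_made < max_tool_calls:
        reply = model(messages)
        tool_name = reply.get("tool")
        if tool_name is None:
            return reply["content"], calls_made  # model answered directly
        result = tools[tool_name](reply.get("args", {}))
        messages.append({"role": "tool", "name": tool_name, "content": result})
        calls_made += 1
    # Budget exhausted: tell the model to answer with what it has.
    messages.append({"role": "system",
                     "content": "Tool budget exhausted; answer now."})
    return model(messages)["content"], calls_made


class FakeModel:
    """Toy model that wants to search N times before answering."""
    def __init__(self, searches_wanted):
        self.remaining = searches_wanted

    def __call__(self, messages):
        if messages[-1]["role"] == "system":
            return {"content": "best-effort answer"}
        if self.remaining > 0:
            self.remaining -= 1
            return {"tool": "search", "args": {"q": "pricing"}}
        return {"content": "final answer"}


tools = {"search": lambda args: "stub results"}

answer, n = run_agent(FakeModel(3), tools, "question")          # -> ("final answer", 3)
capped, m = run_agent(FakeModel(100), tools, "question",
                      max_tool_calls=5)                          # -> ("best-effort answer", 5)
```

The key design choice is that the budget is enforced outside the model: the loop, not the model, decides when searching stops.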

GPT-5 meaningfully improves spatial reasoning for image tasks.

Using bathroom layout and paint/tile mockups, Vo finds GPT-5 follows spatial instructions more reliably than GPT-4o and produces clearer labeled outputs (including paint names/codes), suggesting stronger consumer utility for design/planning scenarios.

Notable Quotes

From my very first interaction, I felt like this was an engineer built by engineers for engineers.

Claire Vo

GPT-5… loves a bullet point list.

Claire Vo

Tell me what to build, tell me exactly how the features work… give me something to code.

Claire Vo

Girlfriend loves to call a tool.

Claire Vo

My benchmark is: Can it reasonably help with my bathroom remodel?

Claire Vo

Questions Answered in This Episode

In your ChatPRD tests, what exact prompt changes reduced GPT-5’s “markdown bullet + code artifact” tendency without sacrificing its functional-requirements depth?

Where did GPT-5’s “jumping to solutions” cause the biggest product mistake risk (e.g., missing the ‘why’, wrong metric, wrong persona), and how would you mitigate that in a PM workflow?

You noted GPT-4.1 was more business-oriented and even more critical on homepage feedback—do you think this is model personality, safety tuning, or instruction-following differences?

For the v0 prototype comparison: what inputs did you pass through (full PRD vs sections), and would a hybrid pipeline (GPT-4.1 for narrative + GPT-5 for requirements) outperform either alone?

How often did GPT-5 hit tool-call limits in Cursor, and what patterns triggered it (repo size, search tasks, refactors)? Any practical rules to curb tool overuse?

Transcript Preview

Claire Vo

GPT-5 is the newest model released from OpenAI, and from my very first interaction, I felt like this was an engineer built by engineers for engineers. It writes good code, it refactors, it's thoughtful, and girlfriend loves to call a tool. If you have a good idea, and you really just need to get down to what are the technical implementation of this feature, I think GPT-5 is tremendously better at that than GPT-4, which again, is, like, actually pretty light on functional requirements. If your use case is getting things to humans, like business users or stakeholders, you might like a GPT-4.1 or o3 output. A little bit more business-oriented, really no complaints. It's exceptional at coding. This is a highly technical model. I think it's gonna be a daily driver for lots of folks. [upbeat music] Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today, I'm doing something a little bit different. I'm walking you through the newly released GPT-5 model from OpenAI and giving you my honest takes on a couple workflows that I personally use. We're gonna look at GPT-5 for product managers and engineers, investigate some stylistic choices that the model has made, and also go through a couple personal workflows that I find useful and see if side by side, GPT-5 outperforms other models. Let's get to it. To celebrate twenty-five thousand YouTube followers on How I AI, we're doing a giveaway. You can win a free year to my favorite AI products, including v0, Replit, Lovable, Bolt, Cursor, and of course, ChatPRD, by leaving a rating and review on your favorite podcast app and subscribing to YouTube. To enter, simply go to howiaipod.com/giveaway, read the rules, and leave us a review and subscribe. Enter by the end of August, and we will announce our winners in September. Thanks for listening. So before we get into how this model performs, let's talk about what the model is.
GPT-5 is the newest model released from OpenAI, and they were generous enough to give me a little bit of early access to play with the model and really start to understand its strengths and weaknesses. And from my very first interaction with GPT-5, I felt like this was an engineer built by engineers for engineers. This is a highly technical model, both in capabilities and style, and this is gonna be one that you're really gonna reach for on a daily basis if you are coding, testing the technical bounds of these LLMs, or solving deeply complex problems. But it might have some pieces for the business thinkers out there or the product owners out there that might not work for your use case, and we're gonna show exactly what I mean by that in just a second. Now, I have been pretty familiar with the OpenAI ecosystem for quite some time and have been using the OpenAI models almost exclusively for my own product, ChatPRD. That being said, I do work with a variety of models and model providers in my day-to-day workflows. So when I'm coding using Cursor, I'm often using Claude 4, Claude Sonnet 4, Gemini 2.5, o3 from OpenAI. In ChatPRD, I again am using a lot of the OpenAI models, 4o, 4.1, even did a little, uh, test with 4.5 when that first came out, and I use a variety of different out-of-the-box AI tools as well. So I'm using ChatGPT relatively often, occasionally go into Claude, have my whole stable of different AI coding tools, which again, choose and fine-tune their own models. So I do feel like I'm pretty familiar with the model ecosystem, at least the commercial model ecosystem, and have really developed a sense of where these models perform well for specific use cases and where they don't. And I'm the kind of user and AI power user that really selects the model for the use case. So I was really excited to get access to GPT-5 because I wanted to know the answer to the question, which is: Where does this model fit on my team?
I don't think of myself as a single model employer. [chuckles] I really think of models as part of a team and tools as part of a team, and each model has their own personality and capabilities. Each tool has its own personality and capabilities, and I think that rather than think, "Is this an upgrade?" I think, "Is this an addition to my team, and where would I put them into play?" So the first thing that I did when I got access to GPT-5 is I went straight to the use case that I know, love, and think about the most, which is actually ChatPRD and our core chat and document generation implementation. It's a common use case for product managers using AI to generate product requirements documents. It's a place where I've spent a lot of time prompt testing, model testing, and really optimizing the experience for both matching the stylistic tone I want for the product, as well as getting great user feedback on outputs. And we've really A/B tested this pretty significantly and to depth in ChatPRD, and landed most recently on GPT-4.1, and a variety of tools and prompts being the best stack for our users, and in July, we had a ninety-six percent satisfaction rate with our documents. So that's how I'm really thinking about it. I'm thinking, "What model is highest performance?" Cost really doesn't come into play, but it will later, and then do, do users love it? And I consider myself a proxy for the product manager and engineering user, so I feel like I have a pretty good sense of what will perform well in this use case and won't. So when I got access to GPT-5, what I did is I went ahead and used LaunchDarkly AI Configs, which lets me on demand switch the model that I'm using in local or production, and I started testing GPT-5. And what I'm gonna show you on my screen right now is really a side-by-side representation of the results.... So GPT-4.1, our core model that we use on ChatPRD, is on the left, and GPT-5 is on the right. 
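
The on-demand model switching described above can be sketched as a deterministic percentage rollout. This is a generic illustration of the pattern, not LaunchDarkly's actual AI Configs API; the config shape and function names here are invented:

```python
# Hypothetical sketch of config-driven model switching for A/B tests.
# Not LaunchDarkly's real API; the config shape and names are invented.
import hashlib

MODEL_CONFIG = {
    "default": "gpt-4.1",
    "experiment": {"model": "gpt-5", "rollout_pct": 20},
}

def pick_model(user_id: str, config=MODEL_CONFIG) -> str:
    """Deterministically bucket a user into 0..99: the same user always
    sees the same model for as long as the rollout percentage is fixed."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    exp = config["experiment"]
    return exp["model"] if bucket < exp["rollout_pct"] else config["default"]
```

Because the bucket comes from hashing the user ID rather than a random draw, flipping `rollout_pct` in config (no deploy needed) moves users between models while keeping each individual user's experience stable, which is what makes side-by-side satisfaction comparisons meaningful.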
And a couple things right out the gate that I noticed, and in fact, I had to prompt around, is GPT-5, when I first tested it, it spoke like a developer. This is actually tuned a little bit for prompt on the right side. It just wanted to write me markdown bullet point lists. And I gave that feedback to the OpenAI team, did a little bit of prompt engineering, and I think it's a little bit more natural language when you speak to it, but you're definitely gonna see GPT-5, she loves a bullet point list. So we're gonna get lots of bullets, and we're gonna call lots of tools. That's what-- something you're definitely gonna see in this episode. But if you look at it side by side, to start off, they are pretty similar responses, and I think that's really a representation of they share the same system prompt and context in ChatPRD. So this is the exact same system prompt, exact same context. It's coming back, and it's just really asking me questions about what I want to achieve with my product when I ask it to brainstorm new features. Now, where you start to see it diverge is what it starts to focus on when asking to brainstorm new features. And so if you look at GPT-4.1's response here, the questions are really about business impact. You get a lot of discovery around what metric you wanna change, who is your persona, what, what is your business goal? And I've noticed that throughout my side-by-side evaluation, this is just one example. GPT-4.1 and some of the older models just came at the problem from a more general but more business-oriented lens. But GPT-5 on the right really came to features quickly, and I think this is an important, important point for product managers to note. Because you know us product managers, we love to ask a good why, and we really love to understand the problem, and what you see in GPT-5 is a jumping to the solution, and I think that's a reflection of the way it was trained and the place that GPT-5 fits in the sort of ecosystem of OpenAI models. 
It's very clear that the coding model wars are heating up, that the IDE wars are heating, heating up, that the coding tool wars are heating up, and this really... this model really feels like an answer for engineering use cases more than anything. And what I thought was interesting is we'll get to those engineering use cases, I think it's quite exceptional at writing code, but that sort of angle into execution of engineering tasks even bleeds into the conversational aspect of the model. And so you can even see the point of view of the model, if you can call it that, is really different from 4.1, which we're using on the left, which really comes from a business point of view. You'll see very quickly, GPT-5 is getting to an execution engineering point of view. So it's just something to consider as you look at these models side by side, what you're really gonna get out of them and where they might be most applicable in your use case. And so right off the gate, we're seeing 4.1 be more business oriented, GPT-5 be a little bit more technically oriented. [lips smack] And then I ask it to focus on free-to-paid conversion, and again, we get pretty similar ideas. So again, this isn't the most radical product area to focus on. It's well-trodden, well-documented. You know, both of these models probably have access to best-in-class growth tactics. So you'll see the kind of features be very similar across the two, but if you really inspect, you will see that the description of the features for 4.1 on the left are much more user-centric and much more business-centric. So it's really like a who, why question. If you look at GPT-5, again, I find this so fascinating, it's really a what, how answer, and I think that really sums up how I would say my interactions with this model has been. You, you still get a little bit more of that, like, business user discovery from, you know, 4.1 or 4o, o3 even.
GPT-5's like, "Tell me what to build, tell me exactly how the features work, give me numbers, give me user stories, give me something to code." And so I just thought it was really interesting to see that the ideas themselves, again, pretty similar, but the way those ideas are executed are very different. And you'll start to see the chats branch here, and you'll start to see the GPT-5 chat really branch into wanting to get into technical code, which has its pros and cons, and you'll really see the GPT-4.1 model really stay in this business, kind of like high-level mindset. And so, as an app builder focused on product managers, what am I thinking to myself? I'm thinking, "Well, my, my product's a product manager. It needs to talk to engineers, but it's a product manager." And so I'm unsure if my users are gonna love GPT-5 because it skips that step of product management thinking and gets right to what to build, which again, engineering side of my brain loves. So I'm gonna pull these docs up side by side and really show you what the PRD that got generated from each of these models looked like. And again, pretty similar prompts, pretty similar inputs. You can see right out the gate... I mean, I told you, it's an engineer for engineers. It tried to put this code block comment at the top of the document. Again, just a pure signal. [chuckles] This is, you know, trained to write technical documents and trained to write code. Even when you tell it to write like a prose document, like a PRD, you see artifacts like this, which are code base, which I find very, very, very interesting. And so if I'm looking at these, these PRDs side by side, a couple things that you're gonna notice-... GPT-5 writes more. It is a, it is significantly more detailed in its content, and I think there are pros and cons to that. I think when you're trying to define something for a engineer or a coding agent to execute, the more detailed you can get, the better. 
When you are trying to align stakeholders as product managers or other business users might need to do, sometimes a level of detail too far can actually obscure the primary message that you're trying to get across. And so I'm looking at these side by side, and I'm really thinking, "Do I want five business goals for this product? Are these the right business goals, and are they artificially too, too precise on the GPT-5, or are they, like, perfectly precise?" And so it was just something that I observed in looking at these side by side. Now, if we scroll down, really interesting. Again, the personas are a lot more detailed. There are more of them, and the use cases are very specific, but on the GPT-5 model, [chuckles] the use cases are very feature-centric, and on the GPT-4 model, they're very, like, what I'm trying to achieve as a user specific. And so I thought it was really interesting to just kind of compare and contrast both of these. Again, GPT-5, very detailed. Where I love GPT-5 and prefer it over the 4.1 model is the functional requirements are exceptional. The formatting got a little weird, but you can see here there's a prioritized list in a table. There's lots of details about soft warnings, hard warnings. I mean, these are the kinds of things that the best engineers [chuckles] are gonna ask you about how this stuff works. And so if you have a good idea, and you really just need to get down to what are the technical implementation of this feature, I think GPT-5 is tremendously better at that than, um, GPT-4, which again, is, like, actually pretty light on functional requirements. I think you could say the same for user experience. Again, you're just gonna get a lot more detail out of GPT-5 in terms of describing the user experience in prose. 
And so if you are using any of the prototyping models, like a v0, a Lovable, a Bolt, a Magic Patterns, whatever those might be, the more specific you can be about describing the user experience in prose, the happier you're gonna be with your prototype, and I think 4.1 is actually pretty high level, and 5 is, is, is pretty exceptional at that. Now, the narrative is an interesting one, [chuckles] interesting one. You know, GPT-5's a little longer. I will say, like, it's not a terrible writer, so I don't think that its prose is necessarily cold or not compelling or not lyrical, which are things, as somebody who has a liberal arts degree, I really care about. It's just a little bit more detailed, and I think, you know, writing shorter prose is also a virtue, and so you really need to think about, Do you need as many words? Is simpler better? Are the details really valuable here versus in, in another version? Now, again, another place where I think GPT-5 obviously outperforms 4.1 in a side-by-side is technical consideration. So if you are an engineer, and you need to write a tech spec, I would highly recommend GPT-5 over any of the other models that I tested. It is just very specific. It speaks in the language that an engineer would understand. It's really detailed in its analysis of requirements, and so I do think it is a really nice technical writer, and I think engineering teams, docs teams, are gonna be quite happy with it. I honestly think product managers might not need to be writing this part of a PRD, so maybe there's a division of labor here that happens naturally or in your AI tools. But again, GPT-5 is really gonna outperform on technical considerations and detail across the board. So that's a side by side, but these PRDs don't operate in a vacuum. They are artifacts generated for another purpose, and so what I wanted to do is actually generate a prototype based on those different PRDs. 
So if we go back to my general analysis, I thought that GPT-4.1, business-oriented, higher level, maybe easier to read as a reader because it's not so dense, not as technical, not as detailed. GPT-5, engineer, engineer, engineer, very detailed, perhaps overly so. But the real question is, do I get a better prototype, one shot, out of those prompts versus another? And this is where I think things get interesting because I would say to you, if your use case is getting things to humans, you might not wanna... And, and those humans are not engineers. Engineers, I love you. You're humans, but I'm gonna put you in a different category for just the sake of this argument. If you are trying to get this to business users or other stakeholders in your company, you might like a GPT-4o, 4.1, or o3 output, a little bit more business-oriented, a little slight-slightly more condensed, easier to read, not so much excessive detail. If you are trying to get this to an engineer, you're-- I think you're gonna be happier with a GPT-5. And so what's interesting about the side by side is, honestly, for a prototype and visual style, I like what 4.1 prompting did into... This is our v0 integration. I like what 4.1 prompted into v0 and the outcome here. It's colorful. It's clear. I understand, you know, what's happening here. I think this looks nice. Meta observation, I could not get v0 via GPT-5 to generate color. It's, like, [chuckles] all very gray, um, and blue, but you can see on the left side with 4.1, for whatever reason, whatever prompt was behind the scene, which I'll have to go look at-... we got a little bit more color and a little bit more design. It's much simpler, it looks nice, it's visually appealing, but I feel like GPT-5 over here on the right gave me-- and I'm just gonna make it a little bigger so you all can see, gave me a lot more to work with. And what I mean is, I tend to think of these prototypes as inspiration for implementation, not implementation itself.
So I'm never, like, gonna ship this. This is not what ChatPRD looks like. It's not what our product looks like. I'm-- but I'm really looking for ideas on upsells and free to paid ideas, and I just think the fact that they put so much detail into the PRD means they put so much into the prototype, which means I have a lot of components to choose from when I really want to make my product better. And so I have locked spaces, I have upgrade widgets, I have free trial details, I have I'll try it later, I have upgrade now. But, I mean, I just have... There is just as much in here as I want to pick. And when you're looking at prototypes as an ideation space, honestly, I think taking a abundance mindset and generating as much as possible and being like, "I'll never use that. Oh, I like this," is a lot better. And so I think the verbosity of GPT-5 in terms of technical specifications and user experience actually output more interesting ideas when given to a prototyping tool. So that was a really interesting observation for me. I wasn't sure that I would love it, and I actually didn't love it on first pass, but once I started to click through, I was like: Man, it really thought of a lot here. And I think that's because it was given quite a bit of detail. So that's just one little side by side on prototype generation. I wanna give you one last observation in this specific ChatPRD use case, which I found quite interesting, which is I gave it a copy of our homepage, and I asked it to change things. And this is what I find interesting. As much as I thought that Ch-- GPT-5 was a pretty cold, straightforward, detailed engineer, GPT-4 was much... 4.1 was much meaner to me. It was much more critical, and I thought that was kind of interesting. GPT-4.1 starts out, and this makes me feel bad about my homepage, but just says, "Not up to standard." Very straightforward. GPT-5 was like, "Eh, that's pretty good. Areas to improve." 
And what's interesting about the instructability and promptability of the model is I actually went back and gave it another pass and said, "Could you be a little bit more critical of my homepage?" Same prompt. And again, GPT-4.1 was legitimately cri- legitimately critical, cruelly critical, if you look at it. And GPT-5 really again started with, like, the shit sandwich, excuse-- pardon my French, but it really started with, "Here's what's not working," or, "Here's what's working, here's what's not working, but, like, you can make it better." And, and I think this is interesting. One of the things that you really have to test as an application builder is, working with LLMs, is can you tune it via prompts effectively? Now, again, these two side-by-sides are using the exact same prompts. I have not prompted to the strengths and/or weaknesses of GPT-5. I've just simply been giving it similar side-by-side content, context, and prompting, and it was just really interesting to see how you can massage the LLM responses to meet your needs. So my general conclusion remains the same through the side by side, which is functionally, this thing is built to code, and this thing is built to help you code, and you're gonna be very happy with the strengths of that. But it might have some drawbacks on the other side, especially as an application developer, a business user, and then we'll get to it. I actually think it's got some strengths from the consumer perspective. Today's episode is brought to you by ChatPRD. I know that many of you are tuning in to How I AI to learn practical ways you can apply AI and make it easier to build. That's exactly why I built ChatPRD. ChatPRD is an AI co-pilot that helps you write great product docs, automate tedious coordination work, and get strategic coaching from an expert AI CPO. And it's loved by everyone, from the fastest growing AI startups to large enterprises with hundreds of PMs. 
Whether you're trying to vibe code a prototype, teach a first-time PM the ropes, or scale efficiently in a large organization, ChatPRD helps you do better work fast. And we're integrated with the tools you love: v0.dev, Google Drive, Slack, Linear, Confluence, and more, so you don't have to change your workflow to accelerate with AI. Try ChatPRD free at chatprd.ai/howiai, and let's make product fun again. So let's go really quickly into coding, and then I'll zip back around to a couple personal use cases, and we will get you to using GPT-5. So let's talk about coding for just a little bit. And before I get to that, I do have to give OpenAI true and unsponsored props here. I think that the OpenAI team continues to outperform on API design, capabilities, and developer support. One of the reasons that, for ChatPRD, honestly, that I have centralized on a lot of the OpenAI models, is not that the models themselves are exceptional compared to ones by Anthropic or other providers. It's really not that. It is quite simply the API designs, developer tools, ecosystems, and essential primitives that get exposed under, you know, on top of these models, are just much easier to work with as a software engineer developing LLM-backed tools. I've been very happy with many of the upgrades, not just to the GPT-5 model, but with the GPT-5 model: some increased improvements in tool calling, reasoning, all these sort of parameters and controls that you have over the model that, as an application developer, make me very happy. So I'm not gonna go into that too deeply. If anybody wants to talk about it, I'll chat with you all day about it, but I think the API improvements here are worth taking a look at, and you should check out the documentation. Now, using Open-- or using GPT-5 to code, I'm gonna just, just show you two things. One, it's my favorite right now, and I am a model switcher. The n- nothing stresses me out more than someone selecting Auto in Cursor.
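
As a rough illustration of the kind of parameters and controls being described, here is what a request payload with reasoning-effort and verbosity knobs might look like. The field names follow the shape of OpenAI's published Responses API at the time of writing, but treat the exact names as assumptions to verify against the current documentation:

```python
# Illustrative request payload for a GPT-5 call with reasoning and
# verbosity controls. Field names are assumptions based on OpenAI's
# Responses API docs; verify against current documentation before use.

def build_request(prompt: str, effort: str = "medium",
                  verbosity: str = "low") -> dict:
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},   # e.g. "minimal" through "high"
        "text": {"verbosity": verbosity},  # e.g. "low", "medium", "high"
    }

req = build_request("Draft functional requirements for an upsell flow.",
                    effort="high")
```

Exposing these as first-class request parameters, rather than prompt-engineering tricks, is the kind of developer-ergonomics point being praised here.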
Like, Auto Model Select, I cannot... I cannot imagine. It really stresses me out. Like, you just leave it to the forces that be to choose your model? No, no, no, no, you have to be very opinionated with your model. And so I historically, using Cursor, just as an example, I'm really prescriptive with what, what model I choose, and you can say this is all made-up stuff. I use Sonnet 4 a lot for front-end work. I think it does pretty good front-end work. I use, uh, 2.5, o3 quite a bit in the past for deeper technical work, been pretty happy with it. I do think 2.5 is clinically depressed. It's always so sad in its thinking, so Google friends out there, please just cheer it up a little bit. I don't mean my mean prompts. And then I have recently been testing GPT-5 here for a couple, couple weeks, and it's been really interesting because I got access to GPT-5 when I was shipping a very major feature, I mean, thousands and thousands of lines. And I will tell you, one, the performance of the model is very fast, so I've been very happy with the performance of the model. It's allowed me to do a lot very quickly. Two, it's-- I mean, it's good. It writes good code. It refactors. It's thoughtful, and let's take that word "thoughtful" and talk about one of my primary observations on this model. Girlfriend loves to call a tool. So [chuckles] if you, if you look over here on the right, man, I have rarely hit Cursor's 25 tool call limit in a single call in many, many moons. I have not hit that in a long time, and I hit it really consistently with GPT-5. It will take advantage of tools. It is a tool-calling beast, and so you can see here on the left side, it's reading, it's searching, it's reading, it's searching, it's reading, it's searching. Honestly, sometimes it felt a little inefficient and ineffective, and this will be one of my questions as these get rolled out into production in these coding tools. Will token usage, will tool calling and performance start to become an issue?
But man, she loves a, a tool call. The second thing you'll see here is it loves bullet points. It will talk to you in bullet points all day and all night. It loves, loves, loves bullet points, and so you'll see it talk to you [chuckles] like an engineer might talk to you in Slack, lots of bullet points. But that being said, the code I am happy with, the quality I'm happy with. It's a great engineering partner. As I said, you want one of these on your team. So we didn't go too deep into coding, but again, GPT-5 is now my daily driver. I love it, and it's really great when you're actually using the code in production. So again, gonna repeat myself, I really do think this is a great engineer's model, and you're gonna really like it for that use case. But let's switch over and look at ChatGPT and how GPT-5 actually operates in their core product. Okay, so one thing you'll know is you'll have two options here, or at least I had two options here, GPT-5 and GPT-5 Thinking. I'm gonna use Thinking for specifically prototyping and design in ChatGPT. So I think that with the GPT-5 Thinking, it is possible that ChatGPT really becomes a viable option for folks trying to do some high-level prototyping inside an AI tool. I love the specialty tools. I love v0, Lovable, Bolt, all those. Of course, I work in Cursor, but if you're very just trying to design something, one of the things I noticed about GPT-5 is it's got great front-end design taste and actually makes things that look pretty good. So I'm gonna go ahead and turn on Canvas, which allows ChatGPT to generate some images, and I'm gonna drop in a copy of the ChatPRD homepage. So you can see it's very pink. We love her. And I'm actually gonna write just a really simple prompt here. I'm gonna say, "Design and prototype a blog for ChatPRD, matching our style." Okay, that's it. So GPT-5 is gonna use that reference image. It's gonna think. It loves to think.
We can actually expand this thinking right now and see how it thinks through generating this. It's got good front-end design guidelines, and then it's going to actually generate the code here inline in Canvas. And I've done this a couple times with GPT-5 in ChatGPT, and the thing that I've been most impressed with is it's classy. She's classy, and I think a lot of the prototyping tools sometimes have a pretty standard, boring, and repetitive style for their AI-generated front end. I would just say that GPT-5, in my anecdotal experience, has had a little bit more polish, a little more high-quality design sense than some of those other offerings right out of the box. Now, they all have their strengths, and I'm certainly gonna keep them in my rotation, but it was a nice observation that, in particular on front-end and user experience design, this was particularly nice. So let's take a look at it and see if I actually got that right. And what do we have? Oh, let's just allow... okay, allow access. You know, it's not terrible. I think we're struggling with a couple issues here, and I actually raised this to the OpenAI team: it struggles a little bit with background and text color contrast. It could be an issue with the generated CSS. It could be an issue with the model. ... It really replicated my gradient that I like to use. Didn't quite do the logo, but I didn't expect it to; it got to a good sense of what my header looks like. And then again, it came in here and generated what I think is just a generally nice component. And then this I really like. I think this looks quite lovely for a blog post. Again, not pixel perfect, but a little bit nicer than you might have seen out of the box previously with some of the other models from OpenAI in Canvas.
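The text-on-background issue described here is measurable: WCAG 2.x defines a contrast ratio from relative luminance, and body text is supposed to hit at least 4.5:1 (AA). A minimal sketch of that check, using illustrative hex values rather than ChatPRD's actual palette:

```python
# WCAG 2.x contrast check: relative luminance of each color, then the
# ratio (lighter + 0.05) / (darker + 0.05). AA body text needs >= 4.5:1.

def relative_luminance(hex_color):
    """Relative luminance of an sRGB hex string like '#ff66aa' (WCAG 2.x)."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# White text on a light pink background lands well below the 4.5:1 AA threshold,
# which is the kind of failure a generated pink-heavy theme can ship with.
ratio = contrast_ratio("#ffffff", "#ffc0cb")
print(f"{ratio:.2f}:1, AA pass: {ratio >= 4.5}")
```

A prototyping model (or the harness around it) could run exactly this check over every generated text/background pair before presenting the output, which would catch the contrast problem flagged to the OpenAI team.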
So I've been relatively happy with that, and I think that, you know, for somebody looking to do some front-end prototyping, it can be pretty nice, but again, we've got to solve this text-on-background issue. So OpenAI team, get to that fix quickly. Now, a couple other things I wanna show you before we wrap up the episode, just a personal use case where I did another side-by-side of GPT-5 and 4o, and I really saw GPT-5 shine. So you all may have your evals and benchmarks that you're evaluating the technical and mathematical strengths of your models against, and I have my own benchmark that I am testing all models against, and that benchmark is: can it reasonably help with my bathroom remodel? Yes, you heard it here. Can it reasonably help with my bathroom remodel? Now, I've been doing a lot of things with 4o on my bathroom remodel, including experimenting with whether or not different layouts will be up to code, what I could possibly do, generating screenshots of what my bathroom might look like. It's all very thrilling, and I've actually been okay-happy with what 4o has done for me. So if you want to see what kind of high-quality AI-powered work I'm doing with ChatGPT right now: I'm really trying to explain to my contractor exactly how I want my new bathroom laid out. And so I have been prompting 4o with prompts like, "I need a bathtub with fixtures at one end, a level tile ledge at the other, with eight-inch and four-inch tile shelves on the wall." Picture, generate. It's very good prompting here. And halfway through this chat, I switched to GPT-5, and I can show you exactly where I did. Right around here I switched to GPT-5, and I was very happy with the actual outcome and layout that the image generation did in this instance. I've actually struggled a lot with image generation of room layouts.
I think that interior design is such a fun use case of AI, and I have actually had a really challenging time getting AI to interpret my prompting correctly: where things are on the left wall versus the right wall versus the back wall, up, down, left, right, what's inside the room, what's outside the room. And I will say, I think that GPT-5 did a quite lovely job of it. I had to ask it for a couple do-overs, but if you are curious, this is a little bit of [chuckles] what my new, tiny San Francisco bathroom might look like. But I took it a little bit further and also did a side-by-side comparison of 4o versus GPT-5. And if we all remember, we love 4o's image generation capabilities. When this first came out, everybody was thrilled with the performance of the 4o image gen model. It could write text. It was really instructable. The image generations were beautiful. It was very, very fun, very memeable, super exciting. And I will say my experience with GPT-5 plus image generation has been exceptional, and it's actually gotten better at all those things we know and love in 4o. So text generation: good. And one of the things that I really noticed about GPT-5 is it has much better spatial awareness, in both code, when you're instructing it to lay out things, as well as in image generation. It was something that really came across to me, spatial awareness, and you'll see that in this side by side I'm about to show you. So again, Claire's benchmark for bathroom renovations. We will come up with some sort of really effective acronym for that, and we will publish it in an academic paper, but this is what [chuckles] I'm working on right now. I picked out a couple tile samples at the tile store, very exciting stuff, and I took my ugly iPhone photos and uploaded them here, and I said, "What Benjamin Moore paints," 'cause I like a Benjamin Moore paint, "will this green tile wall match? And can you help me with this?"
Now, this is actually a pretty hard task. I wasn't sure how the model had indexed the sense of color. Honestly, this is a new use case for me, and what was so fascinating is I not only got colors that matched each of the tiles, I got specific names of those colors. The text is very crisp, very clear, and spelled correctly, and it even gave the paint codes for those paint samples. I was not expecting this at all. I was, in fact, not expecting an image at all. I was expecting it to just give me a couple of green-colored paint samples, and instead, it actually mapped it out here. And I just asked it what it would recommend. It gave me some options, and then it said, "Do you wanna do a full mock-up?" And I said, "Yep, do a full mock-up with High Park." And I was really blown away by this, and you'll even see the sense of it side by side when I show you what 4o generated. So instead of giving me a kind of plain mock-up, it really followed the instructions of where these tile samples were gonna go and where the paint was gonna go, and gave me sort of a 3D rendering that I could look at. And this is the version I love the most, which is it actually followed my instructions. It said, "Half wall of tile, black on the floor, marble on the walls, High Park," and it gave me this beautiful layout of exactly what my walls and floors and stuff would look like. I was really impressed with this. Now, I asked it to paint the wall. It did an okay job; it didn't know what wall I was talking about. But again, this gave me a really good sense of what my bathroom remodel was going to look like, and now I'm gonna go to the Benjamin Moore paint store and ask them to pull High Park 467. Actually, I should check... it has been consistently 467 throughout, so it seems like a consistent reference for the paint number. I thought this was really interesting, and I just want to go to a side-by-side of what 4o generated with the same prompt.
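The tile-to-paint matching the model did can be sketched deterministically as a nearest-color lookup. The swatch names and RGB values below are made up for illustration (they are not real Benjamin Moore colors or codes), and real color matching would typically use a perceptual space like CIE Lab rather than raw RGB:

```python
# Nearest-swatch lookup: given a sampled tile color, find the closest paint
# in a small catalog by Euclidean distance in RGB. Names and values are
# hypothetical placeholders, not actual paint-brand data.
import math

SWATCHES = {  # hypothetical name -> (R, G, B)
    "Sample Green A": (110, 140, 110),
    "Sample Green B": (80, 120, 90),
    "Sample Cream":   (240, 235, 220),
}

def nearest_paint(tile_rgb):
    """Return the swatch name closest to tile_rgb (RGB Euclidean distance)."""
    return min(SWATCHES, key=lambda name: math.dist(SWATCHES[name], tile_rgb))

print(nearest_paint((85, 125, 95)))  # prints: Sample Green B
```

The interesting part of the model's answer is that it did something like this lookup implicitly, from an ugly iPhone photo, and then rendered the result, whereas this sketch assumes you already have a clean sampled color and a machine-readable catalog.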
So I'm gonna show you that quickly, and then we will wrap up. If you look on the left, I put the same prompt into 4o, and you can see the mock-up it did was a little less sensical, honestly, and didn't actually match my description of the uses of these tiles and paints. And so again, I give you this as a use case that I think is pretty practical and applicable to other use cases a common consumer might think about. How do I design my room? How do I pick an outfit? How do I lay out my backyard? How do I organize my books? And I really do think GPT-5's sense of space, plus improved image generation options, might be a reason that consumers reach for it. It's just yet to be seen how they train the in-chat model to have a little bit less of that developer bent and a little bit more friendly consumer orientation. So to sum everything up with a high-level takeaway about GPT-5: for engineers, by engineers. This is a technical thinker, a technical writer, an exceptional coder. For a product person, it may give you more features, how and what as opposed to who and why, so you'll have to really think about what kind of asset you're generating, or why you might use this model in production or in your day-to-day workflows, and make sure that it's the appropriate tool for the job. On coding, really no complaints. It's exceptional at coding. I've been very happy with it. I've shipped tons of stuff using this model. I think it's exceptional. My only complaints are, you know, try something other than a bullet point, and maybe call, like, one fewer tool if you don't really need it. So we'll see how, ultimately, the coding tools optimize around the strengths and weaknesses of this model, but I think it's gonna be a daily driver for lots of folks, depending on cost and access.
And then the final thing: I think ChatGPT is gonna get a major upgrade in specific areas, especially Canvas, front-end design, and image generation, with a good sense of spatial awareness, and let's just make sure it has a cute personality to go with all those technical chops. So that is my summary of GPT-5. This is our first deep-dive episode of How I AI. Please let us know in the comments if you like and want more content like this. I'm happy to walk through my favorite models, my favorite tools, and my favorite creators in more detail. Thanks, and we'll talk to you soon. [upbeat music] Thanks so much for watching. If you enjoyed the show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time!
