How I AI: How Notion designers ship live prototypes in minutes | Brian Lovin (Product designer)
CHAPTERS
Why prototypes should “encounter reality” early (and why AI changed the speed limit)
Brian explains his philosophy for designing B2B SaaS: move from static mocks toward real usage as quickly as possible. AI coding tools let him prototype dramatically faster—often in a near-production environment—so design flaws surface earlier (loading states, responsiveness, interaction details).
Prototype Playground: a simple Next.js repo as a shared prototyping hub
Prototype Playground is described as a straightforward Next.js project where every designer/PM/engineer can create isolated prototype pages. The payoff is centralization: everyone’s work is visible and reusable, making it easy to borrow ideas and code across the team.
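Since the playground is described as "just a Next.js project" where each prototype is an isolated page, a file-based index is enough to make everyone's work browsable. Below is a minimal sketch of how such an index might be derived from folder names alone; the folder layout, helper name, and slug convention are illustrative assumptions, not Notion's actual code.

```typescript
// Hypothetical sketch: build a browsable index of prototypes from folder
// names, assuming each prototype lives at prototypes/<slug>/page.tsx in a
// Next.js App Router project. No backend -- the filesystem is the database.

interface PrototypeEntry {
  slug: string;  // folder name, doubles as the URL segment
  title: string; // human-readable title derived from the slug
  href: string;  // route served by Next.js file-based routing
}

function buildIndex(folders: string[]): PrototypeEntry[] {
  return folders.map((slug) => ({
    slug,
    title: slug.replace(/-/g, " ").replace(/\b\w/g, (c) => c.toUpperCase()),
    href: `/prototypes/${slug}`,
  }));
}

// Example: two prototype folders on disk become two index entries.
const index = buildIndex(["ai-sidebar", "inline-comments"]);
```

The centralization payoff follows directly: because every prototype is a folder in one repo, the index (and everyone's code) is always complete and up to date.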
What the UI looks like: browsing, templates, and “Notion-y” shared styles
Brian walks through how the playground presents prototypes and how templates accelerate building familiar Notion-like layouts. Shared colors, typography, and icons help prototypes feel realistic without re-creating the design system from scratch.
How it got created and deployed: lightweight ops, real collaboration constraints
Claire asks about the operational reality: who built it, how it’s hosted, and what approvals were needed. Brian emphasizes the minimal overhead—basic Next.js + Vercel—with just enough process to make it accessible to the team.
Adoption reality: who uses it, why some don’t, and linking external prototypes
Brian is candid that he’s the heaviest user, with a smaller core group using it frequently. To include different workflows, he added support for external prototype links (e.g., other tools or personal stacks) so the hub can still be a centralized index.
Creating a new prototype without boilerplate: in-app “New” button and file-based metadata
Brian demonstrates the fastest way to start: generating a prototype folder and files automatically rather than manually creating Next.js routes. The system is intentionally backend-less: prototypes are just files on disk, committed and shared via GitHub.
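Because prototypes are "just files on disk," the in-app "New" button only needs to write a folder and a couple of files. The sketch below shows one plausible scaffold under those assumptions; the file names (`meta.json`, `page.tsx`) and function name are hypothetical.

```typescript
import { mkdirSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical scaffold: create a prototype folder with a page and a
// metadata file, mirroring the backend-less "files on disk" approach.
function scaffoldPrototype(root: string, slug: string, author: string): string {
  const dir = join(root, "prototypes", slug);
  if (existsSync(dir)) throw new Error(`prototype "${slug}" already exists`);
  mkdirSync(dir, { recursive: true });

  // Metadata lives next to the code and is committed with it.
  writeFileSync(
    join(dir, "meta.json"),
    JSON.stringify({ slug, author, created: new Date().toISOString() }, null, 2)
  );

  // Minimal page the designer edits (or hands to Claude) from here.
  writeFileSync(
    join(dir, "page.tsx"),
    `export default function Page() {\n  return <main>${slug}</main>;\n}\n`
  );
  return dir;
}
```

Sharing then reduces to committing the new folder and pushing to GitHub, with no routes or config to wire up by hand.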
Brian’s working setup: three-pane workflow + voice prompting + Plan Mode
Brian shows his personal setup: Claude Code in terminal alongside the editor and live preview, plus a voice-to-prompt tool (Monologue) to iterate faster. He stresses using Plan Mode first, then actually reading the plan to catch mistakes early.
Teaching Claude to self-check: linting, MCP tools, and reducing human babysitting
Brian shares a core principle: when the AI asks you to do something, teach it to do that step itself. He builds habits like automatic linting, plus browser-driven verification using MCP tools (Playwright/Chrome DevTools) so Claude can validate functionality (e.g., confetti on play).
Slash commands for fast onboarding: “/create-prototype” and command definitions
To make the playground more approachable, Brian adds custom slash commands—structured prompts that can also run scripts. He shows how commands are defined (name/description/instructions/examples) and why examples dramatically improve reliability.
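In Claude Code, custom slash commands are markdown files in a `.claude/commands/` directory, with the file body used as the prompt and `$ARGUMENTS` standing in for what the user types after the command. A hedged sketch of what a `/create-prototype` definition could look like is below; the exact instructions and example are invented for illustration, not taken from Notion's repo.

```markdown
---
description: Scaffold a new prototype page in the playground
---

Create a new prototype named $ARGUMENTS.

Instructions:
1. Create `prototypes/$ARGUMENTS/` containing a `page.tsx` and a `meta.json`.
2. Use the shared playground colors, typography, and icons; do not invent new styles.
3. Verify the page renders at `/prototypes/$ARGUMENTS` before finishing.

Example: `/create-prototype ai-sidebar` should produce
`prototypes/ai-sidebar/page.tsx` with a working route.
```

The concrete example at the end matters most: as Brian notes, showing the model one worked instance of the expected output dramatically improves reliability.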
Figma → code pipeline with MCP: extraction, implementation, and a verification loop
Brian demonstrates a dedicated “/figma” command that orchestrates Figma MCP and browser verification. It checks prerequisites, extracts design data, implements the UI, then loops comparing the output to the Figma frame until changes stabilize—often reaching ~80% fidelity quickly.
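The core of that loop is "keep refining until successive comparisons stop improving." A minimal sketch of such a stabilization loop is below; the `render`, `diff`, and `refine` callbacks stand in for the browser and Figma MCP tool calls, and the whole shape is an assumption about how the `/figma` command might be structured, not its actual implementation.

```typescript
// Hypothetical verify loop: implement, screenshot, compare to the Figma
// frame, and stop once the diff score stops improving (i.e. stabilizes).
function verifyLoop(
  render: () => string,            // take a screenshot of the live prototype
  diff: (shot: string) => number,  // % mismatch vs the Figma frame (0 = exact)
  refine: (score: number) => void, // ask the model to fix the mismatches
  maxIters = 5
): number {
  let last = Infinity;
  for (let i = 0; i < maxIters; i++) {
    const score = diff(render());
    if (score >= last) return last; // no longer improving: stabilized
    last = score;
    if (score === 0) break;         // pixel-perfect (rare in practice)
    refine(score);
  }
  return last;
}
```

In practice the loop tends to plateau well above zero, which matches the "~80% fidelity quickly" framing: the remaining polish is cheaper to do by hand than to keep iterating.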
Reducing “icon hallucinations” with Claude Skills + bundled scripts (Find Icon)
Brian explains a recurring pain: models guess icon names incorrectly. He created a “Find Icon” skill that programmatically searches a large internal icon set (thousands of files) using a script, including synonym logic (e.g., ‘search’ vs ‘magnifying glass’).
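The key move is replacing the model's guess with a deterministic lookup over the real icon filenames. Below is a small sketch of that idea: rank icon files against a query after expanding it through a synonym table. The synonym map and function name are illustrative; the actual skill searches thousands of files with its own script.

```typescript
// Hypothetical "Find Icon" search: match icon filenames against a query,
// expanding it with synonyms so "magnifying glass" still finds search.svg.
const SYNONYMS: Record<string, string[]> = {
  search: ["magnifying-glass", "magnifier", "find"],
  trash: ["delete", "bin", "remove"],
};

function findIcons(query: string, iconFiles: string[]): string[] {
  const q = query.toLowerCase().trim().replace(/\s+/g, "-");
  const terms = new Set([q]);
  // Expand the query with any synonym group it belongs to.
  for (const [canonical, alts] of Object.entries(SYNONYMS)) {
    if (canonical === q || alts.includes(q)) {
      terms.add(canonical);
      alts.forEach((a) => terms.add(a));
    }
  }
  return iconFiles.filter((f) =>
    [...terms].some((t) => f.toLowerCase().includes(t))
  );
}
```

Because the script returns only names that actually exist on disk, the model can no longer hallucinate an icon that isn't in the set.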
One-command deployment for non-engineers: “/deploy” to branch, PR, CI monitoring, and fixes
To lower intimidation around Git and deployment, Brian built a deploy command that checks prerequisites (GitHub CLI/auth), creates a branch, commits, opens a PR, and monitors CI until checks are green—auto-fixing and pushing updates when needed. The goal is end-to-end sharing without requiring deep Git knowledge.
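Modeling the flow as an ordered command plan makes it easy to see what `/deploy` automates. The sketch below lists one plausible sequence using real `git` and GitHub CLI (`gh`) subcommands; the branch naming and commit message conventions are invented for illustration, and the auto-fix retry loop is omitted.

```typescript
// Hypothetical /deploy flow as an ordered command plan. Each step is a
// shell command run in sequence; a failure stops the pipeline.
function deployPlan(slug: string): string[] {
  const branch = `prototype/${slug}`;
  return [
    "gh auth status",                        // prerequisite: GitHub CLI logged in
    `git checkout -b ${branch}`,             // isolate this prototype's changes
    "git add -A",
    `git commit -m "Add ${slug} prototype"`, // commit on behalf of the user
    `gh pr create --fill --head ${branch}`,  // open a PR (triggers CI/preview)
    "gh pr checks --watch",                  // monitor CI until checks finish
  ];
}
```

When a check fails, the real command loops back (fix, commit, push) instead of stopping, which is what lets a non-engineer get from local files to a green PR without touching Git directly.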
How AI changes design work: when Figma is enough vs when code is required (especially for AI UX)
In lightning-round discussion, Brian explains he still spends most time in Figma—but AI experiences demand code-first prototyping. For chat/agent products, realistic model behavior (errors, delays, follow-up questions) can’t be accurately designed with static “golden path” mocks.
Tooling preferences and prompting advice: why Opus 4.5 + what to do when AI ‘isn’t listening’
Brian describes why Claude/Opus 4.5 feels best for his workflow, while still using Cursor features for quick fixes. His most pragmatic prompting advice: quality correlates with human clarity and energy—if prompts get sloppy, step away and come back rested.