Lenny's Podcast
Vibe coder Lazar Jovanovic: How to plan before AI ships slop
How Lovable's first vibe coder spends 80% planning and 20% executing; he runs parallel prototypes and uses sources-of-truth docs to beat context limits.
At a glance
WHAT IT’S REALLY ABOUT
Professional vibe coding emerges: AI-assisted building prioritizes clarity and taste
- Lazar Jovanovic, Lovable’s first “vibe coding engineer,” describes a new AI-era role: shipping internal and external products by steering AI agents rather than hand-writing code.
- His core claim is that coding is becoming commoditized; the differentiators are clarity of intent, judgment, taste, and user experience, the skills that determine whether AI amplifies quality or just produces "garbage faster."
- He shares concrete workflows for getting better outputs: parallel prototyping for clarity, heavy upfront planning via PRD-style documents, and maintaining “sources of truth” to compensate for LLM context limits.
- The episode also covers practical debugging tactics, why engineers still matter for infrastructure/maintenance, and how to turn vibe coding into a job by building in public and showcasing apps instead of resumes.
IDEAS WORTH REMEMBERING
5 ideas
Treat AI as an amplifier: judgment determines whether output is magic or slop.
Lazar argues AI accelerates whatever you already are: if your intent and taste are weak, you just “produce garbage faster.” The competitive edge shifts from raw output speed to decision quality and product sense.
Optimize for clarity, not speed—spend ~80% planning and 20% executing.
He found early that rushing prompts creates rework and token waste. Investing a day in planning (requirements, sequencing, design intent) saves massive time and credits later.
Run parallel prototypes to discover the best direction quickly.
Instead of iterating endlessly on one build, he starts multiple projects in parallel: a voice brain-dump, a refined typed prompt, design references (e.g., Mobbin/Dribbble), and even code snippets/templates. Comparing outputs sharpens clarity and reduces downstream thrash.
Use “sources of truth” docs to beat context-window limits.
Because agents forget earlier details as chats grow, Lazar externalizes memory into Markdown docs (masterplan.md, implementation plan, design guidelines, user journeys, tasks.md) and points the agent to read them before acting. This keeps work coherent as projects scale in files and complexity.
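As a sketch, the file names above suggest a small docs folder the agent is pointed to before each task. The layout and comments below are illustrative assumptions; the episode names the files but not their exact contents:

```
docs/
  masterplan.md           # high-level goals, scope, and constraints
  implementation-plan.md  # sequenced build steps
  design-guidelines.md    # typography, spacing, component rules
  user-journeys.md        # the flows the app must support
  tasks.md                # checklist the agent works through in order
```

Because the agent re-reads these files on every prompt, decisions survive even after the chat history scrolls out of the context window.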
Define agent behavior once with rules/agent files to stop repeating yourself.
He sets persistent instructions (e.g., “read PRDs, check tasks.md for the next task, then implement and tell me how to test”) so prompts become minimal (“proceed with next task”). This reduces context churn and keeps token budget focused on solving.
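A minimal sketch of such a persistent rules file, assuming the doc names mentioned above; the wording is illustrative, since the episode describes the pattern rather than the exact instructions:

```markdown
<!-- agent-rules.md: illustrative; loaded once as persistent instructions -->
Before acting on any prompt:
1. Read masterplan.md and design-guidelines.md.
2. Check tasks.md for the next unchecked task.
3. Implement only that task, then mark it done in tasks.md.
4. Reply with a short note on how to test the change.
```

With these rules loaded once, each follow-up prompt can shrink to "proceed with next task," which is the context and token savings Lazar describes.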
WORDS WORTH SAVING
5 quotes
You don't need a company to hire you, you can hire yourself as a professional vibe coder first.
— Lazar Jovanovic
AI, regardless of your background, is an amplifier. If you don't know what you're doing, you're just gonna produce garbage faster.
— Lazar Jovanovic
I like to use the Aladdin and the Genie analogy… The first wish is, 'I wanna be taller.' Genie makes me 13 feet tall because I was not specific.
— Lazar Jovanovic
I spent 80% of my time in planning and chatting, and only 20% in executing the plan.
— Lazar Jovanovic
Coding is gonna be like calligraphy… It's gonna be so rare that it's gonna become an art.
— Lazar Jovanovic
High-quality AI-generated summary created from a speaker-labeled transcript.