How I AI

The beginner's guide to coding with Cursor | Lee Robinson (Head of AI education)

Lee Robinson is the head of AI education at Cursor, where he teaches people how to build software with AI. Previously, he helped build Vercel and Next.js as an early employee. In this episode, he demonstrates how Cursor's AI-powered code editor bridges the gap between beginners and experienced developers through automated error fixing, parallel task execution, and writing assistance. Lee walks through practical examples of using Cursor's agent to improve code quality, manage technical debt, and even enhance your writing by eliminating common AI patterns and clichés.

*What you'll learn:*

1. How to use Cursor's AI agent to automatically detect and fix linting errors without needing to understand complex terminal commands
2. A workflow for running parallel coding tasks by focusing on your main work while the agent handles secondary features in the background
3. Why setting up typed languages, linters, formatters, and tests creates guardrails that help AI tools generate better code
4. How to create custom commands for code reviews that automatically check for security issues, test coverage, and other quality concerns
5. A technique for improving your writing by creating a custom prompt with banned words and phrases that eliminates AI-generated patterns
6. Strategies for managing context in AI conversations to maintain high-quality responses and avoid degradation
7. Why looking at code—even when you don't fully understand it—is one of the best ways to learn programming

*Brought to you by:*

• Google Gemini—Your everyday AI assistant
• Persona—Trusted identity verification for any use case

*Where to find Lee Robinson:*

• Twitter/X: https://twitter.com/leeerob
• Website: https://leerob.com

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Lee
(02:04) Understanding Cursor's three-panel interface
(06:27) The importance of typed languages, linters, and tests
(11:28) Demo: Using the agent to automatically fix lint errors
(15:17) Running parallel coding tasks with the agent
(18:50) Setting up custom rules
(23:24) Understanding the different AI models
(24:48) Micro-slicing agent chats for better success
(27:22) Tips for effective agent usage
(29:00) Using AI to improve your writing
(35:47) Lightning round and final thoughts

*Tools referenced:*

• Cursor: https://cursor.com/
• ChatGPT: https://chat.openai.com/
• JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript
• Python: https://www.python.org/
• TypeScript: https://www.typescriptlang.org/
• Git: https://git-scm.com/

*Other references:*

• Linting: https://en.wikipedia.org/wiki/Lint_(software)

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Lee Robinson (guest)
Sep 22, 2025 · 45m · Watch on YouTube ↗

CHAPTERS

  1. Why Cursor matters: making coding less intimidating (with Lee Robinson)

    Claire introduces Lee Robinson and frames the episode around helping less-technical builders learn to ship software using Cursor. The key theme is that seeing and reading code—rather than hiding it—can be a powerful path to learning.

  2. What Cursor is: an AI code editor with multiple models and built-in “skills”

    Lee explains Cursor’s role in the AI-builder tool ecosystem: a code editor that integrates frontier models while also using custom models for coding-specific actions. The product spans beginner-friendly assistance through power-user automation.

  3. Navigating Cursor’s three-panel IDE: files, code, and the autonomy slider

    The conversation breaks down Cursor’s UI into a simple mental model: file tree on the left, active code in the middle, and the agent/chat on the right. Lee describes increasing autonomy from autocomplete to full agent-driven code changes.

  4. From vibe coding to learning by reading: how beginners can get started with a codebase

    Claire suggests practical on-ramps for beginners: scaffold with a vibe-coding tool, download the code, and open it in Cursor to learn. Cursor's diff view and ability to explain code support a human-in-the-loop learning loop.

  5. Quality guardrails that help both humans and agents: types, linters, formatters, tests

    Lee argues that classic software engineering guardrails make AI coding more reliable. Typed languages, linting, formatting, and tests provide fast feedback loops that the agent can use to self-correct and verify fixes.
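A minimal illustration of the guardrails Lee describes (not from the episode; the function and test are invented for the example): type hints plus a small test give an agent, or a human, a fast pass/fail signal to check its work against.

```python
def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

# A test like this is the fast feedback loop: if an agent edits
# average() incorrectly, re-running the test fails immediately,
# and the failure message tells the agent what to repair.
def test_average() -> None:
    assert average([2.0, 4.0]) == 3.0

test_average()
```

The same idea scales up: a type checker, a linter, a formatter, and a test suite are each just a command that returns "clean" or a list of concrete problems, which is exactly the kind of signal an agent can act on.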

  6. Demo: “Fix the lint errors” — letting the agent run commands and verify repairs

    Lee demonstrates a minimal prompt—“Fix the lint errors”—and the agent autonomously runs the project’s lint command, diagnoses issues, applies changes, and re-runs lint to confirm success. The key takeaway is outcome-based prompting and self-verification.
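The verify-fix-verify loop from the demo can be sketched in a few lines. In the episode the agent shells out to the project's real lint command; here a stand-in checker (flagging trailing whitespace) keeps the sketch self-contained and runnable:

```python
def lint(source: str) -> list[int]:
    """Return 1-based line numbers that fail the check (stand-in linter)."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if line != line.rstrip()]

def fix(source: str) -> str:
    """Apply the obvious repair for every reported error."""
    return "\n".join(line.rstrip() for line in source.splitlines())

def fix_until_clean(source: str, max_rounds: int = 3) -> str:
    # Outcome-based loop: re-run the check after every change and stop
    # only when it reports zero errors (or the round budget runs out).
    for _ in range(max_rounds):
        if not lint(source):
            break
        source = fix(source)
    return source

messy = "x = 1   \ny = 2"
clean = fix_until_clean(messy)
assert lint(clean) == []
```

The point of the pattern is the final re-check: the agent does not declare success because it made an edit, but because the same command that reported the errors now reports none.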

  7. Learning from the agent: reviewing diffs and graduating to manual terminal use

    Claire emphasizes inspecting the agent’s changes to build intuition and confidence. Even if the agent runs terminal commands, users can learn them and re-run checks themselves inside Cursor’s terminal.

  8. Parallelizing work: keeping focus while the agent “cooks” on the side

    Lee describes using the agent for parallel tasks while continuing manual work in the editor. This pattern evolves AI from a code generator into a true pair programmer as users learn when to delegate versus take control.

  9. Custom rules and reusable commands: codifying preferences and preventing repeat mistakes

    Lee explains using Cursor rules when a model repeats the same mistakes. He shows defining reusable agent commands (like a code review) and building them iteratively as living prompts tailored to the repo.
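As a hedged sketch of what such a rule can look like: Cursor project rules live under `.cursor/rules/` as Markdown files with a small frontmatter block, though the exact fields and layout may differ between Cursor versions, and the rule content below is invented for illustration rather than taken from Lee's repo:

```markdown
---
description: Conventions the agent kept getting wrong in this repo
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Use the project's `logger` helper instead of `console.log`.
- New API routes must include input validation and a matching test.
- Never hard-code secrets; read configuration from environment variables.
```

Treating rules like this as living documents is the workflow Lee describes: each time the model repeats a mistake, the correction gets added to the rule instead of being re-typed into every chat.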

  10. Using @mentions for better context: files, Git branches/commits, and more

    The episode highlights the @mention menu as a key mechanism for giving structured context to the agent. Lee explains how referencing Git branches and commits helps the model review exactly what changed.

  11. Choosing models without overwhelm: Auto mode vs picking reasoning models

    Claire asks why Lee keeps the agent on Auto model selection, and Lee explains it as a beginner-friendly default that optimizes for quality, speed, and availability. Advanced users can then intentionally choose specific models for tasks like deeper reasoning.

  12. Micro-slicing agent chats: managing context limits and avoiding degraded quality

    They discuss context usage as a percentage and how long chats can reduce quality as the model “forgets” earlier details. The recommended approach is to start new chats per discrete feature and keep side questions in separate threads.

  13. Beyond code: using AI as a writing “linter” with banned words and LLM-pattern checks

    Lee shows a large writing prompt he uses (often in the ChatGPT macOS app) to polish emails, posts, and messages. The prompt functions like a linter/formatter for prose—banning generic marketing phrases and flagging telltale AI patterns.
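The "linter for prose" idea can be made concrete with a toy checker (this is not Lee's actual prompt; the banned-phrase list is invented for the example). It scans a draft and reports violations the same way a code linter reports rule breaches:

```python
# Illustrative banned list -- in practice this lives in a custom
# prompt and the LLM itself does the checking and rewriting.
BANNED_PHRASES = [
    "game-changer",
    "delve into",
    "in today's fast-paced world",
    "unlock the power",
]

def lint_prose(text: str) -> list[str]:
    """Return one report line per banned phrase found in the text."""
    lowered = text.lower()
    return [f"banned phrase: '{p}'" for p in BANNED_PHRASES if p in lowered]

draft = "This tool is a game-changer. Let's delve into the details."
for issue in lint_prose(draft):
    print(issue)
```

The LLM version Lee demonstrates goes further than flagging: given the banned list and pattern descriptions, it rewrites the draft so the report comes back empty, which is the prose equivalent of "fix the lint errors."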

  14. Lightning round: who should use which tools, how to learn, and how to re-prompt under stress

    Lee maps the ecosystem from vibe-coding tools for first-time builders to Cursor as the bridge toward real maintenance and professionalism. He recommends JavaScript or Python for learning fundamentals and stresses calm, specific re-prompts when models go off track.
