How I AI: The beginner's guide to coding with Cursor | Lee Robinson (Head of AI Education)
CHAPTERS
Why Cursor matters: making coding less intimidating (with Lee Robinson)
Claire introduces Lee Robinson and frames the episode around helping less-technical builders learn to ship software using Cursor. The key theme is that seeing and reading code—rather than hiding it—can be a powerful path to learning.
What Cursor is: an AI code editor with multiple models and built-in “skills”
Lee explains Cursor’s role in the AI-builder tool ecosystem: a code editor that integrates frontier models while also using custom models for coding-specific actions. The product spans beginner-friendly assistance through power-user automation.
Navigating Cursor’s three-panel IDE: files, code, and the autonomy slider
The conversation breaks down Cursor’s UI into a simple mental model: file tree on the left, active code in the middle, and the agent/chat on the right. Lee describes increasing autonomy from autocomplete to full agent-driven code changes.
From vibe coding to learning by reading: how beginners can get started with a codebase
Claire suggests practical on-ramps for beginners: scaffold a project with a vibe-coding tool, download the code, and open it in Cursor to learn. Cursor's diff view and its ability to explain code support a human-in-the-loop learning loop.
Quality guardrails that help both humans and agents: types, linters, formatters, tests
Lee argues that classic software engineering guardrails make AI coding more reliable. Typed languages, linting, formatting, and tests provide fast feedback loops that the agent can use to self-correct and verify fixes.
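The guardrails Lee describes can be made concrete with a small sketch: explicit types constrain what the agent can write, and a tiny self-check gives it a pass/fail signal after every edit. The function and values below are invented for illustration, not from the episode.

```typescript
// Explicit types narrow the space of valid edits the agent can make.
type Cents = number;

function applyDiscount(price: Cents, percent: number): Cents {
  if (percent < 0 || percent > 100) {
    throw new RangeError(`percent must be 0-100, got ${percent}`);
  }
  return Math.round(price * (1 - percent / 100));
}

// A fast feedback loop: the agent (or a human) can run this after each
// change and immediately see whether the behavior still holds.
console.assert(applyDiscount(1000, 25) === 750, "25% off 1000 cents");
console.assert(applyDiscount(999, 0) === 999, "0% discount is a no-op");
```

The same idea scales up: the type checker, linter, formatter, and test suite each act as an automated reviewer the agent can consult without human involvement.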
Demo: “Fix the lint errors” — letting the agent run commands and verify repairs
Lee demonstrates a minimal prompt—“Fix the lint errors”—and the agent autonomously runs the project’s lint command, diagnoses issues, applies changes, and re-runs lint to confirm success. The key takeaway is outcome-based prompting and self-verification.
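The run-diagnose-fix-verify cycle from the demo can be sketched as a loop. This is a hypothetical abstraction of the agent's behavior, not a Cursor API: `runLint` and `attemptFix` are stand-in callbacks.

```typescript
// Sketch of outcome-based self-verification: run the check, attempt a
// fix, and re-run until the check passes or attempts run out.
type CheckResult = { ok: boolean; errors: string[] };

function fixUntilClean(
  runLint: () => CheckResult,
  attemptFix: (errors: string[]) => void,
  maxAttempts = 3,
): boolean {
  for (let i = 0; i < maxAttempts; i++) {
    const result = runLint();
    if (result.ok) return true; // lint passes: the fix is verified
    attemptFix(result.errors);  // apply changes, then loop to re-check
  }
  return runLint().ok; // final verification after the last attempt
}
```

The prompt only states the outcome ("fix the lint errors"); the loop structure is what lets the agent confirm its own work instead of declaring success blindly.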
Learning from the agent: reviewing diffs and graduating to manual terminal use
Claire emphasizes inspecting the agent’s changes to build intuition and confidence. Even if the agent runs terminal commands, users can learn them and re-run checks themselves inside Cursor’s terminal.
Parallelizing work: keeping focus while the agent “cooks” on the side
Lee describes using the agent for parallel tasks while continuing manual work in the editor. This pattern evolves AI from a code generator into a true pair programmer as users learn when to delegate versus take control.
Custom rules and reusable commands: codifying preferences and preventing repeat mistakes
Lee explains using Cursor rules when a model repeats the same mistakes. He shows defining reusable agent commands (like a code review) and building them iteratively as living prompts tailored to the repo.
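A project rule might look something like the sketch below, assuming Cursor's project-rules file format (a rule file with frontmatter under `.cursor/rules/`). The specific instructions here are invented for illustration, not taken from the episode.

```
---
description: Conventions the agent kept getting wrong in this repo
alwaysApply: true
---

- Use the project's existing fetch wrapper instead of calling fetch directly.
- Never edit generated files; regenerate them from the schema instead.
- Run the lint command after every change and fix what it reports.
```

The point of treating rules as "living prompts" is that each repeated mistake becomes a new bullet, so the rule file accumulates the repo's institutional knowledge over time.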
Using @mentions for better context: files, Git branches/commits, and more
The episode highlights the @mention menu as a key mechanism for giving structured context to the agent. Lee explains how referencing Git branches and commits helps the model review exactly what changed.
Choosing models without overwhelm: Auto mode vs picking reasoning models
Claire asks why Lee keeps the agent on Auto model selection, and Lee explains it as a beginner-friendly default that optimizes for quality, speed, and availability. Advanced users can then intentionally choose specific models for tasks like deeper reasoning.
Micro-slicing agent chats: managing context limits and avoiding degraded quality
They discuss context usage as a percentage and how long chats can reduce quality as the model “forgets” earlier details. The recommended approach is to start new chats per discrete feature and keep side questions in separate threads.
Beyond code: using AI as a writing “linter” with banned words and LLM-pattern checks
Lee shows a large writing prompt he uses (often in the ChatGPT macOS app) to polish emails, posts, and messages. The prompt functions like a linter/formatter for prose—banning generic marketing phrases and flagging telltale AI patterns.
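A deterministic toy version of that "prose linter" idea can be sketched as a banned-phrase scan. Lee's actual checks run inside a large LLM prompt; the phrase list and function below are invented for illustration.

```typescript
// Toy prose linter: flag generic marketing phrases and telltale AI
// patterns, the way a code linter flags style violations.
const BANNED_PHRASES = [
  "game-changer",
  "delve into",
  "in today's fast-paced world",
  "unlock the power",
];

function lintProse(text: string): string[] {
  const lower = text.toLowerCase();
  return BANNED_PHRASES.filter((phrase) => lower.includes(phrase));
}
```

An LLM-backed version replaces the substring match with judgment calls (tone, hedging, filler), but the linter framing is the same: a repeatable check applied to every draft before it ships.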
Lightning round: who should use which tools, how to learn, and how to re-prompt under stress
Lee maps the ecosystem from vibe-coding tools for first-time builders to Cursor as the bridge toward real maintenance and professionalism. He recommends JavaScript or Python for learning fundamentals and stresses calm, specific re-prompts when models go off track.