How I AI
The senior engineer's guide to AI coding: Context loading, custom hooks, and automation
CHAPTERS
Why senior engineers need “power-user” AI workflows (feat. John Lindquist)
Claire sets expectations: this episode is for experienced engineers who want production-grade results, not just “vibe coding.” John Lindquist is introduced as a heavy user of Claude Code/Cursor, focused on speed and quality through repeatable workflows.
Compressing codebase understanding with context + diagrams
John argues the biggest lever is context: AI performs best when it understands how the application works, not just style rules. Diagrams—especially Mermaid—act like a compressed representation of system behavior that models can ingest quickly.
Demo: Mermaid diagrams as machine-friendly architecture maps
They walk through Mermaid diagrams embedded in Markdown and why the raw syntax is hard for humans to read but efficient for LLMs to parse. John shows how a diagram can answer questions like “explain the authentication flow” without prompting the model to explore the repo.
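As a concrete illustration, here is a minimal Mermaid sequence diagram of a hypothetical login flow; embedded in a Markdown file, a model can read it to answer an “explain the authentication flow” question without touring the repo. The participants and endpoints are invented for illustration, not from the episode.

```mermaid
sequenceDiagram
  participant Browser
  participant API
  participant AuthService
  Browser->>API: POST /login (credentials)
  API->>AuthService: verify credentials
  AuthService-->>API: signed session token
  API-->>Browser: Set-Cookie: session token
```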
Preloading diagrams into Claude Code using Append System Prompt
John demonstrates a repeatable technique: append a system prompt that concatenates all diagram files (via glob + cat) into Claude’s startup context. This front-loads tokens but removes repeated repo scanning and accelerates subsequent tasks.
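The glob + cat technique can be sketched in shell. The `docs/diagrams` path and the sample file written below are assumptions for illustration, and the launch line is commented out since it requires the `claude` CLI to be installed:

```shell
# Front-load every Mermaid diagram into Claude Code's startup context.
# docs/diagrams and the sample file are illustrative assumptions.
mkdir -p docs/diagrams
printf 'sequenceDiagram\n  Client->>API: POST /login\n' > docs/diagrams/auth.mmd

# Glob + cat all diagrams into a single context string.
CONTEXT="$(cat docs/diagrams/*.mmd)"

# Launch with the diagrams preloaded (uncomment with the claude CLI installed):
# claude --append-system-prompt "Architecture diagrams: $CONTEXT"
```

Spending these tokens once at startup trades a larger initial prompt for never re-scanning the repo on each question.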
Organizing a “memory” directory and exploring Claude Code’s command surface area
Claire highlights two meta-lessons: structure repo context deliberately (not just one generic markdown file), and learn the tool’s built-in commands. John emphasizes he uses append-system-prompt constantly because it changes the interaction model.
The rise of specialized file formats for LLMs (beyond Markdown)
They zoom out: Mermaid is one example of emerging “LLM-oriented” formats that machines parse better than humans. John and Claire discuss compression, metadata, images/video, and using models (like Gemini) to process long media into usable notes.
When to generate diagrams: PR-time documentation and legacy code acceleration
Claire shares generating diagrams/docs on PR close via automation; John agrees PR is a strong milestone. For legacy repos, John’s common use case is generating diagrams to quickly bootstrap AI-assisted development without months of onboarding.
Mermaid diagram use cases: compliance, security reviews, and customer data-flow requests
Claire notes diagram generation isn’t just for AI speed—it’s valuable for external demands like SOC2, security questionnaires, and customer-specific data-flow diagrams. These artifacts used to be tedious, but can now be generated on demand from code.
Aliasing common AI commands for speed (models, permissions, project modes)
John shows how he creates shell aliases to launch Claude in specific modes quickly—choosing models, enabling dangerous permissions, or loading diagram context. The goal is minimal friction: one or two letters to start the right workflow.
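A sketch of such aliases. The names and flag pairings here are illustrative, not John's exact setup, though `--model`, `--dangerously-skip-permissions`, and `--append-system-prompt` are real Claude Code flags:

```shell
# One- or two-letter launchers in the spirit of John's workflow; the alias
# names and pairings are illustrative stand-ins.
alias cs='claude --model sonnet'                    # pick a model up front
alias cy='claude --dangerously-skip-permissions'    # skip permission prompts
alias cdg='claude --append-system-prompt "$(cat docs/diagrams/*.mmd)"'  # preload diagrams
```

Because the command substitution in `cdg` sits inside single quotes, the diagrams are re-read each time the alias runs, so the context stays current.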
Beyond aliases: building custom CLIs to wrap AI workflows (Gemini ‘Sketch’)
John demos a personal CLI tool that wraps Gemini to generate website concept images via a short terminal questionnaire. Claire highlights why terminal UIs are powerful: constrained surfaces reduce UI distraction and speed up prototyping.
Automating code quality with Claude hooks: stop hooks + typecheck + auto-fix
They introduce hooks as a way to run commands automatically when Claude stops (finishes a response). John builds a stop hook that checks for file changes, runs TypeScript typecheck, feeds errors back to Claude, and can proceed to commit if clean.
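A minimal sketch of such a stop hook as a POSIX shell function, assuming a TypeScript project. `TYPECHECK_CMD` is a stand-in so the check command can be swapped; the `decision`/`reason` JSON fields follow Claude Code's hook output convention, but treat the details as assumptions rather than John's exact script:

```shell
#!/bin/sh
# Sketch of a Stop hook: typecheck after Claude finishes and, if errors
# remain, block the stop and feed them back so Claude keeps fixing.
stop_hook() {
  errors="$(${TYPECHECK_CMD:-npx tsc --noEmit} 2>&1)"
  if [ -n "$errors" ]; then
    # Only JSON may go to stdout; stray logs break the hook protocol,
    # so send any debugging to stderr instead.
    escaped="$(printf '%s' "$errors" | tr '\n"' ' \047')"  # crude escaping for the sketch
    printf '{"decision":"block","reason":"typecheck failed: %s"}\n' "$escaped"
  else
    printf '{}\n'  # clean: allow the stop (a later step could auto-commit)
  fi
}
```

Returning an empty JSON object lets Claude stop normally; the `block` decision hands the error text straight back into the loop.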
Hook implementation details and gotchas (JSON output, logging, local vs shared settings)
John explains that hooks communicate back to Claude via a JSON object printed to stdout—so stray logs can break the protocol. They also clarify how to store hooks locally vs sharing across the team via repo settings, enabling organization-wide leverage.
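A hedged sketch of how such a hook might be registered. The `Stop` entry shape follows Claude Code's settings schema, and the script path is an assumption; shared hooks go in `.claude/settings.json` (checked into the repo for team-wide leverage), personal ones in `.claude/settings.local.json`:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "./scripts/stop-typecheck.sh" }
        ]
      }
    ]
  }
}
```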
What else to automate: formatting, linting, complexity, duplicates, dependency checks
Beyond TypeScript errors, John lists other guardrails that fit naturally into hooks—similar to pre-commit/pre-push checks but shifted earlier into the AI loop. Claire notes hooks can also apply to non-code deliverables (docs/workflows) as post-completion checks.
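The checks above can be sketched as one shell function slotted into the same hook. The specific tools (prettier, eslint, knip) are illustrative stand-ins, not John's stated stack:

```shell
# Guardrails that fit the same hook slot: pre-commit-style checks shifted
# earlier, into the AI loop. Tool choices here are illustrative examples.
quality_gate() {
  npx prettier --check . || return 1   # formatting
  npx eslint .           || return 1   # linting
  npx knip               || return 1   # unused files, exports, dependencies
  npx tsc --noEmit       || return 1   # type errors
}
```

Any nonzero exit can then be surfaced back to Claude via the hook's JSON output, just like the typecheck errors.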
Terminal UI vs IDE, selling AI to skeptics, and reset strategies when models drift
John argues both terminal and IDE are necessary: CLIs excel at configurable launches, while IDEs provide diagnostics and extension ecosystems. For skeptical teams, he emphasizes AI's value in issue orientation and investigation. When a session drifts, export the conversation and get a second model's critique, or reset via revert; planning modes reduce drift overall.