How I AI: How I built an Apple Watch workout app using Cursor and Xcode (with zero mobile-app experience)
CHAPTERS
From GPT voice notes to a structured workout tracker idea
Terry explains the pain of staying consistent at the gym and how existing fitness apps feel like “homework.” Using the ChatGPT mobile app for speech-to-text sparks the idea: a workout app that understands spoken sets and automatically tags them into structured data for analytics.
App walkthrough: Sign in with Apple and cross-device syncing
Terry demos the iPhone app’s login and explains the UX choice of Sign in with Apple to reduce friction. He highlights that authentication and state sync across iPhone and Apple Watch so users can log workouts from either device.
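Sign in with Apple drops into a SwiftUI view with very little ceremony. A minimal sketch, assuming the app keys its cross-device sync on the stable Apple user identifier (the session helper below is hypothetical, not Terry's code):

```swift
import SwiftUI
import AuthenticationServices

// A sketch of a low-friction login screen using Apple's
// SignInWithAppleButton. Only the button API is standard;
// the session handling is an assumed stand-in.
struct LoginView: View {
    var body: some View {
        SignInWithAppleButton(.signIn) { request in
            // Request only what the app needs.
            request.requestedScopes = [.fullName, .email]
        } onCompletion: { result in
            switch result {
            case .success(let authorization):
                if let credential = authorization.credential
                    as? ASAuthorizationAppleIDCredential {
                    // credential.user is a stable identifier, the
                    // natural key for syncing state across devices.
                    startSession(userID: credential.user)
                }
            case .failure(let error):
                print("Sign in with Apple failed: \(error)")
            }
        }
        .signInWithAppleButtonStyle(.black)
        .frame(height: 44)
    }

    // Hypothetical placeholder for the app's own session/sync logic.
    private func startSession(userID: String) { /* ... */ }
}
```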
Voice-powered logging on Apple Watch and iPhone (end-to-end demo)
The core interaction is demonstrated: Terry speaks an exercise, weight, and reps, and the app captures a transcript and populates a structured entry with time and exercise context. Logging works from watch or phone and syncs across devices so users can record with whatever they have on them.
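A structured entry like the one demoed maps naturally onto a small Codable model. The field names below are assumptions; the episode only shows that a transcript becomes exercise, weight, and reps with a timestamp:

```swift
import Foundation

// A sketch of what one voice-logged set might parse into.
// Schema is illustrative, not the app's actual model.
struct WorkoutSet: Codable, Identifiable {
    let id: UUID
    let exercise: String      // e.g. "bench press"
    let weightLbs: Double     // e.g. 135
    let reps: Int             // e.g. 8
    let transcript: String    // raw speech-to-text, kept for auditing
    let loggedAt: Date
}

// Example: "bench press, one thirty five for eight" might become:
let example = WorkoutSet(
    id: UUID(),
    exercise: "bench press",
    weightLbs: 135,
    reps: 8,
    transcript: "bench press one thirty five for eight",
    loggedAt: Date()
)
```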
History, consistency analytics, and exercise progression views
Terry shows the analytics layer that makes the data useful over time: 7/30/90-day consistency, top exercises, and drill-down history per exercise. He demonstrates progression tracking (e.g., scatter plots) to see whether strength is improving.
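The consistency numbers reduce to counting distinct active days inside each trailing window. A sketch under an assumed definition (share of days in the window with at least one logged set), not necessarily Terry's exact formula:

```swift
import Foundation

// Consistency over a trailing window: fraction of days with
// at least one logged workout. Call with windowDays = 7, 30, 90.
func consistency(workoutDates: [Date],
                 windowDays: Int,
                 calendar: Calendar = .current) -> Double {
    guard let start = calendar.date(byAdding: .day,
                                    value: -windowDays,
                                    to: Date()) else { return 0 }
    // Collapse timestamps to calendar days, then count unique days.
    let activeDays = Set(
        workoutDates
            .filter { $0 >= start }
            .map { calendar.startOfDay(for: $0) }
    )
    return Double(activeDays.count) / Double(windowDays)
}

// Usage: consistency(workoutDates: dates, windowDays: 30) -> 0.0...1.0
```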
Mobile development workflow: “dual-wielding” Cursor + Xcode
Because iOS/watchOS require Xcode for building, running, and debugging, Terry uses Cursor for editing and Xcode for compilation, simulator/device runs, and error diagnosis. He explains the practical friction of mobile iteration compared to web development and why this split workflow works best today.
Starting from zero: scrappy v1 (Voice Memos → Python → GPT → spreadsheet)
Terry outlines an MVP path that began without a full app: record workouts as Apple Watch Voice Memos, copy them to a computer, transcribe/tag with GPT via a Python script, and output to Excel. The limitations of unstructured spreadsheet data drive the move toward a database and backend API.
A three-step AI workflow in Cursor: PRD Create → Review → Execute
Terry formalizes collaboration with the model using custom Cursor rules that mirror a product/engineering org. He generates a PRD from an issue, has a separate “reviewer model” critique it for missing context, then executes with a phased checklist and pause points.
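Cursor stores project rules as files under `.cursor/rules`. A sketch of what the PRD-creation rule might contain; the wording and structure are illustrative, not Terry's actual rule:

```
---
description: Draft a PRD from an issue before writing any code
alwaysApply: false
---

When asked to create a PRD:
1. Restate the problem from the issue in one paragraph.
2. List the exact files and endpoints in scope.
3. Break the work into phases, each ending in a pause point.
4. Stop after drafting; a reviewer pass runs before execution.
```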
Making AI reliable: checklists, file targeting, and incremental delivery
To reduce hallucinations and wasted time, Terry explicitly tells the model which files/endpoints to touch instead of letting it search the whole repo. He adds safety checks (no placeholders, real paths, error handling) and pauses between phases, with manual QA/build checks in Xcode.
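In practice, file targeting looks like a prompt that names its scope up front. An illustrative example (the file paths and endpoint are hypothetical):

```
Execute phase 2 of the PRD. Touch only:
  - Views/WorkoutLogView.swift
  - Services/SyncService.swift
  - the POST /workouts handler

No placeholder code, real file paths only, and add error handling.
Stop at the end of the phase so I can build and QA in Xcode.
```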
Token conservation and “vibe refactoring” to control complexity
Terry explains that large files and verbose generations degrade performance and increase confusion—both for Cursor and for him. He introduces “vibe refactoring”: using AI not just to ship features quickly, but to regularly reorganize and clean up code so future AI work stays effective.
Optimizing for an AI teammate: file-size targets and refactor planning
Terry sets an explicit codebase design principle: keep files small enough (often ~200–400 lines) so the model can navigate and modify them efficiently. He uses quick diagnostics (like line counts) to find “god files” and generates refactor PRDs/checklists to split responsibilities safely.
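The diagnostic itself can be a few lines. A sketch in Swift that flags files over a budget; the directory path and the ~400-line threshold come from his stated target, not an actual script:

```swift
import Foundation

// Walk the source tree and flag "god files" over a line budget,
// candidates for a refactor PRD.
let root = URL(fileURLWithPath: "Sources")
let budget = 400

if let enumerator = FileManager.default.enumerator(
    at: root, includingPropertiesForKeys: nil) {
    for case let url as URL in enumerator
    where url.pathExtension == "swift" {
        guard let text = try? String(contentsOf: url, encoding: .utf8)
        else { continue }
        let lineCount = text.components(separatedBy: "\n").count
        if lineCount > budget {
            print("\(lineCount)\t\(url.path)  <- refactor candidate")
        }
    }
}
```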
Rubber-ducking with AI: turning generated code into learning
To avoid being a “black-box” builder, Terry uses a rubber-duck rule: have the model explain files line-by-line, summarize changes, and even quiz him on functions. This accelerates learning, improves debugging intuition, and helps him spot model mistakes faster over time.
Design on the subway: index-card prototypes → GPT-4 upscaling → Figma UI kit
For mobile UX exploration—even offline—Terry sketches screens on index cards that match phone proportions. He uploads photos to GPT-4’s image capabilities to upscale/iterate on layouts, then uses tools (e.g., UXPilot) and Apple’s Figma UI libraries to assemble higher-fidelity designs quickly.
Human creativity and the “last 10%” + lightning round takeaways
They discuss how AI can get designs and code to “good enough,” but the final polish and differentiated craft remain difficult and valuable for humans. In the lightning round, Terry asks for better mobile debugging ergonomics, notes that current models handle mobile code well enough, and mitigates risk with frequent Git commits.