How I AI

How I built an Apple Watch workout app using Cursor and Xcode (with zero mobile-app experience)

Terry Lin is a product manager and developer who built Cooper’s Corner, an AI-powered fitness tracking app that works across iPhone and Apple Watch. Frustrated with traditional fitness apps that require extensive setup and manual logging, Terry created a solution that lets users simply speak their exercises, weights, and reps. The app automatically structures this data and provides analytics on workout consistency and progress. In this episode, Terry shares his vibe-coding process using Cursor and Xcode and explains how he optimizes his codebase for AI collaboration.

*What you’ll learn:*

1. How Terry built a voice-powered fitness tracker that works across iPhone and Apple Watch
2. His “dual-wielding” workflow, using Cursor for coding and Xcode for building and debugging
3. Terry’s three-step process for working with AI: create, review, and execute
4. Why optimizing your codebase for AI collaboration can dramatically improve productivity
5. How to use index cards and GPT-4 to rapidly prototype mobile interfaces
6. A technique for “vibe refactoring” that keeps code organized and optimized for both human and AI readability
7. His “rubber duck” technique to better understand generated code and improve your learning process

*Brought to you by:*

• Paragon—Ship every SaaS integration your customers want: https://useparagon.com/HowIAI
• Miro—A collaborative visual platform where your best work comes to life: http://miro.com/

*Where to find Terry Lin:*

• LinkedIn: https://www.linkedin.com/in/itsmeterrylin/
• GitHub: https://github.com/itsmeterrylin

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Terry and his fitness tracker app
(02:30) Demo of the voice-powered workout tracking across devices
(06:40) Analytics and history views for tracking consistency
(07:20) Dual-wielding Cursor and Xcode for mobile development
(09:05) Building a v1 using AI tools
(11:19) A three-step AI workflow: create, review, execute
(19:38) Token conservation and vibe refactoring explained
(23:25) Optimizing file sizes for better AI performance
(25:28) Using “rubber duck” rules to learn from AI-generated code
(28:13) Prototyping with index cards and GPT-4
(31:20) Human creativity and the last 10%
(32:29) Lightning round and final thoughts

*Tools referenced:*

• Cursor: https://cursor.sh/
• Xcode: https://developer.apple.com/xcode/
• GPT-4: https://openai.com/gpt-4
• UX Pilot: https://uxpilot.ai/
• Figma: https://www.figma.com/
• Linear: https://linear.app/

*Other references:*

• Apple UI Kit: https://developer.apple.com/design/human-interface-guidelines/

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Terry Lin (guest)
Sep 15, 2025 · 36m · Watch on YouTube ↗

CHAPTERS

  1. From GPT voice notes to a structured workout tracker idea

    Terry explains the pain of staying consistent at the gym and how existing fitness apps feel like “homework.” Using the GPT mobile app as speech-to-text sparks the idea: a workout app that can understand spoken sets and automatically tag them into structured data for analytics.

  2. App walkthrough: Sign in with Apple and cross-device syncing

    Terry demos the iPhone app’s login and explains the UX choice of Sign in with Apple to reduce friction. He highlights that authentication and state sync across iPhone and Apple Watch so users can log workouts from either device.

  3. Voice-powered logging on Apple Watch and iPhone (end-to-end demo)

    The core interaction is demonstrated: Terry speaks an exercise, weight, and reps, and the app captures a transcript and populates a structured entry with time and exercise context. Logging works from watch or phone and syncs across devices so users can record with whatever they have on them.

  4. History, consistency analytics, and exercise progression views

    Terry shows the analytics layer that makes the data useful over time: 7/30/90-day consistency, top exercises, and drill-down history per exercise. He demonstrates progression tracking (e.g., scatter plots) to see whether strength is improving.
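The episode doesn't spell out the exact formula behind the 7/30/90-day views, but a minimal sketch of a trailing-window consistency score (the fraction of days in the window with at least one logged workout; all names here are illustrative, not from Terry's code) might look like:

```python
from datetime import date, timedelta

def consistency(workout_dates, window_days, today):
    """Fraction of days in the trailing window with >= 1 logged workout."""
    start = today - timedelta(days=window_days - 1)
    active_days = {d for d in workout_dates if start <= d <= today}
    return len(active_days) / window_days

# Workouts on 3 distinct days inside the last 7 days
today = date(2025, 9, 15)
logged = [today - timedelta(days=d) for d in (0, 2, 5)]
print(f"7-day consistency: {consistency(logged, 7, today):.0%}")    # 43%
print(f"30-day consistency: {consistency(logged, 30, today):.0%}")  # 10%
```

The same date set feeds all three windows, which is why a single list of workout dates is enough to drive the 7/30/90-day views.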

  5. Mobile development workflow: “dual-wielding” Cursor + Xcode

    Because iOS/watchOS require Xcode for building, running, and debugging, Terry uses Cursor for editing and Xcode for compilation, simulator/device runs, and error diagnosis. He explains the practical friction of mobile iteration compared to web development and why this split workflow works best today.

  6. Starting from zero: scrappy v1 (Voice Memos → Python → GPT → spreadsheet)

    Terry outlines an MVP path that began without a full app: record workouts as Apple Watch Voice Memos, copy them to a computer, transcribe/tag with GPT via a Python script, and output to Excel. The limitations of unstructured spreadsheet data drive the move toward a database and backend API.
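Terry's actual v1 called GPT for transcription and tagging; as a rough illustration of the tagging-to-spreadsheet step, here is a sketch with a toy rule-based parser standing in for the model (the phrasing, field names, and CSV output are assumptions; his script targeted Excel):

```python
import csv
import re

def tag_set(phrase):
    """Toy stand-in for the GPT tagging step: pull exercise, weight, and
    reps out of a spoken phrase like 'bench press 135 pounds for 8 reps'."""
    pattern = (r"(?P<exercise>[a-z ]+?)\s+(?P<weight>\d+)\s*"
               r"(?:pounds|lbs)?(?:\s+for)?\s+(?P<reps>\d+)\s+reps")
    match = re.search(pattern, phrase.lower())
    return match.groupdict() if match else None

transcript = [
    "Bench press 135 pounds for 8 reps",
    "Goblet squat 50 lbs 12 reps",
]

# Write the structured rows out to a spreadsheet-friendly file.
with open("workouts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["exercise", "weight", "reps"])
    writer.writeheader()
    for phrase in transcript:
        row = tag_set(phrase)
        if row:
            writer.writerow(row)
```

The limitation Terry hit is visible even here: flat rows in a spreadsheet carry no relationships or history, which is what pushes the design toward a database and backend API.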

  7. A three-step AI workflow in Cursor: PRD Create → Review → Execute

    Terry formalizes collaboration with the model using custom Cursor rules that mirror a product/engineering org. He generates a PRD from an issue, has a separate “reviewer model” critique it for missing context, then executes with a phased checklist and pause points.

  8. Making AI reliable: checklists, file targeting, and incremental delivery

    To reduce hallucinations and wasted time, Terry explicitly tells the model which files/endpoints to touch instead of letting it search the whole repo. He adds safety checks (no placeholders, real paths, error handling) and pauses between phases, with manual QA/build checks in Xcode.
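The exact wording of Terry's rules isn't shown in the notes; a hypothetical helper that assembles this kind of constrained, phased prompt (every name, path, and endpoint below is invented for illustration) might look like:

```python
def build_task_prompt(task, files, endpoints):
    """Assemble a prompt that pins the model to named files/endpoints and
    spells out the safety checks, instead of letting it search the repo."""
    lines = [f"Task: {task}", "Touch ONLY these files:"]
    lines += [f"  - {path}" for path in files]
    lines += ["Endpoints in scope:"]
    lines += [f"  - {endpoint}" for endpoint in endpoints]
    lines += [
        "Rules: no placeholders, real file paths only, handle errors.",
        "Work in phases; pause after each phase for a manual Xcode build.",
    ]
    return "\n".join(lines)

prompt = build_task_prompt(
    "Add a rest timer to workout logging",
    ["Views/WorkoutLogView.swift", "Models/WorkoutSet.swift"],
    ["POST /api/sets"],
)
print(prompt)
```

Keeping the file list explicit narrows the model's search space, and the pause points are what make the manual QA/build checks in Xcode possible between phases.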

  9. Token conservation and “vibe refactoring” to control complexity

    Terry explains that large files and verbose generations degrade performance and increase confusion—both for Cursor and for him. He introduces “vibe refactoring”: using AI not just to ship features quickly, but to regularly reorganize and clean up code so future AI work stays effective.

  10. Optimizing for an AI teammate: file-size targets and refactor planning

    Terry sets an explicit codebase design principle: keep files small enough (often ~200–400 lines) so the model can navigate and modify them efficiently. He uses quick diagnostics (like line counts) to find “god files” and generates refactor PRDs/checklists to split responsibilities safely.
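A quick line-count diagnostic like the one described can be a few lines of Python (the 400-line threshold and `.swift` extension are assumptions based on the targets mentioned; Terry's exact tooling isn't specified):

```python
import os

def find_god_files(root, max_lines=400, ext=".swift"):
    """Return (path, line_count) for files over the size target,
    biggest first, as candidates for a refactor PRD/checklist."""
    flagged = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(ext):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                count = sum(1 for _ in f)
            if count > max_lines:
                flagged.append((path, count))
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

Sorting biggest-first gives a natural priority order for which files to split up in the next refactor pass.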

  11. Rubber-ducking with AI: turning generated code into learning

    To avoid being a “black-box” builder, Terry uses a rubber-duck rule: have the model explain files line-by-line, summarize changes, and even quiz him on functions. This accelerates learning, improves debugging intuition, and helps him spot model mistakes faster over time.

  12. Design on the subway: index-card prototypes → GPT-4 upscaling → Figma UI kit

    For mobile UX exploration, even offline, Terry sketches screens on index cards that match phone proportions. He uploads photos to GPT-4, using its image capabilities to upscale and iterate on layouts, then uses tools like UX Pilot and Apple’s Figma UI libraries to assemble higher-fidelity designs quickly.

  13. Human creativity and the “last 10%” + lightning round takeaways

    They discuss how AI can get designs and code to “good enough,” but the final polish and differentiated craft remain difficult and valuable for humans. In the lightning round, Terry asks for better mobile debugging ergonomics, notes models are generally fine for mobile code, and shares risk mitigation via frequent Git commits.
