How I AI: How to digest 36 weekly podcasts without spending 36 hours listening | Tomasz Tunguz
CHAPTERS
- 0:00 – 3:32
Why Tomasz built a “podcast ripper” to keep up with 36 shows
Tomasz explains the core problem: he wants insights from dozens of weekly podcasts but doesn’t have time to listen. His solution is an automated pipeline that downloads episodes, transcribes them, and produces skimmable outputs he can read quickly.
- 3:32 – 5:06
Architecture overview: downloading feeds, converting audio, transcribing locally
Tomasz walks through the system he built (Parakeet Podcast Processor) and the main plumbing that turns audio files into text. The toolchain emphasizes local processing and modular steps that can be swapped as models improve.
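The first stage of a pipeline like this can be sketched as pulling episode audio URLs out of an RSS feed. This is an illustrative stand-in, not Tomasz's actual code: the sample feed, function name, and URLs are invented, and the ffmpeg conversion and local Parakeet transcription steps that follow are omitted.

```python
import xml.etree.ElementTree as ET

def extract_audio_urls(rss_xml: str) -> list[str]:
    """Pull episode audio URLs out of an RSS feed's <enclosure> tags."""
    root = ET.fromstring(rss_xml)
    return [
        enc.attrib["url"]
        for enc in root.iter("enclosure")
        if enc.attrib.get("type", "").startswith("audio")
    ]

# Minimal two-episode feed for demonstration.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Ep 1</title>
    <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Ep 2</title>
    <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg"/></item>
</channel></rss>"""

print(extract_audio_urls(SAMPLE_FEED))
```

Keeping each stage this small is what makes the modular swap-as-models-improve design possible: the downloader doesn't care which transcription model runs next.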
- 5:06 – 6:59
Transcript cleanup: using an LLM as a transcript editor
After transcription, Tomasz cleans transcripts by removing filler words while preserving technical content and length. He demonstrates a “transcript editor” prompt and describes why cleaning mattered more earlier in the project than it does now.
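A sketch of the cleanup stage, under the assumption that the real version sends the transcript to an LLM with an editing prompt like the one below; the regex pass here is only a local stand-in for the simplest part of that job (filler-word removal), and the prompt wording is paraphrased, not Tomasz's.

```python
import re

# Paraphrase of a "transcript editor" instruction, not the original prompt.
EDITOR_PROMPT = (
    "You are a transcript editor. Remove filler words and false starts, "
    "but preserve all technical content and keep the length roughly the same.\n\n"
    "Transcript:\n{transcript}"
)

# Crude local stand-in: strip common fillers, then collapse whitespace.
FILLERS = r"\b(um+|uh+|you know|sort of|kind of)[,]?\s*"

def strip_fillers(text: str) -> str:
    cleaned = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_fillers("So, um, the model is, you know, transcribing locally."))
```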
- 6:59 – 10:20
Orchestration + storage: tracking processed episodes with DuckDB
To make the workflow reliable, Tomasz stores processing metadata locally so episodes aren’t reprocessed unnecessarily. He describes an orchestrator that pulls daily transcripts from the database and runs summarization prompts in batches.
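The episode-ledger idea can be shown with sqlite3 standing in for DuckDB (the SQL has the same shape; this is a substitution for runnability, not Tomasz's schema). An episode keyed by its feed GUID is recorded once, so reruns become no-ops.

```python
import sqlite3

# In-memory stand-in for the local DuckDB file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS processed_episodes (
        guid TEXT PRIMARY KEY,
        podcast TEXT,
        processed_at TEXT DEFAULT CURRENT_TIMESTAMP
    )""")

def mark_processed(guid: str, podcast: str) -> bool:
    """Return True if the episode was newly recorded, False if seen before."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO processed_episodes (guid, podcast) VALUES (?, ?)",
        (guid, podcast),
    )
    conn.commit()
    return cur.rowcount == 1

print(mark_processed("ep-123", "How I AI"))  # first run
print(mark_processed("ep-123", "How I AI"))  # already processed, skipped
```

An orchestrator can then select the day's unprocessed transcripts from this table and batch them through the summarization prompts.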
- 10:20 – 12:38
Daily digest outputs: summaries, key topics/themes, and the most valuable quotes
Tomasz shows what the daily generated document looks like: each podcast gets host/guest context, a comprehensive summary, key topics, and key themes. He emphasizes that curated quotes are the highest-signal output for his workflow.
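Assembling the daily document from per-episode summaries might look like the sketch below. The section names follow what the chapter describes (host/guest context, summary, key topics, quotes); the field names and sample data are invented.

```python
def render_digest(episodes: list[dict]) -> str:
    """Render per-episode summaries into one skimmable markdown document."""
    parts = ["# Daily Podcast Digest"]
    for ep in episodes:
        parts.append(f"\n## {ep['podcast']}: {ep['title']}")
        parts.append(f"Host/guest: {ep['people']}")
        parts.append(f"\n### Summary\n{ep['summary']}")
        parts.append("\n### Key topics\n" + "\n".join(f"- {t}" for t in ep["topics"]))
        parts.append("\n### Best quotes\n" + "\n".join(f"> {q}" for q in ep["quotes"]))
    return "\n".join(parts)

digest = render_digest([{
    "podcast": "How I AI",
    "title": "Podcast pipelines",
    "people": "Claire Vo with Tomasz Tunguz",
    "summary": "An automated pipeline turns 36 weekly shows into one readable doc.",
    "topics": ["local transcription", "episode tracking"],
    "quotes": ["Curated quotes are the highest-signal output."],
}])
print(digest)
```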
- 12:38 – 15:31
From content to action: investment theses, tweets, and company discovery
Beyond summarization, the pipeline generates venture-style “investment theses,” draft tweets, and lists of companies mentioned in episodes. These outputs connect podcast listening to concrete next steps like market maps, CRM enrichment, and outreach.
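The company-discovery output can be approximated cheaply: the real pipeline presumably asks an LLM to list companies, but matching transcripts against a known-name watchlist shows the shape of the result. The watchlist and transcript here are invented.

```python
import re

# Hypothetical watchlist; a real one might come from a CRM export.
WATCHLIST = {"Snowflake", "DuckDB", "Anthropic"}

def companies_mentioned(transcript: str) -> list[str]:
    """Return watchlist companies that appear as whole words in a transcript."""
    found = {name for name in WATCHLIST
             if re.search(rf"\b{re.escape(name)}\b", transcript)}
    return sorted(found)

print(companies_mentioned(
    "They store metadata in DuckDB and compared notes on Anthropic models."
))
```

A list like this is what makes the downstream steps concrete: each name can seed a market map row, a CRM record, or an outreach draft.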
- 15:31 – 17:34
Why the terminal: speed, low latency, and scriptability
Claire probes why Tomasz stays in the terminal instead of building a UI. Tomasz argues the terminal offers the lowest interaction latency, reduces frustration, and makes it easy to script automations across email, CRM actions, and AI tools.
- 17:34 – 18:25
Hyper-personal tools vs off-the-shelf apps: “glove-like fit” with modern AI
They discuss why personalized internal tools are now practical: AI reduces the cost of building and modifying bespoke software. Tomasz highlights how quickly he can tweak workflows (like reordering sections or emailing digests) using tools like Claude Code.
- 18:25 – 22:00
Podcast insights to blog drafts: the blog post generator pipeline
Tomasz introduces a second system that turns a specific podcast quote/topic into a blog post draft. It uses the podcast transcript as context and pulls relevant prior writing to shape content and tone, though a demo bug appears during search.
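Pulling relevant prior writing as context could be sketched as ranking old posts by word overlap with the chosen quote or topic. A real version would likely use embeddings or full-text search; the post titles and texts below are invented.

```python
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def rank_prior_posts(topic: str, posts: dict[str, str]) -> list[str]:
    """Order post titles by word overlap with the topic, highest first."""
    topic_words = tokenize(topic)
    return sorted(
        posts,
        key=lambda title: len(topic_words & tokenize(posts[title])),
        reverse=True,
    )

POSTS = {
    "Why local models win": "local transcription models keep data private",
    "SaaS pricing in 2024": "pricing strategy for software companies",
}
print(rank_prior_posts("local transcription pipeline", POSTS))
```

The top-ranked posts are then handed to the draft generator as examples of content and tone.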
- 22:00 – 26:13
The “AP English teacher” grader: setting a quality bar and iterating
To improve drafts, Tomasz uses an evaluation prompt that grades the post like an AP English teacher, then revises until it reaches roughly an A−. He describes why hooks and conclusions matter most and how he runs multiple improvement passes.
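The grade-and-revise loop can be sketched with stubs: a real run would call a model with the "AP English teacher" rubric and a revision prompt, whereas here deterministic stand-ins let the control flow be shown. The A− threshold is mapped to roughly 3.7 on a 4.0 scale as an assumption.

```python
TARGET = 3.7   # assumed numeric stand-in for an A-
MAX_PASSES = 5

def grade(draft: str) -> float:
    """Stub grader; a rubric-prompted model call in the real system."""
    return min(4.0, 2.5 + 0.4 * draft.count("[revised]"))

def revise(draft: str) -> str:
    """Stub reviser; a revision-prompted model call in the real system."""
    return draft + " [revised]"

def improve_until_a_minus(draft: str) -> tuple[str, float]:
    """Revise until the grade clears the bar or the pass budget runs out."""
    score = grade(draft)
    for _ in range(MAX_PASSES):
        if score >= TARGET:
            break
        draft = revise(draft)
        score = grade(draft)
    return draft, score

final, score = improve_until_a_minus("First draft of the post.")
print(score)
```

Capping the passes matters: without `MAX_PASSES`, a grader that never awards an A− would loop forever.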
- 26:13 – 28:16
Style matching is hard: model voices, personal quirks, and linking limitations
Both agree AI struggles to capture a writer’s authentic rhythm—especially in short-form. Tomasz compares model “personalities,” notes AI’s tendency toward grammatical perfection, and shares unresolved challenges like automatically linking to related posts.
- 28:16 – 30:28
Prompt + rubric details: what his generator optimizes for (brevity, no headers, flow)
Tomasz reveals the blog generator prompt design and the structural constraints he enforces from his own analytics. He explains dynamic style extraction from related posts and why he avoids headers due to dwell-time impacts.
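Structural constraints like "no headers" and "keep it short" are easy to enforce mechanically alongside the prompt. The checks below are illustrative; the word-count threshold is invented, not a number from Tomasz's analytics.

```python
import re

def check_draft(draft: str, max_words: int = 600) -> list[str]:
    """Flag structural violations of the assumed rubric."""
    problems = []
    # Markdown headers hurt dwell time per the chapter, so reject them.
    if re.search(r"^#{1,6}\s", draft, flags=re.MULTILINE):
        problems.append("contains markdown headers")
    if len(draft.split()) > max_words:
        problems.append("too long")
    return problems

print(check_draft("## Heading\nA short post."))
print(check_draft("A short post with no headers."))
```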
- 30:28 – 35:14
AI for writing education + wrap-up: future tiny teams and “AI model cage matches”
They broaden to education: AI can handle first-pass grammar and structure feedback, freeing teachers to focus on creativity. Tomasz then answers rapid-fire questions: his vision of a 30-person, $100M company and his tactic of having multiple models critique each other when outputs degrade.