Y Combinator
Tokenmaxxing: How Top Builders Use AI To Do The Work Of 400 Engineers
FREQUENTLY ASKED QUESTIONS
Direct answers grounded in the episode transcript. Tap any timestamp to verify against the source.
What was the $200 Claude Code rebuild behind Garry's List?
Garry Tan says Garry's List began as the third rebuild of his old blogging platform, Posterous. Posterous offered dead-simple blogging by email, became a top 200 website, and was bought by Twitter for about $20 million. After Twitter shut the startup down, he rebuilt it as Posthaven with Brett Gibson. In January, he rebuilt it again with Claude Code: not for millions of dollars or a small team, but for roughly $200 for a Claude Code Max account and about five days. The new version was not just a full-featured blog platform. Garry says it also had full RAG and agentic retrieval, could crawl sources, read his old tweets, run deep research on topics, and produce fully sourced reports for Garry's List articles about California, San Francisco, and LA.
▸ 3:29 in transcript
What does tokenmaxxing mean in Garry Tan's AI workflow?
Tokenmaxxing means spending extra tokens whenever more context can make the work more complete and reality-grounded. Garry uses the idea while explaining Garry's List research. For the equivalent of $5 or $10 of Opus calls, he says the system can do work that would otherwise require someone to read dozens of articles, read books, annotate sources, and assemble context. His rule is not to settle for one source when the system can gather 20, compare disagreements, and feed that broader context into the core prompt. He connects this to his boil the ocean philosophy: if incremental machine work improves completeness, accuracy, or usefulness, spend for it. He does not frame tokenmaxxing as replacing people. The human still supplies the agency, need, desire, and judgment about what is worth investigating or building.
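As a minimal sketch of the gather-20-sources rule, a prompt builder might refuse to run until enough sources are collected, then surface disagreements explicitly before folding everything into the core prompt. All names and the source-dict shape here are hypothetical illustrations, not Garry's actual pipeline:

```python
from collections import Counter

def build_grounded_prompt(question, sources, min_sources=20):
    # Tokenmaxxing: don't settle for one source. Gather many, compare
    # where they disagree, and feed the broader context into the prompt.
    if len(sources) < min_sources:
        raise ValueError(f"only {len(sources)} sources; keep gathering")
    claims = Counter(s["claim"] for s in sources)
    majority, _ = claims.most_common(1)[0]
    competing = [c for c in claims if c != majority]
    lines = [f"Question: {question}",
             f"Majority view across {len(sources)} sources: {majority}"]
    if competing:
        lines.append("Sources disagree; competing claims: " + "; ".join(competing))
    lines += [f"- [{s['url']}] {s['excerpt']}" for s in sources]
    return "\n".join(lines)
```

The guard clause is the point: spending more requests up front is cheaper than answering from a single, possibly wrong, source.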
▸ 6:17 in transcript
Why does Garry Tan ask Claude for ASCII diagrams before coding?
Garry Tan asks for ASCII diagrams because they force the model to load the problem structure before it writes code. In GStack's origin story, he says Claude would sometimes get confused, write bugs, or fail to be complete. His fix was to ask, before implementation, for diagrams of data flows, inputs and outputs, user flows, and error messages. He also names state machines, dependency graphs, processing pipelines, and decision trees as useful diagram types. Once Claude drew those structures, Garry says it loaded all of the context in and did the work more completely. In the same passage, he ties this to plan-eng-review, architecture review, code quality, and tests. For him, the diagrams are not decoration. They are a planning device that helps the agent boil the ocean better before touching code.
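The diagram types he lists can be collected into a reusable diagrams-first planning prompt. A small sketch, with function and constant names that are illustrative rather than from the episode:

```python
# Diagram types Garry names as useful pre-coding planning artifacts.
DIAGRAM_TYPES = [
    "data flow (inputs and outputs)",
    "user flow",
    "error messages and failure paths",
    "state machine",
    "dependency graph",
    "processing pipeline",
    "decision tree",
]

def planning_prompt(feature: str) -> str:
    # Ask for ASCII diagrams first so the model loads the problem
    # structure before it writes any code.
    asks = "\n".join(f"  {i}. {d}" for i, d in enumerate(DIAGRAM_TYPES, 1))
    return (
        f"Before writing any code for: {feature}\n"
        f"Draw ASCII diagrams of:\n{asks}\n"
        "Wait for review of the diagrams, then implement."
    )
```

Pasting the output of `planning_prompt("RSS import")` into a session makes the agent draw structure first and code second, which is the whole trick.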
▸ 10:07 in transcript
What does thin harness, fat skills mean for AI agents?
Thin harness, fat skills means reusing the agent loop and putting more judgment into plain-English skills. Garry says the harness is the core loop that takes user input, gives it to the LLM, runs tool calls, and executes what the model decides. He argues builders should not keep rebuilding that loop when strong harnesses already exist. The differentiator should be the markdown: the checklists and instructions that teach the agent how to do the work, like a wedding planner writing down how to run another wedding. He contrasts that with deterministic code, which executes exact steps but does not understand special cases, motivation, or who the user is. The hard part of agentic engineering is deciding what belongs in LLM latent space, what belongs in code, and then backing the system with unit and integration tests.
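The loop he describes fits in a few lines. Everything below, the stub `llm` callable and the decision-dict shape, is an assumed interface for illustration, not any specific harness's API; the skill markdown carries the judgment, the loop stays generic:

```python
def run_agent(user_input, llm, tools, skill_markdown, max_steps=10):
    # Thin harness: a generic loop that feeds input to the LLM, executes
    # whatever tool call the model decides on, and loops. The "fat" part
    # is skill_markdown, the plain-English checklist passed as context.
    messages = [{"role": "system", "content": skill_markdown},
                {"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = llm(messages)  # assumed shape: {"tool", "args"} or {"content"}
        if decision.get("tool"):
            result = tools[decision["tool"]](**decision.get("args", {}))
            messages.append({"role": "tool", "content": str(result)})
        else:
            return decision["content"]
    return "step limit reached"
```

Swapping in a different skill file changes what the agent is good at without touching this loop, which is the reuse Garry is arguing for.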
▸ 21:29 in transcript
What is Garry Tan's personal AI warning about corporate feeds?
Garry Tan frames personal AI as a choice between user-controlled tools and opaque corporate systems. He says every person will eventually have their own AI, with their own data, integrations, prompts, and control over what they see. The alternative is a hosted system that resembles a Facebook feed, where users do not know who wrote the algorithm, who benefits from it, or what business model sits behind it. He compares the coming shift to the personal computer revolution. In his view, owning prompts matters because a product manager or developer below the API line will not understand a user's needs or what that person uniquely cares about. That is why he repeats the episode's opening question: whether people will control their tools, or their tools will control them.
▸ 34:31 in transcript
Answers are AI-generated from the transcript and may contain errors. Tap a question to verify against the source.