All-In Podcast

Sam Altman: Getting Fired (and Re-Hired) by OpenAI, Agents, AI Copyright issues

(0:00) Welcoming Sam Altman to the show!
(2:28) What's next for OpenAI: GPT-5, open-source, reasoning, what an AI-powered iPhone competitor could look like, and more
(21:56) How advanced agents will change the way we interface with apps
(33:01) Fair use, creator rights, why OpenAI has stayed away from the music industry
(42:02) AI regulation, UBI in a post-AI world
(52:23) Sam breaks down how he was fired and re-hired, why he has no equity, dealmaking on behalf of OpenAI, and how he organizes the company
(1:05:33) Post-interview recap
(1:10:38) All-In Summit announcements, college protests
(1:19:06) Signs of innovation dying at Apple: iPad ad, Buffett sells 100M+ shares, what's next?
(1:29:41) Google unveils AlphaFold 3.0

Follow Sam: https://twitter.com/sama

Follow the besties:
https://twitter.com/chamath
https://twitter.com/Jason
https://twitter.com/DavidSacks
https://twitter.com/friedberg

Follow on X: https://twitter.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg
Intro Video Credit: https://twitter.com/TheZachEffect

Referenced in the show:
https://twitter.com/EconomyApp/status/1622029832099082241
https://sacra.com/c/openai
https://twitter.com/tim_cook/status/1787864325258162239
https://openai.com/index/introducing-the-model-spec
https://twitter.com/SabriSun_Miller/status/1788298123434938738
https://www.archives.gov/founding-docs/bill-of-rights-transcript
https://twitter.com/ClayTravis/status/1788312545754825091
https://www.inc.com/bill-murphy-jr/warren-buffett-just-sold-more-than-100-million-shares-of-apple-reason-why-is-eye-opening.html
https://www.youtube.com/watch?v=snbTCWL6rxo
https://www.digitimes.com/news/a20240506PD216/apple-ev-startup-genai.html
https://www.theonion.com/fuck-everything-were-doing-five-blades-1819584036
https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model

#allin #tech #news

Jason Calacanis (host) · Sam Altman (guest) · Chamath Palihapitiya (host) · David Friedberg (host)

May 10, 2024 · 1h 43m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Sam Altman on AGI, OpenAI Turmoil, AI Law, and Biology’s Future

  1. Sam Altman joins the All-In Podcast to discuss OpenAI’s product roadmap, business strategy, and the dramatic boardroom episode that briefly ousted him as CEO. He emphasizes a shift from big, punctuated model releases toward continuously improving AI systems, and stresses the importance of lowering latency and cost so advanced models can reach free users. Altman dives into open vs. closed source models, AI copyright and artist rights, safety and regulation, and what truly useful AI assistants and agents might look like. The episode closes with a debrief among the hosts, including reactions to Altman’s comments, Google’s AlphaFold 3 breakthrough, and the broader trajectory of AI and Big Tech.

IDEAS WORTH REMEMBERING

5 ideas

OpenAI is shifting from big version jumps to continuous improvement

Altman downplays the idea of a single, splashy GPT‑5 launch and instead points to how much GPT‑4 has quietly improved since release. He suggests future progress will look less like 3→4→5 and more like a continuously upgraded intelligence layer that just keeps getting better for users. This implies product teams should design around a moving target: capabilities will steadily increase without clear “version boundaries,” so roadmaps and integrations should assume ongoing, incremental gains rather than rare step changes.

Cost and latency are central bottlenecks—and major opportunities

Serving GPT‑4-class models to free users is still “very expensive,” and latency and throughput remain constrained by NVIDIA GPU supply, chip fabrication, data centers, and energy. Altman expects “huge algorithmic gains” that could effectively double usable compute by making models twice as efficient, alongside long-term hardware and energy scale-up. Builders should anticipate that (1) per-token costs will fall substantially; (2) near-instant interactions will become normal; and (3) entirely new real-time, embedded AI experiences (voice, agents, always-on assistants) will become feasible as these constraints relax.

OpenAI’s moat is the full intelligence layer, not just model weights

Altman is explicit that OpenAI’s ambition is not merely to have the “smartest set of weights,” but to provide a useful intelligence layer: product, tooling, reliability, safety, price, and ecosystem. He expects many capable models to exist—open and closed—and believes OpenAI’s enduring value will come from product quality and scaffolding around the models. For startups, this means competing only at model level is risky; more durable opportunities lie in specialized products, UX, verticalization, and orchestration of models and tools rather than raw model training.

Assistants should act like senior employees, not sycophantic alter-egos

Altman contrasts two AI futures: (1) an AI “extension of self” that acts as your ghost or alter-ego, and (2) a distinct, highly competent “senior employee” that pushes back, reasons, and sometimes refuses tasks. He strongly prefers—and expects—the second path. This is a design spec: successful agents will need to model user goals, understand constraints, offer warnings and alternatives, and exercise judgment, not just obey prompts. Product builders should embrace friction and pushback where appropriate rather than striving for blind compliance.

IP fights will increasingly move from training to inference-time behavior

While OpenAI believes it has a legally “reasonable position” on current training data use, Altman predicts that the real controversies will increasingly center on what models are allowed to do at inference time. Even if a model never trained on Taylor Swift’s songs, generating “a song in the style of Taylor Swift” raises distinct economic and ethical questions about likeness, style, and compensation. He highlights opt-in/opt-out controls and new economic models (similar to sampling in music) as likely components of future solutions—critical context for media, creators, and platforms.

WORDS WORTH SAVING

5 quotes

What we're trying to do is not make the sort of smartest set of weights that we can, what we're trying to make is this useful intelligence layer for people to use.

Sam Altman

Intelligence is just this emergent property of matter, and that's like a rule of physics or something.

Sam Altman

I want a great senior employee… someone who will sometimes not do something I ask, or say, ‘I can do that if you want, but here’s what I think would happen… are you really sure?’

Sam Altman

Even if these people were true world experts, I don't think they could get [AI law] right looking out 12 or 24 months.

Sam Altman

I wonder if the future looks more like universal basic compute than universal basic income, and everybody gets a slice of GPT‑7’s compute.

Sam Altman

OpenAI model roadmap, GPT-5, and continuous improvement of GPT-4
Costs, latency, chips, and AI infrastructure (compute, energy, supply chain)
Open vs. closed source AI, on-device models, and business strategy
Agents, assistants, new device form factors, and app ecosystem disruption
Training data, copyright, artist rights, and inference-time IP issues
AI safety, regulation, and the boardroom crisis that fired and rehired Altman
UBI vs. “universal basic compute” and long-term socioeconomic impacts

High quality AI-generated summary created from speaker-labeled transcript.
