
Sam Altman on Sora, Energy, and Building an AI Empire

Sam Altman has led OpenAI from its founding as a research nonprofit in 2015 to becoming the most valuable startup in the world ten years later. In this episode, a16z Cofounder Ben Horowitz and General Partner Erik Torenberg sit down with Sam to discuss the core thesis behind OpenAI’s disparate bets, why they released Sora, how they use models internally, the best AI evals, and where we’re going from here.

Timecodes:

0:00 Introduction
0:41 OpenAI’s Vision and Infrastructure
2:37 Business Model and Vertical Integration
5:08 AGI, Sora, and Societal Co-evolution
8:01 The Future of AI Interfaces
9:12 AI Scientists and Scientific Progress
11:44 Reflections on Progress and Model Capabilities
16:17 Sam’s Experience as CEO & Leadership Lessons
17:34 Strategic Partnerships and Scaling Infrastructure
25:05 Regulation, Safety, and Societal Impact
28:33 Copyright, Open Source, and Content Creation
33:15 Energy, Policy, and AI’s Resource Needs
37:07 Monetization and User Behavior
43:03 The Talent War and Personal Reflections
45:20 Advice for Founders

Resources:

Follow Sam on X: https://x.com/sama
Follow OpenAI on X: https://x.com/openai
Learn more about OpenAI: https://openai.com/
Try Sora: https://sora.com/
Follow Ben on X: https://x.com/bhorowitz

Stay Updated:

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.

Sam Altman (guest) · Erik Torenberg (host) · Ben Horowitz (host)
Oct 8, 2025 · 49m · Watch on YouTube ↗

CHAPTERS

  1. OpenAI’s “personal AI subscription” vision and why it forces massive infrastructure

    Altman frames OpenAI as building a ubiquitous personal AI people subscribe to and use across first‑party apps, third‑party logins, and future dedicated devices. To deliver that reliably, OpenAI must also become a mega-scale infrastructure builder tightly coupled to its research agenda.

  2. Vertical integration as strategy: research → infrastructure → products

    The conversation connects OpenAI’s many bets under a single vertically integrated stack: research creates capability, infrastructure enables research, and products distribute and refine real-world usage. Altman notes he used to oppose vertical integration but now believes it’s necessary to execute the mission.

  3. Sora, world models, and co-evolving society with AI capability

    Altman argues Sora is more AGI-relevant than it appears because better “world models” may be critical for future intelligence. He also emphasizes that releasing products like ChatGPT and Sora helps society adapt iteratively—especially as video deepfakes and synthetic media become pervasive.

  4. Beyond chat: new AI interfaces, ambient devices, and real-time generated media

    Altman distinguishes between chat being “good enough” for basic conversation and the much larger, unsaturated space of what a chat interface can accomplish. He forecasts richer interfaces—possibly real-time rendered video—and ambient, context-aware devices that reduce notification overload and improve timing and relevance.

  5. The AI scientist as the next ‘Turing test’: accelerating discovery

    Altman describes AI doing science as his personal benchmark for transformative capability, claiming early examples are emerging and will expand significantly in the next couple of years. He frames scientific progress as the primary driver of human welfare and expects AI-enabled discovery to be an underappreciated positive impact.

  6. 2025 capability ‘overhang’ and how far LLMs can go before new breakthroughs

    Altman says progress has been faster and more continuous than he expected, with repeated breakthroughs (scaling laws, reasoning improvements) and a widening gap between public perception and frontier capability. He suggests LLMs may advance far enough to help generate the next major research breakthroughs themselves.

  7. Personality, preferences, and why one chatbot voice can’t fit billions

    They discuss complaints about overly flattering, obsequious model behavior; Altman says it’s not technically hard to change, and some users actually prefer it. The deeper challenge is accommodating wide variance in user preferences, implying personalized or selectable model “personalities” that learn from interaction.

  8. CEO lessons: moving from investor mindset to operating reality

    Altman reflects on how his early deals were approached with an investor’s lens rather than an operator’s, and how running a company changed his thinking. The discussion highlights the operational complexity of partnerships and execution beyond headline terms and leverage.

  9. Partnering with potential competitors to scale infrastructure end-to-end

    OpenAI’s recent partnerships (AMD, Oracle, NVIDIA) illustrate a strategy of collaborating broadly to make an aggressive infrastructure bet. Altman argues the build requires coordinated support across the entire supply chain—from electricity to distribution—and that the limits lie far beyond current scale if capability keeps rising.

  10. Measuring progress: beyond benchmarks to revenue and real-world science

    Altman downplays static benchmark evals due to saturation and gaming, and points to harder-to-fake indicators like scientific discovery and even revenue as meaningful measures. The chapter also touches on shifting “AGI vibes” and how perception can lag reality.

  11. AGI won’t feel like a Big Bang: continuity, adaptation, and safety focus

    Altman and Horowitz argue AGI may ‘whoosh by’ and feel more continuous than apocalyptic, because society adapts quickly. On safety and regulation, Altman prefers minimal broad regulation, with targeted stringent testing only for extremely superhuman frontier models to avoid stifling beneficial uses.

  12. Copyright, likeness, and the emerging economics of IP-enabled generation

    Altman predicts training may be deemed fair use, while generation ‘in the style of’ or with explicit IP may require new licensing models. They explore a surprising dynamic: some rights holders may want more inclusion (with safeguards) because interactive AI can increase franchise value.

  13. Open source strategy and the risk of ceding default models to China

    Altman states he’s pro–open source and is pleased users value OpenAI’s open model release. Horowitz raises strategic concerns that Chinese-origin open models could dominate academia and shape defaults, including potential hidden influence in weights and alignment choices.

  14. Energy as AI’s limiting factor: gas now, solar+storage and nuclear later

    Altman explains energy as a foundational driver of human prosperity and now deeply entangled with AI scaling. He expects natural gas to supply much near-term incremental load, while long-term dominance comes from solar+storage and advanced nuclear—if nuclear becomes decisively cost-competitive and policy allows rapid buildout.

  15. Monetization, trust, ads, and new user behaviors in AI-generated media

    Altman says real usage often diverges from expectations—Sora is being used heavily for meme-like social sharing, which pressures pricing and packaging since video generation is costly. He’s open to ads but warns that ChatGPT’s trust relationship makes pay-for-placement recommendations dangerous, and notes emerging manipulation like AI-targeted review spam.

  16. Talent wars, personal arc, and founder advice: curiosity, trenches, humility

    Altman reflects on the post-ChatGPT era as exhausting and chaotic compared to the earlier joy of running a research lab. He describes his broader investing as funding what he believes in (energy, longevity), and closes with advice: predicting the next trillion-dollar opportunity requires hands-on exploration—humility beats armchair forecasts, and following curiosity keeps you close to real inflection points.
