Sam Altman: Getting Fired (and Re-Hired) by OpenAI, Agents, AI Copyright issues

All-In Podcast · May 10, 2024 · 1h 43m

Sam Altman (guest), Jason Calacanis (host), Chamath Palihapitiya (host), David Sacks (host), David Friedberg (host)

OpenAI model roadmap, GPT-5, and continuous improvement of GPT-4
Costs, latency, chips, and AI infrastructure (compute, energy, supply chain)
Open vs. closed source AI, on-device models, and business strategy
Agents, assistants, new device form factors, and app ecosystem disruption
Training data, copyright, artist rights, and inference-time IP issues
AI safety, regulation, and the boardroom crisis that fired and rehired Altman
UBI vs. “universal basic compute” and long-term socioeconomic impacts

Sam Altman on AGI, OpenAI Turmoil, AI Law, and Biology’s Future

Sam Altman joins the All-In Podcast to discuss OpenAI’s product roadmap, business strategy, and the dramatic boardroom episode that briefly ousted him as CEO. He emphasizes a shift from big, punctuated model releases toward continuously improving AI systems, and stresses the importance of lowering latency and cost so advanced models can reach free users. Altman dives into open vs. closed source models, AI copyright and artist rights, safety and regulation, and what truly useful AI assistants and agents might look like. The episode closes with a debrief among the hosts, including reactions to Altman’s comments, Google’s AlphaFold 3 breakthrough, and the broader trajectory of AI and Big Tech.

Key Takeaways

OpenAI is shifting from big version jumps to continuous improvement

Altman downplays the idea of a single, splashy GPT‑5 launch and instead points to how much GPT‑4 has quietly improved since release. ...

Cost and latency are central bottlenecks—and major opportunities

Serving GPT‑4-class models to free users is still “very expensive,” and latency and throughput remain rate-limited by NVIDIA, chips, data centers, and energy. ...

OpenAI’s moat is the full intelligence layer, not just model weights

Altman is explicit that OpenAI’s ambition is not merely to have the “smartest set of weights,” but to provide a useful intelligence layer: product, tooling, reliability, safety, price, and ecosystem. ...

Assistants should act like senior employees, not sycophantic alter-egos

Altman contrasts two AI futures: (1) an AI “extension of self” that acts as your ghost/alter-ego, and (2) a distinct, highly competent ‘senior employee’ that pushes back, reasons, and sometimes refuses tasks. ...

IP fights will increasingly move from training to inference-time behavior

While OpenAI believes it has a legally “reasonable position” on current training data use, Altman predicts that the real controversies will increasingly center on what models are allowed to do at inference time. ...

AI regulation should target extreme frontier risks, not everyday models

Altman argues for international oversight focused narrowly on future frontier systems capable of catastrophic harm, e.g. ...

The OpenAI board coup was mission-driven but badly executed

On his firing, Altman describes being abruptly removed by the non-profit board (after they also removed Greg Brockman), then courted to return amid staff and investor revolt. ...

Notable Quotes

What we're trying to do is not make the sort of smartest set of weights that we can, what we're trying to make is this useful intelligence layer for people to use.

Sam Altman

Intelligence is just this emergent property of matter, and that's like a rule of physics or something.

Sam Altman

I want a great senior employee… someone who will sometimes not do something I ask, or say, ‘I can do that if you want, but here’s what I think would happen… are you really sure?’

Sam Altman

Even if these people were true world experts, I don't think they could get [AI law] right looking out 12 or 24 months.

Sam Altman

I wonder if the future looks more like universal basic compute than universal basic income, and everybody gets a slice of GPT‑7’s compute.

Sam Altman

Questions Answered in This Episode

You suggested that future fights will center on inference-time behavior rather than training data; concretely, what rules or mechanisms would you support to govern ‘style-based’ prompts like “in the style of Taylor Swift” without stifling legitimate inspiration and parody?

You’ve said reasoning is the missing piece for transformative applications like scientific discovery—what specific technical bets (architectural changes, training regimes, tool integrations) are you most excited about to move from today’s pattern-matching LLMs to genuinely robust reasoning systems?

Looking back at the November board crisis with some distance, what governance structure—board composition, veto rights, alignment checks—do you now believe is optimal for an AGI-focused organization, and what would you change if you were designing OpenAI’s governance from scratch?

You floated ‘universal basic compute’ as an alternative to universal basic income; how might that actually be implemented in practice (allocation, markets, regulation), and what prevents a few large platforms from capturing all of that AI-generated value anyway?

You distinguish between an AI ‘alter ego’ and a ‘senior employee’ that pushes back—what safeguards and design principles are needed to ensure that assistants don’t become either manipulative gatekeepers or overly deferential tools, especially when they’re mediating access to critical services like healthcare or finance?

Transcript Preview

Jason Calacanis

I first met our next guest, Sam Altman, almost 20 years ago when he was working on a local mobile app called Loopt. We were both backed by Sequoia Capital, and in fact, we were both in the first class of Sequoia Scouts. He did investment in a little unknown fintech company called Stripe, I did Uber, and in that tiny experimental fund-

Sam Altman

You did Uber? I've never heard that before. (laughs)

Chamath Palihapitiya

Yeah. (laughs) I think so. Possible.

David Sacks

Cool.

Chamath Palihapitiya

I've got the starting ready. (laughs)

David Friedberg

You should write a book, Jacob. (laughs)

Chamath Palihapitiya

Maybe. What's going on? Let your winners ride.

Jason Calacanis

Rain Man, David Sa-

Chamath Palihapitiya

What's going on?

David Sacks

And it's sad. We open sourced it to the fans, and they've just gone crazy with it.

Chamath Palihapitiya

Love you Betsy.

David Sacks

What's his... queen of quinoa.

Chamath Palihapitiya

Going all in.

Jason Calacanis

That tiny experimental fund that Sam and I were part of as Scouts is Sequoia's highest multiple returning fund. Couple of low digit millions turned into over 200 million, I'm told.

Sam Altman

Really?

Jason Calacanis

And then he did... Yeah, that's what I was telling you-

Sam Altman

Wow.

Jason Calacanis

... about Rudolph, yeah. And he did a stint at Y Combinator, where he was president from 2014 to 2019. In 2016, he co-founded OpenAI with the goal of ensuring that artificial general intelligence benefits all of humanity. In 2019, he left YC to join OpenAI full-time as CEO. Things got really interesting on November 30th of 2022. That's the day OpenAI launched ChatGPT. In January 2023, Microsoft invested $10 billion. In November 2023, over a crazy five-day span, Sam was fired from OpenAI, everybody was gonna go work at Microsoft, a bunch of heart emojis went viral on X/Twitter, and people started speculating that the team had reached artificial general intelligence, the world was gonna end, and suddenly, a couple days later, he was back to being the CEO of OpenAI. In February, Sam was reportedly looking to raise $7 trillion for an AI chip project. This, after it was reported that Sam was looking to raise a billion from Masayoshi-san to create an iPhone killer with Jony Ive, the co-creator of the iPhone. All of this while ChatGPT has become better and better, and a household name. It's having a massive impact on how we work and how work is getting done, and it's reportedly the fastest product to hit 100 million users in history, in just two months. And check out OpenAI's insane revenue ramp-up: they reportedly hit $2 billion in ARR last year. Welcome to the All-In Podcast, Sam Altman.

Sam Altman

Thank you. Thank you, guys.

Jason Calacanis

Sacks, you wanna lead us off here?

David Sacks

Okay, sure. I mean, I think the whole industry is waiting with bated breath for the release of GPT-5. I guess it's been reported that it's launching sometime this summer, but that's a pretty big window. Can you narrow that down? I guess, where are you in the release of GPT-5?