OpenAI Rebooted: What's Next for the Company?

Pivot · Nov 30, 2023 · 9m

Kara Swisher (host), Scott Galloway (host)

Sam Altman’s reinstatement as OpenAI CEO and board restructuring
Tension between OpenAI’s original safety mission and its $90B commercial value
Limits of ESG, social-benefit structures, and corporate virtue signaling
Role of government regulation versus self-governance in tech and AI
Microsoft’s influence and expected board presence at OpenAI
Composition and expertise needed on a reformed OpenAI board
Critique of the effective altruism movement and tech-elite philosophies

In this episode of Pivot, hosts Kara Swisher and Scott Galloway examine what OpenAI’s turbulent reboot reveals about the clash between capital, safety, and governance.

OpenAI’s Turbulent Reboot Sparks Clash Between Capital, Safety, And Governance

Kara Swisher and Scott Galloway dissect Sam Altman’s rapid return as OpenAI CEO, the near-total overhaul of its board, and Microsoft’s strengthened influence after the governance crisis. They frame the drama as a broader battle between capital and humanity, arguing that OpenAI’s original safety-first mission has been sidelined by enormous commercial value. The conversation critiques ESG, social-benefit branding, and effective altruism as largely cosmetic or elitist substitutes for real democratic regulation. They also debate what kind of people should join OpenAI’s new board to balance technical expertise, media savvy, disinformation knowledge, and genuine concern for societal harms.

Key Takeaways

OpenAI’s crisis revealed the triumph of commercial interests over its safety-first origins.

Galloway characterizes the episode as “capital smothering humanity,” arguing that once OpenAI became a $90B enterprise, its original quasi-nonprofit mission was inevitably marginalized.

For‑profit companies are excellent at making money but poor guardians of the public interest.

The hosts contend that corporations should not be trusted with broad social or ethical mandates and that relying on corporate conscience or complex structures is naive.

Government regulation, not ESG branding, is essential for managing AI’s societal risks.

They argue that ESG, social-benefit labels, and “Sandbergian” narratives serve as distractions that delay real oversight; robust democratic institutions and fair taxation are needed to regulate powerful tech firms.

Microsoft emerged stronger but exposed, highlighting the need for direct governance power.

Nadella’s response and Microsoft’s likely board seat at OpenAI show how vulnerable major investors are without formal governance rights in critical AI infrastructure.

OpenAI’s board must combine deep technical expertise with societal and media literacy.

Swisher and Galloway call for board members who understand AI, disinformation, copyright/media issues, and real-world harms—potentially including academics, media figures, and practitioners who aren’t typical “tech bros.”

Effective altruism has suffered a major reputational blow in the wake of OpenAI and FTX.

They describe EA as elitist and quasi‑cultish, with a small group claiming to know what’s best for humanity, and link it rhetorically to Sam Bankman-Fried’s downfall and Silicon Valley’s recurring grandiose ideologies.

Lived experience of harm should inform AI governance and product design.

Galloway echoes Swisher’s view that designers often lack first‑hand experience with online abuse and threats, leading to inadequate protections; more diverse perspectives at the table could change product priorities.

Notable Quotes

At a very reductive analysis, you have capital versus humanity, and I would argue that capital literally smothered humanity in its sleep.

Scott Galloway

For‑profit companies are so amazing at generating profits, they shouldn’t be trusted to do anything else.

Scott Galloway

All these Byzantine structures and virtue signaling are nothing but a Sandbergian move to serve as a weapon of mass distraction.

Scott Galloway

It’s a small group of people who knows what’s best for the rest of humanity. I feel like I’d rather have the dirty mass do it.

Kara Swisher

We need democratic institutions to regulate these companies…and then let’s use those taxes to hire outstanding people who try to prevent a tragedy to the commons.

Scott Galloway

Questions Answered in This Episode

How can OpenAI realistically balance its commercial momentum with its original safety-focused mission without neutering either goal?

What specific forms of government regulation would effectively manage AI risks without stifling innovation?

How should major investors like Microsoft structure their governance rights to avoid being blindsided by nonprofit-style boards in future critical tech ventures?

What criteria should guide the selection of new OpenAI board members to ensure real diversity of expertise, experience, and power?

Does the collapse in credibility of effective altruism signal a broader backlash against elite-driven tech philosophies, and what might replace them?

Transcript Preview

Kara Swisher

Sam Altman is back as the CEO of OpenAI following that whirlwind of chaos last week, which I think we did a very good job covering it. Um, OpenAI's board is getting a major rehaul with nearly all of its members replaced. Not all of them though. The initial board includes Bret Taylor as chairman, former Treasury Secretary Larry Summers. Where did he come from? And Adam D'Angelo, the only current board member remaining. I understand he was quite stubborn about that. Uh, Sam appears to be ready to head back to work, posting on X, "With the new board and with Satya's support, I'm looking forward to returning to OpenAI and building on our strong partnership with Microsoft." It was interesting that was his first tweet. Um, w- how do you look at it? Back, looking back at it, who's the winners, losers, et cetera? I hate to use that reductive a term but it really, there really, a lot was happening here.

Scott Galloway

I think in the fullness of time, the, the thing that happened here, or what I've been thinking a lot about is that if you think about the initial mission of OpenAI, I don't even think it was supposed to be a company. I think they initially saw it as a research institute or a think tank that would help, that would study and analyze and make recommendations around the possibilities and more importantly the dangers of AI. And then they discovered, "We're really good at this," and then all of a sudden it, it was a company that became worth $90 billion. So if you look at the shareholder side of this and the products and the value, the economic value they create, call that capital, and then you look at the structure, and a big component of the structure and a big part of their mission was to think about humanity or AI in the context of humanity. So at a very reductive analysis, you have capital versus humanity, and I would argue that capital literally, literally smothered humanity in its sleep. Whether you think it was a good decision or a bad decision, but when there was $90 billion in an amazing product and the leader in the most exciting new technology emerged, all that fun, nice stuff, important stuff about humanity and the fears of AI, that shit got smothered in its sleep.

Kara Swisher

I'm not sure it got smothered, Scott. I think it got, "Let's put this over in..." Yeah, I get it, I get it, I get it. It's like, "Let's put this over here on the shelf."

Scott Galloway

It'll just be very interesting to see how this moment ages because if... Uh, and, and it all comes back to the same thing for me, and that is for-profit companies are so amazing at generating profits, they shouldn't be trusted to anything else. They shouldn't be trusted to do anything else.

Kara Swisher

Mm-hmm. Fair.

Scott Galloway

What are they called? Social purpose, um, companies. No, private benefit, social benefit companies. ESG investing. I think it's the ultimate Sheryl Sandbergian move where we're gonna pretend that capitalism and the market can work things out on their own, that when people buy Patagonia or they buy dolphin-friendly tuna or they learn about transparency or some fake organization says that Southwest Airlines is an ESG (laughs) investment, it dilutes the need for democratic institutions to regulate these organizations.
