OpenAI Battles Safety Concerns and High-Profile Exits | Pivot

Pivot · May 21, 2024 · 8m

Kara Swisher (host), Scott Galloway (host)

Topics

- Jan Leike’s resignation and the dismantling of OpenAI’s Superalignment (AI safety) team
- OpenAI’s internal culture clash: safety concerns versus product speed and profit
- Stringent off‑boarding, NDAs, and equity forfeiture provisions for departing employees
- The evolution of OpenAI from mission‑driven lab to aggressive, for‑profit powerhouse
- Comparisons to Meta and other tech firms’ treatment of trust and safety teams
- The myth of ‘ethical’ founder‑heroes versus the realities of capitalism
- The need for government regulation versus relying on tech leaders’ self‑regulation

OpenAI’s Safety Revolt Exposes Tension Between Idealism, Profit, And Power

Kara Swisher and Scott Galloway dissect recent turmoil at OpenAI, including the resignation of Superalignment head Jan Leike and the dissolution of the AI safety team. They frame the conflict as a fundamental clash between safety-focused idealists and profit-driven executives racing to dominate AI. The hosts argue OpenAI is revealing its true nature as a hard‑nosed, for‑profit company, despite its origins and rhetoric around safety and AGI risk. They conclude that real accountability must come from regulation and external pressure, not from the self‑policing of tech leaders or internal safety teams.

Key Takeaways

OpenAI is prioritizing rapid product rollout over internal safety structures.

Leike’s resignation and the dissolution of Superalignment signal that safety work has been subordinated to shipping “shiny products” in the competitive race against big tech rivals.

Internal safety teams often function more as optics than as power centers.

Galloway likens AI safety and trust teams to Meta’s trust and safety units—built under public pressure, then sidelined or cut—suggesting they rarely have lasting influence over core business decisions.

OpenAI’s legal and equity terms reveal a hard‑edged, profit‑first culture.

The aggressive NDA and non‑disparagement clauses tied to vested equity, even if not enforced, show a willingness to use economic leverage to control ex‑employees’ speech.

Founders’ ethical branding should not substitute for external regulation.

The hosts argue that seeing Sam Altman as a uniquely responsible leader dulls public urgency for legislation, when in fact his job is to win commercially, not to regulate himself.

Mission‑driven origin stories can set companies up for backlash.

OpenAI’s initial quasi‑nonprofit, safety‑first posture clashes with its current for‑profit aggression, creating disillusionment among idealistic employees who expected values to trump revenue.

Employees who truly prioritize safety may be more effective outside big firms.

Swisher suggests that safety advocates should leave companies like OpenAI, organize independently, write, and lobby Congress directly rather than expect to win internal power struggles.

Capitalism, not techno‑heroism, is the core driver of AI development.

Both hosts stress that investors, employees, and founders are ultimately aligned around money, status, and winning the market, making profit the default logic shaping AI’s trajectory.

Notable Quotes

This is an issue, again, of speed versus safety.

Kara Swisher

They are a for‑profit company, and they’re not pretending to be anything.

Scott Galloway

Any group of people that decides to call themselves Superalignment should be fired.

Scott Galloway

They’re gonna have these things because what they wanna do is pretend that they care about the safety stuff… but they don’t.

Kara Swisher

And so it was capitalism after all.

Kara Swisher

Questions Answered in This Episode

How can AI safety research be meaningfully empowered within profit‑driven companies rather than relegated to optics?

What kinds of regulation would effectively check firms like OpenAI without stifling useful innovation?

Are safety‑first AI labs like Anthropic structurally more capable of prioritizing ethics, or will investor pressure reshape them similarly over time?

How should employees who care deeply about AI risk decide whether to stay inside big AI companies or leave and push from the outside?

What responsibilities, if any, do founders like Sam Altman have beyond maximizing shareholder value when their products may reshape society?

Transcript Preview

Kara Swisher

Things are getting messy at OpenAI again. Oh, man, is this the most, like, telenovela of a company? The company had another high-profile departure this week with the resignation of Jan Leike, uh, the head of Superalignment, which is the team focused on s- A- AI safety. That's the words. We're gonna be all super aligned. Uh, Leike explained his departure in a series of social media posts saying in part that OpenAI's safety culture and processes have taken a backseat to shiny products, and there's been a bunch of shiny products they showed off last week. Uh, Open- it's not- the things are not unrelated. Um, OpenAI has dissolved that Superalignment team. The company told Bloomberg the group will be integrated across research efforts to help achieve safety goals. Uh, S- Sam Altman put out a, a, a statement, as did OpenAI co-founder Greg Brockman, uh, sharing their view of the p- the future. They said the company has "raised awareness of the risks and opportunities of AGI so the world can better prepare for it." Whether they're preparing for it or not is a big question. There have been at least 11 high-profile exits in the last few months. Um, you know, this is an issue, again, of speed versus safety. They have been rolling out the products 'cause they're deathly terrified of getting rolled over by the big companies. I can feel it. I can feel such a Netscape moment for them. Um, you know, uh, we'll see. Uh, we'll see what'll happen here. But it's definitely a company still shaking off or dealing with these issues that they've had, these two different types of people who are, um, involved in this company, which is some that, uh, think this is a, a risk to humanity, others who are like, "Calm the fuck down. Let's make some stuff and we'll figure it out later." Um, uh, one of the things that got a lot of reporting was OpenAI's off-boarding agreements that have non-disclosure and non-disparagement provisions. Not uncommon, but theirs were particularly stringent.
If a departing employee violated these provisions, they were in danger of losing all their vested equity, according to Vox. Sam Altman confirmed in a tweet there was a provision about potential equity cancellation for departing employees, but it was never enforced. The company is currently changing that language. Uh, it sounds like they're just, like, just tough customers on that thing. Um, it goes further than other people do. It's usually more talent-friendly in general in Silicon Valley. Scott, what are your thoughts on all this?

Scott Galloway

When Ilya was part of the board that-

Kara Swisher

Sutskever.

Scott Galloway

Yeah.

Kara Swisher

Yeah.

Scott Galloway

When, when he was part of the board that fired Sam Altman, (laughs) if you're gonna-

Kara Swisher

Mm-hmm.

Scott Galloway

... stab their prince, you, you better kill him. When he came back-

Kara Swisher

Yeah.

Scott Galloway

... Ilya became the information age equivalent of Prigozhin. He was dead man walking. He just wasn't-
