OpenAI Shares New Details About History with Elon Musk

Pivot · Mar 8, 2024 · 11m

Kara Swisher (host), Scott Galloway (host)

- OpenAI’s legal and PR response to Elon Musk’s lawsuit
- Power, control, and ego in tech leadership and the ‘attention economy’
- OpenAI’s positioning alongside Big Tech in AI ethics pledges
- Competitive AI landscape: Anthropic, Amazon, Perplexity, and valuations
- Role of government, regulation, and antitrust enforcement in AI
- Influence of powerful AI leaders on policy and public messaging
- AI-driven risks: political disinformation and radicalization of young men

OpenAI Confronts Elon Musk Lawsuit Amid AI Power, Hype, and Risks

Kara Swisher and Scott Galloway analyze OpenAI’s aggressive legal and PR response to Elon Musk’s lawsuit, including the release of emails that portray Musk as primarily seeking control and wealth rather than pure altruism.

They frame Musk’s behavior as part of a broader ‘attention economy’ in which powerful tech figures chase visibility over credibility, and note growing pushback from other tech leaders against his conduct.

The conversation broadens to competitive dynamics in AI—Anthropic, Perplexity, Big Tech—and the limits of industry self-regulation, emphasizing the need for serious government involvement and antitrust enforcement.

Galloway then outlines what he sees as the two biggest AI risks: sophisticated election disinformation and the radicalization of vulnerable young men, including in the military, via AI-driven algorithms and companions.

Key Takeaways

OpenAI is reframing Elon Musk’s lawsuit as a dispute over control, not mission.

By releasing old emails and detailing Musk’s demands for majority equity, CEO status, and board control, OpenAI positions his legal claims as inconsistent with his self-styled image as a purely mission-driven altruist.

In the attention economy, visibility often trumps expertise and integrity.

Swisher and Galloway argue that figures like Musk and Bill Ackman feel compelled to opine on everything—regardless of domain expertise—because the system rewards constant attention, not thoughtful restraint.

Tech leaders are beginning to publicly push back on Musk’s impact on ‘brand tech.’

Comments from people like Dustin Moskovitz signal a willingness among some in tech to call out Musk’s behavior as harmful to the industry’s broader reputation, rather than silently tolerating it.

Industry pledges on ‘safe’ AI are strategically useful but structurally weak.

The multi-company open letter about building AI for a better future is treated as smart optics but largely symbolic, underscoring the need for real regulation rather than relying on voluntary virtue signaling.

Government capacity and antitrust enforcement are under-resourced at a critical moment.

The reported clawback of funds earmarked for the DOJ’s antitrust division is cited as a serious setback, given simultaneous major tech antitrust cases and the growing concentration of AI power.

AI governance must account for the self-interest of corporate leaders.

Galloway cautions policymakers that even thoughtful figures like Sam Altman or Jensen Huang ultimately represent shareholder interests, so advisory structures must prioritize the public interest, not corporate strategies.

The most acute near-term AI risks are election disinformation and radicalization.

They highlight realistic deepfakes that subtly amplify fears about political leaders, and the potential for AI-driven algorithms and ‘AI girlfriends’ to further isolate and radicalize vulnerable young men, including within the armed forces.

Notable Quotes

You gave us some money. You then wanted to be the CEO. You wanted to be the majority shareholder. And it’s clear you weren’t doing this for humanity, you were doing it for control and wealth.

Scott Galloway

It doesn’t even matter how heinous or stupid or irrational my take is, as long as I’m in the news. We’re in an attention economy.

Scott Galloway

These guys, let me just tell you, they’re just like him in the ways they fight back. They don’t roll over. And so this was a not-roll-over thing.

Kara Swisher

You have to get people on this board and advising you that are totally charged with protecting the commonwealth, not shareholders.

Scott Galloway

I think the biggest security threat to America, in terms of really the homeland, is a group of young men who are susceptible to AI algorithms that will weaponize them and radicalize them.

Scott Galloway

Questions Answered in This Episode

How might OpenAI’s public disclosure of internal emails with Elon Musk change public and legal perceptions of his lawsuit?

What mechanisms—beyond PR letters—are realistically available to ensure AI companies prioritize societal well-being over shareholder value?

How should policymakers balance input from powerful AI executives with independent voices that are not financially tied to AI profits?

What concrete steps can governments and platforms take now to mitigate AI-driven election disinformation before it becomes pervasive?

How can society address the underlying social and economic conditions that make young men particularly vulnerable to AI-enabled radicalization?

Transcript Preview

Kara Swisher

OpenAI says it intends to move to dismiss all of Elon Musk's recent legal claims. Uh, the company shared more about their mission and their relationship with Elon in a blog post on Tuesday, which included a number of old emails. I love that they used them. I used some of his in my book so you could understand what happened. According to OpenAI, they decided, uh, with Elon in late 2017 that the next step for the company was to create a for-profit entity, and he... it seems like he did agree. Elon wanted ma- but he wanted majority equity, initial board control, and to be CEO. What a fucking narcissist this guy is. He also withheld... He did give money to them, but he wanted everything for that. He also withheld fund- funding, uh, money during these discussions. Of course, they turned to, guess what, Reid Hoffman. There's other rich people, Elon. There's other rich people who think AI is cool. Uh, OpenAI is coming out swinging with this post. Uh, The Wall Street Journal has a piece on how the Elon-Sam bromance turned toxic, uh, which, uh, we've talked about. I've known about this for a long time. Um, I don't kn- I don't know if he's... He's gonna move forward with the suit, but these letters are kind of, um, interesting. Um, also OpenAI is on the PR move. OpenAI, Google, Meta, and 300 others signed an open letter this week pledging to build AI for a better future. This letter mentions a collective responsibility to maximize AI's benefit and limit the risks. Sam Altman posted about it on X, saying he was excited for the spirit of this letter. And, you know, it's a, it's a PR thing, of course. Um, so first, uh, uh, we'll get to that, but what do you think about, uh, their, their thing? Did you read their blog post?

Scott Galloway

I love it. I think that they're... I mean, for all the, for all the criticism, warranted criticism, that Jack Dorsey got, I actually thought his board, led by Bret Taylor, literally just dissected, picked apart, and m- just, just absolutely, um, drawn and quartered, uh, all of Elon's arguments in thoughtful, measured ways-

Kara Swisher

Mm-hmm.

Scott Galloway

... when he was trying to back out of an agreement that he, that he-

Kara Swisher

Yeah. Mm-hmm.

Scott Galloway

... contractually signed, uh, left, right, when he realized he'd overpaid for the thing in a fit of mania.

Kara Swisher

Mm-hmm.

Scott Galloway

So I think it's, uh, uh, I, I loved it. I mean, it's great reading. It's just like, okay, let's be clear. You gave us some money. You then wanted to be the CEO. You wanted to be the majority shareholder.

Kara Swisher

Mm-hmm.

Scott Galloway

And it's clear you weren't doing this for humanity, you were doing it for control and wealth.

Kara Swisher

Mm-hmm. That's correct.

Scott Galloway

And we said no-

Kara Swisher

Yeah.

Scott Galloway

... and found better options. Oh, and by the way, it worked out really well for us and our shareholders, just FYI.
