
AI Doom vs Boom, EA Cult Returns, BBB Upside, US Steel and Golden Votes
Jason Calacanis (host), Chamath Palihapitiya (host), David Friedberg (host), David Sacks (host), Narrator
In this episode of the All-In Podcast, Jason Calacanis, Chamath Palihapitiya, David Friedberg, and David Sacks debate AI doomerism, Effective Altruist power plays, the "Big Beautiful Bill," and the Nippon Steel–US Steel deal.
AI Panic, EA Power Plays, and America’s Economic Crossroads Debated
The hosts dissect the growing wave of AI doomerism, arguing that while AI poses genuine risks, a tight network of Effective Altruists, Anthropic, and former Biden officials is amplifying fear to justify heavy-handed global AI regulation and regulatory capture.
They contrast this with an accelerationist view: AI will massively boost productivity, spur capital deployment, lower costs for consumers, and ultimately create more and better jobs—even as some roles, especially entry-level and driving, are disrupted.
The conversation then pivots to U.S. fiscal policy and the so‑called “Big Beautiful Bill,” unpacking how mandatory vs. discretionary spending, GDP growth assumptions, and energy policy determine whether America can escape its debt spiral.
Finally, they debate industrial policy: Trump’s approval of the Nippon Steel–US Steel deal, the use of “golden votes” in strategic sectors, and whether government should take equity-style upside in national champions or stay out of markets entirely.
Key Takeaways
AI doomer narratives are being used to justify sweeping, centralized AI regulation.
Sacks argues that extreme claims about imminent mass job loss, bioweapon creation, or guaranteed extinction (X-risk) go far beyond available evidence and are tightly correlated with Anthropic fundraising milestones. ...
AI is likely to be a powerful job creator and productivity engine rather than a pure destroyer of work.
Friedberg frames AI as a massive increase in return on invested capital: one engineer can produce 20–50x more code, which makes new projects economically attractive rather than obsolete. ...
Job loss will be uneven—entry-level and monolithic roles are most exposed, but multifaceted white‑collar jobs are harder to fully automate.
Driving and level‑one support are cited as rare cases where entire roles may largely disappear. ...
The real AI threat may be government overreach and AI‑powered state control, not just rogue superintelligence.
Sacks argues the only AI dystopia we have concrete evidence for is government and Big Tech using AI to enforce ideological narratives and social control—evidenced by “woke AI” behavior and DEI mandates in the Biden AI executive order. ...
The U.S.–China AI race is real, but it’s an infinite, dual‑use competition, not a one‑time finish line.
Friedberg argues AI, like the Industrial Revolution or the internet, will ultimately enrich multiple countries without a single "winner." ...
Escaping the U.S. debt spiral hinges more on GDP growth and energy capacity than on marginal tax‑rate tweaks.
On the “Big Beautiful Bill,” they explain that reconciliation can only touch mandatory spending; DOGE cuts (discretionary) must be done separately, so blaming the bill for omitting them misunderstands Senate rules. ...
Industrial policy is back: strategic sectors may need protection and upside‑sharing, but how government participates is contested.
Trump’s green‑lighting of Nippon Steel’s US Steel acquisition, framed as a “partial ownership controlled by the USA,” sparks a debate over national champions. ...
Notable Quotes
“The threshold question is: should you fear government overregulation or should you fear autocomplete? And I would say you should not be so afraid of the autocomplete right now.”
— Chamath Palihapitiya
“These concerns are being hyped up to a level that there’s simply no evidence for. And the question is why, and I think that there is an agenda here that people should be concerned about.”
— David Sacks
“If a GPT is a glorified autocomplete, how did we used to do glorified autocomplete in the past? It was with new grads. New grads were our autocomplete.”
— Chamath Palihapitiya
“People are underestimating and under‑realizing the benefits at this stage of what’s going to come out of the AI revolution and how it’s ultimately going to benefit people’s availability of products, cost of goods, access to things.”
— David Friedberg
“The most important thing in terms of tax revenue is having a good economy, and this is why you don’t just want to have very high tax rates because they clobber your economy.”
— David Sacks
Questions Answered in This Episode
You argue that AI doomerism is being weaponized for regulatory capture—what specific guardrails or oversight mechanisms would you put in place to prevent Anthropic, OpenAI, or any one EA‑aligned group from effectively writing global AI regulations?
Friedberg paints a deflationary, abundance-driven future from AI, yet Chamath warns that entry‑level jobs are already being hollowed out—how should universities and employers concretely redesign early‑career pathways so new grads aren’t locked out of the AI economy?
Sacks frames China winning the AI race as a serious national security risk, while Friedberg sees AI more like the Industrial Revolution—what would actually have to happen in weapons systems or digital infrastructure for you to say, ‘China now has a decisive AI advantage and the U.S. is in danger’?
Chamath’s data shows U.S. power utilization basically at capacity; if you were ‘energy czar’ for four years, what exact mix of deregulation, incentives, and technologies (gas, nuclear, renewables, storage) would you prioritize to prevent AI‑driven growth from hitting a hard energy ceiling?
There’s clear disagreement on industrial policy: golden votes and national champions vs. pure trade tools and free markets. In practice, what objective criteria would you use to decide which sectors merit direct state equity or golden shares—and how would you prevent that from degenerating into politically connected corporate favoritism?
Transcript Preview
All right, everybody. Welcome back to the All-In Podcast, the number one podcast in the world. You got what you wanted, folks. The original quartet is here live from D.C. with a great shirt. Is that... is your haberdasher making that shirt or is that a Tom Ford? That white shirt is so crisp, so perfect. David Sacks, your-
Are you talking about me?
Your czar. Your czar is with the perfect white shirt.
I'll tell you, I'll tell you exactly what it is. I'll tell you what it is, you can tell me if it's right. Brioni.
Yes, of course, it's Brioni.
Bink.
Brioni spread collar. Look at that. Unbelievable.
How many years have I spent being rich?
When a man turns 50, the only thing he should wear is Brioni. The stitching is-
Looks very luxurious.
That's how Chamath knew, right? Chamath, how'd you figure it out, the stitching?
It's just how it lays with the collar.
To be honest with you, it's the button catch.
Hm.
Brioni has a very specific style of button catches. If you don't know what that means, it's because you're a fucking ignorant malcontent yourself. (laughs)
I'm looking it up right now.
Jason, right. Yeah, he's like, "I just answered on TV."
We're going all in. Don't let your winners ride. Rain man, David Sacks. We're going all in. And I said- We open sourced it to the fans and they've just gone crazy with it. Love you, West. Queen of Kin-Wa. Going all in.
All right, everybody. The All-In Summit is going into its fourth year, September 7th through 9th. And the goal is, of course, to have the world's most important conversations. Go to allin.com/yada-yada-yada to join us at the summit. All right. Uh, there's a lot on the docket, but there's kind of a very unique thing going on in the world, David. Everybody knows about AI doomerism, basically people who are concerned, uh, rightfully so, that AI could have some, you know, significant impacts on the world. Dario Amodei said he could see unemployment spike to 10 to 20% in the next couple years; it's around 4% now, as we've already talked about here. He told Axios that AI companies and government need to stop sugarcoating what's coming. He expects a mass elimination of jobs across tech, finance, legal, and consulting. Okay, that's a debate we've had here. And entry-level workers will be hit the hardest. He wants lawmakers to take action and more CEOs to speak out. Polymarket thinks regulatory capture via this AI safety bill is very unlikely: "US enacts AI safety bill in 2025" currently stands at a 13% chance, but... Uh, Sacks, you wanted to discuss this because it seems like there is more at work than just a couple of technologists with... I think we'd all agree there are legitimate concerns about job destruction or employment displacement that could occur with AI. We all agree on that, and we're seeing robo-taxis start to hit the streets; I don't think anybody believes that being a cab driver is going to exist as a job 10 years from now. So there seems to be something here about AI doomerism, but it's being taken to a different level by a group of people maybe, uh, with a different agenda. Yeah?