
No Priors Ep. 105 | With Director of the Center for AI Safety Dan Hendrycks

This week on No Priors, Sarah is joined by Dan Hendrycks, director of the Center for AI Safety. Dan serves as an advisor to xAI and Scale AI. He is a longtime AI researcher, publisher of interesting AI evals such as "Humanity's Last Exam," and co-author of "Superintelligence Strategy," a new paper on national security written with Scale founder and CEO Alex Wang and former Google CEO Eric Schmidt. They explore AI safety, geopolitical implications, the potential weaponization of AI, and policy recommendations.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DanHendrycks

Show Notes:
0:00 Introduction
0:36 Dan's path to focusing on AI safety
1:25 Safety efforts in large labs
3:12 Distinguishing alignment and safety
4:48 AI's impact on national security
9:59 How might AI be weaponized?
14:43 Immigration policies for AI talent
17:50 Mutually assured AI malfunction
22:54 Policy suggestions for the current administration
25:34 Compute security
30:37 Current state of evals

Sarah Guo (host) · Dan Hendrycks (guest)
Mar 5, 2025 · 36m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
March 5, 2025
Duration
36m
Channel
No Priors
Watch on YouTube
Open ↗


SPEAKERS

  • Sarah Guo (host)
  • Dan Hendrycks (guest)

EPISODE SUMMARY

In this episode of No Priors, Sarah Guo talks with Dan Hendrycks, director of the Center for AI Safety, about AI, geopolitics, and nuclear parallels. Hendrycks argues that AI safety is primarily a geopolitical and strategic problem, not just a technical alignment issue. He believes labs can mitigate obvious misuse (e.g., terrorism and bio/cyber assistance) but cannot meaningfully control macro outcomes driven by state competition, especially between the U.S. and China. Hendrycks lays out a deterrence framework he calls "Mutually Assured AI Malfunction" (MAIM), drawing analogies to nuclear strategy and advocating for espionage, cyber-sabotage options, and chip-tracking regimes to prevent destabilizing AI "superweapons" and rogue-actor access. He also discusses the current state of AI evaluations, explaining Humanity's Last Exam as a near-terminal benchmark for exam-style tasks, and forecasts a future where models become superhuman oracles in STEM long before they become competent agents at everyday tasks.

RELATED EPISODES

  • Amex Global Business Travel: The World’s First AI Take Private with Long Lake CEO Alexander Taubman
  • Baseten CEO Tuhin Srivastava on Custom Models, and Building the Inference Cloud
  • No Priors Ep. 27 | With Sarah Guo & Elad Gil
  • No Priors Ep. 6 | With Daphne Koller from Insitro
  • No Priors Ep. 5 | With Huggingface’s Clem Delangue
  • Andrej Karpathy on Code Agents, AutoResearch, and the Loopy Era of AI
