Lex Fridman Podcast

Dawn Song: Adversarial Machine Learning and Computer Security | Lex Fridman Podcast #95

Dawn Song is a professor of computer science at UC Berkeley with research interests in security, most recently with a focus on the intersection between computer security and machine learning.

Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
  - Cash App (App Store): https://apple.co/2sPrUHe
  - Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
- Dawn's Twitter: https://twitter.com/dawnsongtweets
- Dawn's Website: https://people.eecs.berkeley.edu/~dawnsong/
- Oasis Labs: https://www.oasislabs.com
- Oasis Labs Twitter: https://twitter.com/OasisLabs

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
- 0:00 - Introduction
- 1:53 - Will software always have security vulnerabilities?
- 9:06 - Humans are the weakest link in security
- 16:50 - Adversarial machine learning
- 51:27 - Adversarial attacks on Tesla Autopilot and self-driving cars
- 57:33 - Privacy attacks
- 1:05:47 - Ownership of data
- 1:22:13 - Blockchain and cryptocurrency
- 1:32:13 - Program synthesis
- 1:44:57 - A journey from physics to computer science
- 1:56:03 - US and China
- 1:58:19 - Transformative moment
- 2:00:02 - Meaning of life

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Dawn Song (guest)
May 11, 2020 · 2h 12m

At a glance

WHAT IT’S REALLY ABOUT

Dawn Song on hacking AI: vulnerabilities, defenses, and data ownership

  1. Lex Fridman and Dawn Song explore computer security across classic software bugs, human-focused social engineering, and emerging attacks on machine learning systems.
  2. They discuss adversarial machine learning in depth: how models can be fooled at inference and training time, including physical-world attacks on stop signs, backdoored facial recognition, and black-box attacks on real services like Google Translate.
  3. Song explains parallel privacy risks, showing how trained models can leak sensitive training data and how techniques like differential privacy and confidential computation can mitigate this.
  4. The conversation broadens to data ownership, blockchain-based responsible data economies, program synthesis as a path toward intelligent machines, and philosophical reflections on meaning, creativity, and scientific collaboration.

IDEAS WORTH REMEMBERING

5 ideas

Security vulnerabilities are unavoidable, but their impact can be reduced.

Formal verification and program analysis can prove specific properties (like memory safety) for real systems such as kernels and crypto libraries, yet the vast and evolving space of attack types means no complex real-world system can be guaranteed 100% secure.

The security weak point is increasingly human, not just code.

As systems harden, attackers shift “up the stack” to exploit people through phishing, social engineering, fake news, and deepfakes; AI-powered chatbots could act as user-side guardians that monitor conversations, challenge suspicious claims, and even interrogate attackers.

Machine learning models can be systematically fooled and backdoored.

Adversarial examples with tiny input perturbations can force misclassification, including robust physical attacks (e.g., perturbed stop signs) and training-time backdoors where a few poisoned samples cause models to behave normally except on special triggers like particular glasses frames.
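The inference-time attack described above can be sketched with the classic fast gradient sign method (FGSM) on a toy logistic-regression classifier; this is a minimal illustration of the technique, not any specific system discussed in the episode.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression classifier.

    Shifts the input x by eps in the direction of the sign of the
    loss gradient with respect to x, nudging the model toward
    misclassifying x while keeping the perturbation small.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy classifier that initially assigns x to class 1 (w @ x + b > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
# The clean input scores positive (class 1); the perturbed input
# scores negative (class 0), i.e. the classifier is fooled.
```

In real attacks the same gradient-sign step is applied to deep networks and often iterated, and physical-world variants optimize the perturbation to survive printing, viewing angle, and lighting changes.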

Defenses work best when they exploit natural structure and redundancy.

Checks like spatial consistency in images (overlapping patches should yield similar segmentations) and temporal consistency in audio/video make life hard for attackers, and combining multiple sensors or modalities (vision, LIDAR, radar) further raises the bar for successful attacks.
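The spatial-consistency idea can be sketched as follows; the `segment` callable and crop layout here are illustrative assumptions, not the actual defense implementation. Two overlapping crops are segmented independently and the label maps are compared on the shared region — a perturbation crafted for the full image rarely transfers to every crop, so low agreement flags a suspicious input.

```python
import numpy as np

def overlap_agreement(image, segment, a, b):
    """Spatial-consistency check (sketch).

    `segment` is any function mapping an image region to a per-pixel
    label map of the same shape. Returns the fraction of pixels in
    the overlap of crops a and b on which the two independently
    computed label maps agree.
    """
    (r0, r1, c0, c1) = a               # crop a: rows r0:r1, cols c0:c1
    (s0, s1, d0, d1) = b               # crop b: rows s0:s1, cols d0:d1
    la = segment(image[r0:r1, c0:c1])
    lb = segment(image[s0:s1, d0:d1])
    # Shared region in full-image coordinates.
    or0, or1 = max(r0, s0), min(r1, s1)
    oc0, oc1 = max(c0, d0), min(c1, d1)
    # Re-index the overlap into each crop's local coordinates.
    pa = la[or0 - r0:or1 - r0, oc0 - c0:oc1 - c0]
    pb = lb[or0 - s0:or1 - s0, oc0 - d0:oc1 - d0]
    return float(np.mean(pa == pb))

# Toy per-pixel "segmenter" standing in for a real model.
seg = lambda region: (region > 0.5).astype(int)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
score = overlap_agreement(img, seg, (0, 6, 0, 6), (2, 8, 2, 8))
```

A benign input with a deterministic per-pixel segmenter scores 1.0 here; in practice the defense thresholds this agreement score, and combining it with other modalities (LIDAR, radar) raises the attacker's bar further.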

Trained models can leak private training data unless designed otherwise.

Even without access to model internals, attackers can query language models trained on sensitive emails and recover actual Social Security or credit card numbers; training with differential privacy adds controlled noise during learning so models retain utility while sharply reducing such leakage.
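The core differential-privacy mechanism can be illustrated on a simple counting query (the salary data and function name below are hypothetical, and real DP training injects noise into gradients rather than query answers):

```python
import numpy as np

def dp_count(values, threshold, epsilon, rng):
    """Release "how many values exceed threshold" with epsilon-DP.

    A counting query has sensitivity 1 (adding or removing one
    record changes the true count by at most 1), so Laplace noise
    with scale 1/epsilon gives the epsilon-DP guarantee.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
salaries = [48_000, 52_000, 61_000, 75_000, 90_000]   # hypothetical records
noisy = dp_count(salaries, 60_000, epsilon=0.5, rng=rng)  # true answer: 3
```

Smaller epsilon means more noise and stronger privacy; larger epsilon means answers closer to the truth but weaker protection, which is the utility/privacy trade-off Song describes.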

WORDS WORTH SAVING

5 quotes

Security is job security.

Dawn Song

We are still at a very early stage of really developing robust and generalizable machine learning methods.

Dawn Song

It’s almost impossible to say that a real world system is 100% no security vulnerabilities.

Dawn Song

The weakest link of the system is oftentimes humans themselves.

Dawn Song

Once we teach computers to write software—to write programs—then I guess computers will be eating the world by transitivity.

Dawn Song

- Inevitability of software vulnerabilities and limits of formal verification
- Shift of attacks from systems to humans (social engineering, deepfakes)
- Adversarial machine learning: inference-time and training-time attacks
- Defenses via consistency checks, multimodal sensing, and robustness
- Privacy threats from machine learning models and differential privacy
- Data ownership, responsible data economy, and blockchain/secure computation
- Program synthesis and the quest for intelligent machines and life's meaning

High quality AI-generated summary created from speaker-labeled transcript.
