Modern WisdomThe Alignment Problem - Brian Christian | Modern Wisdom Podcast 297
Episode Details
EPISODE INFO
- Released
- March 20, 2021
- Duration
- 1h 16m
- Channel
- Modern Wisdom
- Watch on YouTube
- ▶ Open ↗
EPISODE DESCRIPTION
Brian Christian is a programmer, researcher and an author. You have a computer system, you want it to do X, you give it a set of examples and you say "do that" - what could go wrong? Well, lots apparently, and the implications are pretty scary. The Alignment Problem is one of the biggest challenges in AI research. Expect to learn why it's so hard to code an artificial intelligence to do what we actually want it to, how a robot cheated at the game of football, why human biases can be absorbed by AI systems, the most effective way to teach machines to learn, the danger if we don't get the alignment problem fixed and much more... Sponsors: Get 20% discount on the highest quality CBD Products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20) Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom Extra Stuff: Buy The Alignment Problem - https://amzn.to/3ty6po7 Follow Brian on Twitter - https://twitter.com/brianchristian Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom #alignmentproblem #artificialintelligance #machinelearning - Listen to all episodes online. Search "Modern Wisdom" on any Podcast App or click here: iTunes: https://apple.co/2MNqIgw Spotify: https://spoti.fi/2LSimPn Stitcher: https://www.stitcher.com/podcast/modern-wisdom - Get in touch in the comments below or head to... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx Email: modernwisdompodcast@gmail.com
SPEAKERS
Brian Christian
guestChris Williamson
hostNarrator
other
EPISODE SUMMARY
In this episode of Modern Wisdom, featuring Brian Christian and Chris Williamson, The Alignment Problem - Brian Christian | Modern Wisdom Podcast 297 explores can We Align Powerful AI With Messy Human Values And Goals? Brian Christian and Chris Williamson explore the AI alignment problem—the gap between what we intend AI systems to do and what they actually optimize for in the real world. They connect classic thought experiments like the paperclip maximizer to concrete failures in facial recognition, criminal justice risk scores, social media feeds, and recommendation systems. The conversation examines why neural networks are so powerful yet opaque, how mis-specified objectives and biased data create real harms, and why fairness, governance, and incentives matter as much as raw technical capability. They close by discussing emerging technical work on things like inverse reinforcement learning and option value, and the broader societal challenge of deciding whose values future AI systems should serve.
RELATED EPISODES
Get more out of YouTube videos.
High quality summaries for YouTube videos. Accurate transcripts to search & find moments. Powered by ChatGPT & Claude AI.
Add to Chrome




