Lex Fridman Podcast

Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies.

Please support this podcast by checking out our sponsors:
- NetSuite: http://netsuite.com/lex to get a free product tour
- SimpliSafe: https://simplisafe.com/lex
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

EPISODE LINKS:
- Sam's Twitter: https://twitter.com/sama
- OpenAI's Twitter: https://twitter.com/OpenAI
- OpenAI's Website: https://openai.com
- GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
4:36 - GPT-4
16:02 - Political bias
23:03 - AI safety
43:43 - Neural network size
47:36 - AGI
1:09:05 - Fear
1:11:14 - Competition
1:13:33 - From non-profit to capped-profit
1:16:54 - Power
1:22:06 - Elon Musk
1:30:32 - Political pressure
1:48:46 - Truth and misinformation
2:01:09 - Microsoft
2:05:09 - SVB bank collapse
2:10:00 - Anthropomorphism
2:14:03 - Future applications
2:17:54 - Advice for young people
2:20:33 - Meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Sam Altman (guest) · Lex Fridman (host)
Mar 25, 2023 · 2h 23m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Sam Altman on GPT-4, AGI risks, and reshaping work and power

Sam Altman and Lex Fridman discuss GPT-4 as an early but pivotal step toward AGI, emphasizing both its remarkable capabilities and its current limitations. They unpack how large language models are built (data, scaling laws, RLHF, system messages) and how usability breakthroughs like ChatGPT changed public perception more than raw model improvements. A large portion of the conversation focuses on alignment, bias, safety testing, governance, and OpenAI’s unusual capped-profit structure amid competitive and political pressures. They also explore broader implications: impacts on jobs and programming, economic transitions, the possibility of superintelligence and fast takeoff, AI consciousness, and what it would mean for human meaning, democracy, and global coordination.

IDEAS WORTH REMEMBERING

5 ideas

Usability and alignment layers matter as much as raw model capability.

Altman argues that ChatGPT’s impact came less from a new base model and more from reinforcement learning from human feedback (RLHF) and a simple chat interface that made latent capabilities accessible, aligned, and easy to control.

Alignment work and capability gains are deeply intertwined, not orthogonal.

Techniques like RLHF and interpretability are framed as both safety and capability tools: they make systems safer and also more useful, challenging the simple idea that safety always trades off against performance.

OpenAI favors iterative public deployment to reduce one-shot catastrophic failure risk.

Altman insists that releasing imperfect systems early allows society and researchers to discover capabilities and failure modes that OpenAI couldn’t find alone, and to build norms, regulations, and technical safeguards before super-capable models arrive.

Bias and speech control will require user-level steerability, not a single global value set.

Because no one model can be universally viewed as “unbiased,” Altman expects broad, democratically set bounds plus per-country and per-user tuning (e.g., system messages) so people can shape behavior within agreed safety limits.
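The per-user tuning Altman describes can be sketched in code: a "system message" prepended to a conversation lets each user steer the model's behavior while the provider's outer safety bounds still apply. A minimal sketch follows; the function name and persona strings are illustrative assumptions, not part of any OpenAI API or default.

```python
# Minimal sketch of per-user steerability via a system message.
# The personas below are hypothetical examples, not OpenAI defaults.

def build_chat_messages(user_prompt, persona="You are a concise, neutral assistant."):
    """Prepend a system message so each user can shape model behavior
    within whatever outer safety limits the provider enforces."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_prompt},
    ]

# Two users steering the same underlying model differently:
formal = build_chat_messages("Summarize the news.", persona="Respond formally.")
casual = build_chat_messages("Summarize the news.", persona="Respond casually.")
```

The point of the design is that the model itself stays fixed; only the conversation-level instruction changes, so broad, democratically set bounds can be enforced upstream of any individual user's preferences.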

Near‑term risks like disinformation and economic shocks may arrive before true AGI.

Altman worries that LLMs could silently drive social media discourse, enable sophisticated propaganda, and rapidly disrupt labor markets well before any self-directed superintelligence appears, and that institutions are unprepared for this speed.

WORDS WORTH SAVING

5 quotes

We want to make our mistakes while the stakes are low.

Sam Altman

Better alignment techniques lead to better capabilities and vice versa.

Sam Altman

I think it would be crazy not to be a little bit afraid.

Sam Altman

This is the most complex software object humanity has yet produced, and it will be trivial in a couple of decades.

Sam Altman

I think we are doing better at nuance than people realize. Twitter kind of destroyed some, and maybe we can get some back now.

Sam Altman

- GPT-4, ChatGPT, and the technical pipeline behind large language models (data, scaling, RLHF, system messages)
- Usability, prompting, and how AI is transforming programming and other knowledge work
- Alignment, bias, safety testing, and OpenAI’s philosophy of iterative public deployment
- Governance, OpenAI’s capped-profit structure, and competitive/market pressures (Big Tech, open source, regulation)
- Societal impacts: jobs, UBI, economic transitions, political stability, and disinformation risks
- Existential risks, AGI timelines, fast vs. slow takeoff, and the control/alignment problem
- Philosophical questions: AI consciousness, truth, human meaning, and multiple AGIs in a future society

High-quality AI-generated summary created from a speaker-labeled transcript.
