Lex Fridman Podcast

Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17

Lex Fridman and Greg Brockman on steering AGI: power, safety, and human destiny.

Lex Fridman (host) · Greg Brockman (guest)
Apr 3, 2019 · 1h 25m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Greg Brockman on steering AGI: power, safety, and human destiny.

  1. Greg Brockman, co‑founder and CTO of OpenAI, discusses why he believes artificial general intelligence (AGI) is achievable relatively soon and will be the most transformative technology in human history. He explains OpenAI’s mission to ensure AGI benefits everyone, emphasizing the importance of setting the “initial conditions” — governance, incentives, and culture — under which such a technology is born. The conversation covers OpenAI’s unusual capped‑profit structure, its three‑pronged focus on capabilities, safety, and policy, and concrete examples like GPT‑2 and OpenAI Five in Dota as steps toward more general intelligence. Throughout, Brockman wrestles with risks such as AI‑driven disinformation and power concentration while arguing for a balanced, hopeful vision that highlights AGI’s potential to solve major scientific, economic, and environmental problems.

IDEAS WORTH REMEMBERING

5 ideas

The most powerful leverage today is in digital systems that scale globally.

Brockman contrasts slow, physical innovation with software and mathematics, arguing that code lets a single person’s idea impact the entire planet, placing AI at the center of future societal transformation.

You can’t control *whether* key technologies appear, but you can shape their birth conditions.

Invoking technological determinism (e.g., multiple simultaneous inventions), he argues that AGI is likely inevitable; the real human freedom lies in designing the initial norms, values, and governance that guide its emergence.

OpenAI is structured to prioritize its charter over profit, using a capped‑return model.

To raise the billions needed for AGI while avoiding unchecked shareholder pressure, OpenAI created OpenAI LP, where investor returns are capped and the nonprofit board is legally bound to prioritize the mission of benefiting humanity.

Safety and alignment are being approached as learnable problems, not hand‑coded rules.

Brockman likens aligning AGI with human values to how children learn morality from feedback, arguing that systems can learn human preferences from data, just as they already learn complex concepts like image recognition that we can’t formally specify.

Responsible disclosure will become essential as models like GPT‑2 scale in power.

GPT‑2 was treated as a test case for withholding full models when misuse (e.g., large‑scale fake news, impersonation) is plausible and benefits are uncertain, echoing how the security community evolved norms around vulnerability disclosure.

WORDS WORTH SAVING

5 quotes

You can’t really hope to create something no one else ever would. The real degree of freedom you have is to set the initial conditions under which a technology is born.

Greg Brockman

If you’ve really built a powerful system that is capable of shaping the future of humanity, the first question you should ask is: how do we make sure that this plays out well?

Greg Brockman

Our goal isn’t to be the ones to build AGI. Our goal is to make sure it goes well for the world.

Greg Brockman

Sometimes we see behaviors that emerge that are qualitatively different from anything we saw at small scale, and the original inventor looks at it and says, ‘I didn’t think it could do that.’

Greg Brockman

If we can build AI systems that make human existence more meaningful, sign me up.

Greg Brockman

Physical vs. digital world, leverage of software, and collective intelligence
Technological determinism, initial conditions, and shaping the trajectory of AGI
OpenAI’s mission, charter, and capped‑profit LP structure
AGI safety, value alignment, and global policy/governance questions
GPT‑2, responsible disclosure, and the future of language models
Self‑play, OpenAI Five, and large‑scale reinforcement learning in Dota
Reasoning, generalization, consciousness, and the long‑term future of AGI

High-quality AI-generated summary created from a speaker-labeled transcript.
