Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17

Lex Fridman Podcast · Apr 3, 2019 · 1h 25m

Lex Fridman (host), Greg Brockman (guest)

- Physical vs. digital world, leverage of software, and collective intelligence
- Technological determinism, initial conditions, and shaping the trajectory of AGI
- OpenAI’s mission, charter, and capped‑profit LP structure
- AGI safety, value alignment, and global policy/governance questions
- GPT‑2, responsible disclosure, and the future of language models
- Self‑play, OpenAI Five, and large‑scale reinforcement learning in Dota
- Reasoning, generalization, consciousness, and the long‑term future of AGI

Greg Brockman on steering AGI: power, safety, and human destiny

Greg Brockman, co‑founder and CTO of OpenAI, discusses why he believes artificial general intelligence (AGI) is achievable relatively soon and will be the most transformative technology in human history. He explains OpenAI’s mission to ensure AGI benefits everyone, emphasizing the importance of setting the “initial conditions” — governance, incentives, and culture — under which such a technology is born. The conversation covers OpenAI’s unusual capped‑profit structure, its three‑pronged focus on capabilities, safety, and policy, and concrete examples like GPT‑2 and OpenAI Five in Dota as steps toward more general intelligence. Throughout, Brockman wrestles with risks such as AI‑driven disinformation and power concentration while arguing for a balanced, hopeful vision that highlights AGI’s potential to solve major scientific, economic, and environmental problems.

Key Takeaways

The most powerful leverage today is in digital systems that scale globally.

Brockman contrasts slow, physical innovation with software and mathematics, arguing that code lets a single person’s idea impact the entire planet, placing AI at the center of future societal transformation.

You can’t control *whether* key technologies appear, but you can shape their birth conditions.

Invoking technological determinism (e. ...

OpenAI is structured to prioritize its charter over profit, using a capped‑return model.

To raise the billions needed for AGI while avoiding unchecked shareholder pressure, OpenAI created OpenAI LP, where investor returns are capped and the nonprofit board is legally bound to prioritize the mission of benefiting humanity.

Safety and alignment are being approached as learnable problems, not hand‑coded rules.

Brockman likens aligning AGI with human values to how children learn morality from feedback, arguing that systems can learn human preferences from data, just as they already learn complex concepts like image recognition that we can’t formally specify.

Responsible disclosure will become essential as models like GPT‑2 scale in power.

GPT‑2 was treated as a test case for withholding full models when misuse (e. ...

Massive scale plus general methods can unlock qualitatively new behaviors.

OpenAI Five’s Dota performance showed that simply scaling a generic algorithm (PPO) with self‑play and huge compute can yield long‑horizon planning and out‑of‑distribution generalization its creator didn’t anticipate, supporting the “bitter lesson” that scalable, compute‑heavy methods are especially powerful.
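The “generic algorithm” referred to here is Proximal Policy Optimization (PPO). As a rough illustration of what makes the method so scalable — not OpenAI Five’s actual training code — PPO’s core clipped surrogate objective fits in a few lines; the function name and NumPy framing below are illustrative choices, not anything from the episode:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behavior (data-collecting) policies.
    advantages: estimated advantages for those actions.
    Returns the mean surrogate value to be *maximized*.
    """
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    # Clipping the ratio keeps each policy update small, which is what lets
    # the same simple objective be scaled up with ever more compute and data.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return float(np.mean(np.minimum(unclipped, clipped)))
```

Because the objective is this simple and purely data-parallel, scaling it mostly means collecting more self-play experience and averaging gradients across more machines — which is essentially the bet OpenAI Five made.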

Competition in late‑stage AGI could dangerously erode safety unless major players commit to collaboration.

Brockman worries about a “race to the bottom” in which competing teams cut safety corners to win, and says OpenAI pledges to cooperate with, rather than recklessly try to overtake, a leading project that shares its mission — a norm he believes other major labs must adopt as well.

Notable Quotes

You can’t really hope to create something no one else ever would. The real degree of freedom you have is to set the initial conditions under which a technology is born.

Greg Brockman

If you’ve really built a powerful system that is capable of shaping the future of humanity, the first question you should ask is: how do we make sure that this plays out well?

Greg Brockman

Our goal isn’t to be the ones to build AGI. Our goal is to make sure it goes well for the world.

Greg Brockman

Sometimes we see behaviors that emerge that are qualitatively different from anything we saw at small scale, and the original inventor looks at it and says, ‘I didn’t think it could do that.’

Greg Brockman

If we can build AI systems that make human existence more meaningful, sign me up.

Greg Brockman

Questions Answered in This Episode

How realistic is OpenAI’s hope that other major labs will adopt similar ‘late‑stage collaboration’ commitments for AGI, and what incentives could make that happen?

At what point should society decide that a model’s potential for misuse justifies withholding it, and who should have the authority to make that decision?

Can learning human values from data truly avoid amplifying our worst biases and injustices, rather than just our ‘better angels’?

If massive compute and scale are increasingly central, how can we ensure that researchers and countries without such resources still play a meaningful role in shaping AGI?

What concrete governance mechanisms could prevent AGI from concentrating power in the hands of a single company, country, or small elite while still enabling rapid innovation?

Transcript Preview

Lex Fridman

The following is a conversation with Greg Brockman. He's the co-founder and CTO of OpenAI, a world-class research organization developing ideas in AI, with a goal of eventually creating a safe and friendly artificial general intelligence, one that benefits and empowers humanity. OpenAI is not only a source of publications, algorithms, tools, and datasets, their mission is a catalyst for an important public discourse about our future with both narrow and general intelligence systems. This conversation is part of the Artificial Intelligence Podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Greg Brockman. So in high school and right after, you wrote a draft of a chemistry textbook.

Greg Brockman

(laughs)

Lex Fridman

I saw that. It covers everything from basic structure of the atom to quantum mechanics. So it's clear you have an intuition and a passion for both the, uh, the physical world with chemistry and now robotics, to the digital world with, uh, AI, deep learning, reinforcement learning, so on. Do you see the physical world and the digital world as different? And what do you think is the gap?

Greg Brockman

A lot of it actually boils down to iteration speed, right? That I think that a lot of what really motivates me is, is building things, right, is the... Uh, you know, think about mathematics, for example, where you think really hard about a problem, you understand it, you write it down in this very obscure form that we call proof. But then, this is in humanity's library, right? It's there forever. This is some truth that we've discovered. And, you know, maybe only five people in your field will ever read it, uh, but somehow you've kinda moved humanity forward. And so I actually used to really think that I was going to be a mathematician, and, uh, then I actually started writing this chemistry textbook. One of my friends told me, "You'll never publish it because you don't have a PhD." So instead, I, I decided to build a website and try to promote my ideas that way, and then I discovered programming and, uh, I... You know, that in programming, you think hard about a problem, you understand it, you write it down in a very obscure form that we call a program. But then once again, it's in humanity's library, right? And anyone can get the benefit from it, and the scalability is massive. And so I think that the thing that really appeals to me about the digital world is that you can have this, this, this h- insane leverage, right? A single individual with an idea is able to affect the entire planet, um, and that's something I think is really hard to do if you're moving around physical atoms.

Lex Fridman

But you said, uh, mathematics, so if you look at the, the wet thing, eh, o- over here, our mind, do you, you ultimately see it as just math, as just information processing? Or, or is there some other magic as you've seen, if you've seen through biology and chemistry and so on?
