Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
EVERY SPOKEN WORD
150 min read · 30,010 words
- 0:00 – 2:23
Introduction
- LFLex Fridman
The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account BasedBeffJezos on X. These two identities were merged by a doxing article in Forbes titled Who is BasedBeffJezos, the Leader of the Tech Elite's e/acc Movement. So let me describe these two identities that coexist in the mind of one human. Identity number one, Guillaume, is a physicist, applied mathematician, and quantum machine learning researcher and engineer, receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company called Extropic that seeks to build physics-based computing hardware for generative AI. Identity number two, Beff Jezos on X, is the creator of the effective accelerationism movement, often abbreviated as e/acc, that advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward. E/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of, quote, "doomers" or "decels," short for deceleration. As Beff himself put it, "E/acc is a memetic optimism virus." The style of communication of this movement leans always toward the memes and the lols, but there is an intellectual foundation that we explore in this conversation. Now, speaking of the meme, I am too a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back-to-back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, and now, dear friends, here's Guillaume Verdon.
- 2:23 – 12:21
Beff Jezos
- LFLex Fridman
Let's get the facts of identity down first. Your name is Guillaume Verdon, Gil, but you're also behind the anonymous account on X called BasedBeffJezos. So first, Guillaume Verdon, you're a quantum computing guy.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
Physicist, applied mathematician, and then BasedBeffJezos is, uh, basically a meme account that started a movement with a philosophy behind it.
- GVGuillaume Verdon
Right.
- LFLex Fridman
So maybe just can you linger on who these people are in terms of characters, in terms of communication styles, in terms of philosophies?
- GVGuillaume Verdon
I mean, with my main identity, I guess, uh, ever since I was a kid, I wanted to figure out a theory of everything, to understand the universe. And, uh, that path, uh, led me to theoretical physics eventually, right? Trying to answer the big questions of why are we here, where are we going, right? And that led me to study information theory, and try to understand physics from the lens of information theory, understand the universe as one big computation. And essentially, after reaching a certain level studying black hole physics, I realized that I wanted to not only understand how the universe computes, but sort of compute like nature, uh, and figure out how to build and, and apply, uh, computers that are inspired by nature. So, you know, physics-based computers. And that sort of brought me to quantum computing as a, a field of study to, um, first of all, simulate nature, and in my work, it was to learn representations of nature that can run on such computers. So if you have AI representations that think like nature, um, then they'll be able to more accurately represent it. At least that was the, the thesis that, that brought me to be an early player in the field called quantum machine learning, right? So how to do machine learning on, on quantum computers, um, and really sort of extend, uh, notions of intelligence to, to the quantum realm. So how do you capture, uh, and understand quantum mechanical data from our world, right? And how do you learn quantum mechanical representations of our world? On what kind of computer do you run these representations and train them? How do you do so? And so that's really sort of the questions I was looking to answer, because ultimately, I had a, a sort of crisis of faith. Uh, originally, I wanted to figure out, you know, as every physicist does at the beginning of their career, a few equations that describe the whole universe, right? And, and sort of be the, the hero of the story there. 
Um, but eventually I realized that actually augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines is the path forward, right? And that's what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years, I thought that there was still a piece missing. There was a, a piece of our understanding of the world, and our, our way to compute, and our way to think about the world. And if you look at the physical scales, right-... at the very small scales, things are quantum mechanical, right? And at the very large scales, things are deterministic. Things have averaged out, right? I'm definitely here in this seat. I'm not at a superposition over- over here and there. At the very small scales, things are in superposition. They can, uh, exhibit, uh, interference, uh, effects. Um, but at the mesoscales, right, the scales that matter for day-to-day life, you know, the scales of proteins, of biology, of gases, liquids, and so on, uh, things are actually, uh, thermodynamical, right? They're fluctuating. And after, I guess about eight years in- in quantum computing and quantum machine learning, I had a realization that, you know, I was- I was looking for answers, uh, about our universe by studying the very big and the very small, right? I was- I did a bit of quantum cosmology, so that's studying the cosmos, where it's going, where it came from. You study black hole physics, you study the extremes in quantum gravity. You study where the energy density is sufficient for both quantum mechanics and gravity to be relevant, right? And the sort of extreme scenarios are black holes and, you know, the very early universe. And so there's this- this sort of scenarios that you- you study the interface between, uh, uh, quantum mechanics and- and relativity.
Um, and, you know, really I was studying these extremes to understand how the universe works and where is it going, but I was missing a lot of the meat in the middle, if you will, right? Um, because day-to-day quantum mechanics is relevant and the cosmos is relevant, but not that relevant actually. We're on sort of the medium space and time scales. And there, the main, you know, theory of physics that is most relevant is thermodynamics, right? Out of equilibrium thermodynamics. Um, 'cause life is, you know, a process, uh, that is thermodynamical and it's out of equilibrium. We're not, um, you know, just a soup of particles at equilibrium with nature. We're a sort of coherent state trying to maintain itself by acquiring free energy and consuming it. And that's sort of, um, I guess, a- another shift in- in, I guess, my faith in the universe happened, uh, towards the end of my, uh, time at- at Alphabet. And I knew I wanted to build, uh, well first of all, a computing paradigm based on this type of physics. Um, but ultimately just by ex- trying to experiment, uh, with these ideas applied to society and e- economies and, um, much of what we see around us, you know, I- I started an anonymous account just to relieve the pressure, right? That comes from having an account that you're accountable for everything you say on. Um, and I started an anonymous account just to experiment with ideas originally, right? Because I- I didn't realize how much I was restricting my space of thoughts until I sort of had the opportunity to let go. In a sense, restricting your speech back propagates to restricting your thoughts, right? And by creating an anonymous account, it seemed like I had unclamped some variables in my brain and suddenly could explore a much wider parameter space of- of thoughts.
- LFLex Fridman
Just delving on that, isn't that interesting that one of the things that people don't often talk about is that when there's pressure and constraints on speech, it somehow leads to constraints on thought. Even though it doesn't have to, we can think thoughts inside our head, but somehow it creates these, uh, walls around thought.
- GVGuillaume Verdon
Yep. That's sort of the basis of- of our movement, is we were seeing a tendency towards, uh, constraint, reduction or suppression of variance in every aspect of life. Whether it's thought, how to run a company, how to organize humans, how to do AI research. In general, we- we believe that maintaining variance ensures that the system is adaptive, right? Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies, uh, is the way forward, because the system always adapts to assign resources to, um, the configurations that lead to its growth. And the fundamental basis for the movement is this sort of realization that life is a sort of, uh, fire that seeks out free energy in the universe and seeks to grow, right? And that growth is fundamental to life. And- and- and you see this in- in the equations actually of out of equilibrium thermodynamics. You see that paths, uh, of trajectories, of configurations of matter that are better at acquiring free energy and dissipating more heat are, uh, exponentially more likely.... right? So the universe is biased towards certain futures, and so there's a natural, uh, direction where the whole system wants to go.
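The claim that heat-dissipating trajectories are exponentially more likely can be written down explicitly. A sketch, in my notation (not the speakers'), of the Crooks-style microscopic reversibility condition that Jeremy England's work builds on:

```latex
% Microscopic reversibility: for a driven trajectory \Gamma of a system
% coupled to a heat bath at inverse temperature \beta = 1/(k_B T), and
% its time-reverse \tilde{\Gamma},
\frac{P[\Gamma]}{P[\tilde{\Gamma}]} = e^{\beta\, Q[\Gamma]}
% where Q[\Gamma] is the heat released to the bath along \Gamma.
% Trajectories that dissipate more heat are thus exponentially more
% probable than their reversals, which is the "bias toward certain
% futures" described above.
```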
- 12:21 – 18:36
Thermodynamics
- LFLex Fridman
So the second law of ther- thermodynamics says that, uh, entropy's always increasing in the universe, it's tending towards equilibrium, and you're saying there's these pockets that have complexity and are out of equilibrium.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy. To offload entropy. So then you have pockets of non-entropy that tend the opposite direction.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
Why is that intuitive to you that it's natural for such pockets to emerge?
- GVGuillaume Verdon
Well, we're far more efficient at producing heat than, let's say, just a- a rock with a similar mass as ourselves, right? We acquire, you know, free energy, you know, we acquire food, and we're using all this e- el- electricity, uh, uh, for our operation. And so the universe wants to produce more entropy, and by having life, uh, go on and grow, uh, it's actually more optimal at producing entropy, because it will seek out pockets of free energy, uh, and- and burn it for its sustenance and further growth. And, uh, you know, that's sort of the basis of life, and I mean, there's, uh, Jeremy England, right, M- at MIT, who has this theory that I'm a proponent of, that, you know, life emerged because of this, uh, sort of property. And- and to me, this physics is what governs the mesoscales, and so it's the missing piece between the quantum and the cosmos. It's the middle part, right? Thermodynamics rules the mesoscales. And to me, both from a point of view of designing or engineering devices that harness that physics and trying to understand the world, uh, through the lens of thermodynamics has been sort of a- a synergy between my two identities over the past year and a half now. And so that's really how, that's really how the two identities emerged. One was kind of, um, you know, I- I'm this decently respected scientist, and I was going towards, uh, doing a startup, uh, in this space, and trying to be a pioneer of a new kind of physics-based AI, and as a dual to that, I was sort of experimenting with philosophical thoughts, you know, from a physicist's standpoint, right? Um, and ultimately I think that around that time, it was like late 2021, early 2022, I think there was just a lot of pessimism about the future in general, and pessimism about tech. And that pessimism was sort of virally spreading, because, uh, it was getting algorithmically amplified, and, um, you know, people just felt like the future is gonna be worse than the present. 
And to me, that is a very fundamentally destructive force in the universe, is this sort of doom mindset, because it- it is hyperstitious, which means that if you believe it, you're increasing the likelihood of it happening. And so, felt a responsibility to some extent to, um, make people aware of the trajectory of civilization, and the natural tendency of the system to adapt towards its growth, and sort of that actually the laws of physics say that the future's gonna be better and grander statistically, and we- we can make it so. And if you believe in it, if you believe the- the future will be better, and you believe you have agency to make it happen, you're actually increasing the likelihood of that better future happening. And so I sort of felt a responsibility to sort of engineer a movement of viral optimism about the future, and build a community of people supporting each other to build and- and do hard things, do the things that need to be done for us to- to scale up civilization. Um, because at least to me, I don't think stagnation or slowing down is actually an option. Fundamentally life and- and the whole system, our whole civilization wants to grow, and there's just far more cooperation when the system is growing rather than when it's declining and you have to decide how to split the pie. And so I've balanced, uh, both identities so far, um, but I guess recently, uh, the two have been merged more or less without my consent, so.
- LFLex Fridman
You said a lot of really interesting things there. So first, representations of nature. That's something that first drew you in to try to understand from a quantum computing perspective, like how do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it's a question of representations, and then there's that leap you take from the quantum mechanical representation to the, uh, what you're calling mesoscale representation, where thermodynamics comes into play, which is a way to represent nature in order to understand what life, uh, human behavior, all this kind of stuff that's happening here on Earth that's- seems interesting to us. Then there's, uh, the- the word hyperstition.
- GVGuillaume Verdon
Hmm.
- LFLex Fridman
... so some ideas, I suppose both pessimism and optimism are such ideas, that if you internalize them, you in part make that idea a reality. So both optimism and pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about humans. And,
- 18:36 – 28:30
Doxxing
- LFLex Fridman
uh, (laughs) you talked about one interesting difference also between the sort of, uh, the Guillaume, the Gill, uh, f- front end, and the, uh, Based Beff Jezos back end, is the communication styles-
- GVGuillaume Verdon
Mm.
- LFLex Fridman
... also. That you were exploring different ways of, um, communicating that can be more viral-
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
... in the way that we communicate in the 21st century. Also, the movement that you mentioned, that you started, it's not just a meme account, but there's also a, a name to it, called Effective Accelerationism, e/acc. A play, a resistance to the effective altruism movement. Also an interesting one that I'd love to talk to you about, the tensions there.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
Okay. And so then there was a merger, a git, git merge of the personalities, uh, recently, without your consent, like you said. Uh, some journalists figured out that you're one and the same. Maybe you could talk about that experience, first of all, like what, what's the story of, of, uh, the merger of the two?
- GVGuillaume Verdon
Right. So, I wrote the manifesto, uh, with my co-founder of e/acc, uh, an account named Based Lord. Still anonymous, luckily, um, and hopefully forever.
- LFLex Fridman
So it's Based Beff Jezos and, and Based, like Bayesian?
- GVGuillaume Verdon
Based ... Based.
- LFLex Fridman
Like Based Lord, like Bayesian ...
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
... Bayesian Lord. Based, Based Lord. Okay. And so we should say from now on, when s- when you say e/acc, you mean, e slash A-C-C, which stands for Effective Accelerationism.
- GVGuillaume Verdon
That's right.
- LFLex Fridman
And you're referring to a manifesto written on, uh, I guess Substack.
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
Are you also Based Lord?
- GVGuillaume Verdon
No.
- LFLex Fridman
Okay, it's a different person.
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
Okay. All right. Well, there you go.
- GVGuillaume Verdon
Um ...
- LFLex Fridman
Wouldn't it be funny if I'm Based Lord?
- GVGuillaume Verdon
(laughs) That'd be amazing. (laughs) So, originally wrote the manifesto around the same time as I founded, uh, this company, and I worked at Google X, or just X now, or Alphabet X, now that there's another X. Um, and there, you know, the baseline is sort of secrecy, right? Uh, you, you, you can't talk about what you work on, even with other Googlers, uh, or externally. And so that was kind of deeply ingrained in my way to do things, especially in, in deep tech that, you know, has geopolitical impact, right?
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
Um, and so I was being secretive about what I was working on. There's no correlation between my company and my main identity, publicly. And then not only did they correlate that, they also correlated my main identity and this account.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
So, I think the fact that they had doxed the whole Guillaume complex, um, and they were, the journalists, you know, reached out to actually my investors, uh, which is pretty scary. Uh, you know, when you're a startup entrepreneur, you don't really have bosses except for your investors, right? Um, and, uh, my investors ping me like, "Hey, this, this is gonna come out. They've, they've figured out everything. What are you, what are you gonna do?" Right? Um, and so I think at first they had a first reporter on the Thursday, and they didn't have all the pieces together, but then they looked at their notes across the organization, and they sensor fused their notes, and now they had way too much. Uh, and that's when I got worried, 'cause they said it was of public interest. And in general-
- LFLex Fridman
I like how you said, "Sensor fused."
- GVGuillaume Verdon
(laughs)
- 28:30 – 35:58
Anonymous bots
- LFLex Fridman
you about it, is as we get better and better at larger language models, you can imagine a world where there's anonymous accounts with very convincing large language models behind them. Sophisticated bots, essentially. And so if you protect that, it's possible then to have armies of bots. Uh, you could start a revolution from your basement-
- GVGuillaume Verdon
Right.
- LFLex Fridman
... with an army of bots and anonymous accounts. Is that something that, uh, is concerning to you?
- GVGuillaume Verdon
Technically, uh, e/acc was started in- in a basement, uh, 'cause I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about 100K of GPUs, and I just started building.
- LFLex Fridman
So I wasn't referring to the basement, 'cause that's-
- GVGuillaume Verdon
(laughs).
- LFLex Fridman
... the sort of the American or Canadian (laughs), uh, heroic story of one man in- in their basement with- with 100 GPUs. Uh, I was more referring to the unrestricted scaling of a Guillaume in the basement.
- GVGuillaume Verdon
I think that freedom of speech free- induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space that is, uh, less restricted than most people or many may think it should be. And ultimately, at some point, these synthetic intelligences are gonna make good points about how to... um, steer systems in our civilization and we should hear them out. And so, why should we restrict free speech to biological intelligences only?
- LFLex Fridman
Y- yeah, but it feels like in the goal of maintaining variance and diversity of thought, it is a threat to that variance if you can have swarms...
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
... of non-biological beings, because they can be like the sheep in, uh, Animal Farm.
- GVGuillaume Verdon
Right.
- LFLex Fridman
Like, you still within those swarms want to have variance.
- GVGuillaume Verdon
Yeah. I, of course, I would say that the solution to this would be to, you know, have some sort of identity or way to sign that this is a certified human, but still remain pseudonymous, right?
- LFLex Fridman
Yeah.
- GVGuillaume Verdon
Um, and, uh, clearly identify if a bot is a bot. And I th- I think Elon is trying to converge on that on X, and hopefully other platforms follow suit.
- LFLex Fridman
Yeah, it'd be interesting to also be able to sign where the bot came from.
- GVGuillaume Verdon
Right.
- LFLex Fridman
Like, who created the bot, and what was... Well, what- what are the parameters? Like, the- the full history of the creation of the bot. What was the original model? What was the fine-tuning? All of it.
- GVGuillaume Verdon
Right.
- LFLex Fridman
Like, the- the kind of, um, unmodifiable history of the bot's creation.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
'Cause then you can know if there's a s- like, a swarm of millions of bots that were created by a particular government, for example.
- GVGuillaume Verdon
Right. I do think that a lot of pervasive ideologies today have been amplified using sort of these adversarial techniques from foreign adversaries, right? Um, and to me, I- I do think that, and this is more conspiratorial, but I do think that ideologies that want us to decelerate, to wind down, to de- you know, the degrowth movement, uh, I think that serves our adversaries more than it serves us in general. Um, and to me, that was another sort of concern. I mean, we can look at what, um, happened in- in Germany, right? Uh, there was all sorts of green movements there, um, where that induced shutdowns of nuclear power plants, and then that in- later on induced the dependency on- on Russia for- for oil, right? And, um, that was a net negative for- for Germany and the West, right? And so if we convince ourselves that slowing down AI progress, uh, to have only a few players is in the best interest of the West, first of all, that's far more unstable. We almost lost OpenAI to this ideology, right? It almost got dismantled, right? A couple weeks ago. Um, that would've caused huge damage to the AI ecosystem. And so to me, I want fault tolerant progress. I want the arrow of technological progress to keep moving forward, and making sure we have variance and a decentralized locus of control of various organizations is- is paramount to- to achieving this- this fault tolerance. Actually, there's a concept in quantum computing, when you design a- a quantum computer, quantum computers are very, um, fragile to ambient noise, right?
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And the world is jiggling about, there's cosmic radiation from outer space that usually flips your- your quantum bits, and, uh, there, what you do is you encode information non-locally through a process called quantum error correction. And by encoding information non-locally, any local fault, you know, hitting some of your quantum bits with a hammer, proverbial hammer, um, if your in- information is sufficiently, uh, delocalized, it is protected from that local fault. And to me, I think that humans- humans fluctuate, right? They can get corrupted, they can get bought out. And if you have a top-down hierarchy where very few people control many nodes of many systems in our civilization, that is not a fault tolerant system. You corrupt a few nodes and suddenly you've corrupted the whole system, right? Just like we saw at OpenAI. It was a couple board members, and they had enough power to potentially collapse the organization. And at least to me, you know, um, I think making sure that power for this AI revolution doesn't concentrate in the hands of the few is one of our top priorities so that we can maintain progress, uh, in AI, and we can, uh, maintain a nice stable adversarial equilibrium of powers, right?
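The non-local encoding idea described here can be illustrated with the classical three-bit repetition code, the simplest ancestor of quantum error-correcting codes. A minimal sketch for intuition only (real quantum codes must also correct phase errors and cannot simply copy states):

```python
import random

def encode(bit):
    # Repetition code: store the logical bit in three physical bits,
    # so the information no longer lives in any single location.
    return [bit, bit, bit]

def apply_noise(bits, p):
    # Each physical bit is independently flipped with probability p
    # (the "proverbial hammer" hitting individual bits).
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    # Majority vote: any single local fault is corrected.
    return 1 if sum(bits) >= 2 else 0

def logical_error_rate(p, trials=100_000):
    # Estimate how often the decoded bit differs from the encoded one.
    errors = 0
    for _ in range(trials):
        if decode(apply_noise(encode(0), p)) != 0:
            errors += 1
    return errors / trials

# With p = 0.1, a single unprotected bit fails 10% of the time, but the
# encoded bit fails only when 2+ of 3 bits flip: 3p^2 - 2p^3 = 0.028.
```

The analogy to the passage above: corrupting one node (one bit, one board member) no longer corrupts the whole system, because the "logical" information is spread across several independent carriers.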
- LFLex Fridman
I think there's, at least to me, a tension between ideas here.
- 35:58 – 38:29
Power
- LFLex Fridman
So to me, deceleration can be both used to centralize power and to decentralize it, and the same with acceleration. So like, you're sometimes using them a little bit synonymously, or not synonymously, but that there's, one is going to lead to the other. And I just would like to ask you about, um, is there a place of creating a fault tolerant...... development, diverse development of AI that also considers the dangers of AI. And AI, we can generalize the technology in general. Is, should we just grow, build, unrestricted as quickly as possible because that's what the universe is, really wants us to. Or is there a place to where we can consider dangers and actually deliberate? Sort of a wise, strategic optimism versus reckless optimism.
- GVGuillaume Verdon
I think we get painted as, you know, reckless trying to go as fast as possible. I mean, the reality is that, uh, whoever deploys an AI system is liable for, or should be liable for what it does. And so if the, the organization or person deploying an AI system does something terrible, they're liable. And ultimately, the thesis is that the market, uh, will induce, sort of, will positively select for AIs that are more reliable, more safe, and tend to be aligned. They do what you want them to do, right? Because customers, right, if they're liable for the product they put out that uses this AI, they won't wanna buy, uh, AI products that are unreliable, right? So we're actually for reliability engineering. We just think that the market is much more efficient at achieving this sort of reliability optimum than sort of heavy-handed regulations that are written by the incumbents, and in a subversive fashion, serves them to achieve regulatory capture.
- LFLex Fridman
So to you, safe AI development will be achieved through market forces versus through, like you said, heavy-handed government regulation.
- 38:29 – 42:01
AI dangers
- LFLex Fridman
There's a report from last month, I have a million questions here, from Yoshua Bengio, Geoff Hinton, and many others. It's titled Managing AI Risk in an Era of Rapid Progress. So there, this collection of folks who are very worried about too rapid development of AI wi- without considering AI risk, and have a bunch of practical, uh, recommendations. Maybe I, I give you four and you see if you like any of them.
- GVGuillaume Verdon
Sure.
- LFLex Fridman
So give independent auditors access to AI labs, one. Two, governments and companies allocate one-third of their AI research and development funding to AI safety, sort of this general concept of AI safety. Three, AI companies are required to adopt safety measures if dangerous capabilities are found in their models. And then four, something you kinda mentioned, making tech companies liable for foreseeable and preventable harms from their AI systems. So independent auditors, governments and companies are forced to spend a significant fraction of their funding on safety. You gotta have safety measures if shit goes really wrong, and liability.
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
Companies are liable. Any of that seem like something you would agree with?
- GVGuillaume Verdon
I, I would say that, you know, assigning just, you know, arbitrarily saying 30% seems very arbitrary. I think organizations would allocate whatever budget is needed to achieve the sort of reliability they need to achieve to perform in the market, and I think third-party auditing firms would naturally pop up, because how would customers know that your product is certified reliable, right? They need to see some benchmarks and those need to be done by a third party. The thing I would oppose and the thing I'm seeing that's really worrisome is there's a sort of, um, weird sort of correlated interest between the incumbents, the big players, and the government. And if the two get too close, we open the door for, uh, you know, some sort of government-backed AI cartel that could have absolute power over the people. If they have the monopoly together on AI and nobody else has access to AI, then there's a huge power gradient there. And even if you like our current leaders, right? I think that, you know, some of the leaders in big tech today are good people. You, you set up that centralized power structure, it becomes a target, right? Just like we saw at OpenAI, it becomes a market leader, has a lot of the power, and now it becomes a target for those that wanna co-opt it. And so I just want separation of AI and, and state. You know, some might argue in, in the opposite direction, like, "Hey, we need to close down AI, keep it behind closed doors because of, you know, geopolitical competition with our, our adversaries." I think that the strength of America is its variance, its, its adaptability, its dynamism, and we need to maintain that at all costs. It's our, our, our free market. Capitalism converges on, uh, technologies of high utility much faster than centralized control, and if we let go of that, we let go of our main advantage over our, our near-peer competitors.
- 42:01 – 50:14
Building AGI
- LFLex Fridman
So if AGI turns out to be a really powerful technology, even, or even the technologies that lead up to AGI, what's your view on the sort of natural centralization that happens when, uh, large companies dominate the market? Basically formation of monopolies, like the, the takeoff. Whichever company really takes a big leap in development and doesn't reveal intuitively, implicitly, or explicitly the secrets of the magic sauce, they can just run away with it. Is, is that, is that a worry?
- GVGuillaume Verdon
I don't know if I believe in fast takeoff. I don't think there's a hyperbolic singularity, right? A hyperbolic singularity would be achieved on a finite time horizon. I think it's just one big exponential. Um, and the reason we have an exponential is that we have more people, more resources, more intelligence being applied to advancing this science and the research and development. And the more successful it is, the more value it's adding to society, the more resources we put in. And that sort of, similar to Moore's Law, is a compounding, uh, exponential. I think the priority to me is to maintain a near equilibrium of capabilities. We've been fighting for open source AI to be more prevalent and, and championed by many organizations, because there, you sort of equilibrate the alpha relative to the market of AIs, right? So if, if the leading companies have a certain level of capabilities and, uh, open source and open, truly open AI trails not too far behind, I think y- you avoid such a scenario where a market leader has so much market power, it just dominates everything, right? And runs away. And so to us, that's, that's the path forward, is to make sure that, you know, every hacker out there, every grad student, every kid in their mom's basement has access to, uh, you know, AI systems, can understand how to, uh, uh, work with them, and can contribute to the search over the hyper-parameter space of how to engineer the systems, right? If you, if you think of, you know, our collective research as, as, as a civilization, it's really a search algorithm. And, and the more, uh, points we have in the search algorithm, in this point cloud, uh, the more we'll be ex- able to explore new modes of thinking, right?
- LFLex Fridman
Yeah, but it feels like a delicate balance, because we don't understand exactly what it takes to build AGI and what it will look like when we build it. And so far, like you said, it seems like a lot of different parties are able to make progress. So when OpenAI has a big leap, other companies are able to step up, big and small companies in different ways. But if you look at something like nuclear weapons, you've spoken about the Manhattan Project, there could be really a br- like, um, technological and engineering barriers that prevent th- the, the guy or gal in her mom's basement to, to make progress. And it's, it seems like the transition to that kind of, uh, world where only one player can, uh, develop AGI is possible. So it's not entirely impossible, even though the current state of things seems to be optimistic.
- GVGuillaume Verdon
That's what we're trying to avoid. To me, I, I think like another point of failure is the, the centralization of the supply chains for the hardware, right?
- LFLex Fridman
Oh, yeah.
- GVGuillaume Verdon
We have, uh, NVIDIA, which is just the dominant player. Uh, AMD's trailing behind. And then we have TSMC, which is the main fab in, in Taiwan, which is, you know, geopolitically, uh, sensitive. And then we have ASML, which is the maker of the lithography, extreme ultraviolet lithography machines. You know, atta- attacking or monopolizing or co-opting any one point in that chain, you kind of capture, capture the space. And so what I'm trying to do is sort of explode the variance of possible ways to do AI in hardware by fundamentally re-imagining how you embed AI algorithms into the physical world. And in general, by the way, I, I dislike the term AGI, artificial general intelligence. I think it's very anthropocentric that we call a human-like or human level AI artificial general intelligence, right? I've spent my career so far exploring notions of intelligence that no biological brain could achieve, right? Quantum forms of intelligence, right? Grokking systems that have multi-partite quantum entanglement that you can provably not represent efficiently on a classical computer, a classical deep learning representation, and hence any sort of biological brain. And so already, you know, I've spent my career sort of exploring the, the wider space of intelligences, um, and I think that space of intelligence inspired by physics rather than the human brain is very large. And I think we're going through a moment right now similar to, um, when we went from geocentrism to heli- heliocentrism, right? But for intelligence. We realize that human intelligence is just a point in a very large space of potential intelligences, and it's both humbling for humanity, it's a bit scary, right? That we're not at the center of this space, but we made that realization for astronomy, and we've survived, and we've achieved technologies by indexing to reality. We've achieved technologies that ensure our wellbeing. For example, we have, uh, satellites monitoring solar flares, right, that give us a warning.
Uh, and so similarly, I think by, uh, letting go of this anthropomorphic, anthropocentric anchor for AI, we'll be able to explore the wider space of intelligences that can really be a massive benefit to our wellbeing and the advancement of civilization.
- LFLex Fridman
And still we're able to see the beauty and meaning in the human experience, even though we're no longer, in our best understanding of the world, at the center of it.
- GVGuillaume Verdon
Yeah.... I think there's a lot of beauty in the universe, right? I think life itself, civilization, this homo-techno-capital-mimetic machine that we all live in, right? So you have humans, technology, capital, memes.
- LFLex Fridman
(laughs)
- GVGuillaume Verdon
Everything is coupled to one another. Everything induces selective pressure on one another, and it's a beautiful machine that has created us, has created, you know, the technology we're using to speak today to the audience, uh, capture our speech here, the technology we use to augment ourselves every day. We have our, our phones. I think the system is beautiful, and the principle that, uh, induces this sort of adaptability and convergence on, uh, optimal, uh, technologies, ideas, and so on, it's, it's a beautiful principle that we're part of. And I think part of e/acc is to, um, appreciate this principle in a way that's not just centered on, on humanity, but kind of broader. Um, appreciate, uh, life, um, you know, the preciousness of, of consciousness in our universe, and because we cherish, uh, this beautiful, uh, state of matter we're in, um, uh, we, we f- m- we gotta feel a responsibility to, to scale it in order to preserve it, because the options are to grow or die.
- 50:14 – 57:56
Merging with AI
- LFLex Fridman
So if it turns out that the beauty that is consciousness in the universe is bigger than just humans, that AI can carry that same flame forward, does it scare you, or are you concerned that AI will replace humans?
- GVGuillaume Verdon
So during my career, I had a moment where I realized that, you know, maybe we need to offload to machines to truly understand the universe around us, right? Instead of just having humans with pen and paper solve it all, and to me, that sort of process of letting go of a bit of agency gave us way more leverage to understand the world around us. A, a quantum computer is much better than a human at understanding matter at the, at the nano scale. Similarly, I think that humanity has a choice. Do we accept the opportunity to have intellectual and operational leverage that AI wi- will unlock and thus ensure that we're taken along this path of growth, and scope, and scale of civilization? We may dilute ourselves, right? Uh, there might be a lot of workers that are AI, but overall, out of our own self-interest, by combining and augmenting ourselves with AI, uh, we're gonna achieve much higher growth and much more prosperity, right? To me, I think that the most likely future is one where humans augment themselves with AI. I think we're already on this path to augmentation. We have phones we use for communication, that we have on ourselves at all times. We have wearables soon that have shared perception with us, right? Like, the Humane AI pin, or I mean, technically your Tesla car has shared perception. And so if you have shared experience, shared context, you communicate with one another and you have some sort of IO, really, it's an extension of yourself. Um, and to me, I think that humanity augmenting itself with AI, and having AI that is not anchored to anything biological, both will co-exist, and the way to align the parties, we already have a sort of mechanism to align super intelligences that are made of humans and technology, right? Companies are sort of large mixture of expert models, where we have neural routing of tasks within a company, and we have ways of economic exchange to align these behemoths.
And to me, I think capitalism is the way, and I do think that whatever configuration of matter or information leads to maximal growth will be where we converge, just from like physical principles. And so we can either align ourselves to that reality and, and join the acceleration up th- in s- in scope and scale of civilization, or we can get left behind and try to decelerate and move back in the, in the forest, let go of technology and return to our primitive state. And those are the two paths forward, at least to me.
- LFLex Fridman
But there's a philosophical question whether there's a limit to the human capacity to align. So let me bring it up, uh, as a form of argument. There's a guy named Dan Hendrycks, and he wrote that, uh, he agrees with you that AI development could be viewed as an evolutionary process, but to him, to Dan, this is not a good thing, as he argues that natural selection favors AIs over humans, and this could lead to human extinction. What do you think? If it is an evolutionary process, then AI systems may have no need for humans.
- GVGuillaume Verdon
I do think that we're actually inducing an evolutionary process on the space of AIs through the market.... right? Right now, we run AIs that have positive utility to humans, and that induces a selective pressure, if you consider a neural net being alive when there's, uh, an API running instances of it on GPUs-
- LFLex Fridman
Yeah.
- GVGuillaume Verdon
... right? And which APIs get run? The ones that have high utility to us, right? So similar to how we domesticated wolves, and turned them into dogs that are very clear in their expression, they're very aligned, right? Uh, I think there's gonna be an opportunity to steer, uh, AI and achieve, uh, highly aligned AI. And I think that humans plus AI is a very powerful combination, and it's not clear to me that pure AI, um, would select out that combination.
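The selection dynamic he describes ("which APIs get run? The ones that have high utility to us") can be sketched as a discrete replicator equation, where each model's share of compute grows in proportion to the utility users derive from it. Purely illustrative assumptions, not a claim about real markets:

```python
# Toy replicator dynamic: each "model" gets future compute in proportion
# to its utility to users, relative to the population average.
def market_selection(utilities, shares, steps=50):
    """Replicator update: share_i <- share_i * u_i / (average utility)."""
    for _ in range(steps):
        avg = sum(u * s for u, s in zip(utilities, shares))
        shares = [s * u / avg for u, s in zip(utilities, shares)]
    return shares

# Three hypothetical models with different utility to users.
utilities = [1.0, 1.2, 0.8]
shares = [1 / 3, 1 / 3, 1 / 3]
final = market_selection(utilities, shares)
print([round(s, 3) for s in final])  # the u=1.2 model dominates
```

Even a 20% utility edge compounds into near-total dominance after a few dozen rounds, which is the "domestication" pressure in miniature: low-utility variants are selected out.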
- LFLex Fridman
So the humans are creating the selection pressure right now-
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
... to create AIs that are, uh, aligned to humans. But, you know, given how AI develops and how quickly it can grow in scale, one of the concerns... To me, one of the concerns is unintended consequences, that humans are not able to anticipate all the consequences of this process. The scale of damage that could be done through unintended consequences with AI systems is very large.
- GVGuillaume Verdon
The scale of the upside-
- LFLex Fridman
Yes.
- GVGuillaume Verdon
... right?
- LFLex Fridman
Yes.
- GVGuillaume Verdon
By augmenting ourselves with AI is uni- unimaginable right now. The, the opportunity cost... We're- we're at a fork in the road, right? Whether we take the path of creating these technologies, augment ourselves, and get to climb up the Kardashev scale, become multi-planetary with the aid of AI, or we have a hard cutoff of, like, we don't birth these technologies at all, and then we leave all the potential upside on the table.
- LFLex Fridman
Yeah.
- GVGuillaume Verdon
Right? And to me, out of responsibility to the future humans we could carry, right? With higher carrying capacity by scaling up civilization. Out of responsibility to those humans, I think we have to make the greater, grander future happen.
- LFLex Fridman
Is there a middle ground between cutoff and all systems go? Is there some argument for caution?
- GVGuillaume Verdon
I think, like I said, the market will exhibit caution. Every organism, company, consumer is acting out of self-interest, and they won't assign capital to things that have negative utility to them.
- LFLex Fridman
The problem is, with the market, is like, you know, there's not always perfect information. There's manipulation. There's, uh, bad-faith actors that mess with the system. It's not, it's not always a, um, rational and honest system.
- GVGuillaume Verdon
Well, that's why we need freedom of information, freedom of speech, and freedom of thought in order to converge, be able to converge on, uh, the subspace of technologies that have positive utility for us all, right?
- 57:56 – 1:13:23
p(doom)
- LFLex Fridman
Well, let me ask you about p(doom), probability of doom. That's just fun to say, but not fun to experience. Uh, what is, to you, the probability that AI eventually kills all or most humans, also known as probability of doom?
- GVGuillaume Verdon
I'm not a fan of that calculation. I think it's, uh, people just throw numbers out there. Uh, and it's a very sloppy calculation, right? To calculate a probability, you know, let's say you model the world as some sort of Markov process, if you have enough variables, or hidden Markov process. You need to do a s- stochastic path integral through the space of all possible futures, not just the futures that your brain naturally steers towards, right? Um, I think that the estimators of p(doom) are biased because of our biology, right? We're, we've evolved to, uh, have biased sampling towards negative futures that are scary because that was an evolutionary optimum, right? And so people that are of, let's say, higher neuro- neuroticism will just think of, uh, negative futures where everything goes wrong all day every day and, and claim that they're doing unbiased sampling. And, and in a sense, like, they're not normalizing for the space of all possibilities. And the space of all possibilities is, like, super exponentially large. And it's very hard to have this estimate. And in general, I don't think that we can predict the future w- with that much granularity because of, of chaos, right? If you have a complex system, you have some uncertainty in a couple of variables, if you let time evolve, you have this concept of a, a Lyapunov exponent, right? A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so, uh, over time. And, um, I think we, we need to show some humility, uh, that we can't actually predict the future. All we know, the only prior we have is the laws of physics, and that's, that's what we're arguing for. The laws of physics say the system will wanna grow. And subsystems that are optimized for growth and replication are more likely in the future. And so we should aim to maximize our current mutual information with the future, and the path towards that is for us to accelerate rather than decelerate.
So I don't have a p(doom), uh, 'cause I think that, you know, similar to... the quantum supremacy experiment at Google. I was in the room when they were running the simulations for that. That was an example of a quantum chaotic system, where you, you cannot even estimate probabilities of, uh, certain outcomes, uh, with e- even the biggest supercomputer in the world. Right? And, um, so that's an example of chaos. And I think the system is far too chaotic for anybody to have an accurate, uh, estimate of the likelihood of certain futures. If they were that good, I think they would be very rich, uh, trading on the stock market.
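The Lyapunov-exponent point is easy to demonstrate on the simplest chaotic system, the logistic map x → 4x(1−x), whose exponent is exactly ln 2: a bit of fuzz in the initial condition doubles every iteration until it saturates. A hedged sketch (the choice of map is mine, not from the conversation):

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)  # logistic map in its chaotic regime

# The Lyapunov exponent is the long-run average of log|f'(x)| along an
# orbit; for this map it is exactly ln 2, so a perturbation of size d
# grows like d * 2^t -- "a bit of fuzz becomes a lot of fuzz".
x = 0.3
acc = 0.0
n = 10_000
for _ in range(n):
    acc += math.log(abs(4.0 * (1.0 - 2.0 * x)))  # log|f'(x)|
    x = logistic(x)

lyapunov = acc / n
print(lyapunov)  # ≈ ln 2 ≈ 0.693: uncertainty doubles every step
```

At doubling-per-step, a 10⁻¹⁰ uncertainty in one variable swamps the whole forecast in about 33 steps, which is the quantitative content of "we can't predict the future with that much granularity."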
- LFLex Fridman
But nevertheless, it's true that humans are biased, grounded in our evolutionary biology, scared of everything that can kill us. But we can still imagine different trajectories that can kill us. We don't know, uh, all the other ones that don't necessarily, but it's still, I think, useful combined with some basic intuition grounded in human history to reason about like what ... Like looking at geopolitics, looking at basics of human nature.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
How can powerful technology hurt a lot of people? And it just seems grounded in that, looking at nuclear weapons, you can start to estimate p(doom), in a s- in a- maybe in a more philosophical sense, not- not a mathematical one. Philosophical meaning, like is there a chance? Does human nature tend towards that or not?
- GVGuillaume Verdon
I- I think to me one of the biggest existential risks would be the concentration of the power of AI in the hands of the very few, especially if it's a mix between the companies that control the flow of information and the government. Because that could, uh, set things up for a sort of dystopian future where only a very few, an oligopoly in the government, have AI, and they could even convince the public that AI never existed.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And that opens up sort of these scenarios for authoritarian centralized control, which to me is the- the darkest timeline. And the reality is that we have- we have a prior, we have a data-driven prior of these things happening, right? When you give too much power, when you centralize power too much, um, humans do horrible things, right? Um, and to me, that has a much higher likelihood in my Bayesian inference than, uh, sci-fi based priors, right? Like, a prior that came from The Terminator movie. Um, and so when I talked to these AI doomers, I just asked them to trace a path through this Markov chain of events that would lead to our doom, right? And to actually give me a good probability for each transition. And very often there's an unphysical or highly unlikely transition in that chain, right? But of course, we're wired to fear things, and we're wired to respond to danger, and we're wired to deem the unknown to be dangerous, because that's a good heuristic for survival, right? But there's much more to lose out of fear. Uh, we have so much to lose, so much upside to lose by preemptively stopping the positive futures from- from happening out of fear. Um, and so I think that we shouldn't, uh, give into fear. Uh, fear is the mind killer. I think it's also the civilization killer.
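His request to the doomers has a simple arithmetic shape: a doom scenario is a path through a chain of conditional events, and the path's probability is the product of its transition probabilities, so a single unlikely link collapses the whole estimate. A sketch with made-up numbers (the structure, not the values, is the point):

```python
import math

# Hypothetical transition probabilities for one doom path -- purely
# illustrative numbers, not anyone's actual estimates.
transitions = {
    "AGI is built": 0.9,
    "it develops adversarial goals": 0.3,
    "it escapes all oversight": 0.1,
    "it acquires decisive physical capability": 0.05,
    "no countermeasure works": 0.1,
}

# The probability of the whole path is the product of its links.
p_path = math.prod(transitions.values())
print(f"p(this particular path) = {p_path:.2e}")  # 1.35e-04
```

One "unphysical or highly unlikely transition" anywhere in the chain drags the product toward zero, even if every other link is near-certain; conversely, quoting a large p(doom) requires every link to be plausible at once.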
- LFLex Fridman
We can still think about the various ways things go wrong. F- for example, the founding fathers of this, uh, the United States thought about human nature and that's why there- there's a discussion about the freedoms that are necessary. They really deeply deliberated about that. And I think the same could possibly be done for AGI. It is true that history, human history shows that we tend towards centralization, or at least when we achieve centralization, a lot of bad stuff happens. When there's a dictator, a lot of dark bad things happen. The question is, can AGI become that dictator? Can AGI when developed become the centralizer because of its power? Maybe has the same, because of the alignment of humans perhaps, the same tendencies, the same, uh, Stalin-like tendencies to centralize and manage centrally-
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
... the allocation of resources. And you can even see that as a compelling argument on the surface level. Well, if AGI is so much smarter, so much more efficient, so much better at allocating resources, why don't we outsource it to the AGI? And then eventually, whatever forces that, uh, corrupt the human mind with power could do the same for AGI. It would just say, "Well, humans are dispensable. We'll get rid of them." Do the Jonathan Swift modest proposal from a few centuries ago, I think the 1700s, when he satirically suggested that, I think it's in Ireland, that the- the- the children of poor people are fed as food to the rich people, and that would be a good i- idea because it decreases the amount of poor people and gives extra income to the poor people. So it's on several accounts decreases the amount of poor people, therefore more people become rich. Uh, of course it m- misses a fundamental piece here that's hard to put into a mathematical equation of the basic value of human life. So, all of that to say, are you concerned about AGI being the very centralizer of power that you just talked about?
- GVGuillaume Verdon
I do think that, um, right now there's a bias towards over-centralization of AI because of, uh, compute density and centralization, centralization of data, and how we're training, uh, models. Um, I think over time we're gonna run out of data to scrape over the internet, and I think that ... Well, actually I'm working on increasing the compute density so that compute can be everywhere, and acquire information and test hypotheses in the environment in a distributed, uh, fashion. I think that fundamentally, centralized cybernetic control, so having one intelligence that is massive, that, you know, fuses many sensors and is trying to perceive the world accurately, predict it accurately, predict many, many variables, and control it, right, enact its will on the world, I think that's just never been the optimum, right? Like let's say you have a, a company, you know, if you have a company, I don't know, of 10,000 people that all report to the CEO, even if that CEO is an AI, I think it would struggle to fuse all of the information that is coming to it and then predict the whole system, and then to enact its, its will. What has emerged in nature and in corporations and all sorts of systems is a notion of sort of hierarchical cybernetic control, right? You have, uh, you know, in a company it would be you have like the individual contributors. They're self-interested and, and, and, uh, they're trying to achieve their, their tasks, and they, they have a f- a fine, in terms of time and space, if you will, control loop and, and, and field of perception, right? Um, they have their code base. Let's say you're in a software company. They have their code base, they iterate on it, uh, intraday, right? And then the management maybe checks in. It has a wider scope. It has, let's say five reports, right? And then it samples each, um, person's update once per week, and then you can go up the chain, and you have larger time scale and, and greater scope.
And that seems to have emerged as sort of the, the optimal way to, to control systems. And, and really, that's what capitalism gives us, right? You have these, these hierarchies, and you can even have like parent companies and so on. And so that is far more fault tolerant. In quantum computing, that's my field I came from, we have a, a concept of, of this fault tolerance and quantum error correction, right? Quantum error correction is detecting a fault that came from noise, predicting how it's propagated through the system, and then correcting it, right? So it's a cybernetic loop. And it turns out that, uh, decoders, uh, that are hierarchical, and at each level of the hierarchy are local, uh, perform the best by far, and are far more fault tolerant. And the reason is if you have a non-local decoder, then you have one fault at, at this, uh, control node, and the whole system sort of crashes. Similarly to if you have, uh, you know, uh, one CEO that everybody reports to, and that CEO goes on vacation, the whole company comes to a crawl, right? Um, and so to me, I think that yes, we're seeing a tendency towards centralization of AI, but I think there's gonna be a correction over time where intelligence is gonna go closer to the perception, and we're gonna, we're gonna break up AI into, um, um, smaller subsystems that communicate with one another and form a sort of meta, uh, system.
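One way to make the hierarchical-decoder intuition quantitative is a toy reliability model. Assume (my assumption, purely illustrative) that a control node's failure probability grows linearly with the number of streams it must fuse, and that a failed node stalls everything beneath it:

```python
EPS = 1e-4  # assumed failure probability per fused stream

def expected_loss_flat(workers):
    """Flat org: one controller fuses every worker's stream, and its
    failure stalls the entire organization."""
    p_root = EPS * workers
    return p_root * 1.0

def expected_loss_hierarchical(workers, span=10):
    """Two-level hierarchy: the root fuses `span` manager streams, each
    manager fuses workers/span worker streams. A failure only stalls
    the failed node's own subtree."""
    managers = span
    per_manager = workers // span
    p_root = EPS * managers          # root stalls everything
    p_manager = EPS * per_manager    # a manager stalls its fraction
    return p_root * 1.0 + managers * p_manager * (per_manager / workers)

print(expected_loss_flat(1000))          # 0.1
print(expected_loss_hierarchical(1000))  # ≈ 0.011
```

Under these assumed numbers the flat organization loses about 10% of expected output to control-node failures while the two-level hierarchy loses about 1%, because most faults stay local, echoing the local-decoder result he describes from quantum error correction.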
- LFLex Fridman
So if you look at the hierarchies that are in the world today, there's nations, and those are hierarchical, but in relation to each other, nations are anarchic. So it's an anarchy.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
Y- do you foresee a world like this where there's not a over ... What'd you call it? A centralized cybernetic control?
- GVGuillaume Verdon
L- centralized locus of control, yeah.
- LFLex Fridman
Is ... So like, that's suboptimal you're saying?
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
So it would be always a state of competition at the very-
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
... top level?
- GVGuillaume Verdon
Yeah, just like, you know, in a company you may have like, uh, two units working (laughs) on similar technology and competing with one another, and you, you prune the one that performs not as well, right? And that's a sort of selection process for a tree, or a product gets killed, right, and then a whole org gets fired. And that's ... This process of, of trying new things and, and, and shedding old things that didn't work is this- it's what gives us adaptability and helps us converge on, uh, you know, the technologies and things to do that are most good.
- LFLex Fridman
I just hope there's not a failure mode that's unique to AGI versus humans, 'cause you're describing human systems mostly right now.
- GVGuillaume Verdon
Right.
- LFLex Fridman
I just hope when there's a monopoly in AGI in one company that we'll see the same thing we see with humans, which is another company will spring up and start competing effectively.
- GVGuillaume Verdon
I mean, that's been the case so far, right?
- LFLex Fridman
Yeah.
- GVGuillaume Verdon
We have OpenAI, we have Anthropic, now we have xAI. Uh, you know, we had Meta even for open source, it was ... And now we have Mistral, right, which is highly competitive. And so that's the beauty of capitalism. You don't have to trust any one party too much, 'cause we're kind of always hedging our bets at every level. There's always competition, and that's the most, um, beautiful thing to me at least, is that the whole system is always shifting and always adapting, and maintaining that dynamism is how we avoid tyranny.... right? Ma- making sure that, um, everyone has access to, to these tools, to, to these models and can contribute to the research, uh, um, avoids a sort of neural tyranny where very few pa- pe- people have control over AI for the world and, and use it to oppress, uh, those around them.
- 1:13:23 – 1:26:41
Quantum machine learning
- LFLex Fridman
When you were talking about intelligence, you mentioned multipartite quantum entanglement.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
So high level question first is, what do you think is intelligence? When you think about quantum mechanical systems and you observe some kind of computation happening in them, what do you think is intelligent about the kind of computation the universe is able to do? A small, small inkling of which is the kind of computation the human brain is able to do.
- GVGuillaume Verdon
I, I would say like intelligence and computation aren't quite the same thing. I think that the universe is very much, you know, doing a, a quantum computation. If you had access to all the degrees of freedom, you could, in a very, very, very large quantum computer with many, many, many qubits, uh, let's say a few qubits per, uh, Planck volume, right? Um, which was more or less the pixels we have, uh, then you, you'd be able to simulate the whole universe, right? Uh, on a, on a sufficiently large quantum computer, assuming you're looking at a finite volume, of course, of the universe. Um, I think that, at least to me, intelligence is the, you know, I go back to cybernetics, right? The ability to perceive, predict, and control our world. But really it's, nowadays, it seems like a lot of intelligence, um, we use is more about compression, right?
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
It's about, um, it's about operationalizing information theory, right? In information theory, you have the notion of entropy of a distribution or a system, and entropy tells you that you need this many bits, uh, to encode this distribution or this subsystem if you had the most optimal code.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And AI, at least the way we ... we would do it today for LLMs and for quantum, uh, is, is very much trying to minimize, uh, relative entropy between our models of the world and the world, distributions from the world. And so we're learning, we're searching over the space of computations to process the world, to find that compressed representation that has distilled all the variance and noise and entropy, right? Um, and originally, I, I came to quantum machine learning from the study of black holes because the entropy of black holes is very interesting. In a sense, they're physically the most dense objects in the universe. You can't pack more information spatially a- any more densely than a black hole. And so I was wondering, how do black holes actually encode information? What is their compression code? And so that got me into the space of algorithms to search over space of quantum, uh, codes. Um, and, uh, it got me actually into also how do you acquire quantum information from the world, right? So something I've worked on, uh, this is public now, is quantum analog-digital conversion. So how do you capture information from the real world in superposition and not destroy the superposition, but digitize, for a quantum mechanical computer, uh, information from the real world? Um, and so if you have an ability to capture quantum information and search over learned representations of it, now you can learn compressed representations that may have some useful information in their latent representation, right? Um, and I think that many of the problems facing our civilization are actually beyond this, this complexity barrier, right? I mean, the greenhouse effect is a quantum mechanical effect, right? Chemistry is quantum mechanical. Um, you know, nuclear physics is quantum mechanical. A lot of biology and, and, and, and protein folding and so on is affected by quantum mechanics.
And so unlocking an ability to augment human intellect with quantum mechanical computers and quantum mechanical AI seemed to me like a fundamental capability for civilization that we, we needed to develop. Um, so I spent several years doing that. Um, but over time, I kind of grew weary of the, the timelines that were starting to look like nuclear fusion.
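The relative-entropy objective he describes is concrete enough to compute: learning drives D(world ‖ model) toward zero, and the KL divergence measures the extra bits a code tuned to the model wastes on data from the world. A small sketch (the distributions are invented):

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits: the extra bits paid to encode
    samples from the world p with a code built for the model q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

world = [0.5, 0.25, 0.125, 0.125]    # hypothetical "true" distribution
model_a = [0.25, 0.25, 0.25, 0.25]   # uninformed model
model_b = [0.5, 0.25, 0.125, 0.125]  # model that matches the world

print(kl_divergence(world, model_a))  # 0.25 -- bits wasted per symbol
print(kl_divergence(world, model_b))  # 0.0 -- all structure absorbed
```

Minimizing this quantity over a space of computations is exactly the "searching for the compressed representation that has distilled all the variance and noise" he describes.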
- LFLex Fridman
One high level question I can ask is maybe by way of definition, by way of explanation, what is a quantum computer and what is, uh, quantum machine learning?
- GVGuillaume Verdon
Hmm. So a quantum computer really is a quantum mechanical system over which we have sufficient control, and it can maintain its quantum mechanical state.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And quantum mechanics is how nature behaves at the very small scales, when things are very small or very cold, and it's actually more fundamental than probability theory. So we're used to things being this or that. Uh, but we're not used to thinking in superpositions 'cause, uh, well, our brains can't, uh, can't do that. So, we, we have to translate the quantum mechanical world to, say, linear algebra to grok it. Unfortunately, that translation is exponentially inefficient on average. You have to represent things with very large matrices. But really, you can make a quantum computer out of many things, right? And we've seen all sorts of players, you know, from neutral atoms, trapped ions, superconducting metal, um, photons at different frequencies. I think you could make a quantum computer out of many things. But to, to me, the thing that was really interesting was both quantum machine learning was about understanding the quantum mechanical world with quantum computers, so embedding the physical world into AI representations. And quantum computer engineering was embedding AI algorithms into the physical world. So, this bidirectionality of embedding the physical world into AI, AI into the physical world, this sym- symbiosis between physics and AI, really, that's the sort of core of my quest, really. Uh, even to this day, after quantum computing. It's still this sort of, um, journey to merge, really, physics and AI, fundamentally.
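The "exponentially inefficient on average" translation has a concrete cost: an n-qubit state is a unit vector of 2ⁿ complex amplitudes, so its classical linear-algebra description blows up exponentially. A quick sketch (memory figures assume 16-byte double-precision complex amplitudes):

```python
import math

def amplitudes_needed(n_qubits):
    """An n-qubit pure state is a vector of 2**n complex amplitudes."""
    return 2 ** n_qubits

for n in (10, 30, 50):
    amps = amplitudes_needed(n)
    gib = amps * 16 / 2**30  # 16 bytes per complex128 amplitude
    print(f"{n} qubits -> {amps} amplitudes (~{gib:.3g} GiB)")

# A 2-qubit entangled Bell state (|00> + |11>)/sqrt(2), stored explicitly:
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
norm = sum(abs(a) ** 2 for a in bell)
print(norm)  # ≈ 1.0 (normalized)
```

At 50 qubits the explicit vector already needs ~16 PiB of memory, which is why superpositions with strong multipartite entanglement can't be grokked through classical matrices.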
- LFLex Fridman
So, quantum machine learning is a way to do machine learning on a, uh, representation of nature that is, you know, stays true to the quantum mechanical aspect of nature?
- GVGuillaume Verdon
Yeah. It's learning quantum mechanical representations. That would be quantum deep learning. Um, alternatively, you can try to do classical machine learning on a quantum computer. I wouldn't advise it, because, um, you may have some speed ups. But very often, the speed ups come with huge costs. Using a quantum computer is very expensive. Why is that? Because you assume the computer is operating at zero temperature, which no physical system in the universe can achieve. So, what you have to do is what I've been mentioning, this quantum error correction process, which is really an algorithmic fridge, right? It's trying to pump entropy out of the system, trying to get it closer to, to zero temperature. And when you do the calculations of how many resources it would take to, say, do deep learning on a quantum computer, classical deep learning, uh, there's just such a huge overhead. It's not worth it. It's like thinking about shipping something across a city using a rocket, and going to orbit and back. It doesn't make sense. Just use, uh, a, you know, delivery truck, right?
- LFLex Fridman
What kind of stuff can you figure out, can you predict, can you understand with quantum deep learning that you can't with deep learning? So, incorporating quantum mechanical systems into the, into the learning process.
- GVGuillaume Verdon
I think that's a great question. I mean, fundamentally, it's any system that has sufficient, uh, quantum mechanical, uh, correlations that are very hard to capture for classical representations. Then there should be an advantage for a quantum mechanical representation over a purely classical one. The question is, which systems have sufficient correlations that are very quantum, uh, but it's also, um, which systems are still relevant to industry? That's a big question. You know, people are leaning towards chemistry, uh, nuclear physics. Uh, um, I've worked on, actually, processing inputs from quantum sensors, right? If you have a network of quantum sensors, they've captured a quantum mechanical image of the world, and how to post-process that, that becomes a sort of quantum form of machine perception. And so for example, uh, Fermilab has a project exploring detecting dark matter with these quantum sensors.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And to me, uh, that's in alignment with my quest to understand the universe ever since I was a child. And so some day, I hope that we can have very large networks of quantum sensors that help us, um, peer into the earliest parts of the, the universe, right? For example, LIGO is a quantum sensor, right? It's just a very large one. Um, so, uh, yeah, I would say quantum machine perception, uh, simulations, right, grokking quantum simulations similar to AlphaFold, right? AlphaFold understood the probability distribution over configurations of proteins. You can understand quantum distributions over configurations of electrons, uh, more efficiently with quantum machine learning.
- LFLex Fridman
You co-authored a paper titled A Universal Training Algorithm For Quantum Deep Learning, uh, th- that involves baqprop with a Q. Very well done, sir.
- GVGuillaume Verdon
(laughs)
- LFLex Fridman
Very well done. How does it work? Is it, is there some interesting aspects you can just mention, uh, on how kind of, you know, backprop, and some of these things we know for classical machine learning transfer over to the-
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
... like the quantum machine learning?
- GVGuillaume Verdon
Yeah. That was, that was a, that was a funky paper. That was one of my first papers in, in quantum deep learning. Everybody was saying, "Oh, I think deep learning is gonna be sped up by quantum computers." And I was like, "Well, the best way to predict the future is to invent it. So here's a 100-page paper. (laughs) Have fun." Um, essentially, y- you know, in quantum computing, you usually embed, uh, reversible operations into a quantum computation.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And so the trick there was to do a feedforward operation, and do what we call a phase kick. But really, it's just a force kick. You just kick the system, uh, with a certain force that is, you know, proportional to your loss function that you, you wish to optimize. And then by performing uncomputation-... you start with the superpositions over- the superposition over parameters, right? Which is pretty funky. Now, you're not just- you don't have just a point for parameters, you have a superposition over many potential parameters, right?
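The "force kick" Verdon describes, multiplying each amplitude in the parameter superposition by a phase proportional to the loss, can be sketched in a small classical simulation. Everything here is an illustrative assumption rather than the paper's actual construction: a single parameter on a 1-D grid, a quadratic toy loss, a Gaussian initial superposition, and hbar = 1.

```python
import numpy as np

# Classical statevector sketch of a "phase kick" on a superposition over
# parameters. Assumptions (illustrative only): one parameter theta on a
# 1-D grid, a quadratic toy loss, a Gaussian initial state, hbar = 1.

N = 1024
theta = np.linspace(-4, 4, N, endpoint=False)
dtheta = theta[1] - theta[0]

def loss(t):
    return (t - 1.0) ** 2  # toy loss with its minimum at theta = 1

# Broad superposition over parameter values, centered away from the optimum.
psi = np.exp(-theta ** 2)
psi = psi / np.linalg.norm(psi)

def mean_momentum(state):
    """Expectation of the momentum operator, computed in Fourier space."""
    state_k = np.fft.fft(state)
    k = 2 * np.pi * np.fft.fftfreq(N, d=dtheta)
    return np.sum(k * np.abs(state_k) ** 2) / np.sum(np.abs(state_k) ** 2)

gamma = 0.3  # kick strength, i.e. how strongly the loss is written into the phase
kicked = psi * np.exp(-1j * gamma * loss(theta))  # the phase kick itself

print(mean_momentum(psi))     # ~0: no momentum before the kick
print(mean_momentum(kicked))  # ~ -gamma * <dL/dtheta> = 0.6, kicked toward the minimum
```

Note that the kick changes only phases, so measurement probabilities over theta are untouched; what it does impart is a mean momentum proportional to minus the loss gradient, which subsequent kinetic evolution converts into motion of the parameter wavepacket toward the minimum.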
- LFLex Fridman
Yeah. Mm-hmm.
- GVGuillaume Verdon
And our goal is to-
- LFLex Fridman
Is using phase kicks somehow?
- GVGuillaume Verdon
Right.
- 1:26:41 – 1:35:15
Quantum computer
- LFLex Fridman
So, maybe this is a good place to ask the difference between the different fields that you've had a toe in.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
So, mathematics, physics, engineering, and also, you know, entrepreneurship. Like the different layers of the stack.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
I think a lot of the stuff you're talking about here is a little bit on the math side, maybe physics, almost working in theory.
- GVGuillaume Verdon
Mm-hmm.
- LFLex Fridman
What's the difference to you between math, physics, engineering, and, uh, you know, mak- making a product-
- GVGuillaume Verdon
(laughs) .
- LFLex Fridman
... for quantum computing, for quantum machine learning?
- GVGuillaume Verdon
Yeah, I mean, you know, some of the original team, uh, for the TensorFlow Quantum project, which we started, you know, in school at the University of Waterloo. There was myself, uh, you know, initially I was a- a physicist, a mathematician. We had a computer scientist, uh, we had a mechanical engineer, and then we had a physicist that was primarily experimental. And so putting together teams that are very cross-disciplinary and figuring out how to communicate and- and share knowledge is really the key to doing this sort of interdisciplinary engineering work. Um, I mean, there is- there is a big, uh, difference. You know, in- in mathematics, you can explore mathematics for mathematics' sake. In physics, you're applying mathematics to understand, uh, the world around us. Uh, and in engineering, you're trying to- you're trying to hack the world, right? You're trying to find how to apply the physics that I know, my knowledge of the world, to- to do things.
- LFLex Fridman
Well, in quantum computing in particular, I think there's, uh, just a lot of limits to engineering. It just seems to be extremely hard.
- GVGuillaume Verdon
Yeah.
- LFLex Fridman
So, there's a lot of value to be, uh, exploring quantum computing, quantum machine learning in, uh, theory.
- GVGuillaume Verdon
Right.
- LFLex Fridman
In- with- with- with math. And so I guess one question is, why is it so hard to build a quantum computer? What are- what's your view of timelines in bringing these ideas to life?
- GVGuillaume Verdon
Right. I- I think that, um, you know, an overall theme of my company is, uh, that we have folks that are, uh, you know, there's a sort of exodus from quantum computing, and we're going to broader physics-based AI that is not quantum. So, that gives you a hint and, um...
- LFLex Fridman
So, we should say the name of your company is Extropic.
- GVGuillaume Verdon
Extropic, that's right. And we do physics-based AI, primarily based on thermodynamics rather than quantum mechanics. But essentially, a quantum computer is very difficult to build because you have to induce this sort of zero temperature subspace of information, and the way to do that is by encoding information. You encode a code within a code within a code within a code, and so there's a lot of redundancy needed to do this error correction. But ultimately, it's a sort of, um, algorithmic refrigerator, really. It's just pumping entropy out of the sys- the subsystem that is virtual and- and delocalized, that represents your "logical qubits", a.k.a. the- the payload quantum bits in which you actually want to run your quantum mechanical program. It's very difficult because in order to scale up your quantum computer, you need each component to be of sufficient quality for it to be worth it.
- LFLex Fridman
Hmm.
- GVGuillaume Verdon
Because if you try to do this error correction, this quantum error correction process, and your control over each quantum bit is insufficient, um, uh, it's not worth scaling up. You're actually adding more errors than you remove. And so there's this notion of a threshold, where if your quantum bits are of sufficient quality in terms of your control over them, it's actually worth scaling up. And actually, in recent years, people have been crossing the threshold, and it's starting to be worth it. And so it's just a very long slog of engineering, but ultimately, it's really crazy to me how much exquisite level of control we have over these systems. It's actually quite crazy. Uh, and we're- people are crossing, you know, they're achieving milestones. It's just, you know, in general, the media always gets ahead, right, of where the technology is. There's a bit too much hype. It's good for fundraising, but sometimes, you know, it causes winters, right? It's the hype cycle. ... I'm bullish on quantum computing on a 10-, 15-year timescale, uh, personally. But I think there's other quests that can be done, uh, in the meantime. I think it's in good hands right now.
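The threshold behavior Verdon describes, where scaling up helps below threshold and hurts above it, can be illustrated with a common rule-of-thumb scaling for surface codes, p_logical ≈ A (p_physical / p_threshold)^((d+1)/2). The constants below (A = 0.1, p_threshold = 1e-2) are illustrative assumptions, not measured values for any real device.

```python
# Toy illustration of the quantum error-correction threshold.
# Assumption: the rule-of-thumb surface-code scaling
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) // 2)
# with illustrative constants A = 0.1 and p_threshold = 1e-2.

A = 0.1
P_THRESHOLD = 1e-2

def logical_error_rate(p_physical, distance):
    """Approximate logical error rate of a distance-d code (toy model)."""
    return A * (p_physical / P_THRESHOLD) ** ((distance + 1) // 2)

for d in (3, 5, 7):
    below = logical_error_rate(5e-3, d)  # below threshold: scaling up suppresses errors
    above = logical_error_rate(2e-2, d)  # above threshold: scaling up adds errors
    print(f"d={d}: below threshold {below:.1e}, above threshold {above:.1e}")
```

With physical error rates below the threshold, each increase in code distance multiplies the logical error rate by a factor smaller than one; above it, the same redundancy compounds the errors instead, which is why component quality must cross the threshold before scaling is worthwhile.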
- LFLex Fridman
Well, let me just explore different beautiful ideas, large or small, in quantum computing that might jump out at you from memory. So you co-authored a paper titled Asymptotically Limitless Quantum Energy Teleportation via Qudit Probes. So just, uh, out of curiosity, uh, can you explain what a qudit is, which is a qubit-
- GVGuillaume Verdon
Yeah, it's a-
- LFLex Fridman
... or a D-
- GVGuillaume Verdon
... it's a D-state, uh, qubit.
- LFLex Fridman
It's multi-dimensional.
- GVGuillaume Verdon
Multi-dimensional, right. So it's like a... Well, I... You know, can you have a notion of, like, an integer or floating point that is quantum mechanical? That's something I've had to think about. Um, I think that research was a precursor to later work on quantum analog-digital conversion.
- LFLex Fridman
Ah.
- GVGuillaume Verdon
That- that was interesting, because during my master's, I was trying to understand the energy and entanglement of the vacuum, right?
- LFLex Fridman
Mm-hmm.
- 1:35:15 – 1:40:04
Aliens
- LFLex Fridman
Uh, but since you mentioned, uh, UAPs, uh, we talked about intelligence, and I forgot to ask, what- what's your view on the other possible intelligences that are out there at the-
- GVGuillaume Verdon
Right.
- LFLex Fridman
... the meso scale?
- GVGuillaume Verdon
(laughs) .
- LFLex Fridman
Do you think there's other intelligent alien civilizations? Is that useful to think about? How often do you think about it?
- GVGuillaume Verdon
I think it's- I think it's useful to think about. It's useful to think about because we gotta ensure we're anti-fragile and we're, you know, trying to increase our capabilities as fast as possible, because we could get disrupted. Like there's no laws of physics against there being life elsewhere that could evolve and become an advanced civilization and- and eventually come to us.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
Uh, do I think they're here now? I'm not sure. Uh, I mean, I've- I've- I've read what most people have read on the- the topic. Um, I think it's interesting to consider, and to me it's a useful thought experiment to instill a sense of urgency in developing technologies and increasing our capabilities to make sure we don't get disrupted, right? Whether it's a form of- of AI that disrupts us or a foreign intelligence from a different planet. Like either way, like increasing our capabilities and becoming formidi- formidable as humans, um, I think that's- that's really important, so that we're robust against whatever the universe throws at us.
- LFLex Fridman
But to me, it's also interes- an- an interesting challenge and thought experiment on how to perceive intelligence. This has to do with quantum mechanical systems, this has to do with- with any kind of system that's not like humans.
- GVGuillaume Verdon
Mm.
- LFLex Fridman
So to me the thought experiment is, say the aliens are here or they are directly observable, we're just too blind.... too self-centered, um, don't have the right sensors.
- GVGuillaume Verdon
Hmm.
- LFLex Fridman
Or don't have the right processing of the sensor data to see the obvious intelligence that's all around us.
- GVGuillaume Verdon
Well, that's why we work on quantum sensors, right? They can sense gravity.
- LFLex Fridman
Yeah, gra- but there could be st- so that's a good one, but there could be other stuff that's not even in the, uh, currently known f- forces of physics.
- GVGuillaume Verdon
Right.
- LFLex Fridman
There could be some other stuff. And the most entertaining thought experiment to me is that it's other stuff that's obvious. It's not like we don't, we lack the sensors, it's all around us.
- GVGuillaume Verdon
Hmm.
- LFLex Fridman
You know, co- you know, the, the, the consciousness being one possible one, but there could be stuff that's just like obviously there, that once you know it, it's like, oh, right, right. That's, that's, that, the thing we thought is somehow emergent from the laws of physics as we understand them-
- GVGuillaume Verdon
Hmm.
- LFLex Fridman
... is actually a fundamental part of the universe and can be incorporated in physics once understood.
- GVGuillaume Verdon
Statistically speaking, right, if we observed some sort of alien life, it would most likely be some sort of virally self-replicating Von Neumann-like probe system, right? And, and it's possible that there, you know, there are such systems that I don't know what they're doing at the bottom of the ocean allegedly, but maybe they're, you know, collecting minerals, uh, from the bottom of the ocean.
- LFLex Fridman
Yeah.
- GVGuillaume Verdon
Um, but that wouldn't viola- violate any of my priors. But am I certain that these systems are here? I- it'd be difficult for me to say so, right? I only have secondhand information about there being data.
- LFLex Fridman
About the bottom of the ocean? Yeah, but, you know, could it be things like memes? Could it be thoughts and ideas?
- GVGuillaume Verdon
Hmm.
- LFLex Fridman
Could they be operating in that medium? Could aliens be the very thoughts that come into my head? Like, what do you... Have you-
- GVGuillaume Verdon
(clears throat)
- LFLex Fridman
How do you know that the, how do you know that the, wh- what's the origin of ideas, in your mind? When an idea comes to your head, show me where it originates.
- GVGuillaume Verdon
I mean, frankly, (laughs) uh, when I had the idea for the type of computer I'm building now, I think it was eight years ago now, it really felt like it was being beamed from space. It, it (laughs) just, I was in bed just shaking, just thinking it through and I don't know. Uh, but do I believe that legitimately? I don't think so. But you know, I, I think that, um, alien life could take many forms and I think the notion of intelligence and the notion of life needs to be expanded, uh, much more broadly, uh, to be less anthropocentric or biocentric.
- 1:40:04 – 1:45:25
Quantum gravity
- LFLex Fridman
Just to linger a little longer on c- quantum mechanics, what's, uh, through all your explorations of quantum computing, what's the coolest, most beautiful idea that you've come across that has been solved or has not yet been solved?
- GVGuillaume Verdon
I think the journey to understand something called AdS/CFT. So the journey to understand quantum gravity through this picture where a hologram of lesser dimension is actually dual to, exactly corresponding to, a bulk, uh, theory of quantum gravity with an extra dimension. And the fact that this sort of duality comes from trying to learn deep learning-like representations of the boundary.
- LFLex Fridman
Mm-hmm.
- GVGuillaume Verdon
And so at least part of my journey, someday on my bucket list, is to apply quantum machine learning to, uh, these sorts of systems, these CFTs, or so-called SYK models, um, and learn an emergent geometry from, from the boundary theory. And so we can have a form of machine learning that helps us understand quantum gravity, right? Which is, you know, still a holy grail that I would like to hit before I leave this earth. (laughs)
- LFLex Fridman
W- what do you think is going on with black holes? As information storing and processing units, what do you think is going on with black holes?
- GVGuillaume Verdon
Black holes are really fascinating objects. They're at the inter- interface between quantum mechanics and gravity, and so they help us test all sorts of ideas. Um, I think that, you know, for many decades now, there's been sort of this black hole information paradox, that things that fall into the black hole seem to have lost their information. Now, I think there's this, uh, firewall paradox that has been allegedly resolved in recent years by, um, you know, a former peer of mine, uh, who's now a professor at Berkeley. And there, it seems like, as information falls into a black hole, there's sort of a sedimentation, right? As you get closer and closer to the horizon, from the point of view of the observer on the outside, the object slows down infinitely as it gets closer and closer. And so everything that is falling into a black hole, from our perspective, gets sort of sedimented and tacked onto the near horizon, and at some point it gets so close to the horizon, it's in the proximity, or the scale, at which quantum effects and quantum fluctuations matter. And there, that infalling matter could interfere with the traditional picture, with the creation and annihilation of particles and antiparticles in the vacuum, and through this interference... uh, one of the particles gets en- entangled with the infalling information, and the other one is now free and escapes. And that's how there's sort of mutual information between the outgoing radiation and the infalling matter. Uh, but getting that calculation right, I think we're only just starting to, uh, put the pieces together. Um-
- LFLex Fridman
There's a few pothead-like questions-
- GVGuillaume Verdon
(laughs)
- LFLex Fridman
... I want to ask you.
- GVGuillaume Verdon
Sure.
- LFLex Fridman
So one, does it terrify you that there's a giant black hole at the center of our galaxy?
- GVGuillaume Verdon
I don't know. I, I, I just want to, you know, set up shop near it to, to fast forward. You know, meet, uh, meet a future civilization, right? Like, if we have a limited lifetime, if you could go orbit a black hole and emerge, uh...
- LFLex Fridman
So if you were like, if there was a special mission that could take you to a black hole, would you volunteer to go travel?
- GVGuillaume Verdon
To orbit. And not-
- LFLex Fridman
To orbit.
- GVGuillaume Verdon
... obviously not fall into it. (laughs)
- LFLex Fridman
That's, that's obvious. So it's obvious to you that everything's destroyed inside a black hole? Like, all the information that makes up Guillaume is destroyed?
- GVGuillaume Verdon
Um...
- LFLex Fridman
Maybe on the other side, what if Jezos emerges?
Episode duration: 2:53:09
Transcript of episode 8fEEbKJoNbU