Lex Fridman Podcast

Jaron Lanier: Virtual Reality, Social Media & the Future of Humans and AI | Lex Fridman Podcast #218

Jaron Lanier is a computer scientist, composer, artist, author, and founder of the field of virtual reality.

Please support this podcast by checking out our sponsors:
- Skiff: https://skiff.org/lex to get early access
- Novo: https://banknovo.com/lex
- Onnit: https://lexfridman.com/onnit to get up to 10% off
- Indeed: https://indeed.com/lex to get $75 credit
- Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get special savings

EPISODE LINKS:
Jaron's Website: http://www.jaronlanier.com/
Jaron's Books: https://amzn.to/3tlhl9T

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
1:39 - What is reality?
5:52 - Turing machines
7:10 - Simulating our universe
13:25 - Video games and other immersive experiences
17:12 - Death and consciousness
25:43 - Designing human-centric AI
27:17 - Empathy with robots
31:09 - Social media incentives
43:29 - Data dignity
51:01 - Jack Dorsey and Twitter
1:02:46 - Bitcoin and cryptocurrencies
1:07:26 - Government overreach and freedom
1:17:41 - GitHub and TikTok
1:19:51 - The Autodidactic Universe
1:24:42 - Humans and the mystery of music
1:30:53 - Defining moments
1:41:39 - Mortality
1:43:31 - The meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Jaron Lanier (guest)
Sep 6, 2021 · 1h 52m

EVERY SPOKEN WORD

  1. 0:00 - 1:39

    Introduction

    1. LF

      The following is a conversation with Jaron Lanier, a computer scientist, visual artist, philosopher, writer, futurist, musician, and the founder of the field of virtual reality. To support this podcast, please check out our sponsors in the description. As a side note, you may know that Jaron is a staunch critic of social media platforms. He and I agree on many aspects of this, except perhaps I am more optimistic about it being possible to build better platforms and better artificial intelligence systems that put long-term interests and happiness of human beings first. Let me also say, a general comment about these conversations: I try to make sure I prepare well, remove my ego from the picture, and focus on making the other person shine as we try to explore the most beautiful and insightful ideas in their mind. This can be challenging when the ideas that are close to my heart are being criticized. In those cases, I do offer a little pushback, but respectfully, and then move on, trying to have the other person come out looking wiser in the exchange. I think there's no such thing as winning in conversations, nor in life. My goal is to learn and to have fun. I ask that you don't see my approach to these conversations as weakness. It is not. It is my attempt at showing respect and love for the other person. That said, I also often just do a bad job of talking, but you probably already knew that, so please give me a pass on that as well. This is the Lex Fridman Podcast, and here is my conversation with Jaron Lanier.

  2. 1:39 - 5:52

    What is reality?

    1. LF

      You're considered the founding father of virtual reality. Do you think we will one day spend most or all of our lives in, uh, virtual reality worlds?

    2. JL

      I have always found the very most valuable moment in virtual reality to be the moment when you take off the headset and your senses are refreshed and you perceive physicality, uh, afresh, you know, as if you were a newborn baby-

    3. LF

      Mm.

    4. JL

      ... but with a little more experience. So you can really notice just how incredibly strange and delicate and peculiar and impossible the real world is. Um...

    5. LF

      So the magic is, and perhaps forever will be, in the physical world?

    6. JL

      Well, that's my take on it. That's just me. I mean, I think I don't get to tell everybody else how to think or how to experience virtual reality, and at this point, there have been multiple generations of younger people who've come along and liberated me from having to worry about these things.

    7. LF

      (laughs)

    8. JL

      Uh, but I should say also, even in, uh, what somet- well, I called it mixed reality back in the day, and these days it's called augmented reality, uh, but with something like a HoloLens, even then, like one of my favorite things is to augment a forest, not because I think the forest needs augmentation, but when you look at the augmentation next to a real tree, the real tree just pops out as being astounding, you know? It's- it's interactive, it's changing slightly all the time if you pay attention. And it's hard to pay attention to that, but when you compare it to virtual reality, all of a sudden you do. And even in practical applications, uh, my- my favorite early application of virtual reality which we prototyped going back to the 80s when I was working with Dr. Joe Rosen at Stanford Med, near- near where we are now, uh, we made the first surgical simulator. And to go from the fake anatomy of the simulation, which is incredibly valuable for many things, for designing procedures, for training, for all kinds of things, then to go to the real person, boy, it's really something. Like, uh, surgeons really get woken up by that transition. It's very cool. So I think the transition is actually more valuable than the simulation.

    9. LF

      That's fascinating. I never really thought about that. It's almost ... it's- it's like traveling elsewhere in the physical space can help you appreciate how much you value your home once you return.

    10. JL

      Well, that's how I take it. I mean, um, once again, people have different attitudes towards it. All are welcome.

    11. LF

      What do you think is the difference between the virtual world and the physical meat space world that- that you are still drawn, for you personally, still drawn to the physical world? Like, there clearly then is a distinction. Is there some fundamental distinction, or is it the peculiarities of the current set of technology?

    12. JL

      In terms of the kind of virtual reality that we have now, uh, it's made of software, and software is- is terrible stuff.

    13. LF

      Yeah.

    14. JL

      Software is always the slave of its own history, its own legacy.

    15. LF

      (laughs)

    16. JL

      It's always, um, infinitely arbitrarily messy and arbitrary. Working with it brings out a certain kind of nerdy personality in people, or at least in me-

    17. LF

      Mm-hmm.

    18. JL

      ... which, um, I'm not that fond of. And there- there are all kinds of things about software I don't like. (laughs) And so that's different from the physical world. It's not something we understand, as you just pointed out. On the other hand, you know, I'm a little mystified when people ask me, "Well, do you think the universe is a computer?" And I have to say, well, I mean, what on earth could you possibly mean if you say it isn't a computer? If it isn't a computer, it wouldn't follow principles consistently and it wouldn't be intelligible, 'cause what else is a computer, ultimately? You know, I mean, a- a- and we have physics, we have technology, you know, so we can do technology, so we can program it. So I mean, of course it's some kind of computer. But I think trying to understand it as a Turing machine is probably a foolish approach.

  3. 5:52 - 7:10

    Turing machines

    2. LF

      Right. Th- that- that's the question, whether it- it performs, this computer we call the universe, performs the kind of computation that could be modeled as a- a universal Turing machine, or is it something much more fancy?... so fancy, in fact, that we- it may be beyond our cognitive capabilities to understand.

    3. JL

      Turing machines are kind of, um, I'd call them teases in a way.

    4. LF

      (laughs)

    5. JL

      'Cause like (laughs) if you have an infinitely smart programmer with an infinite amount of time, an infinite amount of memory, and an infinite clock speed, then they're universal.

    6. LF

      Yeah.

    7. JL

      But that cannot exist, so they're not universal in practice, and they- they actually are, in practice, a very particular sort of machine within, you know, the constraints, within the conservation principles of any reality that's worth being in, probably. (laughs) And so, uh, so I- I, uh, uh, I think universality of a particular model is probably a deceptive way to think. Even though at some sort of limit, of course, it's can- it's- something like that's gotta be true at some sort of high enough limit, but it's just not accessible to us, so what's the point?
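Jaron's point is mechanical as much as philosophical: a Turing machine is only "universal" if you grant it unbounded tape, time, and memory; cap any of those resources and you have one very particular finite machine. A minimal sketch makes this concrete (the interpreter and the example machine below are invented for illustration, not anything discussed in the episode):

```python
# Tiny Turing machine interpreter. The explicit max_steps bound is the
# point of the illustration: with it, this is a particular finite machine;
# only in the limit of unbounded steps and tape does universality appear.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """Run a Turing machine; rules maps (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol, '_' is blank
    for _ in range(max_steps):
        if state == "halt":
            cells = range(min(tape), max(tape) + 1)
            return "".join(tape.get(i, "_") for i in cells).strip("_")
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("resource bound hit: not universal in practice")

# Example machine: binary increment, head starting at the rightmost bit.
inc = {
    ("start", "1"): ("0", "L", "start"),  # carry propagates left
    ("start", "0"): ("1", "L", "done"),
    ("start", "_"): ("1", "L", "done"),   # number grew a new digit
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_tm(inc, "011", head=2))  # prints 100 (binary 3 + 1 = 4)
```

Delete `max_steps` and the sparse-tape trick lets the machine grow without bound, which is exactly the idealization Jaron calls a "tease": real machines always sit inside some conservation constraint.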

  4. 7:10 - 13:25

    Simulating our universe

    2. LF

      Well- well, to me, the question of, like, whether we're living inside a computer or a s- a simulation is interesting in the following way. There's a technical question here. How difficult is it to build a machine, not that simulates the universe, but that makes it sufficiently realistic that we wouldn't know the difference? Or better yet, sufficiently realistic that we would kinda know the difference, but we would prefer to stay in the virtual world anyway.

    3. JL

      Mm. I wanna give you a few different answers. I wanna give you the one that I think has the most practical importance to human beings right now.

    4. LF

      Mm-hmm.

    5. JL

      Which is that there's a kind of an assertion sort of built into the way the question's usually asked that I think is false.

    6. LF

      Mm-hmm.

    7. JL

      Which- which is a suggestion that people have a fixed level of ability to perceive reality in- in- in a given way. And actually, people are always learning, evolving, forming themselves. We're- we're fluid too. We're also pro- programmable, self-programmable, changing, adapting. And so, uh, y- my favorite way to get at this is to talk about the history of other media. So for instance, there was a peer review paper that showed that an early wire recorder playing back an opera singer behind a curtain was indistinguishable from a real opera singer.

    8. LF

      Hmm.

    9. JL

      And so, now, of course, to us, it would not only be distinguishable, but it would be very blatant 'cause the- the recording would be horrible. But to the people at the time without the experience of it, it seemed plausible. There was an early demonstration of extremely crude video teleconferencing between New York and DC in the '30s, I think so-

    10. LF

      Wow.

    11. JL

      ... that people viewed as being absolutely realistic and indistinguishable-

    12. LF

      Yeah.

    13. JL

      ... which to us would be horrible. Uh, and there are many other examples. Um, another one, one of my favorite ones, is in the Civil War era-

    14. LF

      (laughs)

    15. JL

      ... there were itinerant photographers who collected photographs of people who just looked kind of like a few archetypes.

    16. LF

      Mm-hmm.

    17. JL

      So you could buy a photo of somebody who looked kinda like your loved one (laughs) -

    18. LF

      (laughs)

    19. JL

      ... to remind you of that person 'cause actually photographing them was inconceivable and hiring a painter was too expensive, and you didn't have any way for the painter to represent them remotely anyway. How would they even know what they looked like?

    20. LF

      Wow.

    21. JL

      So these are all great examples of how in the early days of different media, we perceived the media as being really great, but then we evolved through the experience of the media. This gets back to what I was saying. Maybe the greatest gift of photography is that we can see the flaws in a photograph and appreciate reality more. Maybe the greatest gift of audio recording is that we can distinguish that opera singer now (laughs) from that recording of the opera singer on the horrible wire recorder. So- so we're- we're- we- we shouldn't limit ourselves by some assip- assumption of stasis that's incorrect. So, uh, that's the first thing- that's my first answer, which is, I think, the most important one.

    22. LF

      Yeah.

    23. JL

      Now, of course, somebody might come back and say, "Oh, but you know technology can go so far. There- there must be some point at which it would surpass." That's a different question. I- I think that's also an interesting question, but I think the answer I just gave you is actually the more important answer-

    24. LF

      Yeah.

    25. JL

      ... to the more important question.

    26. LF

      That's profound, yeah. But can you- can you... The second question, which you're now making me realize is way different, is it possible to create worlds in which people would want to stay instead of the real world?

    27. JL

      Well...

    28. LF

      Like, um, en masse, like, large numbers of- numbers of people.

    29. JL

      Uh, what I hope is, you know, as I said before, I hope that the experience of virtual worlds helps people appreciate this- this physical world we have and feel tender-

    30. LF

      Yeah.

  5. 13:25 - 17:12

    Video games and other immersive experiences

    2. LF

      Can you maybe, um... this is a therapy session, psychoanalyze me for a second?

    3. JL

      (laughs)

    4. LF

      Like for exa- I really like the Elder Scrolls series. It's a, it's a, uh, role-playing game. Uh, Skyrim, for example. Why do I enjoy so deeply just walking around that world and then there's people you could talk to and you can just like... it's an escape. But, you know, my l- my life is awesome. I'm tru- truly happy, but I also am happy with the, with the music that's playing and the, and the mountains and, uh, carrying around a sword and just-

    5. JL

      (laughs)

    6. LF

      ... that. I don't know what that is. It's very pleasant though to go there.

    7. JL

      Yeah.

    8. LF

      And I miss it sometimes.

    9. JL

      I think it's wonderful to love artistic creations. It's wonderful to love contact with other people. It's wonderful to love play and ongoing evolving meaning and patterns with other people. I think it's a, it's a good thing.

    10. LF

      (laughs)

    11. JL

      You know, I, um-

    12. LF

      Please celebrate

    13. NA

      Me again.

    14. JL

      ... my, my... I'm not, I'm not like anti-tech and I'm certainly not anti-digital tech. I'm anti, as everybody knows by now, I think the, you know, manipulative economy of social media is making everybody nuts and all that stuff, so I'm anti that stuff. But the core of it, of course, I worked for many, many years on trying to make that stuff happen because I think it can be beautiful. Like I, um, I don't... like why not?

    15. LF

      (laughs)

    16. JL

      You know? And, and by the way-

    17. LF

      (laughs)

    18. JL

      ... um, there's a thing about humans which is, um, uh... we're problematic. Any, any kind of social interaction with other people is gonna have its problems. People are political and tricky. And like I love classical music, but when you actually go to a classical music thing and it turns out, oh, actually, this is like a backroom power deal kind of place and a big status ritual as well, and that's kind of not as fun, um, that's part of the package. And the thing is it's always going to be. There's always gonna be a mix of things. Um, I, I don't, uh, I don't think the search for purity is gonna get you anywhere. So I'm not worried about that. I worry about the, the really bad cases where we're becoming... where we're making ourselves crazy or cruel enough that we might not survive. And I think, you know, the social media criticism rises to that level, but I'm glad you enjoy it. I think it's great.

    19. LF

      (laughs) And I like that you basically say that every experience has both the beauty and darkness as in with classical music. I also play classical piano-

    20. JL

      Oh.

    21. LF

      ... so I appreciate it very much. But it's interesting. I mean every... and even the darkness, it's... A Man's Search For Meaning with Viktor Frankl in the, in the, in the concentration camps, even there there's opportunity to discover beauty.

    22. JL

      Mm-hmm.

    23. LF

      Now, and, and so it's... (laughs) that's, that's the interesting thing about humans is the capacity to discover beautiful in the darkest of moments. But there's always the dark parts too.

    24. JL

      Well, I mean it's... our situation is structurally difficult. We are, um-

    25. LF

      (laughs) Structurally difficult.

    26. JL

      No, it is. It's true.

    27. LF

      I like it.

    28. JL

      We perceive socially, we depend on each other for our sense of place and, and perception of the world. I mean we're dependent on each other, and yet there's also a degree in which we're inevitably, um... we inevitably let each other down. Uh, we are set up to be competitive as well as supportive. I mean it's a, it's just... w- our fundamental situation is complicated and challenging, and I wouldn't have it any other way.

  6. 17:12 - 25:43

    Death and consciousness

    2. LF

      Okay, let's t- talk about one of the most challenging things.

    3. JL

      Mm-hmm.

    4. LF

      One, one of the things I unfortunately am very afraid of, being human allegedly. You wrote an essay on death and consciousness in which you write and note, "Certainly the fear of death has been one of the greatest driving forces in the history of thought and in the formation of the character of civilization, and yet it is underacknowledged. The great book on the subject, The Denial of Death by Ernest Becker, deserves a reconsideration." I'm Russian so I have to ask you about this. What- what's the role of death in life?

    5. JL

      See, you would have enjoyed coming to our house 'cause uh-

    6. LF

      (laughs)

    7. JL

      ... my wife is Russian and we also have-

    8. LF

      Awesome.

    9. JL

      ... we have a piano of such spectacular qualities you wouldn't... you would have freaked out-

    10. LF

      (laughs)

    11. JL

      ... if you played it, but anyway.

    12. LF

      Yeah.

    13. JL

      Let- we'll let all that go.

    14. LF

      (laughs)

    15. JL

      So, uh, the context in which... I, I remember that essay, uh, sort of. This was from maybe the '90s or something.

    16. LF

      Yeah.

    17. JL

      And, um, (laughs) I used to publish in a journal called The Journal of Consciousness Studies 'cause I was, I was interested in these endless debates about consciousness and science, uh-... which, uh, certainly continue today.

    18. LF

      Mm-hmm.

    19. JL

      And I was interested in how the fear of death and the denial of death played into different philosophical approaches to consciousness.

    20. LF

      Mm.

    21. JL

      Because (sighs) I, uh, I think on the one hand, uh, the sort of sentimental school of dualism, meaning the feeling that there's something apart from the physical brain, some kind of soul or something else, is obviously motivated in a sense by a hope that that whatever- whatever that is will s- survive death and continue, and that's a very core aspect of a lot of the world religions. Not all of them. Not- not really, but, you know, uh, most of them. Um, the thing I noticed is that the- the opposite of those, which might be the sort of hardcore, "No, the brain's a computer, and that's it," in a sense were motivated in the same way with a remarkably similar chain of- of- of, uh, of arguments, which is, "No, uh, the brain's a computer, and I'm gonna figure it out in my lifetime and upload it, upload myself, and I'll live forever." (laughs)

    22. LF

      Ah. That's interesting.

    23. JL

      And so-

    24. LF

      Yeah. That- that's- that's like the implied thought, right?

    25. JL

      Yeah. And so it's kind of this, in a funny way, it's- it's the same thing. I- i- i- it's, uh, um, it's peculiar that you... to notice that these people who would appear to be opposites in character-

    26. LF

      (laughs) Yeah.

    27. JL

      ... and cultural references and, uh, uh, and in their ideas actually are remarkably similar. And- and- and, uh, to- to- to an incredible degree, the sort of hardcore, uh, computationalist idea a- about, uh, the brain has turned into medieval Christianity with... together. Like, there's a... there are the people who are afraid that if you have the wrong thought, you'll piss off the super AIs of the future who will come back and zap you and, and all that stuff.

    28. LF

      Yeah.

    29. JL

      Uh, it's like, it's really, it's really turned into medieval Christianity all over again.

    30. LF

      Uh, this is... So the... Ernest Becker's idea that death, the fear of death is the worm at the core, which is like, that- that's the, that's the core motivator of everything we see humans have created. The question is if that fear of mortality is somehow core, is like a prerequisite-

  7. 25:43 - 27:17

    Designing human-centric AI

    2. LF

      Your intuition, you speak about this brilliantly with social media, how things can go wrong. Isn't it possible, uh, to des- design systems that sh- that show compassion, not to manipulate you, but give you control and make your life better if you so choose to, like grow together with systems in the way we grow with dogs and cats, with pets, with significant others, in that way. They grow to become better people. I, I don't understand why that's fundamentally not possible. You're saying oftentimes you get into trouble by thinking you know what's good for people.

    3. JL

      Well, look, there's this question of what frame we're, we're speaking in. Um, do you know who Alan Watts was?

    4. LF

      Mm-hmm.

    5. JL

      So Alan Watts once said, "Morality is like gravity," that in some absolute cosmic sense, there can't be morality, because at some point it all becomes relative, and who are we anyway? Like, morality is relative to us tiny creatures. But here on Earth, we're with each other. This is our frame, and morality is a very real thing. Same thing with gravity. At some point, you know, you get, you get into interstellar space and you might not feel much of it, but here we are on Earth. And, and I think in the same sense, um, I think this, this identification with a frame that's quite remote cannot be separated from a feeling of wanting to feel sort of separate, separate from and superior to other people or something like that. There's, there's an impulse behind it that I really have to reject.

    6. LF

      Mm-hmm.

    7. JL

      And

  8. 27:17 - 31:09

    Empathy with robots

    1. JL

      we're just not competent yet to talk about these kinds of absolutes.

    2. LF

      Yeah, but-

    3. JL

      Like-

    4. LF

      Okay, so I agree with you that a lot of technologists sort of lack this basic respect, uh, understanding, and love for humanity. There's a separation there. The thing I'd like to push back against... It's not that you disagree, but I believe you can create technologies and you can create a new kind of technologist engineer that does build systems that respect humanity. Not just respect-

    5. JL

      Okay.

    6. LF

      ... but admire humanity, that have empathy for common humans, have compassion?

    7. JL

      So, I mean, no, no, no. I, I, I think... Yeah. I mean, I think musical instruments are a great example of that. Musical instruments are technologies that help people connect in fantastic ways, and that's a, a great example. Um, my, uh, my invention or design during the pandemic period was this thing called together mode, where people see themselves seated sort of in a, a, a, a classroom or a theater instead of in squares. And it allows them to semi-consciously perform to each other as if they're... as if they have proper eye contact, as if they're paying attention to each other non-verbally. And weirdly, that turns out to work. And so it, it promotes empathy, so far as I can tell. I hope, I hope it is of some use to somebody. Uh, the AI idea isn't really new. Um, I would say it was born with Adam Smith's invisible hand-

    8. LF

      Mm-hmm.

    9. JL

      ... with this idea that we build this algorithmic thing and it gets a bit beyond us, and then we think it must be smarter than us. And the thing about the invisible hand is absolutely everybody has some line they draw where they say, "No, no, no we're gonna take control of this thing." (laughs) They might have different lines, they might care about different things, but everybody ultimately became a Keynesian, 'cause it just didn't work. It really wasn't that smart. It was sometimes smart and sometimes it failed, you know? And, and, and so if you really... You know, people who really, really, really want to believe in the invisible hand as infinitely smart screw up their economies terribly. You have t-... You know, you have to recognize the economy as a subservient tool. Everybody does when it's to their advantage. They might not when it's not to their advantage. That's kind of an interesting game that happens. But the thing is, it's just like that with our algorithms. You know, like, uh, (laughs) you, you can, uh, you can have a sort of a Chicago eco-... You know (laughs) economic philosophy about your computer and say, "No, no, no, my things come alive. It's smarter than anything."

    10. LF

      I think that there is a deep loneliness within all of us. This is what we seek. We seek love from each other. I think AI can help us connect deeper. Like this is, this is what you criticize social media for. I think there's much better ways of doing social media that doesn't lead to manipulation, but instead leads to deeper connection between humans, leads to you becoming a better human being. And what that requires is some agency on the part of AI to be almost like a therapist, I mean, a companion. It's not telling you what's right. It's not guiding you as if it's an all-knowing thing. It's just another companion that you can leave at any time, that you, you have complete transparency and control over. There's a lot of mechanisms that you can have that, that are counter to how... current social media, uh, operates that I think is subservient to humans or no, deeply respects human beings and is empathetic to their experience and all those kinds of things. I think it's possible to create AI systems like that. And I think they n- I mean, that's a technical discussion of whether they need to have, um, uh, (sighs) th- uh, s- something that looks like more, like, AI versus algorithms, something that has an identity, something that has a personality, all those kinds of things.

  9. 31:09 - 43:29

    Social media incentives

    1. LF

      AI systems, and you've spoken extensively how AI systems manipulate you within social networks, and that's the bigge-... The biggest problem isn't necessarily that, um, there's advertisement, that, uh, you know, social networks present you with advertisements that then get you to buy stuff. That's not the biggest problem. The biggest problem is they then manipulate you. They alter, like, your human nature to get you to buy stuff or, or to get you to, um, do whatever the advertiser wants. Uh, may- maybe you can correct me, but-

    2. JL

      Yeah. I, I don't see it quite that way, but we can work with that as an approximation.

    3. LF

      Sure. So m- my-

    4. JL

      I think the actual thing is even-

    5. LF

      Worse.

    6. JL

      ... sort of more ridiculous and stupider than that, but that's, that's okay. Let's, let's... (laughs)

    7. LF

      So my, my question is, let's not use the word AI, uh, but how do we fix it?

    8. JL

      Oh, fixing social media, um, that diverts us into this whole other field, in my view, which is economics, which I always thought was really boring, but we have no choice but to turn into economists if we wanna fix this problem 'cause it's all about incentives. But, um, I- I've been around this thing since it started, and, and, uh, I've been in the meetings where the social media companies sell themselves to the people who put the most money into them, which are usually the big advertising holding companies and whatnot, and there's this, there's this idea that I think is kind of a fiction, and maybe it's even been recognized as that by everybody, that the, the algorithm will get really good at getting people to buy something 'cause I think people have looked at their returns and looked at what happens, and everybody recognizes it's not exactly right.

    9. LF

      Mm-hmm.

    10. JL

      It's more like a cognitive access blackmail payment at this point. Like, you, just to be connected, you're paying them money. It's not so much that the persuasion algorithms... The, so Stanford renamed its program, but it used to be called Engage Persuade. The engage part works. The persuade part is iffy. But the thing is, that once people are engaged, in order for you to exist as a business, in order for you to be known at all, you have-

    11. LF

      You have to stay connected.

    12. JL

      ... to put money into-

    13. LF

      Oh, that's dark. (laughs)

    14. JL

      Oh, no, that's not, no, that-

    15. LF

      So it doesn't work, but they have to-

    16. JL

      But they're, they're still, it's a, it's a giant, it's a giant cognitive access blackmail scheme at this point. So, um, because the science behind the persuade part, it's not entirely, it's not entirely, uh, a failure, but it's, it's not what... The, th- there's, we, we play make believe that it, it works more than it does.

    17. LF

      Mm-hmm.

    18. JL

      Um, the, the damage doesn't come... Honestly, as I've, I've said in my books, I'm not anti-advertising. I actually think advertising can be demeaning and annoying and banal and ridiculous and take up a lot of our time with stupid stuff. It, like, there's a lot of ways to criticize that out- um, advertising that's accurate, and it can also lie and all kinds of things. However, if I look at the biggest picture, I think advertising, at least as it was understood before social media, helped bring people into modernity in a way that overall actually did benefit people overall.

    19. LF

      Mm-hmm.

    20. JL

      And, uh, you might say, am I contradicting myself because I was saying you shouldn't manipulate people? Yeah, I am probably here. I mean, I'm not, I'm not pretending to have this perfect ar- airtight worldview without some contradictions, but I think there's a bit of a contradiction there, so, you know.

    21. LF

      Well, looking at the long arc of history, advertisement has, has in some parts benefited society-

    22. JL

      Yeah, because it's-

    23. LF

      ... uh, because it funded some efforts that perh- perhaps benefited society.

    24. JL

      Yeah, I, I mean, I think, like, there's a, there's a thing where, uh, sometimes I think it's actually been of some use. Uh, now, let's... Where the damage comes is a different thing though. Social media, um, algorithms on social media have to work on feedback loops where they present you with stimulus, and they have to see if you respond to the stimulus.

    25. LF

      Yeah.

    26. JL

      Now, the problem is that the measurement mechanism for telling if you respond in the engagement feedback loop is very, very crude. It's things like whether you click more, or occasionally whether you're staring at the screen more if there's a forward-facing camera that's activated, but typically there isn't.

    27. LF

      Mm-hmm.

    28. JL

      So you have this incredibly crude back channel of information. And it's crude enough that it only catches the more dramatic responses from you, and those are the fight-or-flight responses. Those are the things where you get scared or pissed off or aggressive or horny, you know? These are these ancient circuits, what are sometimes called the lizard-brain circuits or whatever, (laughs) these fast-response, old, old evolutionary circuits that we have that are helpful in survival once in a while, but are not us at our best. They're not who we wanna be. They're not how we relate to each other. They're this old business. So when you're engaged using those, intrinsically, totally aside from whatever the topic is, you start to get incrementally just a little bit more paranoid, xenophobic, aggressive. You know, you get a little stupid, and you become a jerk. And it happens slowly. It's not like everybody is instantly transformed, but it does kind of happen progressively, where people who get hooked get drawn more and more into this pattern of being at their worst.
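
The loop being described can be sketched as a toy engagement optimizer. Everything here is hypothetical: the item names, the "appeal" numbers, and the epsilon-greedy ranking are invented for illustration, and real platforms are vastly more complex. The point it demonstrates is just the mechanism above: when the only feedback signal is a crude click, and clicks are driven mostly by reflexive "lizard-brain" pull, the loop drifts toward rage-bait even if users would reflectively prefer calmer content.

```python
import random

random.seed(0)

# Hypothetical content pool: each item has a fast "lizard-brain" pull
# and a considered, reflective appeal. Numbers are made up.
items = {
    "calm_essay":   {"lizard": 0.1, "considered": 0.9},
    "outrage_post": {"lizard": 0.8, "considered": 0.2},
    "scary_rumor":  {"lizard": 0.7, "considered": 0.1},
}

# The only back channel the optimizer sees is clicks -- the crude signal.
clicks = {name: 0 for name in items}
shows = {name: 1 for name in items}

def user_clicks(item) -> bool:
    # Reflexive fight-or-flight responses dominate the click probability;
    # the considered appeal barely registers in this crude channel.
    return random.random() < 0.9 * item["lizard"] + 0.1 * item["considered"]

for _ in range(3000):
    if random.random() < 0.1:
        name = random.choice(list(items))                      # occasional exploration
    else:
        name = max(items, key=lambda n: clicks[n] / shows[n])  # exploit best click rate
    shows[name] += 1
    if user_clicks(items[name]):
        clicks[name] += 1

winner = max(items, key=lambda n: clicks[n] / shows[n])
print(winner)  # the loop drifts toward a fight-or-flight item
```

Note that nothing in the optimizer mentions outrage; it only maximizes a click rate, and the bias toward the lizard-brain items falls out of the crude measurement channel.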

    29. LF

      Would you say that people are able to, when they get hooked in this way, look back at themselves from 30 days ago and say, "I'm less happy with who I am now versus who I was 30 days ago"? Are they able to self-reflect when you take yourself outside of the lizard brain?

    30. JL

      Sometimes. Um, I wrote a book suggesting people take a break from their social media to see what happens, and maybe even delete... Well, no. Actually, the title of the book was Ten Arguments for Deleting Your Social Media Accounts Right Now.

  10. 43:29 – 51:01

    Data dignity

    1. JL

      So there's this thing called data dignity-

    2. LF

      Yes.

    3. JL

      ... that I've been studying for a long time. I wrote a book about an earlier version of it called Who Owns the Future?

    4. LF

      Mm-hmm.

    5. JL

      Um, and the basic idea of it is that... (laughs) Once again, this is a 30-year conversation.

    6. LF

      It's a fascinating topic, yeah.

    7. JL

      Let me do the fastest version of this I can do. The fastest way I know how to do this is to compare two futures.

    8. LF

      Mm-hmm.

    9. JL

      All right. So future one is the normative one, the one we're building right now. And future two is gonna be data dignity. Okay. And I'm gonna use a particular population. I live on The Hill in Berkeley. And one of the features of The Hill is that as the climate changes, we might burn down, and all lose our houses or die or something. Like, it's dangerous, you know, and it didn't used to be. And so, who keeps us alive? Well, the city does. The city does some things. The electric company, kind of, sort of, maybe hopefully better. Individual people who own property take care of their property. That's all nice, but there's this other middle layer, which is fascinating to me, which is that the groundskeepers who work up and down that hill, many of whom are not legally here, many of whom don't speak English, cooperate with each other to make sure trees don't touch, so fire can't transfer easily from lot to lot. They have this whole little web that's keeping us safe. I didn't know about this at first. I just started talking to them 'cause they were out there during the pandemic, and so I tried to see who these people are, who are keeping us alive. Now, I wanna talk about the two different fates for those people under future one and future two.

    10. LF

      Mm-hmm.

    11. JL

      Future one: some weird, like, kindergarten-paint-job van with all these cameras and things drives up and observes what the gardeners and groundskeepers are doing.

    12. LF

      Mm-hmm.

    13. JL

      A few years later, some amazing robots that can shimmy up trees and all this show up. All those people are out of work.

    14. LF

      Yeah.

    15. JL

      And there are these robots doing the thing, and the robots are good. They can scale to more land, but then there are all these people out of work.

    16. LF

      Mm-hmm.

    17. JL

      And these people have lost dignity; they don't know what they're gonna do. And then some will say, "Well, they go on basic income, whatever. They become wards of the state." My problem with that solution is that every time in history you've had some centralized thing doling out the benefits, it gets seized by people, because it's too centralized. This happened to every communist experiment I can find.

    18. LF

      Mm-hmm.

    19. JL

      Um, so I think that turns into a poor future that will be unstable. I don't think people will feel good in it. I think it'll be a political disaster, with a sequence of people seizing this central source of the basic income. And you'll say, "Oh, no, an algorithm can do it." Then people will seize the algorithm. They'll seize control.

    20. LF

      Unless the algorithm is decentralized and it's impossible to seize the control.

    21. JL

      Yeah. But-

    22. LF

      It's very difficult.

    23. JL

      ... Sixty-something people own a quarter of all the Bitcoin. Like, the things that we think are decentralized are not decentralized. So let's go to future two. Future two: the gardeners, or the groundskeepers, see that van with all the cameras and the kindergarten paint job, and they say, "Hey, the robots are coming. We're gonna form a data union." And amazingly, California has a little baby data union law-

    24. LF

      Really?

    25. JL

      ... emerging on the books. Yeah.

    26. LF

      That's interesting. That's interesting.

    27. JL

      Uh, and so they say, "We're gonna form a data union, and not only are we gonna sell our data to this place, but we're gonna make it better than it would have been if they were just grabbing it without our cooperation."

    28. LF

      Mm-hmm.

    29. JL

      "And we're gonna improve it. We're gonna make the robots more effective. We're gonna make them better, and we're gonna be proud of it. We're gonna become a new class of experts that are respected." And then here's the interesting thing. There are two things that are different about that world from future one. One thing, of course: the people have more pride. They have more sense of ownership, of agency. But what the robots do changes too. Instead of just this functional, "We'll figure out how to keep the neighborhood from burning down," you have this whole creative community that wasn't there before, thinking, "Well, how can we make these robots better so we can keep on earning money?" There'll be waves of creative groundskeeping, with spiral pumpkin patches and waves of cultural things. There'll be new ideas like, "Wow, I wonder if we could do something about climate change mitigation with how we do this? What about fresh water? Can we make the food healthier? What about..." All of a sudden, there'll be this whole creative community on the case. And isn't it nicer to have a high-tech future with more creative classes than one with more dependent classes? Isn't that a better future? But future one and future two have the same robots and the same algorithms. There's no technological difference. There's only a human difference.

    30. LF

      Yeah.

  11. 51:01 – 1:02:46

    Jack Dorsey and Twitter

    1. JL

    2. LF

      Do you think it's possible to create a social network that competes with Twitter and Facebook that's large and centralized in this way? Not centralized, sorry. Just large.

    3. JL

      All right, so I gotta tell you, how to get from where we are to anything kind of in the zone of what I'm talking about is challenging. Um, I know some of the people who run... Like, I know Jack Dorsey-

    4. LF

      Mm-hmm.

    5. JL

      ... and I view Jack as somebody who's actually... (sighs) I think he's really striving and searching and trying to find a way to make it better, but it's very hard to do it while in flight.

    6. LF

      Yeah.

    7. JL

      And he's under enormous business pressure too. Um-

    8. LF

      So Jack-

    9. JL

      I don't-

    10. LF

      ... Dorsey to me is a fascinating study because-

    11. JL

      Mm-hmm.

    12. LF

      ... I think his mind is in a lot of good places. He's a, he's a good-

    13. JL

      Yeah.

    14. LF

      ... human being, but there's a big Titanic ship that's already moving in one direction. It's hard to know what to do with it.

    15. JL

      I think that's the story of Twitter. Um, one of the things that I observe is that, if you just wanna look at the human side, meaning how people are being changed, how they feel, what the culture is like, almost all of the social media platforms that get big have an initial sort of honeymoon period where they're actually kind of sweet and cute.

    16. LF

      Yeah.

    17. JL

      Like, if you look at the early years of Twitter, it was really sweet and cute. But also look at Snap, at TikTok. And then what happens is, as they scale, the algorithms become more influential instead of just the early people. When it gets big enough that it's the algorithm running it, then you start to see the rise of the paranoid style, and then they start to get dark. And we've seen that shift in TikTok rather recently.

    18. LF

      But I feel like that scaling reveals the flaws within the incentives.

    19. JL

      I feel like I'm torturing you. I'm sorry. (laughs)

    20. LF

      No, no, no, it's not torture. Uh, no, because-

    21. JL

      (laughs)

    22. LF

      ... I have hope for the world, with humans, and I have hope for a lot of things that humans create, including technology. And I just feel that it's possible to create social media platforms that incentivize-

    23. JL

      I think-

    24. LF

      ... different things than the current ones. I think the current incentivization is around, like, the dumbest possible thing, that was invented 20 years ago or however long, and it just works, and so nobody's changing it. I just think that there could be a lot of innovation for more... See, you kind of pushed back on this idea that we can't know what long-term growth or happiness is. If you give control to people to define what their long-term happiness and goals are, then that (laughs) optimization can happen for each of those individual people. You know, the-

    25. JL

      Well, I mean, (sighs) imagine a future where probably a lot of people would love to make their living doing TikTok dance videos, but people recognize that's generally kind of hard to get into.

    26. LF

      Mm-hmm.

    27. JL

      Nonetheless, dance crews have an experience that's very similar to, uh, programmers working together on GitHub, so the future is like a cross between TikTok and GitHub.

    28. LF

      (laughs)

    29. JL

      And they get together and they have-

    30. LF

      Yeah.

  12. 1:02:46 – 1:07:26

    Bitcoin and cryptocurrencies

    1. LF

      Can we upset more people a little bit? You already-

    2. JL

      Maybe. We'd have to try.

    3. LF

      No, no, can we, uh-

    4. JL

      (laughs)

    5. LF

      ... can I ask you (laughs) to elaborate? 'Cause my intuition was that you would be a supporter of something like cryptocurrency and Bitcoin, because-

    6. JL

      Oh.

    7. LF

      ... it fundamentally emphasizes decentralization.

    8. JL

      (sighs)

    9. LF

      What do you... So, so can you elaborate (laughs) on what-

    10. JL

      Yeah. Okay, look.

    11. LF

      ... your thoughts on Bitcoin?

    12. JL

      Um, it's kind of funny. (sighs) I've been advocating some kind of digital currency for a long time.

    13. LF

      Mm-hmm.

    14. JL

      And when Bitcoin came out, in the original paper on blockchain, my heart kind of sank, because I thought, "Oh my God, we're applying all of this fancy thought and all these very careful distributed security measures to recreate the gold standard?" Like, it's just so retro, it's so dysfunctional.

    15. LF

      Yeah.

    16. JL

      It's so useless from an economic point of view. So it's always amazed... And then the other thing is, using computational inefficiency at a boundless scale as your form of security is a crime against the atmosphere, obviously. A lot of people know that now, but we knew that at the start. Like, when the first paper came out, I remember a lot of people saying, "Oh my God, if this thing scales, it's a carbon disaster," you know? And I'm just mystified. But that's a different question than what you asked: can you have a cryptographic currency, or at least some kind of digital currency, that's of benefit? Absolutely. And there are people who are trying to be thoughtful about this. If you haven't, you should interview Vitalik Buterin some time.
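
The "computational inefficiency as security" point can be seen in a toy version of Bitcoin-style proof-of-work. This is only a sketch with a made-up block and a small difficulty (real mining hashes a structured block header with double SHA-256): the security comes precisely from forcing miners to burn computation guessing nonces until a hash clears a difficulty target.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> tuple[int, str]:
    """Brute-force a nonce until SHA-256(block_data + nonce) starts with
    `difficulty_bits` zero bits. The wasted work IS the security: rewriting
    history would mean redoing all of this computation."""
    target = 1 << (256 - difficulty_bits)   # hash values below this win
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"toy block", 16)      # ~2**16 hash attempts on average
print(nonce, digest)
```

Each additional difficulty bit doubles the expected number of hashes, which is why the energy cost grows with whatever difficulty the network sets, with no ceiling built into the design.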

    17. LF

      Yeah.

    18. JL

      There, there are people in-

    19. LF

      I interviewed him twice. (laughs)

    20. JL

      Okay. So like, there are people in the community who are trying to be thoughtful and trying to figure out how to do this better.

    21. LF

      Yeah. It has nice properties though, right? So the... one of the nice properties-

    22. JL

      Uh-huh.

    23. LF

      ... is that, unlike a government-centralized currency, it's hard to control. And then the other one... To fix some of the issues that you're referring to, I'm sort of playing devil's advocate-

    24. JL

      Mm-hmm.

    25. LF

      ... here is, you know, there's the Lightning Network; there are ideas for how you build stuff on top of Bitcoin, similar to gold, that allow you to have this kind of vibrant economy that operates not on the blockchain but outside the blockchain, and uses Bitcoin for checking the security of those transactions.
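
The layer-two idea here can be illustrated with a heavily simplified payment-channel sketch. This is hypothetical toy code, not the real Lightning protocol (which uses actual digital signatures, timelocks, and penalty transactions; the HMAC here is just a stand-in for co-signing): two parties update a shared balance off-chain many times, and only the opening and closing states ever touch the slow, expensive base layer.

```python
import hashlib
import hmac

SECRET = b"channel-key"  # stand-in for both parties' real signing keys

def sign(state: bytes) -> str:
    return hmac.new(SECRET, state, hashlib.sha256).hexdigest()

class PaymentChannel:
    """Toy two-party channel: balances move off-chain; only open/close settle."""

    def __init__(self, alice: int, bob: int):
        self.alice, self.bob = alice, bob
        self.onchain_txs = 1                  # the opening transaction
        self.latest_sig = sign(self.state())

    def state(self) -> bytes:
        return f"{self.alice}:{self.bob}".encode()

    def pay_alice_to_bob(self, amount: int):
        # Each payment is just a newly co-signed balance sheet;
        # nothing is broadcast to the base chain.
        assert 0 < amount <= self.alice
        self.alice -= amount
        self.bob += amount
        self.latest_sig = sign(self.state())

    def close(self) -> tuple[int, int]:
        # Closing verifies the latest co-signed state and settles on-chain.
        assert hmac.compare_digest(self.latest_sig, sign(self.state()))
        self.onchain_txs += 1                 # the closing transaction
        return self.alice, self.bob

ch = PaymentChannel(alice=100, bob=0)
for _ in range(100):          # a hundred payments, zero on-chain transactions
    ch.pay_alice_to_bob(1)
final = ch.close()
print(final, ch.onchain_txs)  # settled balances; only 2 txs hit the base layer
```

The base layer is used exactly twice, to open and to settle, which is how layer-two designs aim for throughput without a per-payment footprint on the chain.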

    26. JL

      Mm. So Bitcoin's not new. It's been around for a while.

    27. LF

      Yes.

    28. JL

      I've been watching it closely. I've not seen one example of it creating economic growth. There was this obsession with the idea that government was the problem. That idea that government's the problem... Let's say government earned that wrath honestly-

    29. LF

      (laughs)

    30. JL

      ... because if you look at some of the things that governments have done in recent decades, it's not a pretty story. Like, after a very small number of people in the US government decided to bomb and landmine Southeast Asia, it's hard to come back and say, "Oh, government's this great thing." But then... (sighs) The problem is that this resistance to government is basically a resistance to politics. It's a way of saying, "If I can get rich, nobody should bother me." It's a way of not having obligations to others. And that, ultimately, is a very suspect motivation.

  13. 1:07:26 – 1:17:41

    Government overreach and freedom

    1. JL

      an illusion.

    2. LF

      The idea... And I apologize, I overstretched the use of the word government. The idea is there should be some punishment from the people when a group, a bureaucracy, a set of people, or a particular leader, like in an authoritarian regime, which more than half the world currently lives under, if you...

    3. JL

      Mm.

    4. LF

      Like, if they stop representing the people, it stops being like a Berkeley meeting and starts being more like a dictatorial kind of-

    5. JL

      Mm-hmm.

    6. LF

      ... situation. So the point is, it's nice to give people, the populace, in a decentralized way, power to resist that kind of-

    7. JL

      (laughs)

    8. LF

      ... government becoming over-authoritarian.

    9. JL

      Yeah. But see, this idea that the problem is always the government being powerful is false. The problem can also be criminal gangs. The problem can also be weird cults. The problem can be abusive clergy. The problem can be-

    10. LF

      Yeah.

    11. JL

      ... infrastructure that fails. The problem can be poisoned water. The problem can be failed electric grids. The problem can be a crappy education system that makes the whole society less and less able to create value. There are all these other problems that are different from an overbearing government. Like, you have to keep some sense of perspective and not be obsessed with only one kind of problem, because then the others will pop up.

    12. LF

      But empirically speaking, some problems are bigger than others. So like, some groups of people, like governments or gangs or companies, lead to problems more than-

    13. JL

      Are you a US citizen?

    14. LF

      Yes.

    15. JL

      Has the government ever really been a problem for you?

    16. LF

      Well, okay. So first of all, I grew up in the Soviet Union. I used to... well, actually, I used to live there.

    18. JL

      Yeah, my wife did too. Like, yeah.

    19. LF

      So, so I- I have-

    20. JL

      That-

    21. LF

      ... I have seen, you know, um...

    22. JL

      Yeah, sure.

    23. LF

      Uh, and has the government bothered me? I would say that's a really complicated question, especially because the United States is such... It's a special place, like a lot of other countries, but there's-

    24. JL

      My wife's family were Refuseniks, and so we have a very... And her dad was sent to the Gulag. For what it's worth, on my father's side, all but a few were killed by a post-Soviet pogrom in Ukraine. So I-

    25. LF

      S- I- I would think-

    26. JL

      ... I'm-

    27. LF

      ... 'cause- 'cause you did a-

    28. JL

      Yeah.

    29. LF

      ... little eloquent trick of language, where you switched to the United States to talk about government. So, I believe, unlike my friend Michael Malice, who's an anarchist, (laughs) that government can do a lot of good in the world. That is exactly what you're saying, which is, it's politics. The thing that Bitcoin folks and cryptocurrency folks argue is that one of the big ways that governments-

    30. JL

      Uh-huh.

  14. 1:17:41 – 1:19:51

    GitHub and TikTok

    1. JL

      case. (laughs)

    2. LF

      Well, you mentioned GitHub.

    3. JL

      (laughs)

    4. LF

      I think what Microsoft did with GitHub was brilliant. I was very... Okay, if I can give a, not a critical, but-

    5. JL

      Sure, go ahead.

    6. LF

      Uh, on Microsoft, because they recently purchased Bethesda. So Elder Scrolls is in their hands. I'm watching you, Microsoft. Do not screw up my favorite game. So, um-

    7. JL

      Yeah.

    8. LF

      ... (laughs)

    9. JL

      But look, um, I'm not speaking for Microsoft.

    10. LF

      No, for sure.

    11. JL

      I have an explicit arrangement with them where I don't speak for them, obviously. Like, that should be very clear. I do not speak for them.

    12. LF

      Yeah.

    13. JL

      Um, I am not saying... I like them. I think Satya is amazing. Um, the term data dignity was coined by Satya.

    14. LF

      Mm-hmm.

    15. JL

      Like, you know, we have... It's kind of extraordinary. But, you know, Microsoft's this giant thing.

    16. LF

      Yeah.

    17. JL

      It's gonna screw up this or that. You know... I don't know. It's kind of interesting. I've had a few occasions in my life to see how things work from the inside of some big thing. And, you know, it's always just people kind of... I don't know. There are always, like, coordination problems-

    18. LF

      Yeah.

    19. JL

      ... and there's always-

    20. LF

      There's just human problems.

    21. JL

      Oh, my God. You know, it's like-

    22. LF

      And there's some good people, there's-

    23. JL

      I- I-

    24. LF

      ... some bad people. There's always-

    25. JL

      I hope Microsoft doesn't screw up their game. (laughs) You know-

    26. LF

      And I hope they bring Clippy back. You should never-

    27. JL

      but-

    28. LF

      ... kill Clippy. Bring Clippy back.

    29. JL

      Oh, Clippy. But Clippy promotes the myth of AI.

    30. LF

      Well, that's why-

  15. 1:19:51 – 1:24:42

    The Autodidactic Universe

    1. JL

    2. LF

      Uh, you co-authored a paper, we mentioned Lee Smolin, uh, titled The Autodidactic Universe ...

    3. JL

      Mm-hmm.

    4. LF

      ... which describes our universe as one that learns its own physical laws. That's a trippy and beautiful and powerful idea.

    5. JL

      (laughs)

    6. LF

      What are, what would you say are the key ideas in this paper?

    7. JL

      Ah, okay. Well, I should say that paper reflected work from last year, and the project, the program, has moved quite a lot. So there's a lot of stuff that's not published that I'm quite excited about, and I have to kind of keep my frame in last year's things. I have to try to be a little careful about that.

    8. LF

      Yeah.

    9. JL

      Um, we can think about it in a few different ways. The technical core of the paper is a triple correspondence. One part of it was already established, and another part is in process. The part that was established was, of course, understanding different theories of physics as matrix models. The part that was fresher is understanding those as machine learning systems, so that we could move fluidly between these different ways of describing systems. And the reason to want to do that is just to have more tools and more options, because, well, theoretical physics is really hard, and a lot of programs have kind of run into a state where they feel a little stalled, I guess. I want to be delicate about this, 'cause I'm not a physicist. I'm the computer scientist collaborating, so I don't mean to dis anybody's-

Episode duration: 1:52:29

Transcript of episode Fx0G6DHMfXM
