The Diary of a CEO

WARNING: ChatGPT Could Be The Start Of The End! Sam Harris

In this new episode, Steven sits down with philosopher, neuroscientist, podcast host and author Sam Harris.

Timestamps:
00:00 Intro
02:02 6 years later, where do you stand on AI?
16:36 Is this not the most pressing problem?
33:16 Why I deleted Twitter
45:43 Narrow AI
58:26 The meaning of AGI
01:02:00 In the age of AI, how do we create purpose?
01:10:06 Who will AI replace?
01:14:41 Should we be doing universal basic income?
01:21:40 Would you stop AI if you could?
01:27:31 How do we change our minds to be happier?
01:34:28 Why not lying & telling the truth will make you happier
01:41:28 The last guest's question

Follow Sam:
Instagram: https://bit.ly/3DHwOHy
YouTube: https://bit.ly/3DE8RAy

You can purchase Sam's book, 'Waking Up', here: https://bit.ly/3Qp51D7
Sam has kindly given DOAC listeners a 30-day free trial of his app, Waking Up. Here is the link: https://bit.ly/3QxIrrZ

My new book! 'The 33 Laws Of Business & Life' pre-order link: https://smarturl.it/DOACbook
Join this channel to get access to perks: https://bit.ly/3Dpmgx5

Follow me:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
Linkedin: https://bit.ly/41Fl95Q
Telegram: http://bit.ly/3nJYxST

Sponsors:
Huel: https://g2ul0.app.link/G4RjcdKNKsb
Whoop: http://bit.ly/3MbapaY

Sam Harris (guest), Steven Bartlett (host)
Aug 7, 2023 · 1h 50m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 2:02

    Intro

    1. SH

      Artificial intelligence is superhuman. It is smarter than you are, and there's something inherently dangerous for the dumber party in that relationship. You just can't put the genie back in the bottle. Sam Harris. Neuroscientist. Philosopher. Author. Podcaster.

    2. SB

      He goes into intellectual territory where few others dare tread. Six years ago, you did a TED Talk.

    3. SH

      The gains we make in artificial intelligence could ultimately destroy us.

    4. SB

      If your objective is to make humanity happy and there was a button placed in front of you and it would end artificial intelligence, what would you do?

    5. SH

      Well, I would definitely pause it. The idea that we've lost the moment to decide whether to hook our most powerful AI to everything is just, oh, s-... It's already connected to the internet, got millions of people using it, and the idea that these things will stay aligned with us because we have built them, yet we gave them the capacity to rewrite their code, there's just no reason to believe that. And I worry about the near-term problem of what humans do with increasingly powerful AI, how it amplifies misinformation. Most of what's online could soon be faked. Can we hold a presidential election 18 months from now that we recognize as valid, right? Like, is it safe? And it just gets scarier and scarier. I worry we're just gonna have to declare bankruptcy to the internet.

    6. SB

      If your intuition is correct, are you optimistic about our chances of survival? Before this episode starts, I have a small favor to ask from you. Two months ago, 74% of people that watch this channel didn't subscribe. We're now down to 69%. My goal is 50%. So if you've ever liked any of the videos we've posted, if you like this channel, can you do me a quick favor and hit the subscribe button? It helps this channel more than you know. And the bigger the channel gets, as you've seen, the bigger the guests get. Thank you and enjoy this episode.

  2. 2:02 – 16:36

    6 years later, where do you stand on AI?

    1. SB

      Sam, six years ago, you did a TED Talk. Um, I watched that TED Talk a few times over the last week, and the TED Talk was called 'Can We Build AI Without Losing Control Over It?'

    2. SH

      Mm-hmm.

    3. SB

      In that TED Talk, you really discussed the idea whether, um, AI, when it gets to a certain point of sentience and intelligence will, will wreak havoc on humanity.

    4. SH

      Mm-hmm.

    5. SB

      Six years later, where do you stand on, on it today? Do you think, are you optimistic about our chances of survive, survival?

    6. SH

      Yeah, I mean, uh, I can't say I'm optimistic. I'm, I am worried about t- two species of problem here that are r- r- related. I mean, there's, there's sort of the near term problem of just what humans do with increasingly powerful AI and, um, how it amplifies the, the problem of misinformation and disinformation and make, and just makes it harder and harder to make sense of reality together. Um, and then there's just the, the longer term concern about, well, you know, what's called alignment with, with artificial general intelligence, where we build AI that is, is truly general and, you know, by definition superhuman in its competence and power. And then the question is have we built it in such a way that is aligned in a, in a durable way with, with our interests? And, um, I mean, there's some people who just don't see this problem.

    7. SB

      Mm-hmm.

    8. SH

      They're, they're kind of blind to it. When I'm in the presence of someone who doesn't have, doesn't share this intuition, they, they don't resonate to it, I just don't understand what they're doing or not doing with their minds in that moment. Let's say I, I'm wrong about that. Well, then, you know, it's just the other person's right, and so we're just, we just have fundamentally different intuitions about, about this particular point. And then the point is this, if you're imagining building true artificial general intelligence that is superhuman, and that is what everyone, whatever their intuition is, purports to be imagining here. I mean, there's, there's, you know, there are people on both sides of the, of the alignment debate. There are people who think alignment's a real problem and, or, and people who think it's total fiction. But everyone, you know, virtually everyone who's party to this conversation agrees that we will ultimately build artificial general intelligence that will be superhuman in its, in its capacities. And there's very little you have to assume to be confident that, that we're going to do that. There, there's really just two assumptions. One is that intelligence is substrate independent, right? There's no, it doesn't have to be made of meat. It can be made in silico, right? And we've already proven that with narrow AI. I mean, there's just, there's o- we obviously have intelligent machines, and, you know, your calculator and your phone is better than you are at arithmetic, and it's just, uh, that's, that's some, uh, very narrow band of intelligence. So as we keep building intelligent machines on the assumption that there's nothing magical about having a computer made of meat, the only other thing you have to assume is that we will keep doing this. We will keep making progress, and eventually we will, we will be in the presence of something more intelligent than we are. And that's not assuming Moore's Law. It's not assuming exponential progress. There's just, uh, we just, we just have to keep going, right? And when you look at the reasons why we wouldn't keep going-

    9. SB

      Mm-hmm.

    10. SH

      ... those are all just terrifying, right? Because intelligence is so valuable, and we're so incentivized to have more of it. And every increment of it is, is valuable. It's not like it only gets valuable when you get, you know, when you double it or, or 10X it. No, no, if you just get three more percent, right, that's, that's, uh, that pays for itself. Um, so...... we're going to keep doing this. Our failure to do it suggests that something terrible has happened in the meantime, right? We have had a world war, we've had a glo- a global pandemic far worse than COVID, we got hit by an asteroid. Something happened that prevented us as a species from continuing to make progress i- in building intelligent machines, right? So absent that, we're going to keep going. We will eventually be in the presence of something smarter than we are, and this is where intuitions divide. My intuition, and it's shared by, by, um, many people, I'm sure, and, and I know at least one who you've spoken to, my intuition is that there is something inherently dangerous for the dumber party in that relationship. There's, there's something inherently dangerous for the dumber species to be in pre- in the presence of the smarter species. And we have seen this, b- you know, based on our entanglement with all other species dumber than we are, right, or at least certainly less competent than we are. Um, and by... so by reasoning by analogy, w- that it would be true of something smarter than, than we are, um, people imagine that because we have built these machines, that is no longer true, right? But, eh, and here's where my intuit- intuition goes from there. That is, that imagination is born of not taking intelligence seriously, right? Because what intelligence is, is e- a, you know, for... A mismatch in intelligence, in particular, is a, a fundamental lack of insight into what the smarter party is doing and why it's doing it and what it will do next on the part of the dumber party, right?

    11. NA

      Hmm.

    12. SH

      So, I mean, you just gotta imagine that, by analogy, just imagine that the dogs had invented us as their, their super intelligent AIs, right, uh, for the purpose of making their lives better, you know, just securing resources for them, eh, securing comfort for them, make it... getting them medical attention. Um, it's been working out pretty well for the dogs for about 10,000 years, right? I mean, there's some exceptions. We've got... We mistreat certain dogs, but j- generally speaking, for most dogs most of the time, humans have been a great invention, right? Now, it's true that the, the mismatch in our intelligence dictates a, a, a fundamental blindness with respect to all... what we've become in the meantime, right? So like, we have all these instrumental goals and things we care about that they cannot possibly conceive, right? They know that when we go get the leash and say, "It's time for a walk," they understand that particular part of the language game. But everything else we do when we're talking to each other or when we're... when we're on our computers or on our phones, they don't have the dimmest idea of what we're up to.

    13. NA

      (laughs)

    14. SH

      And if we ever... If, if something happened, if we... I mean, we love... The truth is we love our dogs. We make the... e- just irrational sacrifices for our dogs. We prioritize their health over all kinds of things that w- it's just amazing to consider. And yet, if we learn... uh, if, if there was a, a new, you know, global pandemic kicking off and some xeno virus was jumping from dogs to humans and it was just kind of super Ebola, right? It was just... it was 90% lethal and this, this was just a forced choice between, I mean, wh- what do you value more, your... the, the lives of your dogs or the lives of your kids, right? If that's, if that's the situation we were in, it's totally conceivable, I mean, as it's not a, you know, not... by, by no means impossible, we would just kill all the dogs, right, and they would never know why, right? We would just... a- and it's because we have this layer of, of mind and culture and, and just, just the, the, the noosphere, right? There's this, this, this realm of, of hu- of mind that requires a requisite level of intelligence to even be party to it, even know exists, that they have no... they have no idea it exists, right? And it's... So this is a fanciful, uh, analogy because the dogs did not invent us, but evolution invented us, right? Evolution has coded us, you know, as I said, to survive and spawn, and that's it, right? So evolution can't see everything else we've done with our time and attention and, and all the values we've formed in the meantime, and all the ways in which we have explicitly disavowed the program we've been given, right? So evolution gave us a program, but if we were really gonna live by the lights of that program, what would we be doing? I mean, we would be having as many kids as possible, right? You know, the, the guys would be going to sperm banks and donating their sperm and finding that like the best use of their time and attention. I mean, it's like the idea that you could have hundreds of kids for which you have no financial responsibility, that would be the... that should be the most rewarding thing, uh, that you could possibly do with your time, uh, as a man. And yet, that's obviously not what we do. And there are people who decide not to have kids, and there are people who... and, and yet... and everything else we do, from, hey, having podcast conversations like this to, to curing diseases, to... I mean, just like l- l- literally everything we're doing with our... you know, with science, with, with culture is... Yes, there are points of contact bet- between those, those products and our evolved capacities, right? Like it's not, it's not just that... it's not magic, right? We are social primates that, that have leveraged certain ancient hardware to do new things.... but evolut- the code that we've been given doesn't see any of that, right? And we've not been optimized to build democracies, right? Um, evolution knows nothing. I- it can know nothing. If evolution were a coder, there's just no, there's no democracy maximization in that code, right? It's just, it's not a, it's- it's just not there. So, the idea that these things will stay aligned with us because we have built them, because if we have this origin story, that we gave them their initial code, and yet we gave them a c- capacity to rewrite their code and build future generations of themselves, right? Um, there's just no reason to believe that. I see no, uh, uh, and- and the, and the mismatch, uh, uh, in intelligence is intrinsically dangerous. 
And you could see this by, I mean, it's Stuart Russell. I don't know if, have you had him on the podcast?

    15. SB

      Yeah.

    16. SH

      He's a great, um, professor of computer science at Berkeley, and he, he wrote, li- literally co-wrote one of the, the most popular textbooks on AI. Um, he has some arresting analogies, which I think, uh, are- are good intuition pumps here. Um, and one is, just think of how you would feel if you knew, like, let- let's say we got a, uh, communication from elsewhere in the galaxy, and it was a message that we decoded, and it said, "People of Earth, we will arrive on your lowly planet in 50 years. Get ready." Right? That, uh, uh, anyone who thinks that we're going to get super intelligent AI in, let's say 50 years, thinks we're, uh, we're essentially in that situation, and yet we're not responding emotionally, uh, to it in the same way. If we, if we received a communication from a, a, a species that we knew just by, by fact, by the sheer fact that they were communicating with us in this way, we knew they're more competent and more powerful and more intelligent than we are, right? And they're going to arrive, right? We would, we would feel that we were on the threshold of the most momentous change in the history of, of our species. And we would feel, but most importantly, we would feel that it's because this is a, a, a rela- a relationship, an unavoidable relationship that's being foisted upon us, right? It's like we, like some, a, a, a new creature is coming into the room, right? With its own capacities, and now you're in relationship. And one, and one thing is absolutely certain, it is smarter than you are, right? By, by what factor? I mean, what, ultimately, we're talking about by, by factors, you know, just by so many orders of magnitude, it's, it, it, our intuitions completely fail. I mean, e- even if, even if it was just a difference in, in the time of processing, even if it, l- let, let's say there was no difference in, in, in the actual, you know, native intelligence, but it's just processing speed, a million-fold difference in processing speed is, is just a phantasmagorical difference in capacity. So just, like, just imagine we had 10 smart guys in a room over there, and they were working and thinking and talking a million times faster than we are, right? Well, so they're, they're no smarter than we are-

    17. SB

      (laughs).

    18. SH

      ... but they're just faster, and we talk to them once every two weeks just to catch up on, you know, what they're up to and what they want to do and whether they still want to collaborate with us. Well, two weeks for us is 20,000 years of analogous progress for them. Right, so how could you, how could we possibly hope to constrain the opinions and, and collaborate with and negotiate with people ju- no smarter than ourselves who are making 20,000 years of progress every time we make two weeks of progress? Right? It's just, it's, it's, it's unimaginable, and yet there are many people who don't, that just think this is just fiction. What, everything I, all, all the noises I've made in the last five minutes are just, like, a, a, a new religion of fear, right? And it's just, there's no reason to think that alignment is even a potential

  3. 16:36 – 33:16

    Is this not the most pressing problem?

    1. SH

      problem.

    2. SB

      If your intuition is correct, and the analogy of us getting a signal from outer space that someone is coming in 30 years, which, by the way, a lot of people that speak on this subject matter, um, don't believe it's even gonna be 30 years-

    3. SH

      Yeah, yeah.

    4. SB

      ... until we reach that sort of singularity moment I think they speak of, or artificial general intelligence. I've heard people like Elon say, you know, many fewer decades, 10, 10 years, 15 years-

    5. SH

      Right.

    6. SB

      ... 20 years, et cetera. If that is correct, then surely this is the most pressing challenge, conversation, issue of our time. And there's no logical reason that I can see to refute your intuition there. I, I can't see a logical reason. The, the rate of progress will, will continue. Don't necessarily see anything that will wipe out or pause our rate of progress. Um...

    7. SH

      I mean, l- let me just, to, to, uh, be charitable to the other side here, there are other assumptions that they smuggle in that they, that some people are, I mean, some do it without being aware of it, but some actually believe these assumptions, and this spells the difference on, on this, on this, uh, particular intuition. Um, so, so it's possible to assume that the more intelligent you get, the more ethical you become-

    8. SB

      (laughs).

    9. SH

      ... by definition, right? Now, and we might, you know, draw a somewhat more equivocal picture from just the human case where we see that, well, oh, there's some very smart people who aren't that ethical, but there, I, I believe there are people, I mean, I've talked to a few, at least a few people who believe this. There are people who assume that kind of in the limit as you push out into just, just far beyond human levels of intelligence, there is every reason to believe that all of the, the-... provincial, creaturely failures of human ethics will be left behind as well. It's like, you're not, like... The- the- the selfishness and the- and the- and the basis for conflict, uh, like, we're like, these are not gonna... The apish urges of, you know, status-seeking, uh, monkeys is- is just not, it's not gonna be in the code. And as you push out into- into just kind of the omnibus genius of- of the coming AI, you're gonna... There's- there's a kind of a s- a sainthood that's gonna come along with it, right? And- and- and a wisdom that will come along with it. Now, I just think that's a, that's quite a gamble. I- I th- I think I would take the others- the other side of that bet, and- and I always frame it this way. There have to be ways in- in the space of all possible intelligences that are beyond the human, right? There's gotta be more than one possible. There's gotta be more. It's like, it's just like there's many different ways to have a chess engine that's better than I am at chess. There's still, they're- they're different from each other, but they're all better than me, right? Um, there's got to be more than one way to have a superhuman artificial intelligence. And I would s- I would imagine there, there are, you know, not- not an infinite number of ways, but just a va- a vast number of- of... There're m- in the space of all possible minds, there are many locations in that space beyond the human that are not aligned with human wellbeing, right?

    10. SB

      Mm-hmm.

    11. SH

      There's gotta be more ways to build this unaligned than aligned, right?

    12. SB

      Mm-hmm.

    13. SH

      And what other people are smuggling into this conversation is the intuition that, no, no, once you get beyond the human, it's just gonna get, it's just you're- you're gonna be in the presence of, you know, just the Buddha who understands quantum mechanics and oncology and everything else, right? I just see no reason to think that that's so, and we- we could build something that is... Again, taking intelligence seriously, we're gonna build something that we're in relationship to, it's really intelligent in all the ways that we're intelligent. It's just better at all of those things than we are. It's by definition superhuman because the only way it wouldn't be superhuman, the only way it would be human level even for 15 minutes is if we didn't let it improve itself, if we wanted to just keep it stuck at, you know, at a, uh, we would just... We w- we built a- a college undergraduate, and we wanted just to keep it stuck there. But we would have to dumb down all of the specific capacities we've already built, right? Just like all... Every AI we have, narrow AI, is superhuman for the thing it does. You know, it's- it's... It has access to all the information on the internet, right? It's- it's just like it- it's got perfect memory. It can perfectly copy itself. When one part of the system learns something, the rest of the system learns it because it just can swap files, right? It can... It's, um, your, again, your- your phone is a b- is a- is a superhuman calculator. There's no reason to make it a- a- a calculator that is human level. Um, and so we're never gonna do that. We're never gonna be in the presence of h- human AGI. We will, we will be immediately in the presence of superhuman AGI. And then the question is how quickly it- it improves and how far are there, how much headroom is there to improve into. Um, on the assumption that you can get quite a bit more intelligent than we are, right? That there's like that we're nowhere near the- the summit of possible intelligence. You have to imagine that you're gonna be in the presence of something that is, again, it could be completely unconscious, right? There's, uh... I'm not saying that there's li- something that it's like to be this thing, although there might be, and that's a totally different problem that's worth worrying about. But what... Conscious or not, it is solving problems, detecting problems, improving its capacity to do all of that in ways that we can't possibly understand. And the products of its increasing competence are always being surfaced, right? So it's like it's... We- we've been, we've been using it to change the world. We became, we've- we've become reliant upon it. We built this thing for a reason. I mean, one thing that's been amazing about r- the developments in recent months is that those of us, uh, who've been at all cognizant of the AI safety space for, you know, now going on a decade or more for some people, always assumed that as we got closer to the end zone, we'd become, uh... That the labs would become more circumspect. We'd be building this stuff, air-gapped from the internet, you know?

    14. SB

      (laughs)

    15. SH

      It's like we have this phrase air-gapped from the internet. Like, we thought this was a thing, like, you- you... This thing would be in a box and then the question would be, well, do we let it out of the box and let it do something, right? Like, is it safe, and how do we know if it's safe, right? And we thought we would have that moment. We thought it would, it would happen in a lab at Google or at Facebook or somewhere. We thought we would hear, okay, we've got something really impressive and now we just want it to touch the stock market, or we want it to touch, you know, the, our medical data, or we just wanna see if we can use it. We're way past that, right? We've built this stuff already in the wild. It's already connected to the internet. It's already got millions of people using it. It already has APIs. It's already, I mean, it's- it's already doing work. So that meant from an AI safety point of view, that's, uh, it's amazing. Like, we didn't even have the moment, the- the- the choice point we thought was gonna be so fraught, right?

    16. SB

      Of course we didn't. We- we... Because there was such pressing incentives for people to press forward regardless of that conversation, especially-

    17. SH

      But yeah, every... But everyone- everyone thought... I mean, I- I was never, I was n- I don't believe I was ever in conversation with someone, I mean, someone like Eliezer Yudkowsky or- or, um, Nick Bostrom or Stuart Russell.... who assumed we would be in this spot. Like, I just, e- everyone, we, 'cause, you know, I could, you know, I'd have to go back and look at those conversations. But there was so much time spent, you know, it seems quite unnecessarily, on this idea that cir- circumspect, w- we'd make a certain amount of progress and-

    18. SB

      (laughs)

    19. SH

      ... circumspection would kick in. Like, even the people who were, who were doubters would become worried and the, and there, and there would be, like in the final yards, you know, as we go cross into the end zone, there'd be some mode where we could sort of slow down and figure it out and try to, like try to deal with the arms race dynamics. Like, let's place a phone call to China and, and, and just like, "Let's talk about this. We got something interesting." But the stuff is already being built in connection to everything, and there's already just endless businesses being, being, um, devised on the, on the, the back of this thing. And all the improvements are gonna get plowed into it and so just imagine what this looks like even in suc- in success, right? Like, let's say it, it just starts working wonders for us and we just, we get these great productivity gains and ... Okay. So then we cross into the, into the, you know, whatever the singularity is, right? At whatever speed we find ourself in the presence of something that is truly general. After all of this stuff is, all of this narrow stuff, uh, albeit superhuman narrow stuff is, is something that we totally depend on, right? Like every hospital-

    20. SB

      Mm-hmm.

    21. SH

      ... requires it, and every air- airplane requires it, and all of our missile systems require it, and it's, we're just, this is the way we do business, um, there is no, there's nothing to turn off at that point. I mean, I just don't, you know, it's like, I guess ... I mean, I put this to Marc Andreessen on my podcast and he said, "Yeah, you can turn off the internet." And I mean, I don't, I can't believe he was quite serious. I mean, yes, if you're North Korea, I guess you can turn off the internet for North Korea and that's why North Korea is like North Korea. But the idea that we could get, I mean, it just, the cost of turning off the internet n- now would be, uh, I think it would be unimaginable.

    22. SB

      It-

    23. SH

      In, in the, in the economic, uh, just the economic cost alone.

    24. SB

      Mm-hmm.

    25. SH

      The ins- it, it just would be, um ... So anyway, I mean, just the, the idea that we've, we've lost the moment to decide whether to hook our most powerful AI to everything, uh, because it's already being built more or less in contact with n- if not everything, many, so many things that you just can't put the genie back in the bottle. That's, that is genuinely surprising to me. And, um, yeah, I mean, incentives-

    26. SB

      Is- is this not the-

    27. SH

      ... tell the tale.

    28. SB

      Is this not the most pressing problem then? Because I, I, I was gonna ask, start this conversation by asking you the question about the thing that occupies your mind the most and the most important thing we should be talking about. And I, I in part assumed the answer would be artificial intelligence because the way that you-

    29. SH

      Mm-hmm.

    30. SB

      ... talk about your intuition on this subject matter, you've got children.

  4. 33:16 – 45:43

    Why I deleted Twitter

    1. SH

      um ... Yeah. Uh, I worry about the near-term chaos.

    2. SB

      I've never found the near-term consequences of artificial intelligence to be that interesting until now-

    3. SH

      Right.

    4. SB

      ... until you said it, that image of, like, the internet becoming unusable. So that was a real eureka moment for me because I've not, I've not been thinking about that.

    5. SH

      Yeah, no, uh, me too. I was, I was just concerned about the AGI risk and now, really in the, in the aftermath of Trump and COVID, I've just, I see the risk of, um ... You know, if not losing everything, losing a lot that matters, uh, just based on our interacting with just these very simple tools that, that are mis- reliably misleading us. I mean, I'm just, I'm amazed at what social media ... I forget about co- ... I'm amazed at what Twitter did to me. I mean, you know, even with all of my training and all, you know, with my head screwed on reasonably straight, I mean, it's, it's amazing to say it, but almost all of the truly bad things that have happened to me in the last decade that just really, like, just destabilized relationships and, and just priorities and really ki- kind of got plowed back into me, just perf- kind of became a kind of professional emergency, you know, stuff I had to respond to, you know, in writing or on podcasts, it was all Twitter. It was my, my engagement with Twitter was the thing that produced the chaos, and it was completely unnecessary. Um, and it was just, it was amplifying a kind of signal for me that I felt compelled to pay attention to because I was on it and I was trying to communicate with people on it, I was getting certain communication back, and it was giving me a picture of the rest of humanity, which I now think was, is fundamentally misleading, but it was, it was still consequential in its ... Uh, like, I c- even believing that it was ... At a certain point, believing that it was misleading wasn't enough to inoculate me against the delusion of the kind of the, the opinion change that was being forced upon me. Um, and I was feeling like, okay, like, these people are becoming unrecognizable. Like, I know some of these people, I've had dinner with some of these people, and their behavior on Twitter is, is appearing so deranged to me, and so, in such bad faith, um, that people are, you know, es- sh- peop- people who I know to be non-psychopaths are starting to behave like psychopaths, at least on Twitter, and I'm becoming similarly unrecognizable to them, that it's just ... Again, it, it, it all felt like a psychological experiment to which I hadn't consented, in which I enrolled myself somehow because it was, it was what everyone was doing in 2009. Um, and I spent, you know, 12 years there getting some signal and responding to it, uh, and it's not to say that it was all bad. I mean, I read a bunch of good articles that got linked there, and I, you know, I s- I discovered some interesting people, but, um, the change in my life after I deleted my Twitter account was so enormous.I mean, it's embarrassing to admit it. I mean, it's just, it's like, it's like getting out of a bad relationship. I mean, it was just, it was a fundamental, um, just freedom from, from, uh, this, this chaos monster that was, uh, it was always there ready to disrupt something b- based on its own dynamics. And-

    6. SB

      When did you delete it?

    7. SH

      Um, yeah, like Decem- I think it was December.

    8. SB

      I would... And I'm not someone that really takes sides on things, I like to try and remain in the middle. I think politically I w-

    9. SH

      Yeah, so you must have a very different Twitter experience than I was having?

    10. SB

      No.

    11. SH

      No?

    12. SB

      No. So I don't tweet-

    13. SH

      Uh-huh.

    14. SB

      ... anything other than this podcast trailer, I don't tweet anything else.

    15. SH

      Right, okay.

    16. SB

      So I just d- The only thing you'll see on my Twitter is the podcast trailer. That's it.

    17. SH

      Yeah.

    18. SB

      And for all the reasons you've described, and more interestingly, I wanted to say, in the last eight months, as someone that tries to be... doesn't get caught up too much in the media, "Oh, Elon bought this," da, da, it's 100% gone in that direction. As in my timeline now is, I say it to my friends all the time, and some of my friends who are, again, I think are nuanced and balanced, have said to me, there's something that's been turned up in the algorithm to increase engagement that has planted me in an unpleasant echo chamber that I didn't desire-

    19. SH

      Mm-hmm.

    20. SB

      ... to be in. And if I wasn't co- somewhat conscious, I would 100% be in there. My timeline, I s- My friend tweeted the other day, my friend, Cathal, tweeted, he's never seen more people die on his Twitter timeline than he has in the last six months.

    21. SH

      Hm.

    22. SB

      They're prioritizing video, so you're seeing a lot of, like, death and CCTV footage that I've never seen before.

    23. SH

      Ah.

    24. SB

      And then the, the debate around gender, um, politics, right-leaning subject matter, has never been more right down your throat.

    25. SH

      Yeah. Yeah.

    26. SB

      Because it's been... It's almost like something in the algorithm has been switched where it's now c- it's now, like, people have been let out the asylum. That's the only way I can describe it, and it's made me retract-

    27. SH

      Hm.

    28. SB

      ... even more. So when Zuckerberg announced Threads the other w- the other couple of weeks ago, it was kind of like a s- a life raft-

    29. SH

      Right. Yeah.

    30. SB

      ... out of this, out of the Titanic (laughs) . Um, and I really, really mean that. A- and I'm not someone to get easily caught up in narrative, you know, as it relates to social media platforms, it's been my industry for a decade. But what I've seen on Twitter, and that, uh, it's actually made me believe this hypothesis I had five years ago, where I thought there would be, um... I thought the route, the, the journey of social networking would be, we'd have way more social networks and they'd be more siloed. I thought we'd have-

  5. 45:43 – 58:26

    Narrow AI

    1. SH

    2. SB

      Narrow AI. I asked you the question a second ago-

    3. SH

      Yeah.

    4. SB

      ... which we, um, I really wanted to get a solution to it because I'm mildly terrified.

    5. SH

      Mm-hmm.

    6. SB

      I completely believe your, believe your, um, the logic underneath your opinion that narrow AI will cause this, um, destabilization and unusability of the internet. So just focusing on narrow AI, what, what would you consider to be a solution to prevent us getting to that world where misinformation is rife to the point that it-

    7. SH

      Mm-hmm.

    8. SB

      ... can destabilize society, politics and culture?

    9. SH

      Well, I think this is something I've been asking people about on my podcast.

    10. SB

      (laughs)

    11. SH

      Is this, because it's not actually my wheelhouse and I, I would just need to hear from experts about what's possible technically here. But, um, I'm, I'm imagining that paradoxically or ironically, this could usher in a new kind of gatekeeping that we're gonna rely on because, like the provenance of information is, is gonna be so important. I mean, the, the, the, the assurance that a video has not been manipulated or that it's not a, a, just a pure confection of, of, uh, deep fakery, right? So you get, so it could be that we're, we're meandering into a new period where you're not gonna trust a photo unless it's come, it's coming from, you know, Getty Images or, you know, The New York Times has some story how they, about how they have verified every photo in their, that they put in their newspaper. They have a process and, you know, so if you see a, a, a video of, of Vladimir Putin seeming to say that he's declaring war on the US, right? I think most people are gonna assume that's fake until proven otherwise. It's like, it's just, there's just gonna be too much fake stuff and it's gonna be, it's all gonna look so good that The New York Times and every other, you know, organ of media that we have relied upon, um, as imperfect as they've been of late, they're going to have to figure out what the tools are whereby they can say, "Okay, this is actually a video of Putin." Right? And if the new, I mean, I'm not gonna be able to figure it out on my own, right? If The New York Times doesn't have a process or CNN doesn't have a process bef- that they go through before they say, "Okay, Putin really said this. And so this is, we have to now react to this because this is real." Um, whatever that process is, and you know, whether it's, whether there's some kind of digital watermark that, you know, that's connected to the blockchain, there's some, there's, there's some tech implementation of it that can be fully democratized where you, by just being in the latest version of the Chrome browser can know that you're s- you, you can differentiate, you know, real and fake videos. I don't know what the implementation will be, but I just, I just know we're gonna get to some spot where it's gonna be, all right...... we have to declare epistemological bankruptcy. We don't know what's real. We have to assume anything especially lurid or agitating is fake until proven otherwise, so prove otherwise. And that's, you know, that, that'll be a, a resetting of something. I don't know what we do with that in a w- in a world where we really don't have that much time to react to certain things that are... you know, a, a video of Putin saying he's launched his big missiles is something that, you know, 30 minutes from now we would, we would understand whether it's real or not. We forget about... again, forget about everything we just said about AI and look at all of our legacy risks, look at the risk of nuclear war. The, the, the risk of stumbling into a nuclear war by accident has been hanging over our head for 70 years.
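
A minimal sketch of the provenance idea above, for the curious: if a publisher signs a hash of each video it releases, anyone holding the publisher's public key can later check that the file has not been altered since it was signed. This assumes Python's cryptography package and an Ed25519 keypair; the function names and workflow are illustrative only, not a description of any existing watermarking or verification standard.

```python
# Illustrative sketch: publisher-side signing and viewer-side verification of a media file.
# Assumes the third-party `cryptography` package (pip install cryptography).
# Key distribution, metadata, and any blockchain anchoring are out of scope here.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: hash the raw media and sign the digest."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(media_bytes: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Viewer side: recompute the hash and check it against the publisher's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()
    video = b"...raw video bytes..."  # placeholder for real media content

    sig = sign_media(video, publisher_key)
    print(verify_media(video, sig, publisher_key.public_key()))         # True: file unchanged
    print(verify_media(video + b"!", sig, publisher_key.public_key()))  # False: altered after signing
```

A real deployment would still have to solve the hard part the conversation gestures at: distributing and trusting the publisher's public key, the piece a browser vendor or an industry standard would need to own. The sketch only shows that "has this file changed since the newsroom signed it?" is a mechanically checkable question.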

    12. SB

      Mm-hmm.

    13. SH

      I mean, we, we've got this old tech, we've got these wonky radar systems that throw up errors. We've, we've... we have moments in history where, you know, one Soviet sub commander decided based on his just gut feeling, just his common sense, that the data was almost certainly in error, and he decided not to pass the, the, the, the obvious evidence of a, an American ICBM launch up the chain of command knowing that the chain of command would say, "Okay, you have to fire." Right? And he reasoned that if the US was gonna attack the Soviet Union, they would launch more than... I mean, I think in this case it looked like there were four missiles, that was the radar signature. If the US is gonna launch a first strike against the, the Soviet Union, and, and this was like the mid-'80s, um, they're gonna launch more than four missiles, right? This has to be, this, this has to be bad data, right? So... but, you know, so if we automate all this, will we automate it to systems that have a... that kind of common sense, right? Um, but we've been perched on the, on the edge of the abyss based on this, this, the possib-... forget about malevolent actors, you know, who might decide to have a nuclear war on purpose, we have the possibility of, of accidental nuclear war. You add this cacophony of misinformation and deepfake to all of that, and it just gets scarier and scarier. And this is just... this is not even AI. This is just, you know, narrow AI amplified misinformation.

    14. SB

      How do you feel about it?

    15. SH

      Well, the... I mean, this is the thing that worries me. I mean, I wo- I worry about the next election. You know, I think the next pres-... i- if we can run the 2024 election in a way that most of America acknowledges was valid, that will be an amazing victory. You know, whatever the outcome. I mean, obviously, I, um, I would not be looking forward to a Trump presidency. But, um, I think even more fundamental than that is can we hold a presidential election 18 months from now that is... that we recognize as valid, right? Like, that... I, I don't know, I don't know what kind of resources are being spent on, on that particular performance, but that is hugely important, and I don't think, um, our near-term experiments with AI is gonna make that easier.

    16. SB

      Why is it so important?

    17. SH

      Well, it's just... I mean, if you think the maintenance of, of, uh, a valid democracy in, in the world's lone superpower is, is of minor importance, I, um, I'd like to drink the tea you're drinking.

    18. SB

      (laughs)

    19. SH

      Um, but-

    20. SB

      Are you, are you optimistic?

    21. SH

      I mean, I, I'm... I can't say I'm optimistic. I'm... you know, it's... it's a paradoxical state I'm in because I, I definitely have... I, I tend to focus on what's wrong or might be wrong. I tend to, I think, have a pessimistic bias, right? Like, I, I, I tend to notice what's wrong as opposed to what's right, you know? And that, that's my, um... that's my bias. But I'm actually very happy, right? Like, I have a very, a very good life. I'm just... like, everything is, is... I just... I'm incredibly lucky. I'm surrounded by great people. It's like, it's just... it's all great, and yet I see all of these risks on the horizon. So I'm not, um... I just... I have a, a very high degree of well-being at this moment in my life, and yet I... like, the... what's on the television is scary. And so it, it's, it's this very interesting juxtaposition, yeah. You know, I'll be, I'll be very relieved if we have a, uh... I just... I feel like we're in a very weird spot. I mean, like, the... I haven't seen a, a full postmortem on the COVID pandemic that has fully encapsulated what I think we... what I think happened to us there. But my, my vague sense is that we didn't learn a whole hell of a lot. I mean, basically what we learned is we're really bad at responding to this kind of thing. This was a challenge that, that just fragmented us as a society. It could, it could have brought us together. It didn't. Um, and it, it amplified all of the, the divisions in our society, politically and, and economically and tribally in all, all kinds of ways. The role of misinformation and disinformation on all of that was, was all too clear, and I think just getting worse. So I think, you know, as a dress rehearsal for some future pandemic that's... that is inevitably gonna come and is... you know, could well be worse, I think we failed this dress rehearsal. And, you know, I, I have to hope that at some point our institutions will reconstitute themselves so as to be...... obviously trustworthy and engender the kind of trust we actually need to have in our institutions. Like, we, we need a CDC that not only that we trust, but that is trustworthy, that we, that we, that we're right to trust, right? And s- and so it is with an FDA and every other, you know, institution that, that, uh, is relevant here. And we don't quite have that, and half of our society thinks we don't have that at all.

    22. SB

      Mm-hmm.

    23. SH

      Right? And, and so it's, um... We have to rebuild trust in institutions somehow, and I, I just think, you know, we have a lot of work to do b- to even figure out how to make an, an increment of progress on that score. Because we're, um... Again, the siloing of, of, of large constituents into alternate information universes is j- just not functional. And that's so much of what social media has done to us, and alternative media. I mean, like, you know, I call it, you know, you and I are podcasters, but I call it podcastistan, right?

    24. SB

      (laughs)

    25. SH

      I mean, we're, we're, we have this, this landscape of... I mean, there's now, whatever, a million-plus podcasts, and there's, you know, e- email newsletters, and everyone has now just decided to curate their information diet in a way that's just bespoke to them. And you can stay there forever, and you're getting, you're getting one slice of... And it could be a, you know, a completely fictional slice of, of reality, and, um, we're losing the ability to converge on a, on a common picture of what's going on.

    26. SB

      You... (laughs)

    27. SH

      D- did that sound optimistic?

    28. SB

      No. (laughs)

    29. SH

      I didn't hear the optimism in there. You tell me.

    30. SB

      No, I d- I, no, I, but I, I kind of can't refute anything you've said on like a logical basis. It all sounds, um, like that is the direction of travel that we're going in, unfortunately. Um, I have faith that there'll be surprising positives.

  6. 58:26 – 1:02:00

    The meaning of AGI

    1. SH

      else.

    2. SB

      How would you... I feel like we've not defined the term artificial general intelligence.

    3. SH

      Hmm.

    4. SB

      From my understanding of it, it's when the, the intelligence can think and make decisions almost like a human.

    5. SH

      Yeah, I mean, loosely. I mean, this, this, this is kind of just a semantic problem, but intelligence can mean many things, but, you know, loosely speaking, it is the ability to solve problems, uh, and meet goals, make decisions, um, in response to a changing environment, in response to data. Um, and the general aspect of that is an ability to do that in acro- i- in many different situations, all the sort of situations we encounter as people. And to have one's capacity in one area not... You know, as I get better at deciding whether or not this is a cup, I don't magically get worse at deciding whether, you know, you just said a word, right?

    6. SB

      Mm-hmm.

    7. SH

      It's like I can do b- it's like I can do multiple things in multiple channels. That's not something we had in our artificial systems for the longest time, because we were, everything was bespoke to the task. We'd build a chess engine, and it couldn't even play Tic-Tac-Toe. All it could do was play chess. And it would get, and we, and we just would get better and better in these, in these piecemeal, narrow ways. And then things began to change a few years ago, where you'd get, you know, with like DeepMind would, it would have its algorithms that were, uh, you know, the same algorithm with slightly different tuning w- could play Go, right? Or it could, you know, it could solve a, a protein folding problem, as opposed to just playing chess, right? And it became the best in the world at chess, and it became the best in the world at Go. And, um, and amazingly, I mean, to take, you know, AlphaZ- what AlphaZero did, it... You know, before AlphaZero, all the chess algorithms were... Th- they just had all of our chess knowledge plowed into them. They had studied every human game of chess, and they just, it was just, you know, it was, it was a bespoke chess engine. AlphaZero just played itself, I think for like four hours, right? It just, it just had the rules of chess, and then it played itself. And it became better not...... merely the- than every oth- every person who's ever played the game, it became better than all the chess engines that had all of the, the, e- all of our chess knowledge plowed into them. So, it's a fundamentally new moment in, in how you build i- an intelligent system, and it promises this, this possibility. Again, i- this inevitability the moment you admit that we will eventually get there. The, the moment, moment you admit that it's- it can be done in Silico, and the moment that you admit that we will just keep going unless a catastrophe happens, and those two things are so easy to admit that I, I just don't- at this point I don't see any place to stand where you're not forced to admit them, right? I, I don't see any neuroscientific or cognitive scientific, uh, argument for substrate dependence for intelligence, um, given what we've already built. And again, we're, we're gonna keep going until something stops us, right? We'll hit some immovable object that prevents us from releasing the next iPhone,

  7. 1:02:00 – 1:10:06

    In the age of AI, how do we create purpose?

    1. SH

      but other- otherwise, we're gonna keep going. And then, yeah, so then, whatever 'general' will mean in that first case, there'll be a case where we've built a system that is so good at everything we care about that it's functionally general. Now maybe it's missing something. Maybe it's, you know, maybe it's missing something that we don't even have a name for. You know, we're missing all kinds of... there are possible intelligences that we haven't even thought about because we just haven't thought about them. Right? There are things that, there are ways to section the universe, undoubtedly-

    2. SB

      Mm-hmm.

    3. SH

      ... that we can't even conceive of because we are just- we have the minds we have.

    4. SB

      Elon was asked a question on this by a journalist. The journalist said to him, "In a world where you believe that to be true, that artificial general intelligence is around the corner, when your kids come to you and say, 'Daddy, what should I do with my life-

    5. SH

      Hmm.

    6. SB

      ... to find purpose and meaning,' what advice do you now give them if you hold that intuition to be true-"

    7. SH

      Mm-hmm.

    8. SB

      "... that it's around the corner?" What do you say to your children when they say, "What should I do with my life to create purpose and meaning and-"

    9. SH

      Did, did you say that Elon answered this question?

    10. SB

      Yeah.

    11. SH

      Yeah. What did he say?

    12. SB

      It's one of the most-

    13. SH

      Th-

    14. SB

      ... chilling moments in an interview I think I've seen in recent times-

    15. SH

      Ah.

    16. SB

      ... because he stutters. He goes silent for about 15 seconds, which is very un-Elon.

    17. SH

      Mm-hmm.

    18. SB

      He stutters, he stutters, um, he stutters a bit more. Like he can't- uh, and then he says he thinks he's, uh, living in suspended disbelief, because if you really thought about it too much, what's the point? He says-

    19. SH

      Hmm.

    20. SB

      ... "What's the point of me building all these cars?" He was in his Tesla factory. "What's the point of me building all these cars? And what's the point?" I do think that sometimes, so I think I have to live in, as his words were, suspended disbelief.

    21. SH

      Right. Well, I would encourage him to ask, what's the point of spending so much time on Twitter-

    22. SB

      (laughs)

    23. SH

      ... because that- he could clearly benefit from rethinking that. But, um, that aside, I mean, my, my answer to that is, and I think other people have echoed this of late, um, and it's s- sort of surprising to me. I m- my answer is that this begins to privilege a return to the, the humanities as a kind of a core, uh, like the center of, of, of mass intellectually for us. Because when you look at what we're really good at and w- it's w- among the last things that can be plausibly automated, uh, and if, if we automate it, we may cease to care about it. So it's like, learning to write good code is something that is going to be aut- it's being automated now, it's, it, uh, it's, you know, I'm, I'm not a programmer but, um, you know, I have it on good authority that already these large languag- language models are improving code and something like half the time they're writing better code than, than people. Uh, that's all gonna become like chess, right? It's just it's gonna be better than people ultimately. Um, so being a software engineer is something that, you know, and being a radiologist, and being, like tho- those things it's easy to see how AI just cancels those professions or at least makes one person, you know, so effective at using AI tools that we kn- one person can do the work of 100 people so that you got 99 people who don't have to be doing that job. Um, but creating art and, you know, writing novels and being a philosopher and, uh, talking about what it means and, uh, to live a good life and how to do it, like that's, that's something that if we, we hav- we have to look at those, w- we have to look at where we're goi- going to care that we're actually in relationship u- to and in dialogue with an, uh, another person who's, who we know to be conscious. Right? Like what do- w- i- wh- where we don't care about that, we're not gonna care, we're gonna want just the best version of it. Like I don't ca- if the cure for cancer comes from an AI, an insentient AI, I do not give a shit, I just want the cure for cancer. Right? I, like, w- there's no added value that I- wh- where, where I find out, okay, the person who gave me this cure really felt good about it and he's, you know, he had tears in his eyes when he figured out the cure. Every engineering problem is like that. We want safer planes, we want, you know, we just want things to work. We're not sentimental about the, the artistry that went into all of that. Uh, and when the difference, when the gulf between the best and the mediocre gets big and consequential, we're just gonna want the best, we're just gonna want the best, we're ju- all the way down the line. But what is the best novel-... right? What is the best podcast conversation? What is the, eh, and can you subtract out the, the conscious person from that and still think it's the best? And, and so, like, so someone once, once sent me a, um, what purported to be a, a... I didn't even listen to it, so I don't, I'm not even sure what it was, but it looked like it was an AI-generated conversation between Alan Watts and Terence McKenna.

    24. SB

      Mm-hmm.

    25. SH

      Right? And both guys who I love, I mean, but I, I didn't know either of them, but fans of both. I've listened to hundreds of hours of both talk. As far as I know, they never met each other. It would have been a fascinating conversation. Um, I realized when I lo- when I looked at this YouTube video, I realized I simply don't care how good this is because I only care if it was actually Alan Watts and Terence McKenna talking. Like, I, the... A simulacrum of Alan Watts and, and Terence McKenna, in this context, I don't care about, right? So, uh, uh, another use case I, I stumbled upon, I was playing with, with ChatGPT and I asked it, you know, the causes of World War II. You know, "Give me 500 words on the cause of World War II." And it gives, it gives you this perfect little, you know, bullet-pointed essay on the cause of World War II. That's exactly what I want from it. That's, that's fine. That's like I, I don't care that it, it wa- there was no person behind that typing. But when I, when I think, well, do I wanna re- read Churchill's, you know, history of World War II? It's on my shelf to read. It's, I, you know, it's kind of one of these aspirational sets of books. Haven't read it yet. Um, I actually wanna read it because Churchill wrote it, right? Like, that, that's why... A- and if you could give me an AI version of Churchill saying this is in the style of Churchill, it's very... even Churchill scholars say this sounds like Churchill, I actually don't care about it. Like, I, like, that's not the use. I, I'll take the generic use of, you know, give me the cause of World War II.

    26. SB

      Mm-hmm.

    27. SH

The fake Churchill is profoundly uninteresting to me. The real Churchill, even though he's dead, is interesting to me.

    28. SB

So the rebuttal I give here, and this is what my mind is doing-

    29. SH

      Yeah. Yeah.

    30. SB

... is that the distinction you're presenting, the difference I see, is that in the case of the conversation between two people you respect that has been generated by AI, someone has signaled to you that it is fake.

8. 1:10:06 - 1:14:41

    Who will AI replace?

    1. SB

by any measure, that most of it would be words strung together by artificial intelligence, and it will be selling-

    2. SH

      Oh, yeah.

    3. SB

... potentially better than the words written by humans. So again, when we go back to the conversation with your children, there might not be a career there either, because artificial intelligence-

    4. SH

      Right.

    5. SB

... is faster, can produce more, can test and iterate on whether it sells better, gets more clicks. It can write the headline, create the picture, write the content, and then I can just take the check 'cause I put my name to it.

    6. SH

      Yeah.

    7. SB

So even in that regard, what remains?

    8. SH

Well, so in the limit, what I think we're imagining is a world where none of the terrifyingly bad things have happened, so it's all just working. We're producing a ton of great stuff that is better than the human stuff, and people are losing their jobs. We've got a labor disruption, but we're not talking about any other kind of political catastrophe or cyber apocalypse, much less AGI destroying everything. Then I think we just need a different economic assumption, and a different ethical intuition, around the value of work. Our default norm now in a capitalist society is that you have to figure out something to do with most of your time that other people are willing to pay you for. You have to figure out how to add value to other people's lives such that you reliably get paid. Otherwise, you might die, right? We've got a social safety net, but it's pretty meager. There are cracks you can fall through. You could wind up homeless-

    9. SB

      Addiction.

    10. SH

... and we're not gonna figure out what to do about that all too well, you know? So your claim upon your existence among us is you finding something to do with your time that other people will pay you for. And now we've got artificial intelligence removing some of those opportunities and creating others, but in the limit, I do think this is different. I think analogies to other moments in technological history are fundamentally flawed. I think this is a technology which, in the limit, will replace jobs and not create better new jobs in their wake. This just cancels the need for human labor ultimately. And strangely, it replaces some of the highest-status, most cognitively intensive jobs first-

    11. SB

      Mm-hmm.

    12. SH

... right? It replaces Elon Musk before it replaces your electrician or your plumber or your masseuse, way before. So we have to internalize the reality of that. And again, this is in success; this is all good things happening. And we have to have a new ethic, and a new economics based on that ethic. UBI is one solution to this: you shouldn't have to work to survive, right?

    13. SB

Universal basic income.

    14. SH

Yeah, there's so much abundance now being created, we have to figure out how to spread this wealth around. We've got a cure for cancer over here. We've got perfect photovoltaic-driven economies over here, where we've solved the climate change issue. We're just pulling wealth out of the ether, essentially. We've got nanotechnology that is birthing whole new industries, but it's all being driven by AI. There's no room in this for a person; whenever you put a person in the decision chain-

    15. SB

      (laughs)

    16. SH

... you're just adding noise. This should be the best thing that's ever happened to us. This is just like God handing us the perfect labor-saving device: the machine that can build every other machine, that can do anything you could possibly want. We should figure out how to spread the wealth around in that case. This is just powered by sunlight. No more wars over resource extraction. It can build anything. We can all be on the beach, just hanging out with our friends and

9. 1:14:41 - 1:21:40

    Should we be doing universal basic income?

    1. SH

family, right? Like-

    2. SB

Do you believe we should do universal basic income, where everybody's given a monthly check?

    3. SH

So, something like that. We have to break this connection. Again, this is what will have to happen in the presence of this kind of labor force dislocation enabled by all of this going perfectly well. This is pure success: AI is just producing good things, and the only bad thing is that it's putting all these people out of work. It's coming for your job eventually.

    4. SB

I've heard this, and my issue with it, my rebuttal when I talk to my friends about this idea of universal basic income, where we hand out enough cash or resources to people so that they're stable, which I'm not necessarily against but just want to play with a little bit, is that humans seem to have an innate desire for purpose and meaning-

    5. SH

      Yeah.

    6. SB

... and we seem to be designed and built psychologically for labor, for discomfort, and for success.

    7. SH

But it doesn't have to be labor that's tied to money, right?

    8. SB

      Yeah.

    9. SH

Like, it can be... We will get our status in other ways and we'll get our meaning in other ways. And again, these are all just stories we tell ourselves. I mean, you're talking to a person who knows it's possible to be happy actually doing nothing, right? Just sitting in a room for a month, just staring at the wall. Like that's-

    10. SB

      Because you've done it.

    11. SH

... like that's possible, right? And yet that's most people's worst nightmare. Solitary confinement in a prison is considered torture, and I know people who spent 20 years in a cave. So there are capacities here that are worth talking about. But more commonly, I think: we want to be entertained, we want to have fun, we want to be with the people we love, we want to be useful in relationship, and insofar as that gets uncoupled from the necessity of working to survive, it doesn't all just go away. We just need new norms and new ethics and new conversations around what we do on vacation. What you're imagining is that if you put everyone on vacation, and make the vacation as good as possible, a majority of people will eventually be miserable because they're not back at work. And yet most of these people are working so that they have enough money to finally take that vacation. We will figure out a new way to be happy on the beach. If you get bored with Frisbee, we will figure something else out that is fun.

    12. SB

      (laughs)

    13. SH

You know, I'll be able to read Churchill's history of World War II on the beach and not be rushed by any other imperative, because I'm happily retired, because my AI is creating the thing that is solving all my economic problems. We should be so lucky as to have that be our problem: how to be happy in conditions of no economic imperative, no basis for political strife over scarce resources, and with the question of survival off the table with respect to what one does with one's time and attention. You can be as lazy as you want and you'll still survive. You can be as unlucky as you want and you'll still survive. The awful situation we're in now is that differences in luck mean everything. Someone is born without any of the advantages that we have, and we don't have an economic system that reliably gives them every advantage and opportunity they could have. We've convinced ourselves either that we don't have the resources, or we don't have the incentives to access the resources, so as to actually come to the help of people we could help. The idea that people starve to death is unimaginable, and yet it still happens. That's not a scarcity problem, it's a political problem, wherever it happens. And yet all of this is tied to a system where everyone has convinced themselves that it's normal to have one's survival be in question if one doesn't work, whether by choice or by accident. I think it's still true that in the US, and this is almost certainly not true in the UK, the most common reason for a personal bankruptcy is overwhelming medical expense that just comes upon you for whatever reason. Your wife gets cancer, you go bankrupt solving the cancer problem, or failing to solve it, and now everything else unravels. And we have a society which thinks, "Yeah, well, unlucky you. If you wind up homeless, just don't sleep in front of my store, because you're gonna hurt my business." Successful AI that cancels lots of jobs would only be canceling those jobs by virtue of producing so many good things, so much value for everybody, that we would have to figure out how to spread that wealth around. Otherwise, we would have an amazingly dystopian bottleneck for a few short years, and then we would just have a revolution. Then the guys in their gated communities, making trillions of dollars based on having gotten close enough to the GPUs that some of it rubbed off on them, would be dragged out of their houses and off their Gulfstreams, and we would have a hard reset of the political system.

    14. SB

If I had to put you in a yes-or-no situation and ask your intuition the question now: if your objective was, which I'm sure it is, to encourage the betterment of humanity and to increase our odds of happiness and wellbeing 100 years from now-

    15. SH

      Mm-hmm.

    16. SB

... and there was a button placed in front of you, and it would either end the development of artificial intelligence as we've seen it over the last decade, so it would never proceed with developing intelligent machines, or not. So you could press a button and stop it right now.

10. 1:21:40 - 1:27:31

    Would you stop AI if you could?

    1. SB

What would you do?

    2. SH

And stop it permanently, such that we never then do that thing? We just never figure out how to build intelligent machines?

    3. SB

      Pause it indefinitely.

    4. SH

Well, I would definitely pause it-

    5. SB

      Indefinitely.

    6. SH

... to a point where we could get our heads around the alignment problem. Yes.

    7. SB

      Permanently. If the button was a permanent pause that you couldn't undo.

    8. SH

Well, the question is how deep does that go. So we have everything we have now, but we just-

    9. SB

      Yes.

    10. SH

      ... it just never gets better, then no.

    11. SB

      Yeah, we never make progress from here.

    12. SH

      Right. Um-

    13. SB

And your objective is to make humanity happy and prosperous.

    14. SH

I mean, it's hard, because when you begin imagining all of the good stuff that we could get with aligned superhuman AI, it's cornucopia upon cornucopia; everything is potentially within reach. But yeah, I take the existential risk scenario seriously enough that I would pause it. I think we will eventually get to the good stuff anyway. If curing cancer is a biomedical engineering problem that admits of a solution, and I think there's every reason to believe it ultimately is, we will eventually get there based on our own muddling along with our current level of information technology. I'm reasonably confident of that, because our intelligence shows every sign of being general; it's just not as fast as we would want it to be. What AI is gonna give us is speed, and memory. No person or team of people can integrate all of the data we already have, so the real promise here is that these systems will be able to find patterns that we wouldn't even know how to look for, and then do something on the basis of those patterns. But I think an intelligent search within the data space, by apes like ourselves, will eventually do most of the great things we want done. And the problems we need to solve so as to safeguard the career of our species, to make civilization durable and sane, and to remove this sword of Damocles that hangs over our heads at every moment, where at any moment we could just decide to have a nuclear war that ruins everything, or create an engineered pandemic that ruins everything, we don't need superhuman intelligence to solve those problems. We need an appropriate emotional response to the untenability of the status quo, and we need a political dialogue that eventually transcends our tribalism. (paper rustles)

    15. SB

      For those of you that don't know, this podcast is sponsored by Whoop, a company that I'm a shareholder in. And I'm obsessed with my Whoop. It's glued to my wrist 24/7. And for those of you that don't know, it's essentially a personalized wearable health and fitness coach that helps me to have the best possible health. My Whoop has literally changed my life. Whoop is doing something this month which I'd highly suggest checking out. It's a global community challenge called The Core 4 Challenge. Essentially, they guide you through a set of four activities throughout the month of August that are scientifically proven to improve your overall health. I'm giving it a go, and I can't wait to see the impact it has on me, and I highly recommend you to join me with that. So if you're not on Whoop yet, there is no better time to start. If you're a friend of mine, there's a high probability that I've already given you a Whoop because I'm that obsessed with it. It is the thing that I check when I wake up in the morning. It's the first thing that I look at. I want the information on my sleep to then plan my day around. So if you haven't joined Whoop yet, head to join.whoop.com/ceo to get your free Whoop device and your first month free. Try it for free, and if you don't like it after 29 days, they're gonna give you your money back, but I have a suspicion that you're gonna keep it. Check it out now and let me know how you get on. Send me a DM. (paper rustles) Quick one. If you've been listening to this podcast for some time, one of the recurring messages you've heard over and over and over again, especially when we first had that conversation with Tim Spector, is about the importance of greens in our diet. And a while ago, I started pressing my friends at Huel to come out with a product that did exactly that, allowed you to have all those greens, the vitamins and minerals you need in a drink. And after several, several, several months of iterations and processes, they released this product called Huel Daily Greens, which is now one of my favorite products from Huel because it tastes great and it fills that very important nutritional gap that I had in my diet. The problem is, it launched in the U.S. and it sold out straightaway, and became a smash hit for Huel for the very reasons I've described. It's now back in stock in the United States, but it's not here in the UK yet. So if you're a UK listener, which I know a lot of you are, it's not yet available. So, let's all attack Huel. Let's DM them everywhere we can (laughs) and tell them to bring Huel Daily Greens to the UK. This is the product. When it is available in the UK, I'm gonna let you know first, but until then, let's spam their DMs.

11. 1:27:31 - 1:34:28

    How do we change our minds to be happier?

    1. SB

(paper rustles) You, and I'd say a few others, maybe two or three others, helped change my mind about one of the most profound things I think anyone could believe, which was that when I was 18, I believed in Christianity.

Episode duration: 1:50:41
