a16z

Reid Hoffman on AI, Consciousness, and the Future of Labor

Reid Hoffman has been at the center of every major tech shift, from co-founding LinkedIn and helping build PayPal to investing early in OpenAI. In this conversation, he looks ahead to the next transformation: how artificial intelligence will reshape work, science, and what it means to be human. In this episode, Reid joins Erik Torenberg and Alex Rampell to talk about what AI means for human progress, where Silicon Valley’s blind spots lie, and why the biggest breakthroughs will come from outside the obvious productivity apps. They discuss why reasoning still limits today’s AI, whether consciousness is required for true intelligence, and how to design systems that augment, not replace, people. Reid also reflects on LinkedIn’s durability, the next generation of AI-native companies, and what friendship and purpose mean in an era where machines can simulate almost anything. This is a sweeping, high-level conversation at the intersection of technology, philosophy, and humanity.

Timestamps:
00:00 The Spirit of Silicon Valley
00:27 Web 2.0 Lessons & the Seven Deadly Sins
01:15 Investing in AI & Silicon Valley Blind Spots
03:40 From Productivity Tools to Drug Discovery
05:45 Will AI Replace Doctors?
09:40 Limits of LLMs and Reasoning
13:00 Credentialism vs. Competence
15:00 Bits vs. Atoms: The Robotics Challenge
18:00 AI Savants & Context Awareness
20:10 Software Eating Labor & the “Lazy and Rich” Heuristic
24:25 Scaling Laws and the Future of AI
31:15 Consciousness and Agency in AI
35:45 Philosophy, Idealism & Simulation Theory
38:15 LinkedIn’s Durability & Network Effects
47:00 Friendship & Human Connection in the AI Era

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Resources:
Follow Reid on X: x.com/reidhoffman
Follow Alex on X: x.com/arampell
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Reid Hoffman (guest), Erik Torenberg (host), Alex Rampell (host)
Oct 20, 2025 · 53m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:27

    The Spirit of Silicon Valley

    1. RH

      This is actually one of the things that I think people don't realize about Silicon Valley. You start with what's the amazing thing that you can suddenly create? Lots of these companies, when you go, "What's your business model?" They go, "I don't know." They're like, "Yeah, we're gonna try to work it out, but I can create something amazing here." And that's actually one of the fundamental, call it the religion of Silicon Valley and the knowledge of Silicon Valley that I so much love and admire and embody. [upbeat music]

    2. ET

      Reid,

  2. 0:27–1:15

    Web 2.0 Lessons & the Seven Deadly Sins

    1. ET

      welcome to a16z Podcast.

    2. RH

      It's great to be here.

    3. ET

      So Reid, you're one of the most successful Web 2 investors of, of that era, you know, Facebook, uh, LinkedIn obviously, which you co-created, Airbnb, m-many, many others, and you had several frameworks that helped you do that, one of which was the Seven Deadly Sins, which we talk about often and love. As you're thinking about AI investing, what, what, what's a framework or, or worldview that you take to your AI investing?

    4. RH

      So obviously we're all looking through a glass darkly, looking through a fog with strobe lights that don't really, you know, are hard to understand what's going on. So we're all navigating this new s- this new universe. So I don't, don't know if I have as crisp a framework. The Seven Deadly Sins still work because that's a question of what is infrastruct-- psychological infrastructure across all eight billion-plus human beings.

    5. ET

      Yeah.

    6. RH

      But

  3. 1:15–3:40

    Investing in AI & Silicon Valley Blind Spots

    1. RH

      I'd say there's a couple things. So first is, um, there is going to be a set of things that are the kind of the obvious line of sight. Obvious line of sight, bunch of stuff with chatbots, um, bunch of stuff productivity, coding assistance, you know, da da da da da. And so, and by the way, that's still worth investing in, but obviously obvious line of sight means it's obvious to everybody- [chuckles]

    2. ET

      Yeah

    3. RH

      ... line of sight. And so-

    4. ET

      Yeah

    5. RH

      ... so, you know, uh, doing a differential investment is harder. The second area is, well, what does this mean? 'Cause too often people say in an area of disruption that everything changes as opposed to significant things change. So like you were mentioning Web 2.0 and LinkedIn and, and obviously, you know, part of this i- with a platform change, you go, okay, well, are there now new LinkedIns that are possible because of AI or something like that? And of course obviously given my own heritage, I would love LinkedIn to be that.

    6. ET

      Yeah.

    7. RH

      But you know, it's, you know, it's whatever. It's... I'm always pro-innovation, entrepreneurship, the best possible thing for humanity. Um, but like what are the kinda more traditional, like the kind of things that haven't changed? Network effects-

    8. ET

      Yeah

    9. RH

      ... you know, enterprise integration, you know, other kinds of things that, that the new platform, um, upsets the apple cart, but you're still gonna be putting that apple cart kinda back together in some way and what is that? And then the third, um, which is probably where I've been putting most of my time, has been what I think of as Silicon Valley blind spots. Because what we tend to be-- like Silicon Valley is, is one of the most amazing places in the world. There's a network of intense coopetition, learning, uh, you know, r- invention, you know, kind of, uh, building new things, et cetera, which is just great. But we also have our canons. We have our kind of blind spots, and a classic one for us tends to be, um, well, everything should be done in CS, everything should be done in software, everything should be done in bits.

    10. ET

      Yeah.

    11. RH

      And that's the most relevant thing because by the way, it's a great area to invest. Um, but it was like, okay, what are the areas where the AI revolution will be magical but won't be within the Silicon Valley blind spots? And that's probably where I've been putting the majority of my co-founding time, invention time, um, you know, kind of investment time, et cetera, because like I think usually a blind spot on something that's very, very big-

  4. 3:40–5:45

    From Productivity Tools to Drug Discovery

    1. ET

      Yeah

    2. RH

      ... right, is precisely the kinds of things that you go, okay, you have a, a long runway to create something that could be like another one of the iconic companies.

    3. ET

      Yeah. The, let's go deeper on that 'cause we were also talking just, just before this about how people focus so much on the productivity-

    4. RH

      Mm-hmm

    5. ET

      ... side, the workflow sides, but they're missing other, other elements or... So say more about other, other things that you find more interesting there.

    6. RH

      Well, so, um, s-so one of the things I, you know, kinda told my partners back at Greylock in 2015, so it's like 10 years ago, um, was I said, "Look, there's gonna be a bunch of fr- things on productivity around AI. Um, I'll help," right? Like, you know, I'll... You know, you have, uh, companies you want me to, to work with that you're doing, great. That's awesome. You know, enterprise productivity, et cetera, you know, things that Greylock tends to specialize on. But I said, "Actually, in fact, what I think is here, getting at the blind spots, is, um, is also gonna be some things like, you know, what, you know, as you guys both know, Manas AI, um, which is how do we create a drug discovery factory that works at the speed of software?"

    7. ET

      Yeah.

    8. RH

      Right? Now obviously there's regulatory, obviously there's biological bits, obviously da da da. And so does... it won't be purely the speed of software, but how do we do this? And they said, "Oh, well, what do you know about biology?" And the answer's zero. Well, it may be not quite zero.

    9. ET

      Yeah.

    10. RH

      You know, been on the board of Biohub for 10 years. I'm on the board of Arc, et cetera. Like, I've been thinking about the intersection of the worlds of atoms and the world of bits, and you have biological bits, which are kinda halfway between atoms and bits in various ways. I've been thinking about this a lot and kinda what the things are. Not so much with a specific company focus as much as a what are things that elevate human life, you know, kind of focus. Part of the reason why Biohub, part of the reason why Arc. Um, but then I was like, well wait a minute, actually now with AI, and you have the acceleration. 'Cause like for example, l- um, actually I'll, uh, this detour will be fun.

  5. 5:45–9:40

    Will AI Replace Doctors?

    1. RH

      Um, so roughly also around 10 years ago, I was asked to give a, uh, a talk to the Stanford Long Range Planning Commission, and, um, what I told them, uh, was that they should, uh... basically divert and, and put all of their energy into AI tools for every single discipline. Hmm. And this is well before ChatGPT and all the rest. And the metaphor I used was a search metaphor, because think if you had a custom search productivity tool in every single discipline. Now, back then, I could imagine it, I could build one for every discipline other than theoretical math or theoretical physics. Today, you might even be able to do theoretical math and theoretical physics. [laughs] Right, exactly. And so do that, like transform knowledge generation, knowledge communication, knowledge analysis. Well, that kind of same thing, now thinking, well, well, the biological system is still too complex to simulate. We've got all these amazing things with LLMs, but like the classic Silicon Valley blind spot is, "Oh, we'll just put it all in simulation- Hmm. "And drugs will fall out," [laughs] right? That simulation is difficult. Now, part of the insight that you begin to see from like the work with, you know, AlphaGo and AlphaZero is, 'cause like people just think, "Ah, physical material is gonna take quantum computing." Now, quantum computing could do really amazing things, but actually simply doing prediction and getting that prediction right, and by the way, it doesn't have to be right 100% of the time. It has to be right like 1% of the time, because you can validate the other 99% weren't, weren't right, and then finding that one thing, and so literally it's like it's not a needle in a haystack, it's like a needle in a solar system, [laughs] right? Yeah. And it's like, but you could possibly do that, and that's part of what led to like, okay, Silicon Valley will classically go, "We'll put it all in simulation, and that will solve it." Nope. That's not gonna work.
Or, "Oh no, we're gonna have a super intelligent drug researcher, and that will be two years down the thing." I actually... Look, maybe someday, not soon, right? So anyway, that was the kind of thing that was the, the, the in other different areas. Now, part of it's also, um, you know, kind of, uh, what a lot of people don't realize... Actually, if I'm not going too long, I'll do- No, yeah, please. I'll go, I'll go to the other example- Yeah, please. Please ... that, that I gave 'cause the... You'll love this. Um, this will echo some of our conversations from 10, 15 years ago. Um, so, um, I am prepping for a debate about, uh, on Sunday this week on whether or not AIs will replace all doctors in a small number of years. Now, the pro case is very easy, which is we have massively increasing capabilities. If you look at ChatGPT today, um, you'd go, like for example, advice to everyone who's listening to this, if you're not using ChatGPT or equivalent as a second opinion, you're out of your mind. You're ignorant. Yeah. You get a serious result, check it- Yeah ... as a second opinion. And by the way, if it differs, then go get a third. Yeah. Um, and so the diagnostic capabilities, these are much better knowledge stores than any human being on the planet. Yeah. So you go, well, if a doctor is just a knowledge store, yeah, that's going away. However, the question is actually, I think things that really do mean doctor, and it's not like, oh, someone will hold your hand and says, "Oh, it's okay," [laughs] et cetera. Um, you know, the... I actually think there will be a, a position for a doctor 10 years from now, 20 years from now. It won't be as the knowledge store. It will be as a user of the... as an expert user of the knowledge store, but it's not gonna be, "Oh, because I went to med school for 10 years and I memorized things intensely, that's why I'm a doctor."

  6. 9:40–13:00

    Limits of LLMs and Reasoning

    1. RH

      That's all gone away. Great, that part, but that, but there's a lot of other parts to being a doctor. Now, so I went to ChatGPT Pro, you know, using deep research. I went to Claude, you know, uh, uh, four point f- Opus 4.5 deep research. I went to Gemini Ultra. I went to Copilot deep research. And I, in all of these things, I was doing everything I knew about prompting for it to give me m- the best possible arguments for my position. 'Cause I thought, "Well, I'm about to debate on AI. Of course I should be using AI to debate." The answers were B-minus- Hmm ... or B, despite absolute top thing, and I'm not... like maybe there's probably better prompters in the world, but I've been doing this since I got access to GPT-4 six months before the public did, right? So I've, I've got some experience in the whole prompting thing. It's not like I'm an amateur prompter. And so I looked at this and I went, "Oh, this is very interesting and telling of where current LLMs are limited in their reasoning capabilities," because, um, what it did is it basically did, you know, 10 to 15 minutes of like 32 GPU compute clusters doing inference, bringing it all in. Amazing work relative to the work that an analyst would have produced in three days was produced in 10 minutes. And of course, I set it up all in parallel, you know, with different browser tabs, all, all going into the different systems and then ran the comparisons across them and everything. But its flaw was that it was giving me a consensus opinion about how articles in good magazines, good things are arguing for that position today. And all of that was weak 'cause it was kinda like, oh, you need to have humans cross-check the diagnosis, right? Like it was a common theme across this. I'm like, well, by the way, very clearly we know as technologists that human cross-checking the diagnosis, we're gonna have AIs cross-checking the diagnosis. We're gonna have AIs cross-checking the AIs cross-checking the diagnosis.
And sure, there'll be humans around here somewhere, but like that's not gonna be the central place to say in 20 years, doctors are gonna be cross-checking the diagnosis. 'Cause by the way, what doctors should be learning very quickly is if you ha- believe something different than the consensus opinion that an AI gives you

    2. AR

      You'd better have a very good reason [chuckles] and you're gonna go do some investigation. Doesn't mean the AI's always right. That's actually part of what you're-- Like, what we're gonna need in all of our professions is, is more sideways thinking, more lateral thinking. The, "Okay, this is good consensus opinion. Now, what if it's not consensus opinion?" Hmm. That's what doctors need to be doing. That's what lawyers will need to be doing. That's what coders will need to be doing. Right. You know, that's what it is, and LLMs are still pretty structurally limited there. That's funny. Yeah. My, my favorite saying is by Richard Feynman, "Science is the belief in the ignorance of experts." Yes. And there are so many professions where the credentialism is the expertness. Yes. Right? It's like, it's, it's if this, then that, and it's like, "I have MD, therefore I know. I have JD, therefore I know." And that's, that's why coding is actually a little bit ahead of it because it's like I don't care where you got your degree. Th- this is a, it's kind of ahead of the rest of society. Yes, yes. Now, um, it's funny. Uh, Milton Friedman one time got asked, um, because he was, you know, famous libertarian, uh, "Don't you think

  7. 13:00–15:00

    Credentialism vs. Competence

    1. AR

      that brain surgeons should be credentialed?" And it's like, yeah, the market will figure that out, which seems kinda crazy, right? [laughs] But that's how we, we now do coding when you're in the world of bits. Um, but it feels like a lot of the reasons why you have this, you know, very not, not very advanced thinking is because so much of it is built upon layers of credentialism, and that's, that's a very good heuristic. Yes. Historically, it has been. Yes. If you have a doctor that graduated at the top of their class from Harvard Medical School, it's like probably a good doctor. Yes. Um- By the way, you critically wanted that- Yes ... three years ago. Right. Right? It's like, no, no, I need someone who has the knowledge base. You have it, great. Right. But now we have a knowledge base. Yeah. I totally agree. That was the reason I was saying you would love this- Yeah ... 'cause it- Yeah, yeah. No, it's, it's- ... echoes of our expertise conversations. I, I thought you were gonna get into, um, you know, bits versus atoms- Mm-hmm ... where it's kind of interesting right now, where it's like all this high-value work, like Goldman Sachs sell-side analyst, that's deep research. [laughs] Yes. Right? Whereas fold my laundry- Yes ... that's $100,000 of CapEx- Yes ... so it doesn't work as well as somebody that you could pay $10 an hour to. Yes. And it's like the atom stuff is so hard- Oh ... to actually disrupt. Yes. Um, and we're gonna get there eventually- Yep ... but that's where Silicon Valley certainly has a blind spot, but it's like a CapEx versus- Yeah ... OpEx or like, you know, bits versus atoms blind spot. The atoms is another part, but that's also the reason why bio, 'cause bios- Right ... are the, are the, are the- That is atoms, yeah ... are, are the, are, are the bitty atoms. Yes, yes, yes. [chuckles] Right. 
And what's the, what's the best explanation for why it's so hard to figure out f-fold-folding laundry but so easy to figure out, um- Well, it's, it's actually not that hard to figure out- Or why it's taken us much longer, w- much more expensive, 'cause we couldn't-- It would've been hard to foresee that in advance. Well, I remember I, I talked to Ilya about this a few years ago, and it's like, why is it that if you read an Asimov novel, novel where it talked about like how, you know, people will cook for you and fold your laundry, like why have none of these things happened? Um, and it's like, well, you just never had a brain that was smart enough. Hmm. This was part of the problem, is that you could-- I mean, yes, you have things like, you know, how do you actually pick up this water bottle? And it turns out your hands are very, very well-- Like why are humans more advanced than every other species? So there are two reasons. Number one is we have opposable thumbs, and then number two is we've come up with a language system that we could pass down from

  8. 15:00–18:00

    Bits vs. Atoms: The Robotics Challenge

    1. AR

      generation to generation, which is writing. Dolphins are very smart. Like there, there was actually a whole theory which is it wasn't just brain size, it was brain to body size. Hmm. So humans were the highest. Nope, not true. [chuckles] And now that we've actually measured every single animal, there are a lot of animals that have more brain over body size. Um, like that, that, that ratio tilts in favor of an elephant or of a dolphin or s-- I, I forgot the numbers, but- Hmm ... there are a bunch that are actually more advanced than humans, but they don't have opposable thumbs, and because of that, they never developed writing, so they can't actually iterate from generation to generation. Yep. And humans did. And then, of course, like the human condition was like it was this and then the, the Industrial Revolution, then it went like that, and now it's continued like this. By the way, this is the reason why in the last four or five years, one of the things I realized is, you know, um, 'cause of the classic, uh, uh, classification of human beings is Homo sapiens. I actually think we're Homo techne because it's that iteration through technology. Yes. Yes. Yes, exactly. Yeah. Whatever version, writing, typing, [chuckles] you know? But it, it's we iterate through technology. That's the actual thing, goes to future generations, builds on science- Yeah ... you know, all the rest of it, and that's what I think is really key. Yeah. A couple other ex-explanations could be that we have more training data on white-collar work than sort of, you know, pick-picking things up. Or, or some people make this evolutionary argument that we've been using our opposable thumbs for way longer than we've been, say, you know, reading, right? Well, it's, yeah, it's the, the lizard brain. Like most of your brain is not the neocortex. Yeah. And like that's the like draw and paint and everything else, which is actually very, very hard. Yeah.
You can't find a dolphin that can draw or paint, and that's- Right ... probably 'cause they don't have opposable thumbs, but it's also like- Yeah ... maybe that part of the brain hasn't developed, but you have like mil- you have billions of years of evolution- Yeah ... for these somewhat autonomous responses. Yeah. Like fight or flight, that's been around for a long, long time, well before drawing and painting. Right. But I think the main issue is just like you have battery chemistry problems. Like I can't-- Like it turns out like a lithium ion battery is pretty cool, but the energy density of that is terrible- Yeah ... relative to ATP with cells, right? Yeah. Like you have all of these reasons why robotics don't work, but first and foremost is the brain was never very good. Yeah. So you had robotics like Fanuc, which makes assembly line robots. Those work really well, but it's like very deterministic or highly deterministic. But once you go into like, you know, multiple degrees of freedom, you have to get so many things to work, and the CapEx, it's like, I need $100,000 to have a robot fold my laundry. Yeah. And we have so many extra people that will do that work, the economics never made sense. But this is why Japan is a leader in robotics, because they can't hire anybody. So therefore, I might as well build... Uh, true story. I went bowling in Japan, and they had a robot to g-- like a vending machine robot- Hmm ... that would give you your bowling shoes. [laughs] And then it would clean the bowling shoes. [laughs] Uh-huh. Right? And it's like you would never build that here. Oh. Because you'd hire some guy- Yes, yes ... from the local high school- Yes ... and he'd go do that. Yeah, and much cheaper, and actually more effective. Much cheaper. But it's this CapEx, like the CapEx line- Yeah, yeah ... and the OpEx line when they cross- Yeah. Yeah. Then it's like, ooh, I should build robots. So that's the other thing that you probably need- Yeah ... 
but if the cost goes down, then of course it, it goes in, in favor of CapEx versus OpEx. I think there's a couple things to go deeper on the robot side.

  9. 18:00–20:10

    AI Savants & Context Awareness

    1. AR

      So one is-

    2. RH

      The density, the, the, the, the bits to value.

    3. ET

      Yeah.

    4. RH

      Right? So, like in language, when we encapsulated all these things, even into like romance novels, there's a high bits to value. Whereas when you're gonna... in the whole world, there's a lot of, like, how do you-- We abstract from all those bits, and how do you abstract them? There's another part of it which is kinda common sense awareness. Like, this is one of the things that, like, when I look at, you know, GPT-2, 3, 4, 5, it's a progression of savants, right? And the savants are amazing. It doesn't mean the savant i- But, but, like, when it makes mistakes, like, a-as a classic thing, so Microsoft has had running for, uh, years now, agents talking to each other long-form. Like, just like, "Let's go for a year and do that and see what happens." And so often they get into like, "Oh, thank you." "No, thank you." "No, thank you." One month later, "Thank you." "No, thank you." Which human beings would be like, "Stop." [laughs] Right? Like, just like it's... And that's like a, that's a simple way of putting the context awareness thing, of saying, "No, no, no, no. Let's, let's stay very context aware." And even as magical as the progression has been, like much, much better da-data, much, much better reasoning, much, much better, uh, personalization, et cetera, et cetera, context awareness only is a proxy of that.

    5. ET

      Yeah. Yeah. I wanna go deeper on, um, your question about doctors, Reid, and... 'Cause Alex, we just released one of your talks around, you know, software ea-eating labor, and I'm curious where you, and how you, what sort of frameworks you have for thinking about what spaces are gonna have more of this copilot model versus what spaces are just gonna be m- sort of replacing the work entirely.

    6. AR

      Uh, I have, I wish I could pr- I'm gonna use an LLM to go predict the future.

    7. RH

      [laughs]

    8. AR

      But I'm gonna get a B-minus apparently.

    9. RH

      Yes, yes.

    10. AR

      So maybe I'll answer when I get a B-plus. Um, I think a lot of it is, like, the natural, like, there, there's the skeuomorphic version, which is, "Okay, well, I, I trust the doctor. Everybody trusts the doctor." The heuristic is, "Where did you go to medical school?" Apparently two-thirds of doctors now use OpenEvidence-

    11. RH

      Mm-hmm

    12. AR

      ... um, which is like ChatGPT-

    13. RH

      Yeah

    14. AR

      ... but it ingests the New England Journal of Medicine-

    15. RH

      Yeah

    16. AR

      ... and has, like, a license to that.

    17. RH

      Yep.

    18. AR

      So, um-

    19. RH

      Yeah, Daniel Nadler, good guy.

    20. AR

      Yeah. Um, Kensho, right?

    21. RH

      Yeah.

    22. AR

      So, yeah, so, so that seems like there's no reason not to do that. Like, my, my Seven

  10. 20:10–24:25

    Software Eating Labor & the “Lazy and Rich” Heuristic

    1. AR

      Deadly Sins version, uh, I'll simplify it, which is like everybody wants to be lazier and richer.

    2. RH

      Mm.

    3. AR

      So if this is a way that I can, like, get more patients and do less work-

    4. RH

      Mm

    5. AR

      ... of course people are gonna use this. There's no reason not to.

    6. RH

      Mm. Mm.

    7. AR

      Um, but does it replace that particular thing? And actually, m-most of, like, the, the software eats labor thing, it doesn't actually eat labor. Right now, the thing that's working the best is not like, "Hey, I have a product where everybody's gonna lose their job." Nobody's gonna buy that product. It's very, very hard to get that distributed, as opposed to, "I will give you this magic product that allows you to be lazier." I mean, obviously it's not framed this way, like lazy and rich. It sounds kind of, uh, you know, not, not great, but, "I'm going to let you work fewer hours and make more money." And that's, that's a very killer combo, and if you have a product like that, um, and it's delivered by somebody that already has that heuristic of expertise, these are just going to go one after another and get adopted, adopted, adopted. And then e-eventually you're gonna have cases like the one that you mentioned where if you don't use ChatGPT when you get a medical diagnosis, you're insane.

    8. RH

      Yeah.

    9. AR

      But that has not fully diffused across the population.

    10. RH

      Well, but-

    11. AR

      And it's pr-

    12. RH

      ... it's barely diffused.

    13. AR

      No, I know. Yes, yes.

    14. RH

      Yeah, no, no, but you were saying not fully. I mean, part of the reason everyone, start doing it. [laughs]

    15. AR

      Yes. Oh, uh, 100%, um-

    16. ET

      Well, it's because it's the fastest-growing product of all time-

    17. RH

      Yeah

    18. ET

      ... and yet it's barely, you know, it's-

    19. AR

      Well, I, that's why I'm convinced that AI is massively underhyped.

    20. ET

      Yeah.

    21. RH

      Yeah.

    22. AR

      Because in, in Silicon Valley you might not make that claim. Maybe it's overhyped, maybe valuation-

    23. RH

      No

    24. AR

      ... but whatever.

    25. RH

      We all, we all don't think it's overhyped.

    26. AR

      Um, but I think once I meet somebody in the real world-

    27. RH

      Yeah

    28. AR

      ... and I show them this stuff, they have no idea. And part of it is, like, they see the IBM Watson commercials-

    29. RH

      [laughs]

    30. AR

      ... and like, "Oh, that's AI." No, that's not AI, right?

  11. 24:25–31:15

    Scaling Laws and the Future of AI

    1. RH

      doing, you're not trying hard enough.

    2. ET

      Yeah.

    3. AR

      Yeah.

    4. RH

      It isn't that it wor- does everything. Like, for example, I still think if I put in, "How should Reid Hoffman make money investing in AI?" And I'll go try that again in a month. I suspect I will still get what I think is the bozo business professor answer versus the actual game-- name of the game. But, um, everyone should be trying. And I, you know, like, for example, we put, when we get decks, we put them in and say, "Give me a due diligence plan." Right. If not everybody here is doing that, that's a mistake.

    5. AR

      Yeah.

    6. RH

      Or y- 'cause you, five minutes you get one, and you go, "Oh no, not two, not five. Oh, but three is good," and it would've taken me a day to get to about three.

    7. AR

      Yeah. Yeah. Um, in terms of-- Let's go back to extrapolation. Obviously, the last few years have had in-incredible, um, growth. You, you were involved, of course, with OpenAI since, since the beginning. When we look for the next few years, um, there's a broader question as to whether scaling laws will hold, whether sort of the limitations, um, or how far we can get with, with LLMs. Um, do we need another breakthrough of a different kind? What, what is your view on some of these questions?

    8. RH

      So one of the things we, you know, we all swim in this universe of extrapolating the future. One of the things that's great about Silicon Valley, and so you get such things as, you know, theories of singularity, the-theories of s-super intelligence, theory of exponential getting to super intelligence soon. And what I find is usually the mistake in that is not the fact that they're extrapolating the future, that's smart and people need to do that, and far too few people do it. I think I remember liking your post and helping promote it, if I-

    9. AR

      Yeah

    10. RH

      ...if I recall. Um, but it's the notion of, well, what curve is that? Like, if it's a savant curve, that's different than, "Oh my gosh, it's an apotheosis, and now it's God." You know? You know, it's like, no, no, no, it'll be a, an even more amazing savant than we have. But by the way, if it's only savants, there's always room for us. There's always room for the generalist and the cross-checker and the context awareness and all the rest of it. Now, maybe, maybe it'll cross over a threshold or not. Maybe it won't. You know? Like, I think there's a bunch of different questions there. But that extrapolation too often goes, "Well, it's exponential, so in two and a half years, magic." And you're like, "Well, look, it is magic, but it's not all magic," is the, is the kinda way of doing it. Now, so my own personal belief is that, um... Look, so the critics of LLMs make a mistake in that, and, you know, we can go through all the different critics. "Oh, not knowledge representation. It, it screws up on, you know, prime numbers and, you know, blah, blah, blah, blah, blah." We've all heard-

    11. AR

      How many Rs in strawberry?

    12. RH

      Yes, exactly.

    13. AR

      Yeah, exactly.

    14. RH

      You know, and like, "Well, see? It's broken." And you're like, "You're missing the magic," right? Like yes, maybe there's some structural things that over time, even in three to five years, will continue to be a difficult problem for LLMs, but AI is not just the one LLM to rule them all. It's a combination of models. We already have combinational models. We use diffusion models for various image and video tasks. Now, by the way, they wouldn't work without, without also having the LLMs in order to have the ontology to say, "Create me an Erik Torenberg as a Star Trek captain," [laughs] you know, going out to, you know, explore the universe and meeting and making first contact with the Vulcans, and so forth, which, you know, now with our phone, we could do that, [laughs] right? And it would be there, uh, courtesy of OpenAI. Uh, and, you know, Veo 'cause Google's model is also very good. But it needs LLMs for that. But the thing that people don't track is it's gonna be LLMs and diffusion models and I think other things with a, with a fabric across them. Now, one of the interesting questions is, is the fabric fundamental to LLMs? Is the fabric other things? Like, I think that's a TBD on this, and the degree to which it gets to intelligence is an interesting question. Now, one of the things I think is a, um... You know, look, I, I talk to all the critics intensely, not because I necessarily agree with the criticism, but I'm trying to get to the what's the kernel of insight.

    15. AR

      Yeah.

    16. RH

      And like one of the things that I, um, loved about, you know, kind of a set of recent conversations with Stuart Russell was to say, "Hey, if we could actually get the fabric of these models to be more predictable, that would greatly a-a-allay the fears of what happens if something goes amok." Well, okay. Let's try to do that. Now I don't think the whole verification of outputs, like, like logical... Like, we can't even do verification of coding, right? Like verification always strikes me as very hard. Now, brilliant man, maybe c- we'll figure it out. But the, um, but, but on the other hand, the, "Hey, this is a good goal. Can we make that more programmable, reliable?" I think that is a good goal that people, that very smart people should be working on and, by the way, smart AIs. [laughs]

    17. AR

      Well, that's some of the math side.

    18. RH

      Yeah.

    19. AR

      It's like if you think about the, the foundation of the world, I mean, uh, philosophy is the basis of everything.

    20. RH

      Yeah.

    21. AR

      Actually, math comes from philosophy.

    22. RH

      Yeah.

    23. AR

      It's called the Cartesian plane after Descartes. You know, you're a philosophy major.

    24. RH

      Yes. [laughs]

    25. AR

      You know this, right? So you have, you have, uh, philosophy, math, physics, like why did Newton build calculus? To understand the real world, so math, physics. Physics gets you chemistry. Chemistry gets you biology, and then biology gets you psychology. So that's kinda the stack. So if you solve math, that's actually quite interesting-

    26. RH

      Yes

    27. AR

      ...because, um, there's a professor at Rutgers, uh, Kontorovich-

    28. RH

      Mm-hmm

    29. AR

      ...who's written about this a lot. Um, and I, I find this part fascinating just as a former mathematician-

    30. RH

      Yeah

  12. 31:15–35:45

    Consciousness and Agency in AI

    1. ET

      Y- so y- as you just mentioned, Alex, Reid, you're a philosophy major, but you're also very interested and deep in neuroscience.

    2. RH

      Mm.

    3. ET

      And some people say that, "Hey, we'll never create AI with its own consciousness because we don't even understand our own consciousness. We don't understand how our, our own brain works." Um, and then, and then there's the broader question is, oh, will AI have its own goals, or will AI have its own agency? Uh, what is, what is sort of y- your view on, on some of these questions surrounding consciousness with AI?

    4. RH

      Well, consciousness is its own [laughs] tar ball, which I will say a few things about. I think agency and goals is almost certain. Um, there is a question, I think this is one of the areas where we wanna ex-, um, have some clarity and control. That was a little bit like the, the kind of question of what kind of compute fabric holds it together.

    5. ET

      Yeah.

    6. RH

      Because you can't get complex problem-solving without it being able to set its own minimum sub-goals and other kinds of things. And so, so goal setting and behavior and inference from it, and that's where you get the classic kind of like, whoa, you tell it to maximize, you tell it to maximize paperclips, and it tries to convert the entire planet into paperclips. And there's one thing that's definitely old comp- computer worry that, that which is no context awareness, something I even worry about modern AI systems. But on the other hand, it's like, look, if you're actually creating intelligence, they don't go, "Oh, let me gl- like, let me just go try to convert everything into paperclips." It's like, it's, it's actually, in fact, not that simple in terms of how it plays. Now, um, now consciousness is an interesting question 'cause you got some very smart people, Roger Penrose, um, who I actually interviewed way back when on Emperor's New Mind, speaking of mathematicians. Um, and um, you know, who are like, "Look, actually, in fact, there's something about our form of intelligence, our form of, of, of computational intelligence that's quantum-based, that has to do with how our physics work, that has to do with things like tub- tubules and so forth." And by the way, it's not impossible. Like, that's, that's, that's a, the, it's a coherent theory from a very smart mathematician, like one of the world's smartest, right? Like, it's kind of in the category of, there's other people as smart, but there's no one smarter, [laughs] right, i- in, in, in that kind of vector. And so, so that's possible. Um, I don't think you need consciousness for, um, goal setting, uh, or reasoning. Um, I'm not even sure you need consciousness for certain forms of self-awareness. There may be some forms of self-awareness that consciousness is necessary for. It's a tricky thing. 
Philosophers have been trying to address this not very well for as long as we've got records of philosophy [laughs] right now. And philosophers agree. I'm not... Philosophers wouldn't think I was throwing them under the bus with this. They're like, "Yeah, this is a hard problem," 'cause it ties to agency and free will and a bunch of other things. And, and I think that the right thing to do is keep an open mind. Now, part of keeping an open mind, I think, um, Mustafa Suleyman wrote a very good piece in the last month or two on, like, seeming consciousness, which is we make too many mistakes a la the Turing test, that piece of brilliance, which is, um, well, it talks to us, so therefore it's fully intelligent and all the rest. And so similarly, you had that kind of s- you know, kind of nutty event from that Google engineer who said, "I asked this earlier model-

    7. ET

      Right

    8. RH

      ... was it conscious?" And it said, "Yes," so therefore it is.

    9. AR

      QED.

    10. RH

      Yes, QED. Like, no, no, no, no. It's like, you have to be not misled by that kind of thing. And like, for example, you know, the kind of thing that, you know, w- what, what I actually think most people obsess about the wrong things when it comes to AI. They obsess about the climate change stuff because actually, in fact, if you apply intelligence at the scale and availability of electricity, you're gonna help climate change. You're gonna solve grids and appliances and a bunch of other stuff. It's just like, no, this will be net super positive. And by the way, you already see elements of it. Um, Google applied its algorithms to its own data centers, which are, um, some of the best-tuned grid systems in the world, forty percent energy savings. I mean, just, you know, just da, da, da, da and just applying it. So that's the mistake. But one of the areas I think is this question around, like, what is the way that we want children growing up with AIs? What is their epistemology? What is their learning curves? You know, what are the things that kind of play to this? Because that kind of question is something that we wanna be very intentional about in terms of how we're doing it. And I think that's, like, like if you wanna go ask a good question that you should be trying to get good answers, that you could do something and gain and contribute in good answers

  13. 35:45–38:15

    Philosophy, Idealism & Simulation Theory

    1. RH

      to, that's a good one.

    2. ET

      Yeah.

    3. AR

      Well, the, the most cogent argument that I've heard against free will, uh, is just that we are biochemical machines.

    4. RH

      Mm-hmm.

    5. AR

      So if you wanna test somebody's free will, get them very hungry, very angry-

    6. RH

      [laughs]

    7. AR

      ... like all of these things where it's just there's a hormone.

    8. RH

      Yeah.

    9. AR

      It's like norepinephrine.

    10. RH

      Yeah.

    11. AR

      It's like that makes you act a particular way.

    12. RH

      Yeah.

    13. AR

      It's like an override.

    14. RH

      Yes.

    15. AR

      So you have this, like, free will thing, but then you just insert a certain chemical and then, like, boom, it changes.

    16. RH

      Are you saying you're not a Cartesian? You don't have a little pineal gland that connects the two substances?

    17. AR

      No. No.

    18. RH

      Yeah, yeah.

    19. AR

      I, I don't, I don't know.

    20. RH

      [laughs]

    21. AR

      So, but it, it's true. I mean, it's like-

    22. RH

      Yeah

    23. AR

      ... like hunger is a-

    24. RH

      Yes. [laughs]

    25. AR

      Yeah, I'm hangry. Like, that's a thing.

    26. RH

      Yes.

    27. AR

      And, you know, what, what is the... Like, do, do you actually want, if you're developing super intelligence, do you want to have this, like, kinda silly override? I mean, the reason-

    28. RH

      Yeah

    29. AR

      ... why people go to jail sometimes that are perfectly normal is they get very angry.

    30. RH

      Yeah.

  14. 38:15–47:00

    LinkedIn’s Durability & Network Effects

    1. ET

      Um, I wanna return to, uh, LinkedIn-

    2. RH

      Mm

    3. ET

      ... how we began the, the conversation.

    4. RH

      Mm.

    5. ET

      Uh, re- because we were lucky to, or I was lucky to work ma-many years with you. We would get-

    6. RH

      Yeah

    7. ET

      ... pitches, uh, e-every week about a LinkedIn disruptor.

    8. RH

      Yes.

    9. ET

      La-last 20 years, right?

    10. RH

      Yes.

    11. ET

      And so, and nothing's come even close. [laughs]

    12. RH

      No.

    13. ET

      And so it, it's fascinating. I'm, I'm curious why people pers- sort of underrated how hard it was, and pe-people have this about Twitter too, or other things that kinda look simple perhaps, but are actually very, very difficult to unseat and have a lot of staying power. And, and it's interesting, you know, OpenAI, uh, they said they're coming out with a job service to, quote, "Use AI to help find the perfect matches between what companies need and what workers can offer." Um, I'm, I'm curious j- how you think about sort of LinkedIn's durability.

    14. RH

      So look, I obviously think LinkedIn is durable, but first and foremost, I, I kinda look at this as humanity, society, industry. So first and foremost is what are the things that are good for humanity? Then what's good for society, then what's good for industry? And by the way, we do industry to be good for society and humanity. It's not an, it's not oppositional. It's just a, you know, how you're making these decisions and what you're thinking about. So I would be delighted if there were new, amazing things that helped people, um, you know, kind of, uh, make productive work, find productive work, and ma-make them do them. We're having, gonna have all this job transition, uh, coming from technological disruption with AI. Like, it would be awesome. Of course, it would be extra awesome if it was LinkedIn bringing it, just given my own personal craft in my hands and pride at what we've built and all the rest. Now, the thing with LinkedIn and, you know, Alex was with me on a lot of this journey, uh, you know, as I sought his advice on various things. Um, the, the, uh, LinkedIn was one of those things where it's where the turtle eventually actually, in fact, like, grows into something huge. 'Cause for many, many years, the general scuttlebutt in Silicon Valley was LinkedIn was the, was the, the bu- dull, boring, useless thing, et cetera, and it was gonna be Friendster. Probably most of the people listening to this don't know what Friendster is. Then MySpace, maybe a few people have heard of that, [laughs] right? You know, and then of course we've got, you know, Facebook and Meta and, you know, TikTok and all the rest. And part of the thing for LinkedIn is it's built a network that's hard to build, right? Because it doesn't have the same sizzle and pizzazz that photo sharing has. It doesn't have the same sizzle and pizzazz that, you know, um, you know, like one of the things that, uh, you know, you were referencing the Seven Deadly Sins comment. 
Um, and back when I started doing that, 2002, yes, I left my walker at the door.

    15. ET

      [laughs]

    16. RH

      Um, uh, the, the thing that I used to say was, uh, Twitter was identity. I actually mistook it. It's wrath.

    17. ET

      Right.

    18. RH

      Right?

    19. ET

      Yeah. [laughs]

    20. RH

      And so it doesn't have the wrath-

    21. ET

      [laughs]

    22. RH

      ... you know, kind of component of it. And so, um, and so the, uh, you know, the thing that... And you said with LinkedIn, LinkedIn's greed, great, you know, 'cause Seven Deadly Sins kind of, um, you know, 'cause, 'cause that's, you know, a motivation that's very common across a lot of human beings.

    23. AR

      Rich and lazy.

    24. RH

      Yes, exactly. And so, or, you know, y-you, you're putting it in the punchy way, but simply being productive.

    25. AR

      Yeah.

    26. ET

      Yeah.

    27. RH

      More value creation-

    28. ET

      Right

    29. RH

      ... and accruing some of that value to yourself. And so, um, and so I think the reason why it's been difficult to create a, uh, a disruptor to LinkedIn is it's a very hard network to build. It's actually not easy. And, um, and by staying really true to it, you end up getting a lot of people going, "Well, this is, this is where I am for that, and now I have a network of people with this," uh, and we are here together collaborating and doing stuff together, and that's the thing that a new thing would have to be. Um, and... You know, I, uh, you know, I, uh, when I saw GPT-4, um, and, uh, knew that, uh, Microsoft had access to this, I called the LinkedIn people and said, "You guys have got to get in the room to see this," [chuckles] right? Because you need to start thinking about what are the ways we help people more with that, because you start with... This is actually one of the things that I think people don't realize about Silicon Valley, 'cause, you know, the general discussion is, oh, you're trying to make all this money through equity and all this revenue. Of course, you know, business people are trying to do that. But what they don't realize is you start with what's the amazing thing that you can suddenly create? And part of it is, like, lots of these companies, like, get started with, and you go, "What's your business model?" And you go, "I don't know." Like, "Yeah, we're gonna try to work it out, but I can create something amazing here." And that's actually one of the fundamental, like, places of what the, you know, call it the religion of Silicon Valley and the knowledge of Silicon Valley that I so much, you know, love and admire and embody.

    30. AR

      That's, that's actually a, a question that I have. So I'll say one thing-

  15. 47:00–52:51

    Friendship & Human Connection in the AI Era

    1. ET

      Um, is there anything you wanted to make sure-

    2. RH

      But we can do this again. This is always fun.

    3. ET

      Okay.

    4. RH

      Yes.

    5. ET

      Yeah, yeah. That's great. The, um, I'm curious, Reid, a-as you've sort of continued to uplevel in your career-

    6. RH

      Mm

    7. ET

      ... and have more opportunities, and they seem to com-compound, especially, you know, post selling LinkedIn, h-h-how have you decided where is the highest leverage use for, for your time, or w-where can you have the, the, the, the biggest impact?

    8. RH

      Well s-

    9. ET

      What's your mental framework?

    10. RH

      So, I mean, one of the things that I'm sure I speak for all three of us is an amazing time to be alive.

    11. ET

      Yeah.

    12. RH

      I mean, this AI and the transformation of what it means for evolving Homo techni and what, what is possible in life and in society and work and all the rest is just amazing. And so I stay as, uh, involved with that as I possibly can. Like, it has to be something that's so important that I will stop doing that.

    13. ET

      Yeah.

    14. RH

      Now, within that, you know, part of that was, you know, co-founding, um, Manas AI with Siddhartha Mukherjee, who's the CEO, emperor wa-- uh, uh, author of Emperor of All Maladies, um, inventor of, um, some T-cell therapies. So it's, uh, like, like, for example, getting an instruction from him on the FDA process... You know, that's the kind of thing that makes us all run screaming for the hills [laughs] right, as a, as an instance. Um, and so, uh, you know, that kind of stuff. But also, um, you know, like one of the things I think is really important is as technology drives more and more of everything that's going on in society, how do we make government more intelligent on technology? And so, you know, every kind of, um, you know, kind of well-ordered Western democracy, um, I've done-- been doing this for at least twenty to twenty-five years. If, if a minister, you know, or kind of senior person from a, from a democracy comes and asks for advice, I give it to them.

    15. ET

      Yeah.

    16. RH

      So, you know, just last week I was in France talking with Macron 'cause he's trying to figure out, like, "How do I help French industry, French society, French people? What are the things I need to be doing?" You know, if all the frontier models are gonna be built in the US and maybe China, what does that mean for how I help, you know, our people and so forth? And, and he's doing the exact right thing, which is, "I understand that I have a potential challenge. What do I do to help my people?"

    17. ET

      Yeah.

    18. RH

      "How do I reach out? How do I talk?" Sure, they've got Mistral, they've got some other things, but like, how do I maximally help what I'm doing? And so putting a bunch of time into that as well.

    19. ET

      Yeah. I remember seeing your, your, your calendar, and it was what seemed like seven days a week, meetings absolutely stacked. And o-one of the ways in which-

    20. RH

      I've gone to six and a half days.

    21. ET

      Okay. [laughs] So I'm glad you've calmed down a bit.

    22. RH

      Yes. [laughs]

    23. ET

      Um, one of the ways in which you're able to do that, one, it's important problems, but two, you, you work on projects with friends-

    24. RH

      Yeah

    25. ET

      ... sometimes over, over decades.

    26. RH

      Yeah.

    27. ET

      And you, you, maybe we'll close here. You've thought a lot about friendship.

    28. RH

      Oh.

    29. ET

      You've, you've, you've written about it.

    30. RH

      Yeah.

Episode duration: 53:02


Transcript of episode brjL6iyoEhI
