No Priors

No Priors Ep. 45 | With Reid Hoffman

AI doomerism and calls to regulate the emerging technology are at a fever pitch, but today’s guest, Reid Hoffman, is a vocal AI optimist who views slowing down innovation as anti-humanistic. Reid needs no introduction: he’s the co-founder of PayPal, LinkedIn, and most recently Inflection AI, which is building empathetic AI companions. He is also a board member at Microsoft and a former board member at OpenAI. On this week’s episode, Reid joins Sarah and Elad to talk about the historical case for an optimistic outlook on emerging technology like AI, advice for workers who fear AI may replace them, and why it’s impossible to regulate before you innovate. Plus, some predictions. Aside from his storied experience in technology, Reid is an author, podcaster, and political activist. Most recently, he co-authored a book with GPT-4 called Impromptu: Amplifying Our Humanity Through AI.

00:00 Reid Hoffman’s bird’s-eye view on the state of AI
03:37 AI and human collaboration in workflows
05:23 What’s causing AI doomerism
12:28 Advice for white-collar workers
16:45 Why Reid isn’t retiring
18:25 How Inflection started
22:06 Surprising ways people are using Inflection
25:34 Western bias and AI ethics
30:58 Structural challenges in governing AI
33:15 Most exciting whitespace in AI
35:00 GPT-5 and innovations coming in the next two years
44:00 What future should we be building?

Sarah Guo (host) · Reid Hoffman (guest) · Elad Gil (host)
Dec 21, 2023 · 47m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–3:37

    Reid Hoffman’s bird’s-eye view on the state of AI

    1. SG

      (music plays) Hi listeners, and welcome to another episode of No Priors. This week we're joined by my longtime friend and partner Reid Hoffman. He needs no introduction as co-founder of PayPal, LinkedIn, and now Inflection AI, as well as Microsoft board member and former OpenAI founding board member. He's a prolific author, podcaster, and political activist, and he's also one of my favorite technology optimists, big picture thinkers, and supporter of people and founders. Welcome Reid. Thanks for doing this.

    2. RH

      Great to be here and love what you guys are doing and, you know, uh, longtime, uh, friends and partners with both of you. So this is awesome.

    3. SG

      We will start with a small question, which is what is your view on the state of AI today? What do we need more of and less of?

    4. RH

      Well, um, I mean the obvious thing about AI that everyone probably listening to this podcast already agrees with is that it's somewhere between the largest, you know, tech transformation of our lifetime and perhaps the largest tech transformation of, of, of human history. And one of the things I use to describe it is like steam engine of the mind. So just like the steam engine gave us physical powers, you know, kind of superpowers of, you know, construction and transport and manufacturing and a bunch of other things, this will give us a whole bunch of mental superpowers. It's both the amplification of humanity, um, which is part of what the Impromptu book was gesturing towards. And also there will be some places where we will create, you know, kind of, um, uh, substitution, uh, replacement of work in various ways. And obviously we'll get into some depth on that. But I think that's the, the macro picture. And then with that of course there's tons of things that are current status and current needs, and you know, I think everyone tends to a little bit over-predict like how quickly things like everything will change next year, and that's not gonna happen. Um, but then they tend to under-predict, you know, 10, 20 years, um, in some ways in terms of how the transitions. Although, you know, obviously because just like all technologies, the doomsayers come out first. Um, whether it's the printing press, electricity, everything else is like, "This is the end of the world." You can go back and you can find that this is the end of the world in each of these things. You know, the printing press was described as, as degrading human capabilities through cognition and spreading misinformation, um, as, as an example. And, um, but you know what I'd say that probably as an arc the thing that I would want to see more of in the... 
And that's part of the reason why I did Impromptu the way I did, in the creation, theorization, and the design of what we're doing in artificial intelligence is more in the kind of symbiotic, uh, amplification loop. We tend to as technologists say, "Well, I'm going to have autonomous vehicles and they're going to drive separately," which I think is a good thing in that case, uh, because I think, you know, you don't need an amplification loop. You just need, uh, effective logistics, you know, safety, uh, you know, save the 40,000 deaths that we currently have in, in human-driv- driven vehicles and so forth. You can go in depth in that if that's useful. But like, like the fact is there's gonna be a whole bunch of things that are actually going to be better with people plus, um, AI. That "plus" is a thing to focus on. And I think we haven't nearly as much, and that's of course part of the reason I wrote Impromptu.

    5. SG

      Do you have a favorite example of where you already see like the amplification loop you're talking about, or AI and humans working collaboratively together?

    6. RH

      So a

  2. 3:37–5:23

    AI and human collaboration in workflows

    1. RH

      friend of mine's kid first started with exposure to GPT-4 and was like, "Ah, I'm gonna do sonnets. Whatever, whatever." Like, "I don't care." But had-

    2. SG

      Wait, how old is this kid, for context?

    3. RH

      15.

    4. SG

      Okay.

    5. RH

      You know, bright kid. Um, you know, really interested in organic chemistry and realized that she could place, um, pa- you know, scientific papers in GPT-4 and say, "Explain this to me. I'm a 15-year-old." Right? And already like an entire world opens up to her. And I learned something from that because I was like oh yeah, there's occasionally these really complicated papers that I'm looking at going, "I don't have the two hours to try to decode, hey, I could do what she's doing." (laughs) That's smart, right? And so that exists today. Part of the, the thing that I try to tell the various concerned pessimists that say well, "You know, it is my job, you know, to, uh, limit what the large tech companies are doing with artificial intelligence." Well actually in fact we have a line of sight to a medical assistant on every smartphone. That's five billion smartphones in the world. Um, and you know, much less than a billion people have access to doctors. You have a line of sight to a tutor on every subject for every age group for everybody. Your actual job is to get that five billion access to all of that, right? This example of being able to use it as a tutor today if you just apply it a little bit in amazing ways for everybody, (laughs) right? Uh, who has access, access being the key thing that I would suggest governments are- should be working on, um, uh, is, you know, stellar.

  3. 5:23–12:28

    What’s causing AI doomerism

    1. RH

    2. EG

      I've been, I've been working in technology now for about 20 years and, um, this is the biggest potential impact on global health and global equity that I've seen, and yet there... It's also the biggest immediate doomerism I've seen. And it feels like the foundation for that doomerism was laid, uh, many years ago. What- why do you think that exists or what do you think caused so much almost negative sentiment or pessimism or call for regulation so early by a number of people in the AI community?

    3. RH

      Well, in the AI community, I think a lot of it is very well-meaning, um, but, but conceptually flawed frequently.... and there's a couple different arcs you can go into it. Maybe one of the most common arcs, especially that gets to the, you know, so-called x-risk people, since we're talking about the people who refer to themselves as doomers, P-10 doomers or whatever. One of the things I find most amusing is people say, first sentence, totally agree with this, very strong insight, "Human beings are very bad at making predictions and instincts off exponential curves." Then they go, "And then my prediction is..."

    4. NA

      (laughs)

    5. RH

      And you're like, "Well, wait a minute. You should take your first sentence seriously." (laughs) Right? Uh, because for example, what they do is they go, "Well, we have an exponential encue- increase in compute. Um, it's increasing cognitive functions, so then I'm gonna hand wave a little bit and say that's an increase in IQ. We're gonna have super intelligence and then this is what's gonna happen." And you're like, "Well, it's unclear." Like for example, if the increase in cognitive functions is actually more like an increase in savants, um, of various ways, then the super intelligence you're describing, by the way, GPT-4 is already super intelligent relative to a number of human capabilities, is actually not that alarming. Go play with GPT-4. It's not alarming. It's actually enhancing and amplifying in various ways. And I think that, that kind of thing of going, um, you know, I, I, I, I come to an observation, exponential curve, and I go, "Oh, shit," (laughs) right? And I'm trying to be helpful, and, and by the way, of course the calls for regulation are, "Hey, I shouldn't solo make this decision because I'm a tech creator. I should get, you know, broader sense of society and the representative of society involved in this," not realizing, of course, all you're really doing is calling for panic. For example, when I, um, talked to some of the authors of the six-month pause letter, uh, I was like, "Well what, what did you think was gonna happen?" And they're like, "Well, we just hoped everyone was gonna pause." I'm like, "Okay, I thought I was gonna need to talk to you about how tech development works, but let's talk about humanity first, 'cause I think you misunderstand humanity." (laughs) Right? Like, that's not what's gonna happen. You don't send up a flare and say, "Everyone should pause for six months. Oh, look, eight billion people paused."

    6. NA

      (laughs)

    7. RH

      On your theory of the universe, the UN would be a highly functioning organization that we would all use for a whole bunch of things. It doesn't work.

    8. SG

      Yeah, I, um ... One of the things that is sort of surprising to me is, um, how clearly laid out, like, the variations of doomer scenario are and how little, um, color there is in the optimistic scenarios. Right? And so this is one of the reasons I think you've been a really, um, important, like positive voice on the ways in which humanity will be pushed forward by AI and collaboration. Uh, because as you said, it's, as both of you said, it's c- entirely predictable that there will be some sort of panic around every new transformational technology, going back to, like, because of the advent of the telephone, no one will ever leave their home again. Um, and, and-

    9. RH

      Yes, exactly.

    10. SG

      ... and so I- I think it's, it's very, um, and this is a, around news cycles as well, it's very easy to amplify, um, a negative scenario, also because the, um, I think the set of fiction that actually inspires lots of technologists, uh, is much more, um, dystopian in sci-fi than, uh, than utopian because there's no conflict.

    11. RH

      Yeah. Well, especially in video, right?

    12. SG

      Mm-hmm.

    13. RH

      There, there is some stuff in written that's pretty good, um, you know, Iain Banks, et cetera, et cetera, but the, but the video is always like person versus machine and, you know, the machine has to play the conflict evil role. And, and one of the things in 2019 I went and talked to all the CAA people saying, "Look, you're damaging humanity with all these stories. You should put the machine also on a positive category." It could be person plus good machine against bad machine. That's fine. But like, have some understanding that there's a, there's a good role, that, that there's a potentially good role for this. And like, when you look at all these technologies, it, there's a ton of... This is one of the reasons why I didn't, I didn't sign the 22-word statement of, you know, AI should be treated as an existential risk along with climate change, pandemic, et cetera. Because the mistake, and a bunch of people I treasure signed this, this, this l- letter, Sam Altman, you know, uh, Mustafa Suleyman, et cetera, and, and th- this 22-word statement. And the reason is because unlike other things, climate change, pandemic, et cetera, A, they don't have anything in the positive column. When you get to pandemic, maybe the only way to solve pandemic is AI. W- a certain help for climate is AI. It's net, I think, strongly in the positive column. It isn't to say that it might not add some existential risk characteristics to the overall portfolio, but your portfolio is improved within it relative to reduction of it. And, and the reason I think that everybody, um, that a lot of people, and this is one of the things that I think, you know, given what we're doing here is the critics think they're virtuous because they go, "Oh, there's this, there's this danger th- that we should, we should trumpet it." And you're like, "Well, actually, in fact, you may be doing more harm than you're doing good in your attempt to be virtuous." 
Because by trumpeting the negativity, you're not shaping where we could be on positive. And so my challenge to the critics is say, you have to be articulating where we should be going to and what we should be doing, and then we can navigate around them. Now, I also don't think that the, that the, like the, "Oh, all technology is just great. We don't need to think about safety at all," that's bozoville. Right? Of course, like, it's like, look, you can clearly do things dumbly with technology. There's a bunch of stuff around viruses that people, like, by not being careful and all the rest, can be really dumb about. Or genetic manipulation. You're like, "No, no. You have to be, you have to be intelligent and careful about it." But going to a future that is so much better than the present, that's the goal. And that's there. And if you're not articulating that that's possible, that how you think your, your critiques or risks could help navigate to getting there, then actually in fact you're being destructive versus constructive. And I think part of the reason why people do this is they go, "Oh, I know that I'm just being good when I articulate this fear and this risk." And you're like, "Actually, no. You've got a conceptual mistake. You're actually maybe even being bad."... right? So you, so how to get to the good future is actually the hard work, so do the hard work.

    14. SG

      Consider the opportunity cost of, um, of all the good that, uh, we, we think AI might do in the, in the short and long term. Um, one, one more thought on this in terms of more the short term.

  4. 12:28–16:45

    Advice for white-collar workers

    1. SG

      What is your advice for the many, uh, people increasingly in white-collar jobs who are concerned about AI replacing them?

    2. RH

      For various reasons, in the medium to long term I'm actually pretty positive and counter a lot of, a lot of very smart AI thinkers that go, "Oh, my God. We may have rampant unemployment and a bunch of other stuff." And, and, and this is one of the medium-term ones that I actually really respect and think is gonna be super 'cause you know, steam engine. Steam engine

    3. EG

      Yeah.

    4. RH

      The mind steam engine, you know, helped create capitalism, had huge, uh, human consequences of transformation of society, um, that's important to navigate. That's, uh, as important here and in part because the speed of transformation will be a lot faster. You know, you have five billion smartphones. Um, your ability, you know, in c- computer devices and the internet, your ability to have that transformation hit is at a much more intense and focused wave. And, you know, call it, you know, mid-level, white-collar jobs, um, including some upper-level white-collar jobs are one of the ones that are gonna get transformed first and most, most ferociously. And so I think first, the advice to the, to the folks, which is start playing with AI, use it as amplification. You gotta learn it. I understand you may say, "Hey, look. I'm 40, 50. I've got my nice experienced position. I'm comfy. I do not want my society changed." Look, the, the person who was driving the, the horse and buggy carriage felt the same way. The Luddite weavers felt the same way. You know, it's like, no, no. This is y- you gotta, you gotta trot out your learning and some of your curiosity. It doesn't have to be perfect. You don't have to be the A student. You just have to be engaged in learning the tool some, the same way you were learning Excel for doing your accounting thing and so forth, and just, you know, learn some on it. Now, here is the good news. The good news is and as a general arc and this is one of the things I was trying to do with Impromptu and I'm gonna do some more writing on this next year, is that anything that AI creates as a challenge, AI can also be part of the solution. Because you go, "Well, okay, so it's gonna displace a whole bunch of customer service people." Yep. All right. 
So, what can you build for customer service people that can help them figure out what other jobs they might be able to do, how they might find those jobs, how they might learn those jobs, how they might do those jobs? And let's make sure those AI tools are built too, so, so to help with the transition on people. You say, "Well, okay. There's a whole bunch of paper filing in accountants or, or, or form management in marketing groups, you know, kind of doing stuff, and that's all going to be much less human effort relative to the amount of work that's going on." Those people, how do they learn new jobs? Now, part of the thing that I think is the reason I'm more optimistic over time, the transition I pay a lot of attention to, but optimistic of what they target 'cause of course the exponential people tend to say, "Well, no, no. But th- then they're just gonna be better than all humans, and humans, you know, can't be doing anything." Um, I actually think that if, if you take that these are kind of these progressing, adding a whole bunch of tasks, we, we learn and adapt to other things. And so when you say to an individual, "Let's help with the transitions of the individuals to finding other kinds of things." And by the way, the other thing is it's like l- let's use as a parallel truck driving. So, you know, Aurora is obviously trying to do the completely autonomous truck. Well, if every truck manufacturer in the world, um, basically, uh, started manufacturing AV trucks tomorrow, right, it'll be 10 years before more than half of the trucks on the road are AV trucks. Right? That gives you time to adjust. That gives you time to make this work. And I think that there is more time for adjustment than the usual, like, you know, five-alarm fire, you know, ringing the bell both for the individuals and for organizations and for society to do. 
And so it's like, look, let's navigate into it and be paying attention to and planning and trying to create it, but once again, AI can be part of the solution.

    5. EG

      Yeah. I guess, I guess speaking of career transitions, you've had I think one of the most impressive

  5. 16:45–18:25

    Why Reid isn’t retiring

    1. EG

      careers in Silicon Valley. Um, you know, you were... You, you started a company that was an early social networking company in the '90s. You were at PayPal as a, a initially a board member, a senior person there. You started LinkedIn, which is one of the most important social products in the world and sort of productivity tools. You ran Greylock, the venture fund. Um, and, uh, so you, you've had this amazing career arc and usually once people hit your moment in time, they kind of say, "Okay, I'm done." And, you know, they move to the, you know, Tahiti or whatever it is (laughs) wherever people park their boats. Um, instead, you've decided to start, um, Inflection, which recently released its chatbot, Pi, um, which is, you know, focused on empathetic chatbots and human interaction and everything else. Could you tell us more about your decision and not only, like, why start this specific company, but why even do anything at this point?

    2. RH

      (laughs)

    3. EG

      (laughs)

    4. RH

      Well, um, I'm not very good at being bored. Um, uh, I hate boredom, cocktail parties or waiting in line. And so that's part of it. The other thing, of course, is how do we lead meaningful lives? It's 'cause we leave the world in a much better place than we found it. Um, and we, um... You can work at any level of scale. I think it's very honorable to say, "Hey, I'm working at my local senior community." For me, you know, obviously Blitzscaling and Master of Scale podcasts and all the rest of the stuff, you know, scale is my particular thing. I just have no idea when I'll retire. Um, and, and I don't really have that much of an interest in yachts. Uh, I do have an interest in getting to Tahiti at some point. I've heard about it.

    5. EG

      Okay. (laughs)

    6. RH

      It sounds kind of an interesting place to visit. (laughs)

    7. EG

      Yeah.

    8. RH

      Right. Um...

    9. EG

      It's, uh, in the middle of the ocean in case you don't know much about it.

    10. RH

      I, I have heard. (laughs)

    11. EG

      (laughs) It's very relaxing. Yeah, yeah. Some have said. Um, so you started Inflection.

    12. RH

      (laughs)

    13. EG

      And, uh, could you tell

  6. 18:25–22:06

    How Inflection started

    1. EG

      us a little bit about how that came about and the focus of the company?

    2. RH

      So, uh, Mustafa Suleyman and Karen Simonyan, uh, were talking about this amazing transformation that's gonna transform all industries, gonna affect every different kinda path where language and cognition plays a role in society. Like, everyone, everyone on the planet is gonna have a medical assistant if, if we can gi- just get them access to a, even a friend's smartphone. And there's, all this stuff's gonna happen. And you can say, "Well, what exactly's gonna be happening with AI in 10 years from now?" And all three of us can now say predictions, and all three of us are kinda probably gonna look foolish in two years on whatever prediction we make today- (laughs)

    3. EG

      Right.

    4. RH

      ... right, in terms of how this works. That, that's the nature of the game. But go, okay, startups, you're trying to go, well, things that would, would, would live as massive interesting independent companies developing a product. And so one of the things we came to was every individual will have a personal intelligence, a Pi that's for them, right? That's for Sarah, that's for Elad, that's for Reid, and helps you navigate whatever the particular version of your life is, right? So we were talking about me a little earlier. It's like, well, I try to do these scale things but, you know, other people, you know, volunteer at the senior center and so on. But w- what's the thing that's useful to them? And we said well actually, in fact, like, something that is kind of a tool companion that, that reflects off whatever thing that you're particularly grappling with. It can range from, "How do I fix my, my, my flat tire?" to, "I had this kind of challenging conversation with a friend and I wanna debug it," or, "I'm trying to think about, like, what I should do next in, in my work," or something else, and have a c- have something that, that can be there for you. And yes, it's, it has elements of a therapist, but it's not a therapist, right? 'Cause it's, it's actually deeply knowledgeable in the world and it's not supposed to be just reflecting the, you know, "Elad, tell me the thing that you were most troubled about in your relationship with your mother." It's, "Hey, I'm, I'm here to, to provide a lens to the world and to help you." And like, for example, unlike the movie Her where it's like, "No, talk to me, don't go to the world," it's like if you show up and say, "Oh, you're my only friend, Pi," it's like, "Oh, we should help you get other friends. Let's talk about the importance of friendship and, and people you can talk to, and maybe there's some people you could talk to about it 'cause it, it's helping you with the world." 
And of course, then when you begin to design it, you think, "Well, what would be the right thing for a lot of the people in the world?" It may not... It's probably, nothing's probably for everybody, but it's like, well, something that's compassionate and kind, something that has a kind of a point of view so it doesn't just r- like, reflect... Like, if you show up and say, "I'm a white supremacist and I think race X is evil," it doesn't go, "Ooh, I'm with you. I agree." It goes, "No, well, really, you should think about that." (laughs) Right? Like, it's much better to be compassionate and to realize that we're all humans, you know, and kind of work you to that. And so it has a point of view in how it's, it's, it's operating, but with a view of helping you and amplifying you as a, as a way of doing it, and then bringing, you know, the enormous set of resources that a, that these, that this kind of, um, amazing large-scale language models can bring in it. And that, we said, "Okay, that product should exist. Um, it'll be one of the fixed points at, you know, X years in the future," and we see that clearly so we're gonna be building towards it. And as far as I can tell, I mean, you know, you guys are both, uh, highly active a- uh, AI investors. I think, you know, on that path, we're the ones, you know, who are most, like, of the serious teams, we're the ones who are dedicated to that path, uh, versus other paths.

    5. EG

      How have you seen, um, people use the product so far

  7. 22:06–25:34

    Surprising ways people are using Inflection

    1. EG

      in terms of... Are there typical types of interactions? Is it reflecting this original intention? Like, ho- how do you view user behavior relative to the product?

    2. RH

      Um, I think we're getting that. I mean, you know, just like the surprise I, I kind of shared with the GPT-4, which is like wow, that's an amazing use case that is great. Like, um, actually one of the, uh, one of the people at, at Greylock is a new parent, and one of the things she came up to tell me about was, "Oh my god, it's giving me great, like, how do I navigate, (laughs) you know, all the things as a first-time parent? And, like, what are the things I should do? What should I pay attention to? You know, which, which, which things should I, like, read more about? Which things should I really obsess about, and which things do I not need to worry about?" And it's just, like, it's, it's there, like, when I go, "Ooh, I encountered this," and I can ask right now and it helps me right now. That's awesome. It's the thing that's useful to you. And so there's just a whole stack of them, and, and part of the reason I was using, uh, like the flat tire example is 'cause I had personally conceptualized Pi entir- entirely conceptually. It's like, you know, how do I help navigate my, my, my path through human society, whether it's work and the people I'm talking to or friends, da-da-da. Well, (laughs) someone went up to Pi and said, "Okay, how do I fix my flat tire?" (laughs) Right? And it helped. It was the interactions with Pi that got me to, to, uh, update my, my recommendation to everyone to experiment with AI, because it's like look, don't just go try to do something like, "Well, okay, I'm sitting in front of GPT-4, I'm gonna write a sonnet." Right? Like, 'cause I did, I haven't written a sonnet and it writes sonnets and I've seen them and blah. Great. Don't... You know, go ahead and do that. There's nothing that's saying don't do that, but my recommendation to people is... And this, this gets to the, the white-collar work thing, um, that, that you raised earlier, Sarah, as well. It's like, no, no, try it with something that matters to you, right? 
Like, like that you may not expect it to get a good answer. And by the way, sometimes you won't. These things are not perfect in all kinds of ways. Sometimes you go, "Well, that was kinda lame and useless." Like, when I sh- first got access to GPT-4, um, uh, you know, months before it was publicly accessed, 'cause I was on the OpenAI board, I sat down and said, uh, "How can I, Reid Hoffman, make money through investing in artificial intelligence?" 'Cause I just wanted to try it.... and it was useless. It was, it was the classic MBA, like, "I don't understand anything about investing and I'm gonna write something that sounds really smart." Like, you're gonna study markets and address large TAMs, and you're gonna know which technological transformation, and then you're gonna go find teams that are doing that. And you're... It's like, no, that's not the way in, this technology investing thing works. I understand how you might teach it at, in, you know, if you're not knowledgeable (laughs) at a, at a seemingly smart MBA course, but it's not how it works. You'll find, some of the answers are not useful to you, but you'll find other of them are.

    3. SG

      Reid, I don't know.

    4. RH

      (laughs)

    5. SG

      We're just following the steps.

    6. RH

      (laughs)

    7. SG

      It seems to be going super well so far. (laughs)

    8. RH

      Well, for example, it can be useful when you, say, have an associate, and you go, um, "What's all of this stuff? Where should I focus my due diligence?"

    9. SG

      Mm-hmm.

    10. RH

      Actually, in fact, giving a summary on that stuff for an associate can actually be very useful. It's, which things is it useful for is the key thing to start experimenting on. 'Cause, um, some of the stuff, it's great, and some of the stuff, it's like, eh, not so much.

    11. EG

      Mm-hmm. And I, I think you have a really key embedded point here, what you mentioned earlier, which is it's, um, who's it relevant for? And that's very personalized in terms of the specific context of the individual. Um, one thing related to that, that you mentioned

  8. 25:34–30:58

    Western bias and AI ethics

    1. EG

      was that you wanted it to have an opinionated perspective. You know, you wanted to come with some pre-, uh, preexisting framework or pre-thought-out perspective on the world. And I think the, the racism one is actually a very cogent one given a lot of what's happening in the world today relative to universities and the perception of them in terms of, you know, are they doing the right thing or not relative to antisemitism or race or other things. Uh, many of the people who actually work on AI ethics come out of these institutions that are now being viewed increasingly as potentially biased. How do you think about where that perspective should come from, and who should actually decide what the right perspective is? Because you, you look at, for example, Falcon in the UAE is an open-source LLM, and I think one of the reasons they're doing it is because they don't necessarily want the Western perspective to be, you know, thrust on every single AI model, and it's a very specific Western perspective. So, I'm just curious how you think about the ethical and moral frameworks that should be applied to AI, and who should actually determine it, which i- perhaps is an even more important question.

    2. RH

      Well, um, the thing that I think is very much baseline is I think you should be... The developers, and we'll get to the full answer to your question, but the developers should be honest, open, and transparent about what they're designing to. And one of the things that frequently a lot of Silicon Valley people say, which I think is bozoville, is that the technology is value-neutral, right? I actually think, uh, values are embedded in it in various ways, and I don't think that's a bad thing. I just think it's, it's... Like, one of the reasons why I love The Economist as one of my, as one of my, uh, favorite magazines, or The Atlantic, um, is because they get, they don't go, "We're value-neutral." They go, "Here are our values," (laughs) right? "And here's what we're trying to do, and hold us accountable to the, the, the way that we are articulating, uh, what we're trying to say." Um, and it gives you a much more intelligent perspective, and I think that that's what, uh, technology companies should be doing. I think that's what, um, uh, you know, AI companies should be doing, and I think the, you know, kind of AI agents and whatnot is a way of doing it. And so, I think that's what's most important. Now, ultimately, um, you start with, like, when you're doing startups or initial products and there's a field of at least some choice, um, I think it's, it's the developers of the products. I think it's the companies being transparent and open about things. Like, "We're doing X for this reason." Right? "This is, this is why we're doing it." And I think one of the things that as technology companies... One of the challenges, of course, as they become more ubiquitous and important across all of society, e.g. , uh, shaping our collective mindsets, whether it's search or, you know, social networks or, you know, kind of, uh, you know, video networks or other kinds of things as ways of doing this, this does have society-level impact. 
And so there's responsibility to not just the individual as a participant and customer but society as a customer, and how do you navigate that? And I think that's important across all these things, and I think that's important as we begin to get the AI stuff to scale. And you could say it's important to have a, a certain amount of diversity and, and participation in that for a set of options and perhaps limitations. 'Cause if you say, "Hey, I'm, I'm gonna create an AI that's gonna enable terrorists around the world," we'll say, "Well, (laughs) we don't think that's a good idea." (laughs) Right? "And we're gonna do something about that." Um, or, for example, you know, a, a stunning failure on a question, "We're gonna create an AI that helps people, uh, articulate and advocate for genocide." You know? Like, uh, no. That is clearly bad. Genocide against any human category, any, is terrible and evil. (laughs) Full stop. And so I think, you know, there's a dialogue within society about that. Um, I do think that one of the things that, you know, is an uncomfortable truth is that, you know, we go, "Oh, AI is shaping everything," and so everyone wants to put their hands on it in order to shape it. And yet, technology is built by small groups of people doing things, and you just can't have AI built by UN committee, right? It just doesn't exist. And it's one of the things that academics mostly don't understand because they've never... Most of them, the... Not all, of course, but most have never built anything, don't really understand how these organizations work, don't understand how technology development works. They think th- if you just kind of write an essay, then a technology piece will come out of it. And you're like, "That's not how it works." Uh, you have to understand that and understand how technology is built, um, in various ways in order to make that happen, and then you have to try to shape it.
That's part of the reason why that productive dialogue about, like, not, "You guys are evil because you fucked up on this bias case for this, blah, blah, blah, blah, blah. Lead the, lead, lead, lead the, lead the witch hunt." That's not useful. It's like, "Well, actually, in fact, your stuff on race is not good. Here's some ways you could make it better and here's some benchmarks that you could do in order to, to avoid it." And, and I'm gonna... And by the way, if after I say that, you're not listening to me and you're not making it better in some way, then fine. I'll, I'll, I'll go to the streets with it. (laughs) Right? 'Cause we should be better on bias and race and all the rest.

    3. SG

      One of the things that, um, you said that really resonates with me is the idea that you're going to... Like technology products, they take a point of view. They're built in a certain way by a relatively small group of people and the way you govern them, if they have impact on society, is you interact with those people, right? And then you hold that group accountable. I think one of the challenges is,

  9. 30:58-33:15

    Structural challenges in governing AI

    1. SG

      uh, I feel like a big driver of the current narrative is, well, like because we didn't regulate and control social media companies that ended up being publishers that surfaced or drove certain points of view, we need to get that right with AI very early. I think the challenge is, like, that's true in many parts of society, right? May- maybe it is uncomfortable because it is a set of people that are going to have outsized influence on society. By the basic nature of building the thing, like, there's not really a, a way around that, right? All you can do is interact with them and govern the thing. And I think we should also expect to see that more around, um, academia. Or I, I would ask for it.

    2. RH

      Yeah. And I think it's a good thought on academia. I mean, look, one of the things that people don't understand is the only way you make progress with technology and get to it is you deploy, you learn, you iterate. And so you're gonna have errors. Um, there's no way to not have any errors. I mean, I would love it to have zero bias errors. And in terms of the AI regulation, yeah, I've heard the same thing. It's like, "Oh my God. We made this total mistake 'cause we, we allowed the social networks to go without regulation." And, you know, I think that the, um, the problem is that you don't really know the shape that you need to navigate it in until you begin to see it. And so, um, like, uh, I went to the UK AI Summit, um, this AI Safety and Innovation Summit at the beginning of November. It was a very good summit. The, the, the British government, I think, you know, triggered a whole bunch of stuff to, to kind of go in the right way. But (laughs) one of the dumbass things that I heard at this summit was, "And this time, we will not allow innovation before we regulate." (laughs) Like, well, that's dumb on several levels. One, we've already innovated. Two, there's no way to do that. None of us know how to do that. (laughs) Right? Um, and what's more, generally speaking, regulation is enshrining the past against the future. And if you look at every industry that goes really intense in regulation, it's, it's, it slows down intensely on innovation. And if you say, "Well, that's what we should be doing in AI." It's like, "Look, I think you are categorically wrong and harming humanity. Think about it. Let's try to get to the medical assistant for everybody."

    3. EG

      I guess speaking of benefits, um, how do you think about

  10. 33:15-35:00

    Most exciting whitespace in AI

    1. EG

      the areas of AI where there's a biggest sort of, um, startup available opportunities? Because often when you look at these technology waves, there's a set of value that goes to incumbents. You know, it's somebody who already owns a workflow for a SaaS tool or whatever, and they just layer on AI. Versus things that are greenfield, where suddenly you can do something new and exciting. Maybe that'd be something like Harvey for legal or other areas. Are there specific areas that you are, um, most excited about or keenly looking for companies to exist in or, you know, alternatively think could be big areas for startup innovation?

    2. RH

      There's areas that I think are underdone that I would really like to see. Cybersecurity with AI. You know, like, I think it would be very good to have that relative to society. Um, I think that the notion of, um, you know, how do we make these transitions for the white-collar workers is, I think, something that, you know, I would like to see more of. Um, uh, I think in- you know, the reason we don't is because it's not the best economic opportunity, um, possibly, and so people are all focused on the best, and by the way, as an entrepreneur and as an investor who resembles that remark, (laughs) right? I- I'm sympathetic. Um, but the, um, but, you know, like, how do we get those things as well? But I think there's just, just, just tons. Like, I literally come up with a new, um, AI thing that I think about, "Ooh, I could, I could help get that co-created" like, every week. Um, and if I, I just had the, the (laughs) resources to do it, it would be, like, spawning new things.

    3. EG

      Yeah. And then I guess related to that, I think you were really forward-looking in terms of AI as a very important area of technology. And I remember going to an event you organized, I think it was, like, eight years ago or something, where it was, like, a small group discussion

  11. 35:00-44:00

    GPT 5 and Innovations coming in the next two years

    1. EG

      of AI in the future and things like that. As you look forward in the next generation of AI, so say we go from GPT-4 to GPT-5, are there specific technological leaps that you anticipate happening even with that single increment? Or how do you think about the, the pace of innovation, but also the big shifts that are about to happen from a technology perspective over the next, you know, 12, 24 months?

    2. RH

      So I think there's two things at least, and I think there's gonna be much more. So, like, always, like, part of the delight, the reason that the three of us do this is we learn things that we j- I hadn't thought about and those really bright entrepreneurs come to us and we're like, "Oh, that's really great." And 'cause, you know, the thousands of people innovating through the network is... Right? One is, we're gonna get a lot more robust and capable on all the language model transformations. So, you know, whether it's a coding assistant, a legal assistant, a medical assistant, uh, a meeting note-taker, a, you know, uh, an amplification of, of, of slideshows through Tome or, or workflow with Coda or any of this other stuff, that's all gonna get just better. (laughs) Right? The second thing is, is the, the part of the superpowers of these things, because it's a scale compute thing, is breadth. So, like, how does the protein folding lead to drug, drug discovery or...... you know, or other things like that. Like things that are very broad space in this will also get special purpose tools, um, that will be, uh, I think, magnificent. I don't know if the result for those special purpose tools will be in a year or two, but the intensity of the work and the beginning of it, there will be, there will be, you know, gold and platinum from that, um, kind of over time. And I think that's in over a short time. Like, small in years, but we'll begin to see it over the next year or two and I think those, the, the one level scale, you know, 'cause that's, you know, the 10X GPT-4, GPT-5, the one level scale, um, will unlock some of that stuff. Uh, and that's part of it. Now, I'm certain that there's other things that two years from now we'll look back and say, "Oh, yeah, that was, that was..." maybe even now in retrospect, "obvious, but something I missed in that answer." By the way, I'd be curious your guys' answer to that. 
What would, what would, what would you guys say to that two-year GPT-4 to GPT-5?

    3. EG

      Yeah, I mean, I think there's gonna be three steps in, uh, or three areas of capability improvement. I think one is gonna be, um, to your point on baseline models, both in terms of broadening the knowledge base as well as the increase in the ability around chain of logic or the ability to think or, you know, do simple, uh, thinking. So I think there's, there's one thing all around that and how much better these capabilities get, and you see that, for example, between GPT-3 or -3.5 and 4 where you had big step ups in, in medical knowledge and understanding, legal understanding, et cetera. And you can do things on GPT-4 that you just can't on GPT-3.5 or equivalent models. And I think we'll similarly see a step up in other functionality for other fields with that, um, as well as sort of that chain of logic. I think a second is just augmentation of these things. Augmentation may include forms of memory, so you can actually loop back in a more, um, reasonable way to sort of chain, chain, uh, logic or chain actions. Augmentation may be things like RAG or the ability to augment other types of information or data sets in. Um, and so I think we're gonna see a lot of capabilities around that, and obviously there's a big debate right now in terms of fine-tuning versus just increased context windows and prompt engineering and how other things play off of each other and how that impacts generalizability, but I think we'll make a lot of progress on those sorts of things. And then third is I think we'll make a lot of progress in bespoke models for specific application areas.
And that may be, um, biotech to your point, or specific, you know, protein folding, or it could be, um, materials or robotics, or it could be all sorts of things, but I think everybody's moving more to sort of end, end-to-end (coughs) both reinforcement learning but more, like, um, sort of, uh, deep learning approaches in some places where they hadn't applied them as deeply before and they're using more heuristics. Like self-driving would be an example of that. And it feels like the whole world is flipping to this more modern-based approach instead of architectures. Uh, and then maybe I'd actually throw in a last thing which is I think there will be some experimentation with new architectures, and the question is, besides Transformers, um, and the question is will they matter? And I, you know, I don't know. So those would be kind of my four 12 to 24 month, uh, predictions, but they may be incorrect, to your point on predicting the stuff.

    4. RH

      I think the new architectures, there's a bunch of stuff that I've been trying to work with on this, but I don't think new architectures will be one to two years. I, I'd be surprised if it was.

    5. EG

      Yeah, 'cause you need to scale 'em, right? So, yeah.

    6. RH

      Yeah.

    7. SG

      Yeah, I think the, um, uh, all of this is clearly going to be wrong for all of us. It's not a judgment on both of you, it's, it's all three of us, given what's happened in the last year.

    8. EG

      I think the two of us are gonna get it right, so I don't know.

    9. RH

      Yeah, exactly. (laughs)

    10. SG

      Okay, you two are gonna get it right? Fine.

    11. EG

      (laughs)

    12. SG

      Fine. Just me, I'll be, um-

    13. RH

      Ha- ha- have conviction, Sarah.

    14. EG

      Yeah. There's a sign behind you.

    15. SG

      Yeah, well, here, here, conviction.

    16. RH

      (laughs)

    17. EG

      (laughs)

    18. SG

      Conviction. Conviction is making, making decisions based on those beliefs and plowing ahead, we- uh, and always seeking the truth, but not knowing you're gonna be totally sure. (laughs)

    19. RH

      (laughs)

    20. SG

      So, um, I don't, it depends on what you describe as a new architecture, right? I, I think that there's a lot of experimentation around, um, new attention mechanisms right now, and that, you know, credit to Ashish and Niki and Noam and the whole Transformers team originally and, and everybody who's worked on scaling it up. But that actually hasn't been that interesting of an area for a while, and I think there's much more, um, interest in that. Uh, I, I think the biggest labs, um, are enthusiastic about code and validation for code in some sort of, um, uh, s- uh, you know, self-reinforcing feedback loop of improvement, um, which I, I think, like, there are obvious reasons to be optimistic about. Um, I, I think this isn't quite, like, just advancement of GPT-4 to 5, but I'm an investor in Mistral and, uh, the efficiency that you get of being able to do the same reasoning at much smaller scales, um, begets more applications, right? And I think that also, like, you, you just get much more experimentation in that case, and, um, I'm, I'm pretty excited about, like, when you, when you take away that barrier to entry for application developers, you're gonna get so much more experimentation because they can go take that, the cost of integration and workflow and domain understanding and put all the energy there, right? So kind of what Reid said about, like, all of these workflows are gonna get a lot better and they're gonna get a lot broader, like, companies like Harvey, like, you need to go collect very specific data if you want to, um, increase the sophistication of the legal tasks you're doing. You may or may not believe that this falls in the path of core reasoning, but one of the things I'm really excited about is, um, the democratization of, like, content creation and creativity in general. 
That has been so dramatic this year, and I'm constantly surprised by what, like, HeyGen and Pika are doing i- in terms of, um, "Oh, okay, we can get avatars to walk around and take actions now. We can do... We have much better controllability around video." And I think for anybody who's worked on a social network, as both of you have, like, if you can create those new content types, like, you get so much more expression. Um, Reid mentioned, like, uh, a- and Elad, you and I have talked about this, but, like, the categories that make me most excited about impact on society, like... education and healthcare are two of the areas that have been most resistant to, like, society's forays to get it to be better for cheaper. But I think the one other that I would mention here if we talk about code generation is, all right, today, you know, um, software is built by very small groups of people. They're often in Silicon Valley, sometimes they're in Paris. (laughs) Uh, but, um, if you can enable more people to build software that is useful, I think that will dramatically change society. So I'm excited a- about that piece. But, you know, like, we're, we're just gonna be wrong. So I- I- I have conviction that, like, n- none of these predictions are exactly right.

    21. EG

      Yeah. I- I- I think, uh, Sarah sparked one other thing, which is- is in the mention of- of Mistral or Mistral, uh, I never know how to say it 'cause I can't do the French accent.

    22. SG

      Mistral.

    23. EG

      Is, um, Mistral is that, um-

    24. SG

      Oh my God, Arthur, I'm so sorry. (laughs)

    25. EG

      (laughs) I feel good about it. Um, is, uh, I do think that there's a lot of questions right now about, um, uh, inference versus cost and what infrastructure to use, and there's all these different folks doing everything from sort of, like, Stripe for, um, open source APIs for these different models on through to different hosting solutions and everything else. And I think in two years there'll be sort of a clear fallout of what are the set of approaches, and how do you do it, and what's the cheapest inference platform? And, you know, I think there'll be a lot of work done in terms of just basic ability to use these models at, um, uh, high, uh, scalability and low cost at, you know, across the board. And so I think that's another big shift where people are still kind of figuring things out right now, but I think it'll be pretty solved in two years.

  12. 44:00-47:12

    What future should we be building?

    1. RH

      Like what Sarah was bringing up in terms of creativity, I think that part of it is we're gonna have a number of superpowers that we don't currently envision. And part of it is, like, for example, one of the, you know, slogans that I've- I've, um, borrowed from Kevin Scott at Microsoft is the- the most significant programming language in the next few years is gonna turn into English, and then of course rapidly followed by Chinese because of the broad use of the language in being able to create things, computational artifacts, code, et cetera. I mean, you know, even today someone can go to these AI agents and code up a website where they wouldn't have been able to code up a website before. And that's part of the reason I'm optimistic about- about there being a symbiotic relationship, you know, uh, people plus the AIs, um, because I think there's that sort of- of- of direction. You know, part of the thing to do is say, don't try to say, "No, no, I want to stay, I want to keep the present exactly as it is." Um, it's, "What future should we be making?" Um, and, you know, you can say, "Hey, there's a danger over here. Look, as we go to this, let's try to avoid that." That's totally good. But, like, it's where- it's where should we be going? What should we be doing? Is- is the most important context for all of that.

    3. SG

      Is there anything we didn't cover that you wanted to talk about?

    4. RH

      Well, obviously we'll probably do this again, and I think that the- there's just a ton, but the way that technology is created is a small group that- that does something bold, takes a risk, and makes it happen. And most people outside of the tech industry don't really fully understand that. And so we need to help them understand that's what's going on, but also to have the dialogue about, like, look, raise your considerations and so forth, but- but frankly, as- as the cars get steered, only the people in the car really have their hand on the steering wheel. (laughs) Um, so y- you have to have a dialogue just like you're driving down the road and navigating with other cars and so forth about what you're doing as opposed to, you know, "We will all decide this is what's gonna happen," 'cause, you know, you know, maybe this is top of mind from the EU Act stuff which, you know, always makes me think that they're trying to hold onto the past so ferociously that they're just completely willing to sacrifice the future. Anyway.

    5. SG

      When they have such an opportunity to, given, like, they actually have, you know, great talent in Europe working on AI now.

    6. RH

      Yeah. No, exactly.

    7. SG

      Yeah. Um, okay, well, we take no paid sponsors for this program, but legitimately, you should read Impromptu, drink a Y3KAI Coke, and listen to No Priors. Reid, thank you so much for doing this. Um, and, uh, until next time.

    8. EG

      Thanks for joining us.

    9. RH

      Always great to see you guys.

    10. SG

      (instrumental music) Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 47:13

Transcript of episode _Hprred2E7M
