The Twenty Minute VC

The Ultimate AI Roundtable: What Happens Now in AI, Why Google are Vulnerable | E1085

Des Traynor is a Co-Founder of Intercom, and has built and led many teams within the company, including Product, Marketing, and Customer Support.

Yann LeCun is VP & Chief AI Scientist at Meta and Silver Professor at NYU, affiliated with the Courant Institute of Mathematical Sciences & the Center for Data Science. He was the founding Director of FAIR and of the NYU Center for Data Science.

Emad Mostaque is the Co-Founder and CEO @ StabilityAI, the parent company of Stable Diffusion. Stability are building the foundation to activate humanity’s potential.

Jeff Seibert is the Founder & CEO @ Digits, building the future of AI-powered accounting. Digits have raised funding from the likes of Peter Fenton @ Benchmark and 20VC.

Tomasz Tunguz is the Founder and General Partner @ Theory Ventures. Just announced last week, Theory is a $230M fund that invests $1-25m in early-stage companies that leverage technology discontinuities into go-to-market advantages.

Douwe Kiela is the CEO of Contextual AI, building the contextual language model to power the future of businesses.

Cris Valenzuela is the CEO and co-founder of Runway, the company that trains and builds generative AI models for content creation.

Richard Socher is the founder and CEO of You.com. Richard previously served as the Chief Scientist and EVP at Salesforce. Before that, Richard was the CEO/CTO of AI startup MetaMind, acquired by Salesforce in 2016.

-----------------------------------------------

In Today’s Episode We Discuss:

1. Foundational Models: Analysis
Will foundational models become commoditized? Who are the major players? What are their different strengths? Who will win? Who will lose? How important is the size of the model vs the quality of the data?

2. Open vs Closed
What are the biggest pros and cons of an open ecosystem for LLMs? Why is it naive to think that open-source LLMs will prevail? What will determine which method wins?

3. An Analysis of the Incumbents
Why is Google the most vulnerable? What can they do to regain ground? Why is Apple the sleeping giant? How could they win the next wave of AI? What should Amazon do today to compete with Microsoft?

4. The Future: Doom and Gloom?
Why is it ridiculous to assume AI systems want to dominate? Why will AI create a renaissance of creativity and human freedom? What role should regulation play in the advancement and progression of AI?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465

Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Des Traynor on Twitter: https://twitter.com/DesTraynor
Follow Yann LeCun on Twitter: https://twitter.com/ylecun
Follow Emad Mostaque on Twitter: https://twitter.com/EMostaque
Follow Jeff Seibert on Twitter: https://twitter.com/JeffSeibert
Follow Douwe Kiela on Twitter: https://twitter.com/douwekiela
Follow Tomasz Tunguz on Twitter: https://twitter.com/ttunguz
Follow Cris Valenzuela on Twitter: https://twitter.com/c_valenzuelab
Follow Richard Socher on Twitter: https://twitter.com/RichardSocher
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#VentureCapital #JeffSeibert #DesTraynor #Intercom #yannlecun #Meta #EmadMostaque #StabilityAI #TomaszTunguz #TheoryVentures #DouweKiela #ContextualAI #CrisVenezuela #Runway #RichardSocher #Youcom #Digits #HarryStebbings

Harry Stebbings (host) · Richard Socher (guest) · Emad Mostaque (guest) · Des Traynor (guest) · Jeff Seibert (guest) · Yann LeCun (guest) · Cris Valenzuela (guest) · Douwe Kiela (guest) · Christian Lanng (guest) · Myles Grimshaw (guest)
Nov 24, 2023 · 33m

EVERY SPOKEN WORD

  1. 0:00–15:00

    Welcome to 20VC with…

    1. HS

      Welcome to 20VC with me, Harry Stebbings. And for this Thanksgiving special, I wanted to bring some of the best minds in AI together for an amazing panel. The only thing is, this panel never really happened. What you're about to hear is the leading minds in AI, from the head of AI at Meta to the founders of Intercom, Stability, and Runway, to leading AI investor Tom Tunguz, all debating some of the core questions in AI. I pulled together some of the best moments and the most contrarian elements from their different episodes. Let me know what you think of this different style and format. I think it's really special and cool. You can do that on Twitter @HarryStebbings.

    2. RS

      You have now arrived at your destination.

    3. HS

      So I want to start at the very core, the foundational model layer. And I want to ask, how do we see this layer in terms of the foundational model providers playing out? And will we see the commoditization of LLMs? Emad, I know that you have some strong opinions here. So Emad at Stability, handing the mic over to you.

    4. EM

      I think that there's only going to be five or six foundation model companies in the world in three years, five years. I think it's going to be us, NVIDIA, Google, Microsoft, OpenAI, and, uh, Meta and Apple probably are the ones that train these models.

    5. HS

      What about you, Des? You're the founder of Intercom. Do you think we'll see the commoditization of LLMs?

    6. DT

      I, I don't know if it's actually happened yet, if I'm clear, right? Like, we actually torture test all of the LLMs. It's not yet the case that they're all equal. We-

    7. HS

      And when you compared OpenAI to the alternative providers-

    8. DT

      Yeah.

    9. HS

      ... what did the test show you?

    10. DT

      It's basically the quality of conversation, and, like, does it fail any of our, like, hallucination tests? Does it fail any of our trustworthiness tests? Can it infer its own confidence?

    11. HS

      Was it close though? Was there a wide-

    12. DT

      Yeah, uh, it... Close and narrowing, and, and, like, it's also a work in progress. All of these things are moving targets, right? So, like, we haven't even gotten around to maybe testing the latest and greatest of all the providers, which are increasing in number. You mentioned Mistral, there's also Llama, there's Anthropic, there's, like, Cohere, like, there's, like, a whole chunk of them. And it's a bit of work to go around and constantly be trying to find out, has anyone got... We only really care about better, right? Right now, we're not in cost optimization mode, we're just like, "Who's got the best?"

    13. HS

      Jeff Seibert, you're the founder of Digits, so you sit on top of these foundational LLMs and then fine-tune them with your own data. What do you think in terms of this commoditization of the foundational model layer?

    14. JS

      I certainly think we will. And this may not be a popular position. Obviously, OpenAI is charging ahead, sort of leading the way right now. I think the market forces at work mean there's just immense energy to have an open-source equivalent. Meta appears to be highly motivated to open source its work. Many folks want to run these themselves and tune them themselves and so on. That is hard and expensive today, but I can't think of another time in history where something hard and expensive in tech has lasted all that long. It's going to be commoditized.

    15. HS

      Okay, so if we go one layer deeper from just the commoditization of these models to actually how important is the size of these models, and then how do we think about the lifespan and longevity of these models? Emad, I know that you have quite a few thoughts on this. I'll start with you.

    16. EM

      The reality is no models that are out today will be used in a year. So again, you see the order-of-magnitude improvement. PaLM last year was 540 billion parameters, then Chinchilla 70, and now 14. 540 to 14 is a big step. You see the quality of GPT-3 versus GPT-4.

    17. HS

      Is there any extent to how low it can go?

    18. EM

      We have no idea. You would have said this is impossible two years ago. You were like, "No way." You have a single file that's maybe a few hundred gigabytes that can pass every exam apart from English lit. There is no such thing as an unbiased model. DALL·E 2, when OpenAI had that and introduced a bias filter, any non-gendered word got a random gender and a random ethnicity. So you typed in "sumo wrestler" and you'd get an Indian female sumo wrestler. This is why you need national data sets, you need cultural data sets, you need personal data sets that can interact with these base models and customize to you and your stories, 'cause you and I both have our stories that make up our psyche, and understanding that context is so important to have AIs that can work for us, not on us.

    19. HS

      Yann, help me out here. You run AI at Meta. We heard there about cultural data sets, we heard about national data sets. Just to start, Yann, how important is the size of the model, first?

    20. YL

      You don't need those models to be very large to, to work really well. And I think it caused a bit of an epiphany for a lot of people, realizing, oh, okay, maybe you need 1,000 GPUs running for a couple weeks to train it, the base system. In fact, that number is going down too (laughs), because people are figuring out how to do this more efficiently. But once it's pre-trained, you can use it for all kinds of stuff and you can fine-tune it really easily. And then at the end, you can run it on your laptop, right? That's kind of amazing. Or maybe on a desktop machine with a GPU in it, or a couple GPUs. So I think it opened the minds of people to the fact that there are, like, enormous opportunities that really weren't thought to be possible before. And I think it's going to make even more progress, because if we go towards the design of AI systems perhaps along the lines of what I described, with objectives and planning, I think those systems could actually be even smaller to some extent.

    21. HS

      That's so interesting, Yann. So you said there that it's becoming less important to have larger and larger models. I, I'm really intrigued. Richard Socher, the founder at You.com, I know that you have a different take on this. So do you think that the size of the models themselves is important?

    22. RS

      It is super important. You just cannot train a single model for all of these different tasks with a small model. That's exactly why and how it would have always failed in the past.

    23. HS

      Hmm. Some different opinions there. Cris, the founder at Runway, what are your thoughts on, bluntly, the defensibility in terms of the size of the models being used, and how do you think about that?

    24. CC

      I find myself hearing a lot about models as the moat, and moats have been something that Silicon Valley has been discussing for, for some months now. I think models are not a moat. Models eventually don't matter. What matters most is the people building those models and how fast you can change and learn from those models. And so I don't think... That's why I go back: there's no one singular model that's going to rule them all.

    25. HS

      So models are not the moat. Now, Jeff, founder at Digits, who we heard from earlier, I know that you maybe disagree with this slightly. So do you disagree with this in terms of models not being the moat? And how do you think about that data size and quality also being a moat?

    26. JS

      At the base LLM layer, the data size so far has been very correlated with performance. And so, right, the bigger the models, the more data, the more parameters, et cetera, the better they do. Now there's starting to be this counterpush of like, okay, can we compress them? Can we pull that back? Like, how do we maintain the performance improvements without the size? So I think that's a super interesting part of R&D. What I'm talking about is sort of the next tier of how do you fine-tune the models, and that's where actually I think the quality of data is most important.

    27. HS

      I mean, I think before we go deeper in terms of data quality and data size, I just want to ask in terms of the models themselves, there's a core challenge today in terms of two opposing ideologies, which is open versus closed. Yann, you know, you run all things AI at Facebook, or Meta. How do you feel about the open versus closed discussion? I know you've got some very strong opinions. Why does the future have to be open, not closed?

    28. YL

      It's very simple. It's because no outfit, as powerful as they may be, has a monopoly on good ideas. If you do it in the open, you recruit the entire world's intelligence to contribute to things and have ideas, and ideas that you may not have thought about, which an outfit with 400 people has no chance of thinking about, or even a large company with 50,000 employees may not want to devote any resources to, because they may not think it's useful in the long term or they have more urgent things to take care of. So you give it away, and then you have tons and tons of people, some of whom are undergraduate students or people, you know, in their parents' basement, coming up with amazing ideas that you would never have thought about or been willing to spend the time on, like crunching down the 7-billion-weight LLaMA so that it runs on a Mac, on a laptop. I think that's why open-source projects succeed, particularly when they concern basic infrastructure.

    29. HS

      Now, I'm really intrigued. Douwe, you're the founder of Contextual, which is essentially a contextual foundational model. How do you feel about this? 'Cause I know that you do have some opinions in terms of open source versus closed.

    30. DF

      So I'm a big fan of open source, right? I would like it to be true that with open source we could just keep up with all of that. But I think that's just incredibly naive. OpenAI has this very deep understanding of how people want to use language models basically nobody else has, and they have this giant economy of scale where they can serve up language models very cheaply because they get so many requests coming in at the same time. So they have a giant moat.

  2. 15:00–30:00

    Des, as we know,…

    1. DT

      I think there's a world we move to in the future where, like, an SLA maybe almost looks more like a BPO would in some sense. An SLA will be: you wanted an efficiency of X on your market, we delivered that. You wanted this many leads from an SDR team, like, we do that. You wanted this sort of accounting and books closed by two days at the end of a quarter, we'll give you an SLA on that, not on "the software is up." You'll go from copilot to, to control center. It'll go from a UX for workers, which is dominant, to a UX for managers. You'll go from a seat add-on to software and labor, and you'll go from an SLA on, like, reliability to an SLA on outcomes, on work performance. That's what can be offered up by this. You see very little of it so far. I think AI is offering the potential for that architecture shift, and if we get that, the whole seat model, the whole "the worker does it" product paradigm, the distribution in terms of who you can reach changes, because ACVs change. And in that way, uh, I think it will be very different to mobile, which was mostly another UX, but the same architecture, same business models.

    2. HS

      Des, as we know, you're the founder of Intercom, you know, you're at the forefront of customer service. How do you feel about the business model that will be prominent in this next wave of AI, and how you think about that today?

    3. DT

      I think a lot of work is gonna get handed over to LLMs over the next five years. We're gonna start trying to, like, price against the work that's being done, not price against the seats or the employees, but just say, "How much is it worth for you to have all of your digital assets created dynamically?" Or, "How much is it worth for you to have, like, your customers get sub-second replies to common questions?" Like, that's the actual right way to think about pricing in the future.

    4. HS

      I want to bring in a new voice here. Christian Lanng, you're the former founder of Tradeshift, now the president and co-founder of Beyond Work. I, I have to ask: how do you think about this consumption-based pricing versus seat-based pricing question in the future?

    5. UC

      Well, I do think the world is changing. I think we're gonna absolutely move to consumption-based pricing. It's the only way. I think it's gonna be very hard to hold the moat on, on recurring revenue as it is today, because customers will wanna see more value, and they'll wanna see a more soft ramp-up to that value. But I also think on the work piece, look at the demographic problem. Like, most of the people I talk to, even if they wanted to replace the workers they have in their shared service centers today one-to-one, they can't, 'cause that generation of young people, they don't wanna go in and sit in front of a computer and type formulas into Workday every day. So, so we gotta completely change the work experience in the next 5 to 10 years, or we're gonna run out of people to do the work.

    6. HS

      So everyone seems in line that we're moving to consumption-based pricing, away from seat-based pricing. But Jeff Seibert at Digits, we have one who thinks otherwise. So Jeff, I wanna move to you. This is great. Why do you think that maybe we'll stay in the realm that we're in today?

    7. JS

      So this may be just me, but I, I very much see AI as a tool, not a product. And so it's a, it's a technology, it's like your database, it's like memcache back in the day. And so because of that, I don't think it'll change how people price in specific industries. Like, if your market does per-seat pricing, that'll probably stay. If your business and product does consumption pricing, that'll probably stay, and you'll have to work that into how you use the AI. I think it'll be commoditized and seen as technology within a couple years.

    8. HS

      I do have to ask: we see copilots everywhere. Myles, you mentioned copilots earlier. How do we feel about the rise of copilots? Are they an incumbent strategy? How useful are they? Christian, I, I wanna start with you. Christian Lanng, over to you. How useful are copilots, and are they an incumbent strategy?

    9. UC

      I mean, who wants a copilot? I, I wanna be a pilot (laughs). Like, and I wanna have a pilot. Like, I don't wanna... (laughs) ...have a copilot. I don't want to be inside an application. I don't care how many copilots you have. The problem is not to have an AI help you navigate an application that's shit. The better solution is to remove the application that's shit and just talk straight with AI. And I think the copilot metaphor, right, I mean, what happens when we have 10,000 Clippys all just talking to each other? And for you to work, you have to go and work in all of these Clippys, and you're going to have to tell Clippy A to go talk to Clippy B about that thing you have over in Clippy C.

    10. HS

      (gasps).

    11. UC

      I mean, it's going to be worse than it is now.

    12. HS

      Myles Grimshaw at Benchmark, you were the first in this discussion to bring up co-pilots. Are co-pilots an incumbent's strategy?

    13. MG

      I think copilot is an incumbent's strategy. Incumbents own distribution, they own data, they own the UX, and they own a business model that all aligns to a copilot. Copilot as in GitHub Copilot, right, like inline suggestions. Go to, go to any Microsoft product right now. Every Microsoft product now has a copilot experience in sort of a sidebar, an autofill, things like that, right? Where the UX, the, the core product is there and the copilot is a layer on top of it, right? It's immediately added in, which is also a totally incumbent strategy. And it's still about sort of supercharging that worker, but still where every user has a seat and every user's doing most of the work. And it works, probably. You know, if you think about the evolution here, the models, most of what's rolled out might not be good enough for some of this yet, right? But that's what will come around the corner. You know, if you think back to Salesforce disrupting Siebel, Salesforce launched, like, five years after Netscape launched. Like, it might take a moment for that to happen, but the copilot, this idea of, "I'm still the pilot, I'm still the user controlling everything and it's sort of, like, giving me assistive suggestions," like GitHub Copilot, fits into the UX of incumbents, it fits into the business model of incumbents, and they already control all that distribution. The opportunity offered up to a startup, being a copilot for something else, like, probably won't be that amazing. And there might be pockets of it where it can really work, but the opportunity to disrupt is to be orthogonal to the incumbents.

    14. HS

      Yann, we heard there that it's maybe not the best position to be in for startups. Christian mentioned, "We don't want a copilot, we want a pilot." How do you feel about copilots and intelligent assistants, and is the future brighter than maybe we think in terms of what we have coming?

    15. YL

      Let's imagine a future where everyone can talk to their intelligent assistant. That, that system will have pretty close to human-level intelligence, probably more accumulated knowledge than most humans. You know, they could translate into any language and give you a quick summary of yesterday's newspaper and things like that, right? Explain mathematical concepts to you, things like that. So, people are probably going to use this almost exclusively in the future for their interaction with the digital world. You're not gonna go to Google or Wikipedia, you're just gonna talk to your assistant. The only way to do this properly is for the base infrastructure for those assistants to be open. They will be so pervasive, so much will ride on those systems, that I don't think anyone will accept that those assistants be behind an event horizon in a private company. They will insist that the infrastructure is open. They will insist also that the vetting process by which those systems are trained be something maybe like Wikipedia. We tend to trust Wikipedia, sometimes with a grain of salt, but we tend to trust Wikipedia because there is a vetting process, so that whenever an, an article is modified, some editor kind of checks on it, and then the changes are accepted or not, things like that. So you can imagine that the sort of common repository of all human knowledge that will be our assistants will be constructed through some sort of crowdsourcing process, perhaps similar to Wikipedia, where you're gonna have a bunch of people training those systems and fine-tuning them so that whatever they produce is, uh, correct.

    16. HS

      Yann, you mentioned Wikipedia. I do want to move to the company level element and discuss which companies or incumbents are best positioned. Now, I want to start with Apple. And Des, I know you have some strong thoughts here. So, what are your thoughts on how Apple are positioned for this next wave in the next three to five years of AI?

    17. DT

      I think Apple will make massive, massive strides forward, but I'm kind of disappointed how long it's taken them. But I think-

    18. HS

      And this is my point. What makes you say that? Because so far, we're kind of left searching. (laughs)

    19. DT

      Yeah, yeah, for sure. You have to assume Apple's a really well-run company, and you have to assume that there's a head of AI in there, and you have to assume that they're training LLMs and they're looking for LLMs that can possibly run on their hardware natively, and not even have to talk to the cloud. And Apple are very privacy-focused, so they're gonna get all that shit correct. And you have to assume it's all gonna work with your AirPods, your watch, and your phone, and all that sort of stuff. That that's, like... I would be shocked if that wasn't the case. So then, what will they win, is the question. I think what they'll win is this idea of Siri might finally become useful. Siri is currently not useful because it doesn't really have enough smarts. But I think when Siri can be as conversational as ChatGPT and can take actions on the device, it'll change the entire interaction model across desktop and iOS as well, in huge ways. So I think Apple will win there.

    20. HS

      Emad at Stability AI, are you with Des in saying that Apple will win there? Or do you have a different perspective?

    EM

      Apple's a black box, right? And so they could surprise us all. But let's face it, Siri's crap. But they have all the ingredients in place: the identity architecture, the secure enclave, other things. The Neural Engine. Stable Diffusion was the first model ever optimized on the Neural Engine, et cetera. But let's see on that one.

    HS

      Jeff at Digits, Apple's a black box, we get it, agreed. How do you feel about how they're positioned for the next few years?

    21. JS

      So Apple, of course, is super focused on privacy. They don't want your data to leave the device. The only way to do that with AI is if you can fit a machine learning model on the device and keep all the data there. I bet Apple can and will. And so if you project forward five years, if they get to the point where they can run a sufficiently large LLM on your iPhone, then OpenAI's out of the picture. Don't even need to hit their servers. It's just on your phone.

    22. HS

      Okay, so Apple's in a very strong position looking forwards. Tom Tunguz, they're your former employer. How do you see Google playing out over the next few years?

    23. TT

      I didn't believe that chat would replace search, but I think it... for many use cases, it will. And I think Google had a rude awakening where, I don't know, for 20, 25 years, they were uncontested, and now all of a sudden there's this disruptive technology, to some extent they developed in-house but ignored. So it's the classic innovator's dilemma. And so this technology went to other places and now is, uh, challenging the hegemony, the monopoly power, and that is so exciting if you think about, like, the ads ecosystem, like, the B2C ecosystem has been relatively quiet over the last 10 years because of that dominance of Facebook and Google. And now all of a sudden you have a technology and a re-platforming where all that market share is conceivably up for grabs. You could create a new travel agency, you could create a new shopping experience, you could create a new Stack Overflow, you could create a new social experience based on chat. And so it's wide open. And when you have a golden goose, when you have an incredible business model, you're always faced with a choice of disrupting yourself and destabilizing the ship or waiting until somebody destabilizes it for you. And it's... I think as a leadership team, it is so difficult.... to have the discipline to say, "We are going to destabilize this ourselves." That's what happened.

    24. HS

      Des, what are your thoughts on the decisions from the leadership team at Google, and where do they stand today looking forward?

    25. DT

      I feel like Bard unfortunately felt like, "We had to release this because ChatGPT was getting a lot of traction." It didn't feel like, "We've actually cracked search again. We've reinvented ourselves all over again." You know? They need to have that sort of a, um, Jay-Z "Allow me to reintroduce myself" moment, right? Where they come back and they say, like, "Google 2.0 is here."

    26. JS

      Yeah.

    27. DT

      Now you're pulling on the real potential problem, which is: are they willing to risk it all to win it all, right? Like, are they willing to disrupt themselves, or are they happy to take the, like, long slow decline into obsolescence or irrelevance or whatever, right?

    28. HS

      What would you do?

    29. DT

      You, yeah.

    30. HS

      Genuinely. I mean, it's easy to say here, but if you were CEO of Google and you have shareholders on Wall Street...

  3. 30:00–33:23

    People are kind of…

    1. HS

      Yann, how do you feel about this concern, and how should we think about the next few years of AI and its role in society moving forwards?

    2. YL

      People are kind of extrapolating: if we let those systems do whatever, we connect them to the internet and they can do whatever they want, they're going to do crazy things and stupid things and perhaps dangerous things, and we're not going to be able to control them, and they're going to escape our control and become intelligent just because they're bigger. And that's nonsense. No economist believes this. No economist believes we're going to run out of jobs, because no economist believes that we're going to run out of problems to solve or requirements for human creativity and human communication and stuff like that. This is going to create as many jobs as it makes disappear, and those jobs, by the way, are going to be more productive. So overall, technology makes people more productive. In other words, for the same amount of hours worked, you produce more wealth, okay? But every technological revolution, unless it's accompanied by political changes and social changes, generally profits a small number of people, at least temporarily, right? That happened in the industrial revolution in the late 19th century, where a few people became extremely rich and a lot of people were exploited, and then society changed and there were, like, social programs and income tax and high taxes for richer people and stuff like that, which the US has backpedaled on, but not Europe. So there is a question of how you distribute the wealth, if you want, okay? How do you organize society so everyone profits from it? But that's a political question, it's not a technology question, and it's not new, it's not caused by AI, it's just caused by technological evolution, right? It's not a recent phenomenon. AI is going to bring a new renaissance for humanity, a new form of enlightenment if you want, because AI is going to amplify everybody's intelligence.
Every one of us will have a staff of people who are smarter than us and know most things about most things, so it's going to empower every one of us. It's going to make us more creative, because we're going to be able to produce text, art, music, videos without necessarily having all the technical skills that are currently required for doing those things, and so exercise our creative juices. So that's the positive side. There are risks, there's no question, but don't believe the people who tell you that those risks are inevitable or that they will inevitably lead to catastrophe. That's just not true. Place yourself in 1920: who would have thought that a mere 50 years later you could cross the Atlantic in a few hours in complete safety, you know, at near the speed of sound? And would people seriously want to ban aviation or call for regulation of jet engines before jet engines existed? I mean, that's kind of insane. So I'm not against regulation. There should be regulation of AI products, particularly the ones that involve making critical decisions for people. But regulating or slowing down research is complete nonsense. (music)

    3. HS

      I mean, I absolutely love doing that show. For me, it was so interesting bringing all the different thoughts and opinions together. I would love to hear your thoughts. Did you enjoy the show? You can say if you didn't like it. Let me know on Twitter @harrystebbings. We have so many more of these that we can do, whether it's on price sensitivity for venture deals, reserve decision-making, best and worst investments. There are so many where we could bring awesome, awesome opinions together in a really cohesive episode like this. So let me know what you think, and I can make more or less of them. I love your thoughts. I do this for you, and what you want always comes first. So let me know. And oh my God, we have an amazing show coming on Monday with Keith Rabois and Mike at Traba. And so stay tuned for that, 'cause that is a fantastic show.

Episode duration: 33:23


Transcript of episode yTJ6aiALiUU
