No Priors

No Priors Ep. 14 | With Sarah Guo and Elad Gil

This week on No Priors, Sarah and Elad answer listener questions about tech and AI. Topics covered include the evolution of open-source models, Elon AI, regulating AI, areas of opportunity, and AI hype in the investing environment. Sarah and Elad also delve into the impact of AI on drug development and healthcare, and the balance between regulation and innovation.

00:00 - The March of Progress for Open Source Foundation Models
06:00 - Should AI Be Regulated?
13:49 - Investing in AI and Exploring the AI Opportunity Landscape
23:28 - The Impact of Regulation on Innovation
31:55 - AI in Healthcare and Biotech

Sarah Guo (host), Elad Gil (host)
Apr 27, 2023 · 33m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 - 6:00

    The March of Progress for Open Source Foundation Models

    1. SG

      Hey, everyone. Welcome to No Priors. Today, we're gonna switch things up a bit and just hang out and answer listener questions about tech and AI.

    2. EG

      The topics people wanted us to talk about include everything from the evolution of open source models to the Balkanization of AI, Elon AI, which I think will be super interesting to cover, regulating AI, and AI hype in the investing environment. Let's start with the march of progress for, uh, open source models. Um, I guess, Sarah, what have you been paying attention to, and what are some of the more interesting things that you view happening right now?

    3. SG

      Yeah. So there's nothing out there today in open source that is like GPT-4 or 3.5 or Anthropic Claude quality, right? So there is a, there's one player out in front, and that's OpenAI, but I think the landscape has changed a lot over the last couple of months. Like, Facebook LLaMA's quite good. Um, many startups are just using it despite its licensing issues, assuming Mark won't come after them, and then you have a number of other releases that have happened, right? Uh, Together just released a pre-training dataset, which seems quite good. Stability just, um, released Stable Diffusion XL in the image gen space. Um, and so I, I think the, like, larger dynamic is that there's been an increasing number of people and teams that now know how to train large models. The cost of a flop is only gonna go down. Um, there's a lot of investment in, like, distilling models, and, uh, a lot of researchers would claim, and you and I know, that it's gonna be 5X cheaper to train the same size model the second time around, like, once you've made your mistakes and know what you're doing. And then you have these other accelerants, like you can use these models to annotate your datasets and increasingly do advanced, like, self-supervision. So if VCs are going to continue to fund foundation model efforts, including open source, um, foundation model efforts, like... If I were a betting woman, and I am, I'd bet there's a 3.5 level model in the open source ecosystem within a year, um, and I, I didn't personally believe that would be true, like, a few months ago.

    4. EG

      I guess that puts it about two to three years behind when GPT-3.5 came out, though. And so do you think that's gonna be the ongoing trend, that there'll be a handful of companies that are ahead of open source by, you know, one or two generations?

    5. SG

      Yeah. I, I think that's, like, the status quo, so if we just straight line project, I imagine that will continue to happen, and the real question is, like, can you stay in the lead if you are OpenAI and, um, and, like, get, uh, get paid for it? Or is that the, is that the objective of the organization anyway? Like, I, I think, you know, if you have a, um, a great leader and a lot of resources and a lot of really talented people, that's not something I wanna bet against.

    6. EG

      Is there anything you think, um, is coming in terms of other big shifts in the model world, either on the open source side or more generally?

    7. SG

      Yeah. I mean, and we, we should also talk about just, like, stuff that you're, um, interested in investing in and generally paying attention to. But I think the b- the big idea that's been very popular over the last few weeks are autonomous agents, right? And I don't think that that's, like, a... I wanna hear what you think about this too. I don't think that's necessarily an architectural change, but for our listeners, the basic idea is to orchestrate LLMs in this, like, iterative loop towards some high level goal where they're doing planning and they have memory and prioritization, reflection. And so you're, you're not necessarily changing the architecture of the LLM itself, but this orchestration allows you to, like, do many new things, possibly. The classic example being, like, make money on the internet for me, and there's a good number of hackers trying to figure out how to make agents that, for example, um, like, analyze demand, find a supplier, set up a drop ship, um, Shopify store, generate ads, then promote that store on social, right? The whole loop being, like, one call to an agent with this high level goal of make money on the internet for me. Uh, like, uh, do you think this stuff is interesting, around autonomous agents?
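The orchestration loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any specific framework's interface: `llm` stands in for any chat-completion API, and the scripted stand-in model exists only so the sketch runs as-is.

```python
# Minimal sketch of an autonomous-agent loop: plan -> act -> remember,
# repeated toward a high-level goal. Illustrative only; `llm` is any
# callable str -> str standing in for a real model API.

def run_agent(goal, llm, max_steps=5):
    memory = []  # accumulated context: what the agent has done so far
    for step in range(max_steps):
        context = "\n".join(memory)
        plan = llm(f"Goal: {goal}\nDone so far:\n{context}\nNext action:")
        if plan.strip() == "DONE":  # the model decides the goal is met
            break
        result = llm(f"Carry out: {plan}")  # tool use, stubbed as another call
        memory.append(f"step {step}: {plan} -> {result}")
    return memory

# A scripted stand-in model so the sketch runs without an API key.
script = iter(["analyze demand", "ok", "set up store", "ok", "DONE"])
log = run_agent("make money on the internet", lambda prompt: next(script))
```

With a real model behind `llm`, each "carry out" step would dispatch to actual tools (search, a storefront API, an ad platform) rather than another model call, but the iterative plan/act/memory structure is the same.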

    8. EG

      I think it's, I think it's, um, super interesting, and I think for, uh... There's the old saying that the future is here, it's just not evenly distributed, and I feel like that's one of those things that people in the AI community have been talking about for a while, and there have been very clear ways to do it. And then I think there's one or two people that went and implemented interesting things there in terms of AutoGPT or other things, and then everybody was like, "Oh my gosh. This can w- this can happen." And I think a lot of people in the community are like, "This is really cool, but at the same time, yeah, of course it can happen," you know? (laughs) Because effectively, you have, um, some form of, of, uh, context as an AI agent is acting, and then you use that context to inform the next motion and sort of update, you know, uh, the prompt or what the model's gonna do. I think there's other forms of memory that people have been talking about that are super interesting, like how do you make that a bit more of a cohesive part of how an LLM or AI agent functions? Because right now, effectively every time you start a new instance of ChatGPT, a new chat, you've lost the context on all the other sessions you've had. And so a lot of what people are thinking about is, how do I create ongoing context so that whatever chatbot or whatever API I'm using remembers everything else I've done with it over time, or perhaps everything it's done with every other user over time? And then that becomes really powerful, because you're effectively crowdsourcing an understanding of the world and then integrating it into an AI system and agent, and so suddenly you have global context. Like, imagine if you as a person understood the life of every other person who's lived, and then you had all the context around what that means in terms of just how you operate in the world, right? And so I think those sorts of things-
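The ongoing-context idea here can be reduced, at its simplest, to persisting notes between chat sessions and reloading them when a new session starts. The sketch below is purely illustrative; in a real system an LLM would summarize and compress the history rather than store raw notes.

```python
import json
import os
import tempfile

# Toy sketch of cross-session memory: each chat session appends notes to a
# store on disk, and a later session reloads them as starting context.

class SessionMemory:
    def __init__(self, path):
        self.path = path

    def load(self):
        """Return all notes from earlier sessions (empty on first run)."""
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return []

    def append(self, note):
        """Persist one note so future sessions can see it."""
        notes = self.load()
        notes.append(note)
        with open(self.path, "w") as f:
            json.dump(notes, f)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
SessionMemory(path).append("user prefers metric units")  # noted in session 1
SessionMemory(path).append("user is planning a trip")    # noted in session 2
recalled = SessionMemory(path).load()  # a brand-new session sees both notes
```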

    9. SG

      I wouldn't operate anymore, Elad.

    10. EG

      (laughs)

    11. SG

      We'd be hive mind.

    12. EG

      Yeah, exactly. It's just the hive mind. So I think, I think that's where we're all heading, so... You heard it here first.

    13. SG

      Okay, fine. While we're on this topic of directionally AGI, there's been a lot of call for regulation of AI from, you know, Sam Altman to Satya to Elon Musk. Do you think AI should be regulated?

    14. EG

      You know, I think the first question is why do people wanna regulate to begin with? And I think there's, you know, two or three reasons. Um, one is, if you're an incumbent, it actually really benefits you to get lock-in, and one of the best ways you can get lock-in as an industry is to have regulators get involved, because they start blocking innovation and creativity and new efforts,

  2. 6:00 - 13:49

    Should AI Be Regulated?

    1. EG

      and if you basically... There's that famous chart of, um... where price, prices have gone up by industry and where they've gone down, and they've largely gone down in areas that have been unregulated traditionally. That's things like software or certain types of food products or other things. And then there's areas where prices have gone up dramatically, and that's education, it's healthcare, it's housing. It's the most regulated industries. So regulation tends to lock in incumbents. It means you have fewer people making drugs, and you have fewer people doing all sorts of things that could actually be quite useful. So that's one thing. The second is, um, I think some people are just scared, and in some cases you could say, "Well those, there's reasons to be scared," right? Like, what if the AI is used to unleash a virus or what if the AI is used to cause war? And if you look at the history of the 20th century, humans have done that pretty well on their own already, right? It's not, it's not a new concept that bad things will happen, and often they're driven by other people versus technology, right? And of course technology can have accidents or can be misused, but fundamentally usually people have driven a lot of the really bad things that have happened over time. And there's a really long history of doomers who are wrong, right? And I should say, by the way, on AI I'm a short-term optimist, long-term doomer, right? I actually think eventually there may be an (laughs) existential threat from AI, but I think in the next n years, um, you know, everything will be okay. Uh, there may be accidents or maybe terrible things that happen, but fundamentally it won't be different from any other period. Uh, but if you look at the doomerism in the past, it's things like, you know, public intellectuals, uh, worried about swine flu and nothing happened, or pop, uh, you know, a lot of people worried in the '70s about the population bomb.
"We're gonna have too many people and the world will starve and we're gonna have global famine," and that didn't happen. And so, uh, we have a lot of examples of people in the past kind of predicting doom when nothing happened, and we also had that during COVID where a lot of people said COVID is the worst thing that ever happened to the world, and then they would be hosting dinner parties unmasked inside with large groups, you know, later that evening. And so I just think you kinda have to look at people's actions versus their words, um, and fundamentally, you know, my view would be let's not regulate right now, at least most things. I think the things that maybe should be regulated are things related to export controls, so there may be advanced chip technology that we don't want to get out of the country, um, and we already have those sort of export controls on other capabilities. Uh, we may wanna limit the use of AI for certain defense applications. You know, do we really want a really smart hyper-intelligent AI agent driving swarms of offensive drones or weaponry? Um, and so there may be some need to do some sort of global regulation for things like that, or at least, you know, something like what we've done for chemical weapons or the like. And then I think in the long run we may wanna think twice about advanced robotics and their implications as AI becomes more of an existential threat to humanity. But overall, like if I had to choose right now, I'd say don't regulate in the short run except for those areas that I mentioned, um, and then I, I think that, um, the big pivot point for regulation may actually come during the 2024 election because I think that's the moment that, um, people will show examples of AI being used to influence the election or influence voting behavior just like ads influence voting behavior, right? But AI could write better ad copy or do other things.
I worry a bit about that becoming the reason that people claim that they should regulate things, just like they got really a- aggressive about social networking. So-

    2. SG

      Yeah.

    3. EG

      ... I don't know. What do you think?

    4. SG

      Uh, I, I largely agree with that. I feel like it's worth, like, describing what I think are the two more rational cases. Uh, by the way, I, I don't think... I, I think it's too early to regulate. I just wanna make sure that's very clear. But I think the two rational cases I've heard, because I keep asking smart people, um, that I don't think are taking cynical actions, why they're afraid or, um, you know, short-term afraid, or why they think this makes sense. And the two things I've heard are, one, you know, this is unlike the past because of the speed of progression, like this hard takeoff idea of, like, especially if you... And, you know, I, I see you nodding and smi- smiling, but, uh, when, when very, very smart people who are working at the state of the art tell me that they're, um, concerned within a 10-year band for humanity because of the ability of this current generation of models to be used to train the next generation of models and we're all very bad at thinking about compounding, I'm like, "Okay, that's not a, like, completely unreasonable point of view." I think the other is more of a, um, it's more of a tactical thing for the industry which is, as you said, for, um, you know, whether it be the election or some other trigger, like, there's a version of the reaction to this from, um, you know, people who are afraid or from, uh, you know, political opportunists who, um, go in two directions. One is, like, mass surveillance, right? Or one is, like, complete lockdown, right? So I think the tactical thing is to, like, try to create a democratic process that gets ahead of it with something, um, something that's a reasonable path forward. But I, l- largely I'm just, uh, I, I feel it's, it's very early to be, um, figuring that out. And then you also have the problem of like, uh, if you're talking about the more existential risks or sort of the AGI risk, like, alignment research is very tied to capability research, right?
And so it's sort of impossible to be like, "We're going to stop making any progress on research, but figure out how to control this stuff."

    5. EG

      Yeah, absolutely, and I think related to that, um, you know, I think it's really important to your point to separate out almost what I consider technology risk from species risk, right? Technology risk is there are some bad things that can happen due to technology being abused, right? And that could be a nuclear disaster or that could be an AI being used to shut down a pipeline or to crash a flight or to do something really bad, and those sorts of things already happened but, you know, you could imagine it could accelerate it. Uh, in that case you could literally turn off a bunch of servers, right? You could turn off every machine on the planet if you really needed to and humanity would keep going and it'd be a reset, but we'd reset fine. Separate from that, there's species level risk like, is there an existential threat to humanity? And that's like an asteroid hits the planet and kills everybody, and if... I think a lot of the, the people who talk about these things mix those two things, and I think the true doomer view is that AGI eventually becomes a species and we compete with it and then it wipes out all humans. And in order for an AI to irrationally wanna kill everybody, um, y- you'd need some replacement for the physical world, because eventually all the hard drives would burn out and the AI would die, right? (laughs) If it existed as a species or a life form. So, you need physical form for the AI in order for it to truly be an existential threat. And that's why if I were to focus on an area, it'd probably be robotics or something like that, because that's where you suddenly give physical form to something. And if you're like, "Oh, isn't it great? If AI can now build my house, then AI can now, you know, build a data center, and now build a solar farm." And no one... Y- you're eventually... Now build a factory. You've basically created an external system that no longer needs people, and that's when I think there's real risk.
And that's why on the 10-year time horizon, I'm not that worried because r- robotics and a- atoms in the real world takes a lot of time. So even if you have this hyper-intelligent thing running, the reality is if you really needed to, you could turn off every server on the planet.

    6. SG

      Yeah, I- I agree with the embodiment being like a key piece in this theory of the AIs that are gonna kill us, and we're- we're pretty far away from that. Okay, so one question we got from listeners, and I'm sure you get all the time, is there's a ton of hype in the AI investing and startup world right now. What do you, what do you think of it? Is it justified? Is it appropriate?

    7. EG

      Yeah, I think we've both lived through a couple different hype cycles now, right? There was hype cycles around social and mobile, and then the cloud, and then, you know, multiple crypto hype cycles. And the reality is, out of all those hype waves, interesting things emerged, right? And maybe in the standard hype cycle, 95 or 99% of things fail, but there's still like the 1% that work, or maybe it's 5% work and 1% end up being spectacular.

  3. 13:49 - 23:28

    Investing in AI and Exploring the AI Opportunity Landscape

    1. EG

      And I think the hard part usually is to know what's actually gonna work, because so many things seem so overlapping and similar. And so I remember when the mobile wave happened, or mobile and social at the same time, a bunch of different people I know started, um, mobile pho- photo apps, and each one of those things took off. And so you'd suddenly see something go from zero to a million users in a week, just literally. It just spread virally. And none of them stuck, right? They all burnt out as cycles, and the only one that really stuck was Instagram. And in part, that's because Instagram emphasized filters, which things like Camera Plus already had. And then in part, it emphasized the network. It's like, "Let's have a follow model like Twitter," and that's the thing that really worked. And so, it- it feels to me like if you'd, um, gotten excited about the overall cycle, you were right. But if you got involved with the wrong set of photo apps or you built the wrong thing, then you were wrong in some sense. I guess you were right about the trend, but wrong about the specific substantiation of it, and it seems like the same thing here. And so I think often it's that question of, you know, Peter Thiel has a good saying, which is, "You wanna... You don't want to be the first to market, you wanna be the last standing." And so I think it's a similar thing here. How do you end up being the last person standing and that... or last company? And it may be the same thing as being the first mover, right? It's Amazon and books or- or things like that. Um, but sometimes it means you actually do something a bit smarter and you come later in the cycle and it's fine. So, I mean, are there specific areas you're most excited about in this wave or cycle, or opportunities that you think are... these things that are obviously gonna happen or important to happen?

    2. SG

      Uh, absolutely. I mean, um... Well, and I want your ideas too. Some of them are shared ideas, to be fair. Um, but I would agree. It actually... Uh, to add a data point, I was just over at OpenAI yesterday, and they're biased, perhaps in a way that I'm also definitely biased. But a friend was saying they actually think that investors are being somewhat wary at the application level right now, because they can't figure out what's going to be standing, right? It's a, it's a very different competitive dynamic. Um, but the market is extreme for researcher-led foundation model companies, because everybody is, uh, pretty sure OpenAI is going to be around, right? Um, and I- I agree, the applications are gonna be non-obvious. But as- as one example, like any investor that claims they knew image generation from text was a killer use case like a year or two ago, besides you, is just empirically wrong, given David's completely investor-free cap table and amazing business, right? So, David, in case you're listening, I still love Midjourney and want to invest.

    3. EG

      (laughs)

    4. SG

      Um, but (laughs) , uh, that's why this podcast exists. But, um, you know, in terms of like specific things that I'm interested in now, I'd say like, uh, I think there are a lot of things on the application side that are exciting. So to start with some of those, I think, v- you know, voice synthesis and dubbing are gonna be just a huge unlock for like content providers and publishers. Like, I'd like to back something in that space. Um, I was just talking to some people at, um, a very large financial institution, and they said the like biggest potential cost savings, you know, on order of tens of millions of dollars a year for us, is in turning every line of code we have into explanations for a regulator. And that's at once like pretty specific to them, but also not, right? I think the areas of audit, tax compliance, accounting, reconciliation, like, there's a lot of, um, natural language understanding that could be, um, better served by semantic understanding. Um, and- and so I- I think that's an obvious area. Um, I think annotation is changing again, right? And we can use... This is like a very specific idea, but we can use LLMs to do much more here. Um, we talked about agents. And then, um, this is- like this isn't a- necessarily a specific company idea, but I think architecturally, um, retrieval is a field of active research. But the idea of personalizing LLMs with enterprise data is an important but like very tricky one. You have to do data management. You have issues in scalability, sync, access control. You likely want to apply traditional IR. If you own both retrieval and the model, you can do very magical things. And so, I think like the- the ChatGPT retrieval plugin is super cool, but it just doesn't serve a whole host of use cases. And I think this entire like half of the stack is still missing. Um, so those are like a couple of the things that we're, um, sort of explicitly like hunting around. But, um, what are you paying attention to?
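The retrieval half of the stack described here can be sketched as: index private documents, rank them against a query, and splice the top hits into the model prompt. The keyword-overlap scoring below is a deliberate simplification standing in for embedding search; real systems add the sync, access control, and traditional IR mentioned above.

```python
import re

# Toy retrieval-augmented prompting: rank enterprise docs against a query
# and prepend the best matches to the model prompt. Keyword overlap is a
# stand-in for vector/embedding search.

def words(s):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(query, docs, k=2):
    """Return the k docs sharing the most words with the query."""
    q = words(query)
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble the final prompt: retrieved context, then the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: refunds are issued within 30 days of purchase.",
    "Shipping: orders ship within 2 business days.",
    "Security: all customer data is encrypted at rest.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

Owning both halves, as the speaker notes, is what makes this "magical": the retriever decides what the model is allowed to see, so data management and access control live in the retrieval layer, not the model.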

    5. EG

      Yeah, I mean, I, I think we have a lot of overlap, as you know. So I'm super interested in sort of voice synthesis, dubbing and related, uh, both in terms of infrastructure, but then also in terms of application areas. And so I think that's gonna be a really big sea change that perhaps people aren't paying enough attention to. Um, I'm actually quite, uh, long on compliance in general. Like, I've done a bunch of things like AgentSync and Medallia and in other compliance-related companies in sort of the old world. And so I think that's just an area that there's always gonna be, you know, converting spreadsheets and offline processes and, you know, random checks and docs into code is, you know, really powerful. Um, I think there's a lot to do on the app side. I actually am maybe on the other side of people who think that it's impossible to tell what's good and, you know, nothing's defensible and everything's just a wrapper on GPT or whatever. And I actually think there's tons and tons to do there. I mean, Harvey.AI, which we're both, uh, involved with, I think is a great example on the legal side. But I think there's a, there's, you know, two dozen things like that to build over time. And it probably takes five years for all those things to get discovered and built and substantiated. So I don't think it's, like, this year there'll be 12 of them, but I think, like, every year there'll be a couple of really interesting ones. And then there's probably a lot to do on the tooling side, right? Obviously, LangChain is sort of a hot one in the, in the area, but there's everything from, you know, people exploring vector DBs like Chroma on through to other forms of, um, infrastructure. Uh, and so I, I... You know, LlamaIndex and other things. So I just think there's a lot that, um, to be done at every level of the stack.
It'll be interesting to ask, like, what happens on the foundation model side, because to some extent the question is whether we've locked in a few of the leaders or there's more to come. And I think the Elon Musk startup that's rumored to exist is sort of an interesting, um, example of a new entrant. Uh, and back to regulation, you know, Musk was asking for a six-month moratorium on progress, which seems to me very self-serving if you're simultaneously starting an LLM company (laughs) , you know? And if I was in that-

    6. SG

      Just hold off until I catch up, right? (laughs)

    7. EG

      Yeah. And if I was in that position, I'd do the same thing. Don't get me wrong, you know? (laughs) So it's not, it's not meant as a dis, it's just meant as a, you know, remember people's incentives. Um, but I do think there may be some interesting things to do on the foundation side, and I, I do think some people are doing that in a vertical-specific way. They're saying, "Hey, we're gonna build a healthcare-specific model and we're gonna build a..." You know, uh, Bloomberg did like their, their BloombergGPT or whatever it was called on the financial side. And so I think, um, you can clearly see these verticals, uh, uh, um, emerge and a lot of people obviously are debating, "Will a general purpose model just cover all those use cases? Are you gonna have bespoke sort of vertical models?" And what, what parts of the actual logic and synthesis and sort of magic of these AI models comes from the fact that you've trained on a massive amount of data and language and then you're applying it to a specific area with potentially unique datasets overlaid? Or is it something that's just, you know, that can be dealt with vertically specific and you don't need that broad-based understanding of the world? So I think that's a really interesting area of, like, exploration and it's, I d- I have no idea what to predict there. I don't know if you have any thoughts on that.

    8. SG

      Uh, well, I, I would agree. I think it's, um... I think there is real opportunity for vertical-specific models where you can imagine that control for either a compliance or a safety or a, um, just performance, like reliability of input data makes sense, right? As well as, like, if there are architectural differences because, um, for example, you have multimodal data in healthcare and pharma, right? Um, if you are looking at protein structures and, um, radiology and, um, healthcare records, it's not clear that, um, you would wanna do that, train that in exactly the same way as a general web text, um, model, right? So I, I think that makes sense. On the, um, the broader foundation model question, I, I... You know, we were talking about open source at the beginning. I think that OpenAI will con- continue to be a leader. Anthropic is very dangerous here, like, really talented team. Um, but the number of people who know how to train large models and the cost of a flop goes down, right? And so I, I think there's, like, just a lot of incentive in the ecosystem for additional players to, um, to compete. Um, what do you, what do you think is the opportunity for, uh, incumbents or how do you... How should they h- how should they react to all this?

    9. EG

      Yeah. I think, um, you know, obviously with every technology wave there's a differential split in terms of where market cap, revenue, employees, innovation, et cetera, goes in terms of incumbents versus startups. And, you know, every wave is a little bit different, right? The internet wave was, uh, almost... you know, it was probably 80% startups in terms of value and 20% incumbents. And then mobile was sort of the other way around. It was 80% incumbents and 20% startups, right? The big platforms for mobile were the, were Google and Apple, um, but then you had a lot of interesting apps like Instagram and Uber and others emerge. Um, uh, for crypto it was like 100% startup value, right? And it feels like in this wave it's probably 80/20 again, right? It's probably... Google will probably become a player, right? OpenAI is closely aligned with Microsoft. And then, you know, uh, Salesforce with AI is probably Salesforce, right? It probably isn't a new company. Might be,

  4. 23:28 - 31:55

    The Impact of Regulation on Innovation

    1. EG

      right? I actually think certain companies are vulnerable for the first time because of these capabilities, and that includes everything from ERP providers where there's, like, a defensive moat through integrations. And obviously this could make integrating your data into multiple things really easy and fast. Instead of six months to roll out SAP, maybe you could have a next gen approach where it takes, you know, a day or two on a new product, right, to, to do all the integrations that you would've spent six months on consulting fees for. Um, and so there may be certain types of, uh, thi- uh, companies that are vulnerable, but the reality is I think in most cases, you know, if an incumbent is already doing something and they're quick to integrate it, then it works great. The one area that may be really interesting is almost like there's, there's probably room for a new private equity approach where if you think about how private equity companies bet on things, they basically look at cash flows and costs and all the rest of it. And if you can radically decrease cost for people-heavy businesses... by using, um, LLMs as, like, a replacement for certain types of work or at least an augmentation, then you can differentially bid on companies as a private equity shop. And so I-I think, like, people who do buyouts could have this as a strategy. I don't know that any of them will, 'cause most of them tend not to be very technology savvy, but I think there's really interesting alternative things to do at scale there that tend to be kind of under-discussed. The healthcare side that you mentioned earlier I think is kind of fascinating, because if you look at the cost of developing a drug, for example, say it's a billion or two billion dollars to develop a drug, whatever it is, most of the early stage development is in the tens of millions of dollars at most.
And so I think a lot of the default focus of people who don't understand healthcare very well is to say, "I wanna use this for drug development." And it may help with certain aspects of drug development later, but usually I think the places in healthcare where this will really get applied fast is on the more operational or services-intensive related side. It's healthcare delivery. It's lowering the cost of a doctor visit or telemedicine. It's making payments easier and more streamlined if you're dealing with insurance reimbursement. And so I think there's really exciting things to be done there. Like, Color, um, a company I co-founded is, for example, thinking about different application areas, and I just think that that's, like, a real wealth of—of fruitful areas for people to explore if—if they're healthcare savvy. And of course, with healthcare, the technology usually isn't the issue. Usually, the go-to-market is the hard thing, right? So I think market access is really hard there.

    2. SG

      Yeah. Um, I push back on that a little bit. I'd start with saying I agree on just the, um, operational, uh, friction in healthcare that we can take down, right? There are so many processes, like if you look at prior authorization, it's a battle on two sides to fill forms and, like, uh, compare, like, EHR data and clinical recommendations against a policy, right? And so there's a piece of that you can't get rid of because, you know, the insurance company has an incentive not to pay and, like, hopefully providers are trying to provide the best care. But there is a piece you can get rid of, right? Like, we have models that can, you know, read data, try to understand it, fill out a form. And so I-I think that there are lots of interesting applications there. The minor pushback, and you know much more about healthcare and pharma than I ever will, but, you know, VC is the job of having opinions anyway. And, um, I think if this wave of AI can change the cost curve in drug development, it's because it, you know, you're not actually impacting the $10 or $20 million upfront on, um, what's traditionally considered, like, research. You're increasing the probability that you're right, right? And so, like, all of the cost of, you know, expensive recruiting and clinical trials, it is more efficient because you're right more often. You would just understand more-

    3. EG

      Maybe, yeah.

    4. SG

      ... about that already.

    5. EG

      I think the hard part is that a lot of, um, a lot of drug development ends up being, hey, this works great in mice, and let's try it in people now. And to your point, there may be things that you can learn heuristically in terms of when things translate versus not. Uh, but I think one piece of it is just basic biological differences, and then the second piece of it is, this is back to the point on regulatory capture. To some extent, the incumbents have an incentive to drive up the cost of drug development so no new startups can actually ever enter in terms of actually making it-

    6. SG

      Oh, that's very cynical, yeah.

    7. EG

      ... all the way to a launched drug. Oh yeah, but, you know, it's interesting, it really is this weird regulatory capture. And so if you look at, um, the last time a biotech company, outside of Moderna, which I think is, you know, an exception because of COVID, the last time a biotech company hit, I don't remember what it was, 30, 40, $50 billion in market cap, something like that, the last year such a company was founded was in the late '80s. So it's been, uh, at this point, what is that, 35, 40 years without a new major biotech company started in terms of biopharma actually developing drugs. That's shocking, right? In tech, during that same time period, there are dozens of companies. And if you actually look at the aggregate market cap of the entire biopharma industry, and as a reminder, healthcare is 20% of GDP and pharma is about 10% of that, right, um, if you add up the top four or five tech companies, their market cap equals the entire industry for biopharma, and that includes Pfizer and Eli Lilly and Genentech and Amgen and all these companies, as well as all the small startups and all the mid-cap companies and everything else.

And so then you ask, why is that? And these are very profitable companies, right? They have software-like margins in some cases. And so as you start digging into the history, you realize, wow, there are, um, strong reasons for incumbents to remain incumbents. And there is this, um, regulatory process that really delays things quite a bit, in some cases rightfully, in some cases wrongfully. And if you look, for example, at the COVID era, we were able to develop multiple vaccines and do clinical trials on multiple drugs really, really fast. Part of that was we had a lot of patients, but part of that was we removed all the regulatory constraints. And we didn't have mass-scale adverse events and bad things happening to people. We just moved really fast.
This actually also happened during World War II. Winston Churchill wanted a way to treat soldiers in the field for gonorrhea, and so they rediscovered and developed penicillin in nine months. They again removed all the regulatory constraints, and boom, nine months later, they had a drug that worked really well and was safe. And so I think it's something to really think about deeply in terms of what are the incentives we're driving against and how we're thinking about cost-benefit societally. But also, the second you start adding a lot of regulation, things slow way down, innovation goes way down, and costs go way up. And that's the reason that, you know, per the earlier conversation, I think regulation of AI, for most things, is probably a really bad idea right now. Export controls make sense, a few other things make sense, but for most things it's a bad idea.

    8. SG

      I would agree with that. Um, I do think that there is one-

    9. EG

      That was my rant, by the way. (laughs)

    10. SG

      No, no, no. Uh, stay on the soapbox.

    11. EG

      (laughs)

    12. SG

      Um, uh, learned something about gonorrhea today. Uh, but I think, like, you know, if you think about... the power of government, and I'm strongly on the, like, reduce-regulation, encourage-innovation side. Um, you also have these wartime examples of the production of airplanes in World War II going from a few hundred planes to 6,000 in less than a year, right? And here we're talking, like, atoms, not bits, right? You have to build plants and, like, figure out all these engineering processes. And so, you know, I think that there are ways in which, like, from a, um, industrial policy and national security perspective, like, if we wanted to be winning in AI in a really durable way, I think the paths are pretty clear, actually. Like, people need compute, and, like, we have to make it a, um, a priority in the United States. But I would also say, in the field of pharma, I remember, like, asking you, I don't know, seven, eight years ago, like, "Hey, Elad, I know you're interested in aging, and, like, weight loss, and the intersection of areas where, um, the demand is very consumer-driven, right? You might break out of, um..." And demand, and also, like, the ability to access, um, different solutions that are on the edge of, like, consumer purchase, right? Especially as we have more, like, web-based, doctor-network-diagnosed prescriptions, right? "Do you think this is interesting?" And I'd send you a company or two, and you, uh, gave me the same extremely consistent view, which was like, "Hey, despite the PhDs, like, you know, the data-driven investor inside me says, 'Don't do this, just do tech companies.'" So no change?

    13. EG

      Um,

  5. 31:55–33:30

    AI in Healthcare and Biotech

    1. EG

      you know, I think that the healthcare services and operations side is super interesting right now due to LLMs. And so, you know, that's an area where I think there's lots and lots of room to do interesting things, and I have invested in some software-related, um, companies in the past, like Benchling or Medallion, um, in these areas. But I think it's really about what's the healthcare infrastructure that can be served through software, and then how can LLMs accelerate it? I think drug development can be extremely useful societally and, um, really important and impactful, and obviously there can be really great outcomes for people, um, as well as financially it could be a really great thing. But it just comes back to, like, why hasn't anybody built a generational company in this area in a really long time? And there's all sorts of reasons behind that. I mean, we tried that when I co-founded Color, right?

    2. SG

      Mm-hmm.

    3. EG

      The whole focus was trying to make healthcare more accessible to people, and I still really believe in that mission. So it's more just, you know, what are the obstacles to getting there for different types of companies, and do you want to take on those obstacles? And if nobody takes them on, then society really suffers. So it's almost like, how can you make sure that you remove as many obstacles as possible while still safeguarding the public, right? So that people don't get hurt by this stuff, but at the same time, perhaps these things have gotten too extreme, and that really, you know, strangles the industry's ability to innovate in ways that it otherwise could. So, it's a really interesting area. Are there any other topics that we should cover from the audience?

    4. SG

      Uh, I'm good. What do you think, Elad?

    5. EG

      I think we got it all.

    6. SG

      Thanks to everyone who submitted their questions.

    7. NA

      (instrumental music)

Episode duration: 33:30

Transcript of episode B5461t6ACpk