No Priors

No Priors Ep. 109 | With Sarah and Elad

In this episode of No Priors, Sarah and Elad examine the current state of AI. They break down the recent dip in public markets, how tariffs could impact the tech industry, and where opportunities remain in large language models. They highlight the opportunities in more specialized models, new approaches to model development, and how the market is beginning to standardize with integrations like the Model Context Protocol (MCP). The episode ends with a look at early consumer AI applications and what types of expertise will matter most in the coming years.

Show Notes:
0:00 Improvements in image gen
4:42 Public markets
8:08 Effects of tariffs on tech
9:42 Today’s large model market
11:34 Opportunities in specialized models
16:30 Research advances in model approaches
21:10 What expertise will matter?
24:30 Anthropic’s Model Context Protocol
26:30 Consumer applications

Sarah Guo (host) · Elad Gil (host)
Apr 3, 2025 · 27m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:42

    Improvements in image gen

    1. SG

      Hey, listeners. Welcome back to No Priors. Uh, today you've just got me and Elad again.

    2. EG

      It's a favorite type of episode. Sarah Habibi, how you doing?

    3. SG

      I'm great. I'm so excited. Everything is adorable cartoons that are also, like, slightly nostalgic and sensitive. And tell me about how you react to, uh, Studio Ghibli and also just better image generation.

    4. EG

      I mean, I'm a longstanding anime fan, so I think converting the world into everything anime or manga is a very positive step for humanity. So, I view this as something I've been (laughs) waiting for, for a while. I feel like every year or two, there's sort of this moment in the image gen world where people have a "Wow, that's amazing" moment again. And the first version of that was like, oh my God, these, th- th- you know, I think it, maybe even the GAN wave was the first wave. There was a GAN artwork in, like, 2019 or so, or 2018 that went to Sotheby's for, um, auction, which was one of the first sort of, um, AI generated arts back when people were doing these adversarial network-based approaches to generating artwork. And it was kind of these kludgey tool chains, but even then people were like, "Whoa, look at what AI can do right now." And it was super bad, you know (laughs) , in comparison to what you can do today. And then there was kind of the Midjourney, um, early Stable Diffusion wave where those models came out and people were like, "Oh my gosh, this thing is amazing, but everybody has seven fingers in the images, but oh my God, it's amazing. And look at all the things we can do with it and it's gonna transform society," et cetera, et cetera. I feel like we've periodically had these and I feel like this is the latest version of that. And part of it is we're just on this amazing curve of quality and fidelity in this artwork and the ability to do... I mean, even back in the GAN world there was, like, style transfers and, you know, do this in the style of van Gogh and et cetera. But the degree to which it does it so well now and so cohesively and in so many styles and with so much aesthetic beauty and oversight is really striking. And I think we're just hitting another one of those moments where people are like, "Wow, this can really do it for forms of animation and other things." 
And all this is obviously in the context of, um, uh, ChatGPT and OpenAI and sort of the, the 4o models sort of incorporating a lot of this stuff directly in. So I, I think it's fantastic. We're gonna see another thing like this in another year, I think. (laughs) And then that, I think there'll be the very commercial versions of this, uh, which are already sort of happening, but look, we can use it for graphic design completely seamlessly versus it kind of works and we can use it for all these different use cases. And so I feel like we're doing the horizontal version of it, and soon we'll have the vertical versions all come out and obviously there's companies like Recraft and others working on the vertical versions directly, but I just view this as a super interesting evolution of the technology. So I, I think it's super exciting. What do you think?

    5. SG

      Um, I think it is funny how, uh, much at least, like, our little niche of the technology ecosystem, but, like, you know, uh, anime and manga is pretty popular. The, the world reacts to, like, they want more cute, they want more beauty. Um, I think it's really exciting. One of the interesting things this exposes is users, people overall are very good at projecting, like, where we are, um, in terms of quality and controllability and how much more room we have, right? Uh, I think, like, going from, you know, it's eight bits of grayscale to you have images that might be perceived as photos of real people was a huge jump, to your point of people being shocked at, at some point, you know, in, you know, two generations ago of image generation. And then, uh, you know, I think one of the things that Midjourney did was really have an aesthetic point of view, uh, and, like, take a bunch of user feedback into account in terms of what was preferred. I actually feel like a lot of people thought of image generation, like, end users not researchers as, you know, a little bit more of a solved problem. And I think just this is another data point of how much more, like, we're going to get and that people want, nevermind in, in video and everything.

    6. EG

      Yeah. Also text and logos and there's just, there's just a lot that's coming that people haven't done, or sort of these truly integrative things where you can start truly clicking into the images and modifying pieces. And there's, there's apps that are doing that or there's things like Krea that sort of do these real time modifications as you're working on things. But I, I do think, um, there's so much room still. We're, we're very early, but it's still so striking. So it's a very exciting area.

    7. SG

      And I, I think ease of controllability is also gonna give, um, people a lot more creative power. Like, one of the things that HeyGen has, um, demonstrated and is gonna come out with in product very recently is the ability to use natural language to describe emotion and voice, right? So you can, like, whisper ASMR and just, you know, say, "I want the whole video with this person in this way," with three, you know, three words of text description. I think that kind of controllability is gonna be really powerful.

    8. EG

      You can incorporate it into augmented devices and then I would just be working through an ASMR world. That's all I would, I would just live in that.

    9. SG

      Is that the ideal?

    10. EG

      No. (laughs) Maybe the manga part, but the rest not so much.

  2. 4:42–8:08

    Public markets

    2. SG

      Are you freaked out about the macro?

    3. EG

      You mean, uh, the NASDAQ or what? The markets?

    4. SG

      Yeah, the m- the markets.

    5. EG

      Tariffs, inflation, which part of it?

    6. SG

      You know, consumer confidence is at a multi-year low, the NASDAQ's down 8%, um, tariffs on Chinese imports and on autos. I, I think there are, uh, investors and companies in market talking about how stressed they are about that.

    7. EG

      Yeah. I'm not very stressed about it. (laughs)

    8. SG

      (laughs) .

    9. EG

      I feel like there's a, a degree of uncertainty in the world right now for sure, but from the perspective of people building technology, companies barring something truly existential happening, it's kind of business as usual. And I've been through a few of these cycles now where markets are way up and everybody's freaking out in a different direction and markets are way down and people... And, you know, the main place where it impacts the venture world or the startup world sometimes is if it soaks money out of the venture capital ecosystem and therefore valuations come down or there's less funding for the, you know, marginal startup or things like that. But other than that, these sorts of cycles tend to really wash away unless you're a super late stage company that's, like, uh, about to go public and there's some issue with your valuation in terms of expectations versus where you want to go out or something like that. But for day-to-day technology startups, particularly ones that are not doing hardware which would be impacted by the tariffs, right? People who are just writing software, it should really be of minimal actual day-to-day impact. E- especially if your startup's working, like, you'll, you'll be able to get customers to pay you or find, um...... funding or whatever it may be. I've been through a few of these and every time it's been a bit of a, of a shrug. I actually remember, I went to the Rest in Peace Good Times presentation Sequoia did in 2008. So back in 2008, there was the great financial crisis, and I was running a startup at the time. I was CEO of this small company. And, uh, Sequoia did this big all-hands where they pulled together all their founders and they had people come in and tell war stories from when the dot-com bubble collapsed and how it's time to batten down the hatches and do layoffs, and the world will never be the same again and everything's over. And they were doing this as a service to the startup community, right? 
They were trying to help their founders kind of figure this stuff out. And I remember talking to one of the Sequoia partners during it. I'm like, "We're like a six-person startup. Like, who cares?" And he's like, "Yeah, you're right. You shouldn't worry about this at all." (laughs) You know? And that's as all these financial institutions were collapsing around us. And so this strikes me as, like, very, um, small in comparison to that. And I think back then, it didn't have that much of a real impact to tech. You know, maybe Google did its first layoff ever, but other than that, tech just kept humming along. And if anything, the biggest tech companies in the world are now 10, you know, 20 times bigger than they were back then. So I think this is, this is an even more minor blip that from a long-term tech perspective, like, who cares? But again, barring some unexpected path that's splitting off of this. I don't know. What do you think?

    10. SG

      It has, like, almost no impact on me, right? I think especially at the early end of the market, I'm like, well, you know, the really high-quality opportunities are, um... There's plenty of capital for them. I keep discovering that the capital markets are much deeper, and we should talk about this, are much deeper than I thought for very expensive, for example, foundation model plays. I still expect, like, capital availability and a lot of inflow there. I think it's probably a little different for investors who have, um, more public equities exposure, right? I bet pre-IPO, uh, crossover investors are getting more cautious, right? You have those sort of much more long-term issues of liquidity having been starved for, you know, several years now. But I think, you know, return of M&A and, like, several companies ready to go public, um, will help that somewhat.

  3. 8:08–9:42

    Effects of tariffs on tech

    2. EG

      The place where the tariffs kind of matter that I think are interesting is for very specific industries where, to some extent, it's useful for America or the West to protect themselves. So, I think automotive would be a good example where some of the Chinese car companies seem to be getting so good that if I was Europe, for example, and given the industrial base is so automotive-dependent, I would probably be pushing for tariffs relative to Chinese imports of cars, right? Because the internal car industry may not be as competitive. And so I do think there are some areas where the tariffs may be useful. There'll be some areas where they're probably being used as, like, a negotiation tool and then some areas where, you know, they may be either net beneficial or net harmful in terms of actual costs passed on and things like that. But I think, um, there may be a few areas where we should make sure that we actually have some in place and then there may be some areas where it's gonna be net negative or destructive and then there may be some areas where it's just good for negotiating broader policy or relationships with certain external parties. People are kind of using a catch-all for all of them versus, you know, looking item by item.

    3. SG

      Yeah. I agree with that and I think the productive version of tariffs is as, you know, a... I think there's a need for a broader industrial policy that is more supportive of the industries that we care about. And, like, that's gonna be a big investment, right? If we want to make key components for defense or automotive in the United States, like, we are quite behind in many domains in terms of getting competitive from a skill and cost perspective and some of those things are worth investing in on both the positive and the protection side.

    4. EG

      Yeah. I guess you mentioned, um, depth of funding for models is part of all this. What do you think is happening in the foundation model

  4. 9:42–11:34

    Today’s large model market

    1. EG

      world?

    2. SG

      You and I were just talking about these artificial analysis charts, uh, showing convergence, like, kind of monotonically more competitive market for, uh, capabilities and amazing improvement over the last 18, 24 months. But you just had the most recent Gemini release from Google. Like, they're clearly still in the game. I don't know who was doubting that given they have infra, they have researchers. No- not just researchers, but, you know, very smart people at the helm, they're competing here as well. Um, I think one of the more, more interesting things is that you have convergence not just on capability but also in the, like, product surface areas. Like, most people have search, they have a research product, um, they have reasoning in the models. I think, like, a lot of it is gonna end up with, like, consumer surplus and distribution being the question.

    3. EG

      There's a- actually a really great website called artificialanalysis.ai that shows different benchmarks that they've run against these various models for reasoning or for, um, different aspects of, you know, how you test a model for knowledge base or for, you know, other forms of performance, speed of tokens per time unit, et cetera, et cetera, et cetera. So, um, I think that's really worth taking a look at and you see that for certain areas, there is really strong convergence and then there's almost, like, a cluster of models that seem reasonably within ballpark and again, certain things spike dramatically in one form or another around coding or around reasoning or other things and then you have sort of a longer tail of other models. Um, and so at least for the core language model world, which those benchmarks were for, there definitely seems to be some forms of convergence happening and then there's outliers, right? Like, Grok from xAI coming out of nowhere with a, a s- roughly SOTA model in, like, nine months was super impressive or, you know, some of the things DeepSeek or others have been doing. Um, and then, you know, we- they don't really have benchmarks for image gen models. Those- those obviously exist in a variety of sites and other places.

  5. 11:34–16:30

    Opportunities in specialized models

    1. EG

      Um, but then there's a whole other suite of models that I think are, um, discussed a lot less, right? And part of that is just the economic value, part of it's what's in the market today, uh, but that's things like physics, it's materials, it's robotics, it's certain types of science, right? Uh, there may be things that are more specialized in terms of post-training, like health-related data on top of some of these core models. And so I do think that there's a lot of, um, other types of models that people spend a lot less time on, some of which are becoming quite interesting.... probably the place that gets the most attention outside of the foundation model world, or the, the core LLM world, I should say, the language models, is probably actually biology, right? I feel like there's a new biology model every week. But there's all these other fields and disciplines where I actually think there's some very big opportunities. And opportunities that obviously are both societal in terms of impact, but also, in some cases, actually, I think there's very big markets behind them. And I think, often, the interest level of, uh, people working in the industry to build models is divorced from the economic value of these models, right? And sometimes that's rightfully so. You know, there may be really interesting scientific applications that aren't very commercially applicable and sometimes it's really misaligned where you're like, "Why are all these things getting funded when there's these wide open spaces for certain types of models that just nobody's working on?" And so, at least I've been looking a lot at what are these alternative models that are interesting from a market perspective that maybe are get- are, are getting a little bit ignored right now. Um, and then I guess there's the other question of like, is this ... and I'd like to get your thoughts on this. Like, how many things do you get subsumed into these core LLMs versus they're, they're own standalone thing? 
Like, do you think it's all one ring to rule them all or do you think it's gonna be a fragmented landscape? And where do you think that fragmentation happens?

    2. SG

      It's somewhat of, like, too binary a distinction to say, like, it's a model company versus not a model company, actually. Even many of the companies that you and I in the industry would consider to be, like, model research companies, they're, they are starting with some base of pre-training of, like, existing knowledge, which is more and more readily, uh, m- existing knowledge and reasoning that is more and more, um, readily available. In the case of robotics, um, you start with video pre-training. Uh, the case of other domains, um, if you were gonna start separately focusing on code, and we can talk about whether or not that's a good idea, you want both language and code in terms of being able to interact with the model. I, I 100% believe that there are big opportunities in some of these domains, but one of the biggest distinctions to me is, um, what does, like, the data collection engine for this look like? Uh, so if you were thinking about physics, chemistry, biology, robotics, like, and, uh, you know, maybe even some more near-term commercial applications, uh, the, the data you would want, the understanding for the model to learn from, uh, it often doesn't exist yet. And so, I think a theory of many of these companies that is interesting is, "Our job is to go collect or generate it efficiently and use that to train the model." And in that case, I think the, the question of like, does it need to be, uh, you know, will it be in this single model to rule them all, there's a question of, well, is it reasonable to expect one of the existing large labs to go do that data generation, right? Like, if you have to set up a physical lab with robotics to do, um, experimentation on new chemicals, that feels more far afield than, uh, code generation RL environments, for example.

    3. EG

      Anytime you go into the physical world, it's always harder to generate data. And that's one of the reasons that the language models, where you just effectively collect the wisdom of the internet digitally, are the first places where we've really seen, um, this scale of sort of breakthrough happen in recent times. And coding is a great example where you not only have a lot of the data resident either online or digitally, but also you have very clear utility functions or things that you can test against in terms of code and its performance and et cetera. Is it doing what you think it's gonna do? So, those are always gonna be the easiest areas. Uh, it's kind of funny. This is a, a odd pet peeve of mine, but, um, it always annoys me when people who do really well as founders in traditional software and tech start telling everybody else to go and do the hard stuff in biology, in materials and physics, and, "Oh, you're, you know, you need to go do, be hardcore." And you're like, "Well, you made all your money in fucking software. What are you talking about?"

    4. SG

      (laughs)

    5. EG

      And so I feel (laughs) like, there, there's be- there's been a long history of that, right? Like, I remember interviews with Bill Gates from 20 years ago where he's like, "If I was to start today, I'd go into biology." (laughs) So, I feel like sometimes there's the model versions of this.

    6. SG

      You're so funny. I feel like you're, you're the opposite. You're like, "I actually have a PhD in biology," and-

    7. EG

      Yeah, that's why I know.

    8. SG

      (laughs)

    9. EG

      That's why I know reality.

    10. SG

      I think the other distinction I would draw is like, is it some, like, orthogonal, like, totally different technical thesis? Do I think there's, like, a, a research advance that is just very different,

  6. 16:30–21:10

    Research advances in model approaches

    1. SG

      uh, architecturally quite different? I'll, like, describe categories of companies that could be relevant here. We had, uh, Karan and Albert from Cartesia on the podcast. I think state-space models are an interesting direction that are highly efficient for certain types of data that are compressible, right? Um, if you look, there are several plays on, like, formalism and, like, translating problems into Lean and, uh, taking that as a path to increasing reasoning capability for math and code. Um, I think there's a number of com- there, there are a number of companies that are trying to train, uh, models that are better at taking, um, actions in software and on the web. This is clearly also right in line of the, uh, large foundation model labs. But I think they're at least trying to work on a question that doesn't feel fully answered in terms of, uh, consistent generalizable RL environments for, for agents. And so, there are spaces where I think there, there is a theory of why the company should, should exist, um, if true, uh, versus just being, like, straight in line of the OpenAI, Anthropic, xAI steamroller, of course, and the Google steamroller. Um, what did I miss? What, what else do you draw as a distinction or like where, where do you think there is opportunity?

    2. EG

      To your point on state-space models, there may be advantages in terms of the speed and size of some of those models on a relative basis for very specialized tasks. And so usually I think of it as a two-by-two matrix where you have like one axis which is sort of speed, performance, cost, 'cause those are roughly the same thing for many of these models is inference time effectively, and then there's, um...... reasoning, fidelity, whatever you want to call it, and depending on where you are in those different quadrants, you have one quadrant which is, like, it's slow and it's expensive and it's not very smart and obviously nobody wants to use those models. There's the, it's very slow, uh, oh, and expensive but it's very smart and, uh, very capable, and that's where you're, you're like, "I'm gonna upload a 100-document Supreme Court brief and it'll give me this amazing analysis I can use to argue a case," or whatever, right? So high value, um, and it'll take a, it, it'll take a while to process and do it. And then there's the super-fast, super performant tends to just be these very specialized niche models for specific applications, and I think some of the state-space models tend to work very well for that, some of the SSMs, for very specific application areas. And then there is the last quadrant and, uh, you know, based on wh- which of those quadrants you're in, I think it really determines the type of things that you can build and some of, you know, the really fast, high performance tend to be more vertical focused or tend to be more focused on very specific types of tasks. And the really slow, you know, expensive ones that are actually very performant, you could imagine virtualized versions but it seems like the backbone for a lot of those are actually these very generalizable models where a big chunk of what you're getting is the reasoning and broader linguistic capabilities that you then apply to a domain.
And then, of course, there's stuff that people build on top of it in terms of orchestration layers and specialized bespoke things or route things at different models differentially relative to your use case, and it seems like everything that's, uh, that's quote-unquote "agentic" right now is basically doing that, you know, across customer success and code and you, you go through every domain that has, like, a specialized approach and they always have this, sort of, orchestration layer built on top. So, you know, I think, I think it's super exciting to watch all this stuff and I do think some of the applications and some of the, the less just purely linguistic domains may be interesting in the short run.

    3. SG

      I think going back to the question of, like, is the macro stressing you out, um, uh, this, there's, like, such a virtuous cycle in technology happening right now. This is actually quite dominated by the fact that M&A is alive again and so we're gonna have outcomes, but, like, to your point, there's exploding surface area of stuff that, um, these models can attack. Uh, you have, you know, research progress, people making different technical bets. You mentioned DeepSeek. I, I think model development and continued increase, like, of the, of, uh, just more aggressive use of reasoning and test time compute is quite expensive, and training continues to be more expensive. So I think the fact that there are now people trying to solve, um, data and scale and latency problems, like, that'll help everybody too.

    4. EG

      Do you know if it's true that the DeepSeek researchers are not allowed to leave China?

    5. SG

      I do not know if that is true. I think you, in any country, should want to hang on to your best talent but perhaps not restrict people's movement. Um, I think we should be trying to attract great talent here.

    6. EG

      We should keep all the AI researchers in the Mission District and just not let them leave.

    7. SG

      Somewhere between Mission and Dogpatch.

    8. EG

      (laughs)

    9. SG

      That's what... Yeah. Like, actually, we could just draw a line between our offices.

    10. EG

      They all have to go to Atlas Cafe every day.

    11. SG

      Let's talk through the, like, talent categories, actually for anybody who is

  7. 21:10–24:30

    What expertise will matter?

    1. SG

      not thinking about their kids in 10 years from now but just thinking about the, like, next two, three years. Like, what type of expertise is valued where you should stay between, you know, my office and Elad's office in the Mission and, uh, the Dogpatch? Okay. Like, you have researchers, you have infrastructure, scaling and efficiency. We welcome all of you. Um, hardware/software co-design, right? Ex- like, design, you know, the next generation TPU or whatever.

    2. EG

      There's a special visa for you to move into that region.

    3. SG

      Yes. We're, we're here to sponsor you.

    4. EG

      Visa program. (laughs)

    5. SG

      Yes. If you are ready to design chips to better handle sparsity or massive MoE models or something, like I, I've, I've got a visa campaign for you.

    6. EG

      (laughs)

    7. SG

      Kind of what you said, right? Like, anybody who has got deep, like, domain user understanding combined with the basic product engineering, it's not basic, but the product engineering sense for this, like, orchestration applied ML area, evals for agents, um, setting up RL environments, like, still very nascent area of, like, gather context, plan, make mod- m- like, a bunch of model calls, parallelize, verify, retry. Like this orchestration layer we described. All of that, we've got a visa program for you. We're thinking about naming it. We'll hire somebody to run it. It'll be great.

    8. EG

      We'll call it Gilingo.

    9. SG

      We're, we're gonna work on the marketing.

    10. EG

      I feel we're in the business as usual phase of AI. I think the stack is reasonably well defined and obviously it'll change and there'll be new things in it, but I feel like, um, if anything, the last couple months have been very clarifying in terms of the consolidation of the things that are, uh, short term crucial. And there's the model layer of that and all the various accouterments around agentic stuff and reasoning and et cetera. And obviously that will only accelerate and get dramatically better and it's on its own scaling curve. And then on the infrastructure layer, I think that's solidified a bit, right? I remember when RAG was a big deal about a new thing. You know, all these things are, I feel like are kind of falling into place, evals and how do you do them? And I think things are solidifying there with companies like BrainTrust and others. And then I feel like on the application layer side, I think I've bought into a notion we've been discussing for a year or two now around, you know, AI really starting to impact different services related industries and vertical applications and, uh, and different use cases. And then I'm starting to finally see some inkling of consumer stuff again. Um, and I think it's nascent and early but at least people are trying, you know? I feel like there was two or three years where nobody was really trying to do anything consumer, although one could argue that Perplexity and ChatGPT and Midjourney and all these, sort of, prosumer-y things were early consumer forays, right? And so maybe ChatGPT is the world's biggest new AI consumer product. I mean, Google was really the original one in some sense. It feels like a period of brief consolidation and in a handful of verticals I think we're starting to see some of the winners emerge. Um, and so I think, I think it's an interesting clarifying time and of course, you know, the, the thing I say about AI is that the more I learn, the less I know, right?
It's the only industry where I feel like the more I learn about the market...... the more confused I am, I feel like there's this brief moment of clarity and then I'm guessing in a year, all bets are off and (laughs) you know, all things, all sorts of things will scramble again. But, um, at least for now, it feels to me like a few things have kind of fallen into place, for at least temporarily.

    11. SG

      This actually, like, feels like a very comfortable time to invest for me because to your point, it feels more like a, I don't know, maybe it's like inning three instead of inning one where there's a little bit of stability in the ecosystem. There's r- a real goodness around, um, standardization, some standardization

  8. 24:30–26:30

    Anthropic’s Model Context Protocol

    1. SG

      of integration with different, like w- MCP I think is gonna accelerate a bunch of, uh, development for people. Like, oh, I'm meeting companies where they set up a data source that is useful to the enterprise in some way that, uh, these models can interact with well and they're like, "Oh, MCP server."

    2. EG

      Do you wanna quickly, um, explain to people Model Context Protocol and MCP and what that is and how it works?

    3. SG

      I'm gonna fudge this, but I will try to describe it. So, um, this is an attempt by Anthropic, came from, um, Ben Mann's group in Labs, uh, it's called Model Context Protocol, which is, uh, an attempt to spec out a standard interface for connecting, like, model capabilities to systems where you already have useful data, that could be, like, documents, it could be logging, it could be business tools, it could be, like, the IDE, whatever. And, um, Sam from OpenAI said, like, they're going to support it as well, and I think, um, this is not a complete solution. It has gotten a lot of popularity with developers over a very brief period of time, but it's just, like, how you expose your, your data to the model.

    4. EG

      And it's an open standard, so it's not proprietary, anybody can use it, and it's like a two-way connection between data sources and AI powered tools.

    5. SG

      Mm-hmm. And big companies have done it. Yeah, I think there's still a bunch of work for developers to do in terms of, like, describing, you know, their tools and how to use them very specifically and cleanly, but it, it does make it much easier and I think it will accelerate agent development a lot. But, you know, going back to this idea of, like, what does it mean for the ecosystem? I think the fact that you have, like, inc- like y- you're accelerating the ways for models to interact with existing ecosystems, we expect agents to get better, um, you have a bunch of choices around model availability. As you said, there's this, like, clear pathway about how to automate certain types of work that is orchestration of these capabilities, and, uh, I think that's gonna be super fertile.
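[Editor's note: to make the "standard interface" idea concrete, here is a toy sketch of the shape of an MCP-style server. The method names `tools/list` and `tools/call` follow the MCP spec's JSON-RPC conventions, but everything else here (the `TOOLS` table, the `handle` function) is a simplified stand-in for illustration, not the real SDK or the full protocol.]

```python
# A server advertises "tools" (data sources or actions) in a standard
# JSON-RPC shape, so any model client can discover and call them
# without a bespoke integration per data source.
TOOLS = {
    "search_docs": {
        "description": "Search internal documents",
        "handler": lambda args: f"results for {args['query']}",
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to the tool registry."""
    method = request["method"]
    if method == "tools/list":
        # Client asks: what tools does this server expose?
        result = [{"name": name, "description": tool["description"]}
                  for name, tool in TOOLS.items()]
    elif method == "tools/call":
        # Client invokes a named tool with arguments.
        tool = TOOLS[request["params"]["name"]]
        result = tool["handler"](request["params"]["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

The point of the standard is exactly this discoverability: a client that has never seen your enterprise data source can call `tools/list`, read the descriptions, and start calling tools.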

  9. 26:30–27:44

    Consumer applications

    1. SG

      Um, I do think it's very unclear what types of winning consumer experiences are possible here, there aren't, uh, consumer agents that don't look just like search or research, you know, in the large model products that are really working yet that I've seen, but I expect to see them this year. I'm excited about it.

    2. EG

      Yeah, I think it's, uh, cool stuff coming.

    3. SG

      When everything destabilizes, Elad and I will be back on No Priors, we'll talk to you all then.

    4. EG

      It's gonna get destabilized again, but I think it's a, it's a moment of calm, and, uh, calm is all relative, right? There's enormous innovation, huge changes coming, big technology waves, new things every week, but at least there's a little bit more of a view of, okay, who are going to be some of the main players in some of these areas and, you know, how do all these things fit together? So I, I think, um, we should enjoy the calm while, while it lasts for, you know, the next week or whatever it is, (laughs) the next few hours before the next thing drops.

    5. SG

      All right. Signing off, y'all.

    6. EG

      Good to see you.

    7. SG

      Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 27:44


Transcript of episode OABMBgztyRE
