No Priors

No Priors Ep. 52 | With Pinecone CEO Edo Liberty

Accurate, customizable search is one of the most immediate AI use cases for companies and general users. Today on No Priors, Elad and Sarah are joined by Pinecone CEO Edo Liberty to talk about how RAG architecture is improving semantic search and making LLMs more reliable. By using a RAG architecture, Pinecone makes it possible for companies to vectorize their data and query it for the most accurate responses. In this episode, they talk about how Pinecone's serverless offering and its open-source Canopy framework make search over larger data sets more accurate, efficient, and cost effective, something that was almost impossible before there were serverless options. They also get into how RAG uniformly increases accuracy across models, how vector databases give customers "operational sanity" over their data sets, and hybrid search that combines keywords and embeddings.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EdoLiberty

Show Notes:
0:00 Introduction to Edo and Pinecone
2:01 Use cases for Pinecone and RAG models
6:02 Corporate internal uses for semantic search
10:13 Removing the limits of RAG with Canopy
14:02 Hybrid search
16:51 Why keep Pinecone closed source
22:29 Infinite context
23:11 Embeddings and data leakage
25:35 Fine-tuning the data set
27:33 What's next for Pinecone
28:58 Separating reasoning and knowledge in AI

Sarah Guo, host | Edo Liberty, guest | Elad Gil, host
Feb 22, 2024 | 31m | Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–2:01

    Introduction to Edo and Pinecone

    1. SG

      (techno music) Hi, listeners, and welcome to another episode of No Priors. Today, Elad and I are talking with Edo Liberty, the founder and CEO of Pinecone, a vector database company designed to power AI applications by providing long-term memory. Before Pinecone, Edo was the director of research at AWS AI Labs, and also previously at Yahoo. We're excited to talk about the increasingly popular RAG architecture and how to make LLMs more reliable. Welcome, Edo.

    2. EL

      Hi.

    3. SG

      Okay, let's start with, uh, some basic background. Can you tell us more about Pinecone for, for listeners who haven't heard of it? Like, what does it do and how does it differ from other databases?

    4. EL

      So Pinecone is a vector database, and what vector databases do very differently is that they deal with data that, uh, has been analyzed and vectorized, I'll explain in a second what that means, by machine learning models, by l- large language models, by foundational models and so on. Most large language models or foundation models, actually any models, really understand data in a numeric way. Models are mathematical objects, right? And when they read a document or a paragraph or an image, they don't save the pixels or the words, they save a numeric representation called an embedding or a vector. And that is the object that is manipulated, stored, retrieved, and searched over and, and, and, uh, operated on by vector databases very efficiently at large scale. Um, and that is Pinecone. When we started that, uh, category, people called me, concerned, and said, uh, "What is the vector and why are you starting a database?" And now, uh, I think they know the answer.
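
To make the "numeric representation" idea concrete, here is a minimal sketch in Python with made-up three-dimensional toy vectors (real embeddings have hundreds or thousands of dimensions). A vector database performs the same nearest-neighbor lookup, but approximately, and at billion-vector scale:

```python
import numpy as np

# Toy "embeddings": in practice these come from a model and have
# hundreds or thousands of dimensions, not three.
docs = {
    "doc-a": np.array([0.9, 0.1, 0.0]),
    "doc-b": np.array([0.0, 0.8, 0.6]),
    "doc-c": np.array([0.7, 0.3, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: how aligned two vectors are, ignoring length.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, k=2):
    # Brute-force nearest neighbors; vector databases use approximate
    # indexes (graphs, clustering) to do this over billions of vectors.
    scored = [(doc_id, cosine(query_vec, v)) for doc_id, v in docs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

print(search(np.array([0.8, 0.2, 0.05])))  # doc-a and doc-c rank highest
```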

    5. EG

      Uh, how did you think about this early on? 'Cause you started the company in 2019. At the time, this wave of generative AI hadn't happened quite yet. And so I was wondering what applications you had in mind

  2. 2:01–6:02

    Use cases for Pinecone and RAG models

    1. EG

      given that there's so much excitement around Pinecone for the AI world. The prior AI world had a slightly different approach to a variety of these things, and I'm just curious, like, were you thinking of different types of embeddings back then? Were you thinking of other use cases? Like, what was the original thinking in terms of starting Pinecone?

    2. EL

      The tsunami wave of AI that we're going through right now, uh, didn't hit yet. But, uh, in 2019, the earthquake had already happened. Deep learning models and, and so on had already been grappled with. Large language models and transformer models like BERT and others started being used by the more mainstream engineering cohorts. You could already kind of connect the dots and see where this is going. In fact, before starting Pinecone, I myself had founder anxiety between, "Are we already too late?" versus "Nobody knows what the hell this is, and we're way too early." And it took me several months of, like, wild swings between those two things until I figured maybe the fact that I have those (laughs) too early, too late mood swings maybe means it's exactly the right time.

    3. SG

      Maybe, Edo, you can actually just expand a little bit about, um, you know, in what, what use cases people want to use embeddings, right? I, I think there are ways to interact directly with language models, and then reasons, for, for example, reliability or con- context length, and performance, that people, um, interact with embeddings in, like, a RAG architecture or in semantic search. So maybe you can sort of talk about some of the driving use cases.

    4. EL

      I mean, the obvious way, in some sense, to add knowledge to your conversational agent, whether it's chat or what have you... Genere- w- we talk about it as generative AI now, but it's, it's much more general than that, is to, again, not shockingly bring the relevant information into the context, right? So that you can actually, uh, arm the, the foundational model with, with the right pieces of content, with text, with images, with what have you, right? You want to be able to retrieve that from a very large corpus of knowledge that you have, whether it's your own company's data or whether it's the internet or what have you. It so happens that LLMs are already very, very good at representing data in the way that they want to consume it, which is these embeddings. And so you can, at question time, in real time, okay? Or at the time of the interaction, go and find relevant information. And relevant might be associated with, or correlated with, or something that is, is, uh, similar to whatever it is that you're being asked about. And once you bring that into the context, you can now, uh, give, uh, much more accurate, accurate des- uh, answers, right? Um, and as an, uh, you know, as a side experiment, we actually loaded what's called Common Crawl, which is the top internet pages crawled fairly frequently. We loaded that into Pinecone and saw what happens when you augment GPT-3.5 and 4 and LLaMA and Mixtral and models from Cohere, Anthropic. And you could see that if you augment all of them with RAG, on, even on the internet, which is data that they were trained on, you can reduce hallucinations significantly, up to 50% sometimes. Interestingly enough, many of them actually start behaving quite similarly in terms of level of accuracy, even though without RAG, they, they actually have quite different behaviors. So it's sort of both like a, an- a uniform improvement and a little bit of leveling the playing field. Now, you know, because we know we can do that very well now, now you can do that also with proprietary data, with com- data inside your company and so on, stuff that, of course, is not available on the internet and stuff that those models were never trained on. And interestingly enough, again, the quality ends up being incredibly high.
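
A minimal sketch of the retrieve-then-generate loop described here, using the OpenAI and Pinecone Python clients as of early 2024. The index name "knowledge-base", the embedding model, the chat model, and the "text" metadata field are illustrative assumptions, not details from the episode:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("knowledge-base")

def answer(question: str) -> str:
    # 1. Embed the question with the same model used for the documents.
    q_vec = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. At question time, retrieve the most relevant chunks.
    hits = index.query(vector=q_vec, top_k=5, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in hits.matches)

    # 3. Hand the retrieved text to the model as context; inference only.
    resp = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the context below.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```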

    5. SG

      I assume

  3. 6:02–10:13

    Corporate internal uses for semantic search

    1. SG

      most Pinecone users are not, you know, using LLMs and retrieving against general internet data. Like, what kinds of companies were your earliest or- or biggest users? Like, what kind of data do they want to retrieve against?

    2. EL

      So most companies do use their own company data. Um, it could be whatever it is, depends on the application they're building. Could be legal data, medical records, internal wiki, uh, information, sales calls, uh, you name it. There's- there's an infinite variety. I wanna say that this is just RAG. I mean, this is just semantic search. I mean, there are many other applications that we didn't talk about but, uh, we can keep it, uh, focused on this application for this conversation.

    3. SG

      And is it- is it dominated by a specific use case? Like, were there customers that you feel like really represent the Pinecone use case well?

    4. EL

      Yeah, 100%. Uh, first, uh, text is probably most of what we see. Uh, nowadays, models are really good at images and so on, but, uh, text is still the predominant data type. Notion Q&A now runs on- on Pinecone and they serve essentially question answering with AI, uh, to tens of thousands, um, probably hundreds of thousands of- of their own customers. Uh, Gong does the same thing with sales calls. Again, serves all of their use cases for all of their customers and so on. So one of the most common patterns is companies that themselves become trailblazers and innovators with AI, and they themselves hold a lot of their own users' or customers' text, and they wanna search over it or generate information on top of it. Uh, that ends up being an incredibly common pattern.

    5. EG

      I- I guess earlier, um, this month, uh, one of the things that Pinecone announced was the serverless offering called Canopy. Could you tell us a little bit about why you decided to go down this serverless direction and how you view that in terms of either use cases or adoption or other things?

    6. EL

      So Canopy is actually an open source framework that we put out there for people to learn how to use RAG. Pinecone serverless is just, well, Pinecone. It's just Pinecone but serverless. Uh, what, uh, it does is basically remove the limits from, uh, what people used to experience before. Um, when we started Pinecone, a lot of the applications had to do with recommendation, uh, engines and- and anomaly detection and other- and other problems where usually the scale was actually fairly small, uh, and the requirements had to do with super low latencies and sometimes high throughput. And- and as a result, you still see a lot of databases kind of play in that field. Uh, we very quickly figured out with our own customers and our own experimentation that something else is much more significant, which is just the s- scale and cost. If you want to be able to answer correctly, you just have to know a lot. If you want to do that, you have to ingest hundreds of millions, billions, sometimes tens of billions of- of vectors into your own... into your vector database, and you want to query it efficiently in terms of cost. You just don't want... you know, you don't want that to explode, uh, in terms of, again, spend. And finally, you want to do that easily, so you don't want to spend weeks and months setting things up and- and getting it to work. And doing that in our old architecture, and frankly with any other architecture today that's not serverless, is- is very difficult. And serverless is here to basically resolve those main problems. It's incredibly easy to operate. It scales massively. I mean, again, uh, there's no theoretical limit to how much it can scale. We've tested it with tens and... uh, tens of billions with live customers and live traffic. And I'm not gonna go into the architectural design, but it's actually designed to be incredibly efficient, like asymptotically better than- than, you know, what can be done with any other architecture. It's fundamentally about removing the... all limits so people can actually have all the information they need, uh, ready for the foundational models.

  4. 10:13–14:02

    Removing the limits of RAG with Canopy

    1. SG

      You mentioned Canopy is, um, to help enable more people to build RAG products. Like, where do you... where do you see developers or your customers struggle to get embedding-based AI products generally successful? Or what were you... what were you trying to achieve with- with Canopy?

    2. EL

      Yeah. So v- vector databases, and Pinecone specifically, are very foundational... are very foundational pieces of technology. We're, uh, we're very deep in the stack, and to build a... you know, a proper full end-to-end solution, say like Notion Q&A, there's quite a lot that you have to build on top of it. You have to ingest documents and- and what's called chunk them, you have to figure out how to break them into, like, factoids and pieces of information. You have to embed everything with models, you have to ingest them into the vector database. They... you know, when you get a query, you have to figure out how to manipulate it and how to embed that, you have to search over it, you have to re-rank, you have... you know, there's- there's a lot. There's a whole system you have to build around it. And a lot of people told us that this is actually quite complex, and they're right. Right? We put out Canopy as really an example, at least an end-to-end kind of cookbook. If you just take this, it should work. You should probably... once it works, you should figure out how to make it better for your own application, right? Because of... you know, medical data is not Jira tickets, you know, and Jira tickets are not Slack messages, and you might be building a different product. But at least you have some end-to-end starting point that- that already does something and you can start improving on.
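
The ingestion half of that pipeline (chunk, embed, upsert) might look roughly like the sketch below. The fixed-size chunking, model name, and metadata layout are illustrative assumptions; Canopy and similar frameworks use smarter, structure-aware splitting:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("knowledge-base")

def chunk(text: str, size: int = 800, overlap: int = 100):
    # Naive fixed-size chunking with overlap; real pipelines tune this
    # per data type (medical records vs. Jira tickets vs. Slack).
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def ingest(doc_id: str, text: str):
    pieces = chunk(text)
    # Embed all chunks in one batch; results align with the input order.
    embs = openai_client.embeddings.create(
        model="text-embedding-3-small", input=pieces
    ).data
    # Store each chunk's vector plus the raw text for retrieval later.
    index.upsert(vectors=[
        {"id": f"{doc_id}-{i}",
         "values": e.embedding,
         "metadata": {"text": p, "doc_id": doc_id}}
        for i, (p, e) in enumerate(zip(pieces, embs))
    ])
```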

    3. SG

      Two of the, I think, the most common s- comparison points for, um, vector databases that people use are, A, like, traditional databases, right? Like, why not just use PostgreSQL with pgvector or some index, uh, associated with an ex- existing database? Or, um, B, like, sort of more traditional search, uh, or incumbent search technologies or services like Elastic or Algolia. Can you... can you talk about, like, you know, why not other databases or, like, how you think about traditional search?

    4. EL

      Yeah. I'll just go back to the fundamentals, about what are you trying to achieve, right? And what we're trying to achieve is to give as much context and as much knowledge to foundational models as possible. Do that easily at scale, uh, you know, on a budget, uh, get to a unit economics that actually works for your product, which is incredibly hard to do with AI, with, like, many, uh, uh, discussions going on about that now. Um, those other products don't work. They don't work either because they don't scale in terms of the, uh, efficiency, uh, scale, cost, the trade-offs that they can offer, or because they're not designed to do this. They're designed to do something else. They kind of thought about the vector index as a, as a bolt-on, you know, retrofitted fe- feature, and so yes, it works at small scale, but when you try to actually go to production with it, you, you, you understand the limitations. With other search technologies, this is again, this is the wrong search mode. If you're searching with keywords, you're just not finding the relevant information, because the embeddings, the, the contextual space in which these pieces of text, documents, or images live, is vector space, high-dimensional numeric space, not keyword space. And like everyone that's ever searched their inbox for an email you know for a fact you have, uh, and not found it knows that keyword search is a deeply, uh, flawed retrieval system.

    5. SG

      I'm just curious if customers or, you know, developers are trying to combine the existing search systems they

  5. 14:02–16:51

    Hybrid search

    1. SG

      have. I know you also are increasingly supporting hybrid search, so kind of wanted to understand that. Where are embeddings, like, amazing and useful and, like, delivering new experiences and where they're not enough or not, like, the, the full experience that end users want?

    2. EL

      So it's interesting. Uh, our research actually shows that when you do this well, we, you very rarely need keywords alongside embeddings. But getting embeddings to perform perfectly is, is actually, it could be quite intricate, and we find that it's very convenient to have, um, keywords alongside embeddings and to score those things together. We call this hybrid search, and in fact, we made this even more general. And we said, okay, why not, you know, keywords under the hood are actually represented as, as sparse vectors. Uh, that's true of any keyword search, by the way. This is not, this is just how, kind of the ma- they're mathematically identical. And then we said, why don't we just make this more general and just say, hey, you can give either sparse or dense vectors or both of them and kind of have the buil- best of both worlds, and people find that very convenient. Uh, and so I'd highly encourage people to look at it and, uh, improve, you know, by boosting and all sorts of other tricks that you can bake into sparse vectors, including keywords. My guess is that that's not gonna be the dominant mode of, of search in the very near future.
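
A hybrid query along these lines, sketched with the Pinecone Python client: one request scores a dense embedding and a sparse, keyword-style vector together. The vectors here are toy numbers; in practice the sparse side comes from a BM25- or SPLADE-style encoder, and the index is assumed to use the dot-product metric:

```python
from pinecone import Pinecone

# Hypothetical index created with the dot-product metric, which is what
# combined sparse-plus-dense scoring generally assumes.
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("hybrid-index")

alpha = 0.8  # weight on the dense (semantic) signal vs. the keywords

dense = [0.1, 0.2, 0.3]                  # toy embedding
sparse = {"indices": [10, 45, 160],      # toy term ids (e.g. hashed tokens)
          "values": [0.5, 0.5, 0.2]}     # term weights (e.g. BM25 scores)

results = index.query(
    vector=[v * alpha for v in dense],
    sparse_vector={"indices": sparse["indices"],
                   "values": [v * (1 - alpha) for v in sparse["values"]]},
    top_k=5,
    include_metadata=True,
)
```

The `alpha` scaling is one common way to implement the "boosting" levers mentioned here: a single convex combination that trades the semantic signal off against the keyword signal.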

    3. SG

      You think, as we progress, like, you think hybrid search is a, like, more temporary convenience?

    4. EL

      I mean, I think it'll be used for boosting and other types of levers to control your search. I think the mode of you baking keywords into that is going away, yes.

    5. SG

      And, uh, maybe just, uh, going back to, like, the traditional database market, like, why not in my Postgres or my Mongo or whatever I'm using already?

    6. EL

      Again, I mean, uh, uh, we, we see this in the market a lot. People tell me, "Hey, I already use tool X or database Y, and why not?" And frankly, oftentimes, when it's some tiny workload or just learning how to use, uh, embeddings for the first time and so on, it might actually work okay. It's when people try to actually do something in production, they're trying to scale up, they're trying to actually push the envelope, or they're trying to launch a product that needs to have some unit economics attached to it that makes sense for the, for that product, that's where people run into huge problems. And so, uh, many of them just, you know, start with us to begin with. To be honest, a lot of them are enthusiasts, and they actually kind of enjoy learning, uh, how to use a new kind of database, and our u- you know, user experience is smooth enough, and, you know, uh, there's so many tutorials and notebooks and examples that they actually find it exciting. But I guess some don't, and that's, that's, that's fine.

  6. 16:51–22:29

    Why keep Pinecone closed source

    1. SG

      So maybe one more on database dynamics. Pinecone is closed source. It's gotten great adoption, uh, but many databases in, like, you know, mature markets are open source. How'd you think about this decision, and has that, has that been an issue for you?

    2. EL

      Uh, I'll say that most databases started before cloud was really, uh, a fully mature product or market or, you know, platform, okay? And so the, the, that was the precursor to PLG essentially, or whatever. It was PLG, right? It was s- you know, it was, that, that was the only way to put a technically complex product in the hands of engineers was to open source it, right? And you see, I think all, I mean, maybe not all, but m- definitely the larger databases that are open source out there, I think that's the reason they did that. When we started Pinecone, we asked, you know, the very basic question of why, why do people open source the platform, right? Um, one of it was to earn trust, one of them was to get contribution from the community, and one of them was a channel to, you know, users. And we figured we can earn trust by being excellent at what we do and providing an amazing service. We don't, uh, need external contribution, and in fact, if you look at statistics, even companies that ha- are open source, 99% of the contributions are actually from the company itself. No, not 99, but-... high 90s, and so that doesn't actually make a huge difference. And i- in terms of experience, we figured that we can actually provide a much better experience and much better access to the platform than what open source does, and Pinecone is a fully managed and multi-tenant service, and to be able to run that at scale and provide the cost tray, the cost/scale trade-offs, we actually run a very, very complicated system, and in s- in some sense, even if we gave it as open source to somebody, they wouldn't know what to do with it. It would be a Herculean effort to even run this thing. The right decision was basically that w- we should offer this as a service. We should manage it end-to-end, and as long as you give people a fully reliable interface and you keep doing that year after year, you earn the trust and the ease of use, uh, so that open source becomes in- i- I hope not- not an issue.

    3. EG

      I- it's funny 'cause two an- two anecdotes around- along those lines, um, I remember talking with, uh, I think it was Ali from Databricks, and he said that if you can avoid doing open source, you should. You know? (laughs) He felt like it was an incremental challenge because you get distribution through open source but then you have to figure out the business model and so he viewed it as, like, y- you know, I think the- the analogy he uses is, like, making an open source project work is like hitting a hole-in-one in golf and then you pick up a baseball bat and you have to hit a grand slam because then you have to do the second act to make sure the thing actually works as a company.

    4. EL

      (laughs)

    5. EG

      (laughs)

    6. EL

      That's right. No, I mean, I agree 100%. I mean, this is exactly what we're experiencing, and, in fact, we- we already see it, even though the new players in the vector database space that- that- that basically started to try to take us down all took the open source angle. We already see them, even young as they might be, they are already strug- struggling with the open source strategy. Serverless is the fourth comp- almost complete rewrite of the entire database at Pinecone.

    7. EG

      Mm-hmm. Yeah, the one other thing that's coming in terms of the LLM world, which may or may not impact you, and I'm sort of curious how you think about it, is increasingly long context windows for foundation models. Does that change how people interact with embeddings in vector databases or does it not really impact things much? There's things people are talking about in terms of infinite context or other things like that.

    8. EL

      Look, I mean, I don't know what "infinite context" means, to be honest.

    9. EG

      It's, like, very big.

    10. EL

      (laughs)

    11. EG

      (laughs)

    12. EL

      (laughs)

    13. EG

      It's infinite! It's, like, huge!

    14. EL

      Oh, oh. I got it.

    15. EG

      It never ends.

    16. EL

      Thank you.

    17. EG

      Yeah, yeah.

    18. EL

      All right.

    19. EG

      You're welcome.

    20. EL

      Ah.

    21. EG

      Yeah. (laughs)

    22. EL

      I should take a note. F- first of all, those companies sell, uh, their services by the token, so the fact that they allow you to b- use infinite context windows is not shocking, okay? Uh, that's good for business. The second thing is there- there's plenty of evidence that increasing the context size doesn't actually improve results unless, you know, you do this very carefully, right? So just what's called context stuffing is not helping. You just pay more and don't actually get much for it. And the last thing, i- that, even that, even if you- you kind of buy into the- the- the- the marketing, th- that runs its course, right? If you're... it's like saying, "Oh, I don't need Google 'cause I can... e- every time I query Google, I can send the internet along with my query," right? It's like, yeah, (laughs) that... well, theoretically that's maybe possible, but clearly, practically, that's- that's not feasible, right? So at some point the context window just becomes gigabytes and gigabytes and gigabytes of data, like terabytes. I mean, when do you... where do you stop, right? And so already today we have users who use not ev- very large models, you know, maybe a few billion parameters, and the vector database next to the model contains trillions of parameters, right? And they get, you know, much better performance that way, right? Just a- attaching

  7. 22:29–23:11

    Infinite context

    1. EL

      all the context to everything you do I think runs its course very, very quickly.

    2. EG

      Mm-hmm.

    3. EL

      And it's also unnecessary, to be honest.
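
A toy back-of-envelope calculation of the pay-by-the-token point; the per-token price and corpus size here are purely hypothetical, for illustration only:

```python
PRICE_PER_1K_INPUT_TOKENS = 0.01   # hypothetical rate, not any vendor's

corpus_tokens = 50_000_000         # "context-stuff" an entire corpus
rag_tokens = 5 * 500               # top-5 retrieved chunks, ~500 tokens each

cost_stuffing = corpus_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
cost_rag = rag_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
print(f"per query: stuffing ${cost_stuffing:,.2f} vs. RAG ${cost_rag:.4f}")
# per query: stuffing $500.00 vs. RAG $0.0250
```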

    4. EG

      Yeah, I guess related to that, another place where people have been talking about, um, embeddings and vector databases is in, uh, sort of aspects of personalization and privacy, and I'm a little bit curious how you think about that because, you know, one of the- the risks people view as running an LLM over a large data corpus or fine-tuning it against a specific company's data is the issue of data leakage. You know? Say, for example, you're- you're an HR company and you don't want different people's salaries to leak across an LLM because you're using it as, like, a chatbot to help you with context regarding your own personal data in an enterprise or things like

  8. 23:11–25:35

    Embeddings and data leakage

    1. EG

      that. Can you talk a bit more about how embeddings can provide personalization and in some cases potentially other features that may be attractive to- to enterprises?

    2. EL

      Yeah. So tha- that's a very common and reasonable thing to be concerned about. Uh, data leakage can happen in- in two main ways. A, if you use a service for your foundational model that- that frankly, uh, retrains their models with your data or records it, right? Or saves it in some way that is opaque to you, right? That is a huge, uh, problem and I think a lot of people are... a lot of people are struggling with that. The second is, if you're building an application in-house, whatever it might be, and you fine-tune your models on added data, that added data might end up popping where it shouldn't, uh, in answers to, you know, other people's questions or whatever. What people do with vector databases is actually incredibly simple, right? You don't fine-tune a model on your own proprietary data, at which point you know for a fact it doesn't contain any proprietary data 'cause it's never seen any of it, okay? And then at retrieval time or at- at-... you know, whenever you, you apply the, uh, the chat or the agent, you retrieve the right information from the database, give it as context to the model, but only do inference. You don't actually retrain and you don't save that interaction. At which point, that data doesn't exist anywhere. It's like a, an ephemeral thing. And the added benefit of that is, by the way, that you can be GDPR compliant. You can actually delete data. So if, if, you know, so, you know, if you're a- if you're a company, uh, like a legal company and somebody deletes documents, you can just delete it from the vector database and that information will never be available to, uh, your foundational model again. So you don't even have to, uh, devise some complex mechanism for forgetting. You just don't know it anymore. One of the main reasons why people attach vector databases to, uh, foundational models is it gives you this operational sanity, uh, that is almost completely impossible without it.
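
Mechanically, the "forgetting" described here is just a delete call against the vector database. A sketch with the Pinecone Python client, where the chunk-id scheme is an illustrative assumption:

```python
from pinecone import Pinecone

index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("knowledge-base")

# Suppose a legal document was ingested as chunks contract-123-0,
# contract-123-1, ... Deleting those vectors removes the information
# from every future retrieval; since the model was never fine-tuned on
# the data, nothing else has to be "untrained."
chunk_ids = [f"contract-123-{i}" for i in range(12)]  # known chunk count
index.delete(ids=chunk_ids)
```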

    3. EG

      Mm-hmm. That's interesting. Yeah, I guess, um, it feels like there's three different approaches

  9. 25:35–27:33

    Fine-tuning the data set

    1. EG

      that people are using and they're not mutually exclusive for models that kind of overlap in terms of what the hoped-for output is. Uh, one is really changing or engineering prompts, uh, or adding more information into the prompt. The second is fine-tuning. And the third would be RAG/different aspects of embeddings or other approaches like that. How do you think about fine-tuning in this context? Like, when should you fine-tune versus, you know, use some of the approaches that you've talked about earlier?

    2. EL

      I can answer both as a scientist and as a business owner, right? As a scientist, I'm all for fine-tuning. We have all the evidence to show that, done right, it helps tremendously. As a business owner, I can tell you that it's actually extremely hard to do, t- to do well. I mean, this is something that unless you have the research team and the AI experts that know how to fine-tune, you might actually make things significantly worse, okay? So there is, there is nothing that says that more data is gonna make your model do better. In fact, it oftentimes gets, uh, it regresses to something significantly worse. With prompt engineering, again, I think it's necessary, especially when you build applications, you want the response to, you know, uh, conform to some format or, or have some property, I think that's, that's a given, you should do that. It runs its course after a while. I mean, it's, in some sense, you get what you get. It's necessary but th- there's a limit to what you can do with that. And RAG, I think, is, is incredibly powerful, but like I said before when we talked about Canopy, that's not... You know, that's not simple either. I mean, it's simpler than the other ones but still requires work and understanding, experimentation and so on. This is almost a, a hallmark of a nascent market, when the simplest solution is still, uh, somewhat complex.

  10. 27:33–28:58

    What’s next for Pinecone

    1. EL

      (laughs)

    2. EG

      Mm-hmm. Yeah, makes sense. What's next for Pinecone? Or what are, what are some major things coming that you'd like to talk about?

    3. EL

      So, I mean, th- there's a ton. We are... We're an infrastructure company, uh, and so we obsess about ease of use and security and, and stability and cost and scale and performance. Also, as a, an engineer at heart, I am, I'm very excited about those things, and all of that is coming. Again, serverless is becoming faster, bigger, better, uh, more secure, m- easier to use, and we're starting to really grapple with, uh, what very large companies and very, you know, kind of trailblazing tech companies are going through. I said that getting AI to be truly knowledgeable is still complex. I, I think we're starting to grapple with deeper issues that, that the entire information retrieval community has been dealing with for about 40, 50 years now. We're starting to see those, you know, come to the fore in, in, uh, in RAG and in AI in general.

    4. EG

      I guess putting aside, um, Pinecone and sort of the database world and everything else, what, what are you most excited about in terms of what's coming next in AI?

    5. EL

      It's, it's, it's hard to say. I, I really do wanna see a... A, a distillation in some sense of foundational models,

  11. 28:58–31:28

    Separating reasoning and knowledge in AI

    1. EL

      and by distillation, I, I know, it's, it's a... I don't mean what usually people say when... "There's distillation of models." I don't mean that. I mean the separation of, of reasoning and, and, and, and knowledge, right? Foundational models get it fundamentally wrong. When we learn how to build the subsystems of AI correctly and for each one of them to do their roles optimally, either we're gonna be able to achieve the same tasks much cheaper, faster, better, or we're gonna still want to use the same amount of resources but achieve much more. What happens today is that we, we have very crude tools and we try to use everything for everything. Uh, delightfully or shockingly enough, depending on who you are, that kind of works. I mean, we, we've found these, like, very, very efficient and very general purpose tools, right? But they're still very general purpose. They're still super blunt instruments. Again, as a technologist or as somebody who cares deeply about how things are built, I, you kind of see the inefficiency and it hurts the brain to figure out that, you know, we take, uh, half the internet and cram it into GPU memory. I'm like, "Holy... W- w- why?" (laughs) Like, this can't be the right thing to do. Um, so I, I'm very excited about us as a community truly understanding how the different components interact and how to build everything much more, in some sense, correctly. And I hope, uh, we get to build some, uh, exciting products, uh... By we I mean the community gets to build some exciting products this year. I think we're gonna see a year of a lot of experimentation that, uh, uh... that people went through last year, they're gonna take to production to build cool products this year, and I can't... I can't wait to see what that looks like. I, I, I have a feeling that this year's gonna be very, very exciting for consumers of AI.

    2. EG

      Yeah, I totally agree. It's a, it's a very exciting year ahead so thank you so much for joining us today.

    3. NA

      Thanks, Edo.

    4. EL

      Thank you, guys.

    5. NA

      Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 31:28
