YC Root Access

This Startup Is Trying To Solve The AI Memory Problem

While LLMs continue to evolve, they still struggle with memory. The startup Mem0 is working to change that by building the memory layer for AI agents. In this episode of Founder Firesides, YC’s Nicolas Dessaigne sat down with co-founders Taranjeet Singh and Deshraj Yadav to discuss why agents need persistent memory to improve over time, how Mem0 reduces cost and latency compared to naive context stuffing, and why memory must remain neutral across models as AI becomes more agent-driven.

Chapters:
00:05 What Is Mem0?
00:49 Traction & Open Source Adoption
01:24 Why Memory Improves AI Agents
02:01 Saving Cost and Latency
02:31 Founder Origins & YC Pivot
05:13 How Mem0 Works Under the Hood
06:04 Hybrid Memory Architecture
07:10 Custom Memory Rules & Expectations
08:00 Real-World Use Cases
10:05 Competing With Model-Native Memory
11:48 Fundraising & What’s Next

Nicolas Dessaigne (host) · Deshraj Yadav (guest) · Taranjeet Singh (guest)
Jan 23, 2026 · 17m · Watch on YouTube ↗

CHAPTERS

  1. Mem0’s mission: a memory layer for stateless LLM agents

    The founders explain Mem0 as an infrastructure layer that gives AI agents long-term memory, addressing the core limitation that LLMs are inherently stateless. Without memory, agents effectively “start from scratch” with each interaction, limiting personalization and improvement over time.

  2. Open-source traction and ecosystem distribution

    Mem0 describes rapid adoption through open source and integrations with major agentic frameworks. They share key growth metrics and position themselves as a widely used default choice for memory.

  3. Why memory improves agents: personalization that compounds over time

    They illustrate how memory turns agents into systems that learn a user’s preferences and behave more consistently across sessions. This enables applications to improve with usage rather than staying static.

  4. Cutting token cost and latency vs. dumping everything into context windows

    Mem0 argues that the naive approach of pushing all history into the prompt is expensive and slow. Their system selects only the most relevant information to include, reducing tokens and speeding up retrieval (a rough back-of-envelope comparison follows the chapter list).

  5. Founder story: repeated startup attempts, Tesla background, and teaming up

    The founders share their backgrounds: long-term collaboration since undergrad, one founder’s multiple prior startup attempts, and the other’s experience leading AI platform work at Tesla Autopilot. Their relationship and complementary skills set the stage for Mem0.

  6. YC application, pivot from Embed Chain, and the Sadhguru AI spark

    They recount entering YC with a different framing (Embed Chain/RAG) and pivoting toward “LLM statelessness” as the deeper problem. A viral consumer side project (Sadhguru AI) revealed the need for persistent memory, prompting a fast launch after YC feedback.

  7. Developer workflow: two primitives—write memory and search memory

    Mem0’s interface is described as two simple building blocks: adding memories and retrieving them. The system tries to extract what matters from user-level inputs and returns key context when a new conversation or task begins (an illustrative sketch of the two primitives follows the chapter list).

  8. Under the hood: hybrid memory datastore (KV + semantic + graph)

    They explain a hybrid architecture that classifies incoming unstructured data into multiple storage representations. Retrieval queries fan out to all three stores (key-value, semantic, and graph) and merge the results to balance accuracy and low latency in real time (a merging sketch follows the chapter list).

  9. Customization and “expectation problems”: rules in plain language

    Mem0 emphasizes that what counts as a “memory” varies by user and application. During onboarding and configuration, developers can specify in natural language what to store or ignore, and Mem0 converts this into operational rules in the pipeline (a hypothetical configuration sketch follows the chapter list).

  10. Use cases across industries—and a shift toward agent self-memory

    They outline broad applicability: anywhere an LLM app should improve with time, memory helps. They also note a new pattern: developers increasingly store memories about agents (their actions/decisions), not only about end users.

  11. Staleness and decay: hard expiration, exponential weighting, and domain-specific retention

    They discuss how memories can become outdated and how different applications need different forgetting strategies. Mem0 supports multiple decay mechanisms and configurable controls, including cases where certain preferences should never decay (a small scoring sketch follows the chapter list).

  12. Competing with model-native memory: neutrality and portability across LLMs/frameworks

    They position model-provider memory (e.g., OpenAI’s) as market education rather than a direct replacement. Mem0’s differentiation is decoupling memory from any single model vendor so developers can use multiple LLMs and retain control of read/write memory.

  13. Fundraising, hiring, and roadmap: “make it work, neutral, portable”

    They explain their $24M seed+Series A raise, how insiders doubled down based on traction, and what they’ll do next: hire and build the best memory product. Their longer-term vision is portable memory that works across a future of many agentic interfaces.

  14. Founder lessons: focus (DFS vs BFS) and conviction over the long haul

    They close with advice from their journeys—staying focused and going deep, while also acknowledging how a side project can unexpectedly create breakthroughs. They emphasize persistence and belief as key founder traits.

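ILLUSTRATIVE SKETCHES

The cost argument in chapter 4 can be made concrete with a rough back-of-envelope comparison. All numbers below are invented for illustration; they are not Mem0 benchmarks or figures from the episode.

```python
# Back-of-envelope comparison (all numbers invented for illustration,
# not Mem0 benchmarks or figures from the episode).

TURNS = 50                # assumed prior conversation turns
TOKENS_PER_TURN = 300     # assumed average tokens per turn
TOP_K_MEMORIES = 5        # assumed memories injected per request
TOKENS_PER_MEMORY = 40    # assumed average tokens per stored memory

full_history_tokens = TURNS * TOKENS_PER_TURN        # 15,000 prompt tokens every request
memory_tokens = TOP_K_MEMORIES * TOKENS_PER_MEMORY   # 200 prompt tokens every request

print(f"context stuffing:  {full_history_tokens} prompt tokens per request")
print(f"memory retrieval:  {memory_tokens} prompt tokens per request")
print(f"token reduction:   {1 - memory_tokens / full_history_tokens:.0%}")
```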
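
Chapter 7 describes Mem0’s interface as two primitives: write memory and search memory. The toy class below only illustrates that shape; it is not Mem0’s SDK, and the storage and retrieval it performs are deliberately naive stand-ins (verbatim text and keyword overlap instead of fact extraction and semantic search). Consult Mem0’s documentation for the actual API.

```python
# Toy stand-in for a two-primitive memory layer (hypothetical, not Mem0's SDK).
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """In-memory illustration of the add/search workflow."""
    memories: dict[str, list[str]] = field(default_factory=dict)

    def add(self, user_id: str, text: str) -> None:
        # A real memory layer would extract salient facts from raw input
        # before storing; here we keep the text verbatim.
        self.memories.setdefault(user_id, []).append(text)

    def search(self, user_id: str, query: str, top_k: int = 3) -> list[str]:
        # A real memory layer would do semantic retrieval; this toy version
        # ranks stored memories by naive keyword overlap with the query.
        words = set(query.lower().split())
        candidates = self.memories.get(user_id, [])
        ranked = sorted(candidates,
                        key=lambda m: len(words & set(m.lower().split())),
                        reverse=True)
        return ranked[:top_k]


store = MemoryStore()
store.add("alex", "Prefers vegetarian restaurants and hates cilantro")
store.add("alex", "Works as a data engineer, mostly in Python")
print(store.search("alex", "suggest a restaurant for dinner"))
```

The point of the shape, as described in the episode, is that an application only ever calls the two primitives; the extraction, consolidation, and ranking happen inside the memory layer.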
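
Chapter 8’s hybrid datastore can be pictured as a fan-out-and-merge retrieval step across the key-value, semantic, and graph representations. The hit format, scores, and merging rule below are assumptions made for illustration; they are not Mem0’s internals.

```python
# Hypothetical merge of results from three memory representations
# (key-value, vector/semantic, graph) into one ranked context.
# Hit format and scoring are invented for illustration.

def merge_hits(kv_hits, semantic_hits, graph_hits, top_k=5):
    """Each hit is a (text, score) tuple returned by one of the three stores."""
    merged = {}
    for text, score in kv_hits + semantic_hits + graph_hits:
        # De-duplicate across stores, keeping the best score for each memory.
        merged[text] = max(merged.get(text, 0.0), score)
    ranked = sorted(merged.items(), key=lambda item: item[1], reverse=True)
    return [text for text, _ in ranked[:top_k]]


# Example: results the three stores might return for "book a dinner spot".
kv = [("home city: Seattle", 1.0)]
sem = [("prefers vegetarian restaurants", 0.91), ("hates cilantro", 0.84)]
graph = [("partner -> allergic_to -> peanuts", 0.80)]
print(merge_hits(kv, sem, graph))
```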
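
One way to picture chapter 9’s plain-language rules is a developer-supplied instruction string that an LLM applies to each candidate fact before it is stored. The configuration shape, rule text, and prompt below are hypothetical; Mem0’s real configuration options will differ.

```python
# Hypothetical sketch of developer-supplied, plain-language memory rules.
# Config shape and prompt are invented for illustration, not Mem0's options.

MEMORY_RULES = """
Store: dietary preferences, product feedback, long-term goals.
Ignore: one-off small talk, anything that looks like a password or payment detail.
"""

def build_classification_prompt(candidate_fact: str) -> str:
    # In a real pipeline, an LLM call would apply the developer's rules to
    # each extracted fact and decide whether it becomes a memory.
    return (
        "You maintain long-term memory for an application.\n"
        f"Rules from the developer:\n{MEMORY_RULES}\n"
        f"Candidate fact: {candidate_fact!r}\n"
        "Answer STORE or IGNORE."
    )

print(build_classification_prompt("User mentioned they are vegan"))
```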
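
Chapter 11 mentions hard expiration, exponential weighting, and preferences that should never decay. Below is a minimal scoring sketch with an invented formula and invented parameters, not Mem0’s actual logic.

```python
# Hypothetical decay scoring: hard expiration, exponential down-weighting by
# age, and "never decay" pins. Formula and defaults are illustrative only.
import math

def decayed_score(relevance: float, age_days: float, *,
                  half_life_days: float = 30.0,
                  max_age_days: float | None = 365.0,
                  pinned: bool = False) -> float:
    if pinned:                                        # e.g. an allergy: never decays
        return relevance
    if max_age_days is not None and age_days > max_age_days:
        return 0.0                                    # hard expiration
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return relevance * decay                          # exponential down-weighting

print(decayed_score(0.9, age_days=60))                # older memory, down-weighted
print(decayed_score(0.9, age_days=60, pinned=True))   # pinned preference, unchanged
```

Different applications would tune the half-life and expiration per memory type, which matches the episode’s point that forgetting strategies are domain-specific.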