YC Root Access

The Q/A Layer for the AI Coding Era

In this episode of Founder Firesides, YC Managing Partner Harj Taggar talks to Weiwei Wu and Jeff An, co-founders of Momentic (W24), who just raised a $15M Series A. Momentic is the verification layer for software: an AI-powered testing platform that impersonates end users to catch bugs before they ship. The platform powers companies like Notion, Quora, and Built, running over a million tests a day. They discuss why the explosion of AI-generated code makes testing more critical than ever, and their vision for a future where engineers write specs, not code.

https://momentic.ai
Apply to Y Combinator: https://www.ycombinator.com/apply
Work at a startup: https://www.ycombinator.com/jobs

Harj Taggar (host) · Weiwei Wu (guest) · Jeff An (guest)
Mar 23, 2026 · 33m · Watch on YouTube ↗

CHAPTERS

  1. Momentic in one sentence: the verification layer for software

    Harj introduces Weiwei Wu and Jeff An, co-founders of Momentic, and they define the company’s core mission. They frame Momentic as a “verification layer” that helps ensure software works as intended at scale, already powering large product teams.

  2. Why raise a $15M Series A now, and why Standard Capital

    Weiwei explains the timing of the Series A as driven by reaching a repeatable sales motion and the need to scale both engineering and go-to-market. He also describes the fundraising process and the specific appeal of Standard Capital's peer-group model.

  3. Testing 101 for non-engineers: why it exists and why it’s painful

    The conversation grounds what “testing” means: ensuring code changes don’t break an increasingly complex application. Jeff shares firsthand experience at Robinhood trying (and failing) to enforce high test coverage and pass rates, highlighting why engineers resist testing work.

  4. Code generation accelerates shipping—verification becomes the bottleneck

    Harj points out that AI coding tools are increasing the volume of code shipped daily. Weiwei argues this creates a new bottleneck: proving the code works in production beyond linting and review.

  5. Where linters and code review stop, and functional testing begins

    Weiwei explains linters (style/pattern checks) and code review (human or AI) as upstream checks. Momentic is positioned as the downstream, user-perspective validation layer that confirms real flows work, avoiding reliance on slow pre-release “bug bashes.”

  6. How Momentic fits into the dev stack: functional tests and agent tool-calls

    Momentic runs functional tests by impersonating users and exercising real flows in the product. Jeff describes integrations (e.g., MCP) where coding agents can call Momentic during development to write/run tests and verify changes via a real browser session.

  7. Why generic browser agents fall short: speed, complexity, and debuggability

    Jeff contrasts Momentic with general-purpose browser agents: they’re slow, not optimized for testing, and hard to debug. Momentic optimizes interaction speed, supports complex UIs (rich text, drag/drop, canvases), and provides better failure diagnosis.

  8. The future dev stack: less code review, more specs + external truth

    Jeff predicts code review of implementation details will matter less as models improve, with code becoming a commodity. Engineering shifts toward writing requirements/specs and validating outcomes—creating demand for an independent verification source like Momentic.

  9. Truth-driven (spec-driven) development: specs as the real source of truth

    Weiwei lays out two philosophies: code-as-truth versus spec-driven (truth-driven) development. He argues production code can't be the source of truth because it contains bugs; instead, detailed specs (flows, success criteria, edge cases) should be the truth, with Momentic enforcing them.

  10. Why Momentic must be a standalone system: maintenance and evolving truth

    Weiwei and Jeff argue verification can’t live solely inside coding agents. Momentic provides an independent source of truth and a system that maintains tests over time, avoiding massive brittle test suites and continuously adapting as the product changes.

  11. Notion case study: from Selenium + manual testing to massive automated coverage

    Weiwei tells the origin story: a Notion engineer tweeted a desire for plain-English testing, Momentic was recommended, and Weiwei onboarded them the same night. Notion transitioned from manual testing and flaky Selenium suites to Momentic tests that gate merges at high volume.

  12. Measuring ROI: dev hours saved vs preventing regressions and SEVs

    Weiwei describes how customers quantify value, from direct engineering time savings to the more meaningful north star: incidents prevented from reaching users. The emphasis is on reliability outcomes, not just faster test authoring.

  13. Roadmap, hiring, and culture: expanding platforms and staying adaptable

    They outline product expansion (mobile/desktop support) and a focus on developer experience and deep workflow integration. They discuss hiring for adaptable, high-ownership engineers with product intuition, and a culture centered on radical candor and strong team processes.

  14. Founder origins, teaming up, YC journey, and what keeps them driven

    Weiwei and Jeff share how they moved from other career paths into engineering, then met through a mutual connection and decided to merge efforts. They recount applying to YC with only a prototype and early pilots, discuss early challenges (talent and fast-changing AI landscape), and close with ambition about Momentic’s impact and competitive drive.
