No Priors Ep 56 | With Baseten CEO and Co-Founder Tuhin Srivastava
Episode Details
EPISODE INFO
- Released
- March 21, 2024
- Duration
- 38m
- Channel
- No Priors
- Watch on YouTube
EPISODE DESCRIPTION
At a time when users are asked to wait unthinkable seconds for AI products to generate art and answers, speed is what will win the battle heating up in AI computing. At least according to today’s guest, Tuhin Srivastava, CEO and co-founder of Baseten, which gives customers scalable AI infrastructure, starting with inference. In this episode of No Priors, Sarah, Elad, and Tuhin discuss why efficient-code solutions are more desirable than no-code, the most surprising use cases for Baseten, and why all of their jobs are very defensible from AI.
Show Notes:
- (0:00) Introduction
- (1:19) Capabilities of efficient-code enabled development
- (4:11) Differences between training and inference workloads
- (6:12) AI product acceleration
- (8:48) Leading on inference benchmarks at Baseten
- (12:08) Optimizations for different types of models
- (16:11) Internal vs. open source models
- (19:01) Timeline for enterprise scale
- (21:53) Rethinking investment in compute spend
- (27:50) Defensibility in AI industries
- (31:30) Hardware and the chip shortage
- (35:47) Speed is the way to win in this industry
- (38:26) Wrap
SPEAKERS
- Sarah Guo (host)
- Tuhin Srivastava (guest)
- Elad Gil (host)
EPISODE SUMMARY
In this episode of No Priors, Baseten CEO Tuhin Srivastava discusses how his company provides fast, scalable AI inference infrastructure for teams deploying large models, emphasizing "efficient code" over no-code abstractions. He contrasts training and inference workloads, explaining why inference is more repeatable, SLA-driven, and reliability-sensitive. The conversation covers performance optimization (e.g., speculative decoding, TRT-LLM, continuous batching), GPU supply dynamics, and how customers move from shared endpoints to dedicated and self-hosted deployments. Sarah, Elad, and Tuhin also explore how AI will change enterprise software economics, build-vs-buy decisions, and what defensibility looks like in rapidly growing AI markets.