Nikhil Kamath ft. Perplexity CEO, Aravind Srinivas | WTF Online Ep 1.

Nikhil Kamath · Mar 23, 2025 · 2h 16m

Nikhil Kamath (host), Aravind Srinivas (guest)

- Aravind’s journey: IIT → ML → Berkeley → OpenAI → Perplexity
- Defining AI, AGI, superintelligence; narrow vs general systems
- Neural networks, loss functions, backprop; ML vs neural nets
- LLMs: pretraining, transformers, tokens; post-training and RLHF
- Why AI took off: compute scale + high-quality data + product interface
- AI product differentiation: search, sources, latency, multimodal UX
- Agents and integrations: action-taking assistants, transactions, voice
- Competitive moats: Google default search, Android/Play Store, Meta networks
- Data centers and vertical integration; inference vs training economics
- NVIDIA and CUDA; Google’s full-stack TPU/JAX/XLA approach
- India opportunities: domestic models, Indian voice and dialects
- Regulation: focus on risky applications (kids/companionship) vs model bans

Perplexity CEO explains AI basics, industry shifts, and India’s opportunities

Aravind Srinivas recounts his path from Chennai and IIT to Berkeley, OpenAI, and founding Perplexity, emphasizing learning through humility, fundamentals, and sustained effort.

He explains AI from first principles—narrow vs general intelligence, neural networks, machine learning, and how large language models are trained via next-token prediction plus post-training (e.g., RLHF).

The discussion argues the 2020s AI leap came from scaling compute with higher-quality data and training methods, and that differentiation is shifting from “chat” to agentic systems that take actions and complete transactions.

They examine competitive dynamics (Google’s distribution moats, Meta’s network effects), data centers and chips (NVIDIA’s CUDA/software moat), India’s role (model-building and voice), and a light-touch approach to regulation focused on applications rather than models.

Key Takeaways

Modern AI progress was driven by scaling simple ideas with compute and data.

Srinivas describes a key lesson from OpenAI/Ilya Sutskever: sophisticated academic ideas often lose to simpler approaches once you “throw a lot of compute” at them—provided data quality is high and training is done correctly.

General-purpose capability—not single-task performance—is what feels disruptive now.

Earlier “AI” like chess engines or calculators excelled at narrow tasks; today’s LLMs are one system that can handle thousands of economically valuable tasks (coding, writing, summarizing), creating broad labor and business impact.

An LLM is a giant neural network trained mostly to predict the next word.

Pretraining consumes massive text corpora (internet-scale tokens) using transformers; post-training then reshapes the model into a useful chatbot via fine-tuning and learning from human feedback (RLHF).
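
The next-word objective can be illustrated with a deliberately tiny counting model: record which word follows which in a corpus, then predict the most frequent continuation. Real LLMs optimize the same objective with a transformer over internet-scale tokens; everything below (the corpus, the bigram table) is a toy sketch for intuition only.

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which word follows which in a tiny
# corpus, then predict the most likely continuation.
corpus = "the cat sat on the mat the cat ran".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often nxt follows prev

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Post-training (fine-tuning, RLHF) then adjusts which continuations the model prefers, rather than changing this underlying predict-the-next-token machinery.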

Neural networks learn patterns only when the task and data contain real signal.

Using the stock-market example, he notes models can’t reliably extract predictive power from irreducible noise; performance depends on whether the dataset and objective expose true structure that generalizes.

Chatbots are converging; the next differentiation is “agentic” action and workflow.

He predicts question-answering becomes a commodity, while winners will integrate personal context (email/calendar), tools/APIs, voice UX, and execution (booking, purchasing, emailing) with reliable reasoning.
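
The "agentic" pattern he describes boils down to a loop: propose an action, execute it through a tool, feed the result back, repeat until done. A minimal sketch follows; the tool names and the fixed plan are invented for illustration (a real agent would ask the LLM which tool to call next).

```python
# Hypothetical agent loop: execute each planned tool call and accumulate
# the results as context for subsequent steps.
def run_agent(plan, tools):
    context = []
    for name, arg in plan:
        result = tools[name](arg)          # execute the action
        context.append((name, arg, result))
    return context

tools = {
    "search": lambda q: f"results for {q!r}",   # stand-in for a web search
    "email": lambda body: f"sent: {body!r}",    # stand-in for sending mail
}
trace = run_agent([("search", "flights BLR-SFO"), ("email", "booked!")], tools)
print(trace[-1][2])
```

The hard part in practice is not this loop but the reliability he mentions: the model must choose correct tools and arguments, and recover when a step fails.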

Search distribution and transactions are the real battlefield against Google.

Even if AI does better research, users often still “start at the search bar” and finalize purchases via Google/Amazon—so AI products must solve default placement and enable native transactions to capture value.

Perplexity’s edge (as described) is product/infrastructure orchestration, not one model.

He explains Perplexity can route multiple models per query (query rewriting, retrieval/chunking, summarization, follow-up suggestions) and emphasizes speed via indexing and optimized inference runtimes.
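
Routing "multiple models per query" can be pictured as a dispatch function that matches query characteristics to models with different cost/quality trade-offs. The model names and the routing rule below are assumptions for illustration, not Perplexity's actual logic.

```python
# Illustrative model router: cheap model for short lookups, a slower
# reasoning model for analytical queries, a default otherwise.
def route(query: str) -> str:
    words = query.lower().split()
    if len(words) <= 3:
        return "fast-small-model"      # low-latency lookup
    if "compare" in words or "why" in words:
        return "reasoning-model"       # heavier analysis
    return "default-model"

print(route("weather today"))          # short query -> fast-small-model
```

In the described pipeline, a router like this would sit alongside query rewriting and retrieval, so the summarizing model only sees relevant, pre-chunked context.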

Data centers are necessary but commoditize without software and integration.

He’s positive on India building data centers (sovereignty, local demand), but expects long-term margin pressure unless paired with cloud-like software layers (orchestration, hosting, scaling, developer experience).

NVIDIA’s moat is CUDA + general-purpose performance + ecosystem speed.

GPUs fit neural nets due to parallel matrix math; CUDA lock-in, closed components, hyperscaler relationships, and rapid chip cycles make displacement difficult, though inference-specialized challengers may appear.
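
The "parallel matrix math" point is visible in the structure of a single layer's forward pass: it is a matrix multiply, and each output element depends only on one row and one column, so all elements can be computed simultaneously. A plain-Python sketch of that structure:

```python
# A neural-net layer's forward pass is a matrix multiply. Each C[i][j]
# depends only on row i of A and column j of B, so a GPU can assign one
# thread per (i, j) and compute every output element at once.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

W = [[1, 2], [3, 4]]   # layer weights
x = [[5], [6]]         # input column vector
print(matmul(W, x))    # [[17], [39]]
```

CUDA's role is to make exactly this pattern (one thread per output element, fused across thousands of cores) easy to program, which is the software side of the moat described above.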

India’s practical near-term wedge is Indian voice, accents, and dialect coverage.

He argues Western labs under-prioritize Indian speech recognition/synthesis; building high-quality real-time voice across languages/dialects could be a defensible, large domestic opportunity.

AI will likely increase productivity and creativity while displacing labor.

He anticipates widely available personal assistants (like smartphones in ubiquity), more personalized software creation, and near-term job disruptions as companies need fewer hires for the same output.

Regulation should target high-risk applications, not “regulating models.”

He’s skeptical model-level regulation is enforceable (downloadable/open-source) and suggests focusing on harmful use cases—especially children forming dependency/companionship relationships with chatbots.

Notable Quotes

AI is just two circles... The big circle is generative AI, and the smaller circle is reinforcement learning... and the only thing that remains is to throw a lot of compute at it.

Aravind Srinivas

Even though other people in academia... respect you for the more complicated ideas, what matters in reality is making things work, and it's often the simplest ideas... thrown a lot of compute at them.

Aravind Srinivas

A large language model... is essentially a giant neural network that's trained on... predicting the next word... training on the whole internet.

Aravind Srinivas

I feel like the real magic is gonna come from AIs doing things.

Aravind Srinivas

Regulating models is not necessarily a great idea... The best way is to regulate applications.

Aravind Srinivas

Questions Answered in This Episode

You mention Ilya’s “two circles” (generative + RL). Concretely, what problems does RL solve that scaling pretraining alone can’t?

When you say “high-quality data tokens,” what are the top 3 data improvements that most boost reasoning: textbooks, code, step-by-step solutions, or something else?

You describe post-training as key for usefulness. How do you think RLHF will evolve (or be replaced) for agentic systems that act in the real world?

Perplexity uses multiple models per query. What’s the exact decision logic for routing (cost, quality, query type), and where does retrieval fit versus pure model reasoning?

If all chatbots converge on similar benchmarked answers, what are the 2–3 product features that will most strongly drive paid retention in 2025–26?

Transcript Preview

Nikhil Kamath

Are you like a Chennai boy? Have you grown up there all your life?

Aravind Srinivas

My parents live in Chennai, so I first go there.

Nikhil Kamath

What were these fancy ideas?

Aravind Srinivas

I'm very upset to hear that, 'cause I actually thought my ideas are cool. [upbeat music]

Nikhil Kamath

Hi, Aravind.

Aravind Srinivas

Hi, Nikhil.

Nikhil Kamath

Hi. This is a bit weird for me 'cause I'm doing it this way, a conversation after a bit.

Aravind Srinivas

Okay.

Nikhil Kamath

Yeah.

Aravind Srinivas

Yeah, I wish we could be in the same place.

Nikhil Kamath

Where are you now?

Aravind Srinivas

I'm in San Francisco.

Nikhil Kamath

Right.

Aravind Srinivas

Yeah. I was traveling to Europe last week.

Nikhil Kamath

Okay.

Aravind Srinivas

A lot more travel coming up, but, uh, hope to be in India pretty soon, um, before my May, hopefully.

Nikhil Kamath

Oh, that's not far. When you come to India, where do you typically go to? Is it-

Aravind Srinivas

My parents live in Chennai, so I first go there. And then, um, depending on the arrangement, like who I'm meeting, I spend-- Like last time I came, I went to Mumbai and Delhi. Um, and this time I probably will try to go to Bangalore, too, in addition to, um, these two cities.

Nikhil Kamath

Mm-hmm.

Aravind Srinivas

But everything's, like, in the flux right now.

Nikhil Kamath

Right. Super. Are you like a Chennai boy? Have you grown up there all your life?

Aravind Srinivas

Yeah.

Nikhil Kamath

So you know, like, the local stuff about Chennai kind of thing?

Aravind Srinivas

Um, I mean, I, I'm, I'm... Yeah, I grew up there, so I hope- [chuckles]

Nikhil Kamath

Mm-hmm.

Aravind Srinivas

I, I don't know exactly what you mean by local, but I definitely know Chennai pretty well.

Nikhil Kamath

Right. How did this begin? Would you like to start by telling us, like, a little bit of-- little bit about your journey, how it was from where you began in Chennai to where you are today?

Aravind Srinivas

Yeah. Um, well, I was just like any other, um, student in Chennai, um, just, just studying. Uh, people in Chennai study a lot. I think that's one thing I've known.

Nikhil Kamath

Mm-hmm.

Aravind Srinivas

Um, I think I was pretty interested in, um, all, all sorts of statistical things. Um-

Nikhil Kamath

Mm-hmm

Aravind Srinivas

... mainly coming from following cricket a lot, is people generally, like, try to analyze the stats and run rate and, like-

Nikhil Kamath

Mm-hmm

Aravind Srinivas

... how many fifties or hundreds, and I, I got a s- intuitive sense for numbers pretty early on. I was pretty good at math. Um, and I also, like, early on, picked up programming towards the end of my, um, I think, eleventh standard. So that's, uh, that was how it began, and obviously, my mom wanted me to get into IITs. Uh, every time, like, we would go on a bus, um, and, and pass by the IIT Madras campus, uh, my mom would point to the campus and say: "This is where you're going to study." The-

Nikhil Kamath

Mm-hmm.

Aravind Srinivas

It's not even like you should study. "This is where you're going to study." So that was the expectation. I definitely grew up thinking that, okay, um, I w- I do wanna, like, compete and win against the best people. Uh, and, um, it was-- The JEE exams are pretty competitive, as you know. And so we, we had a pretty, uh, good rivalry among, like, fellow students, and I obviously didn't do as well as I wanted in the JEE, but, uh, I got into IIT, got into electrical engineering, and inside the, inside the campus, again, like, uh, a lot of our friends got into competitive programming, so I, I was into that, too. But I, I figured I was not as good as what you needed to be to get to the world finals of, like, ICPC or something. Um, and so I, I, I got a good understanding of computer science, uh, and, and, and got a lucky opportunity to learn about machine learning pretty early on. Um, I got, you know, a roommate of mine, uh, or, like, someone neighboring to my room told me about this contest that was running-
