
Nikhil Kamath ft. Perplexity CEO, Aravind Srinivas | WTF Online Ep 1.

In this episode, we sat down with Perplexity AI co-founder & CEO Aravind Srinivas to explore the evolution of artificial intelligence, what the big AI giants are up to, and whether we can even predict the future. We also discuss the biggest AI advancements and the role of India in this fast-moving sector: where the real opportunities lie, what's being overlooked, and finally, the questions we didn't even know we should be asking.

#NikhilKamath (Co-founder of Zerodha, True Beacon and Gruhas)
Twitter: https://x.com/nikhilkamathcio
LinkedIn: https://www.linkedin.com/in/nikhilkamathcio
Instagram: https://www.instagram.com/nikhilkamathcio
Facebook: https://www.facebook.com/nikhilkamathcio

#AravindSrinivas (Co-founder and CEO of Perplexity AI)
Twitter: https://x.com/AravSrinivas
LinkedIn: https://www.linkedin.com/in/aravind-srinivas-16051987
Instagram: https://www.instagram.com/aravindsrinivas

Timestamps:
00:00 - Intro
00:45 - Meeting Aravind Srinivas | His Journey & Career Path
12:14 - AI's Evolution | From Basics to Super Intelligence
29:06 - Understanding the Process Behind AI
35:54 - WTF is a Neural Network?
45:25 - Large Language Models (LLMs) & Their Evolution
53:59 - The Latest AI Shifts
57:03 - Aravind's Hustle | Work, Education & Family
01:05:13 - What Are the Big Players of AI Doing? | Perplexity, Google, Meta, OpenAI, Anthropic, and more
01:22:00 - Where the Real AI Opportunities Are
01:34:44 - AI Features & Tools | Text-to-Video, Chatbots, Translations
01:39:15 - Why Data Centers Are the Next Big Thing
01:49:42 - India's Role & Scope in this Industry
02:02:47 - Aravind's AI Platform Recommendations
02:05:26 - Where AI Is Headed Next
02:08:43 - AI Regulations | Tackling Complications
02:16:17 - Outro

#WTFiswithnikhilkamath #WTFOnline #nikhilkamath #perplexityai #ai #google #meta #neuralnetworks #perplexity #chatgpt #openai #gemini #manus #deepseek #technology #tech #data #datacenter

Nikhil Kamath (host) · Aravind Srinivas (guest)
Mar 23, 2025 · 2h 16m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 0:45

    Remote catch-up from SF to Dubai: Chennai roots and travel plans

    Nikhil and Aravind open with light context: where Aravind is (San Francisco), when he’s coming to India, and his ties to Chennai. It sets an informal tone before moving into Aravind’s background.

  2. 0:45 – 12:14

    From cricket stats to code: IIT Madras, Kaggle, and early ML wins

    Aravind traces his path from a numbers-first upbringing (cricket statistics) to programming and the IIT track. A Kaggle contest and a fast-paced Bangalore internship push him from curiosity into practical ML.

  3. 12:14 – 29:06

    Berkeley grind and the OpenAI apprenticeship: humility, mentors, and focus

    He describes arriving at Berkeley without an advisor, building momentum through intense self-driven work, and earning mentorship. The OpenAI internship becomes a turning point: rigor, blunt feedback, and prioritizing what works over “fancy” ideas.

  4. 29:06 – 35:54

    Ilya’s ‘two circles’ view: generative AI + RL and the compute bet

    Aravind recounts Ilya Sutskever’s framework: generative modeling plus reinforcement learning as the core recipe toward AGI, with scale (compute) as the accelerator. Aravind contrasts this with his own research ambition of models learning their own loss functions.

  5. 35:54 – 45:25

    AI/AGI explained simply: narrow vs general intelligence and economic impact

    Nikhil asks for a ‘10-year-old’ explanation of AI, leading to distinctions between narrow task programs (like chess engines) and general systems that handle many tasks. Aravind emphasizes why modern AI matters: it affects paid knowledge work at scale.

  6. 45:25 – 53:59

    Compute-to-LLM evolution: from circuits and PCs to transformers and chatbots

    They build a timeline from calculators (circuits), to personal computing (Moore’s Law, VisiCalc), to internet/mobile/cloud, and finally today’s AI. Aravind highlights what changed since ~2010: neural nets + scale + high-quality data + human feedback.

  7. 53:59 – 57:03

    Neural networks vs machine learning: patterns, loss functions, and irreducible noise

    Aravind explains neural networks as layered functions that transform inputs into outputs, trained by minimizing a loss across large datasets. Using stock-market prediction as intuition, he clarifies that models only learn signal; noise can’t be eliminated and won’t generalize.
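    The "layered functions trained by minimizing a loss" idea can be sketched with a toy example: a two-layer NumPy network fitting a noisy sine wave. The network sizes, learning rate, and data are illustrative assumptions, not anything from the episode, but the takeaway matches Aravind's point: the loss falls toward the noise floor and no further, because noise has no pattern to learn.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a true signal plus irreducible noise.
    X = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)   # signal + noise

    # One hidden layer: input -> tanh -> output (a "layered function").
    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

    def forward(X):
        h = np.tanh(X @ W1 + b1)     # layer 1 transforms the input
        return h, h @ W2 + b2        # layer 2 produces the prediction

    def mse(pred, y):
        return float(np.mean((pred - y) ** 2))

    lr = 0.05
    _, pred = forward(X)
    loss0 = mse(pred, y)
    for _ in range(2000):
        h, pred = forward(X)
        # Backpropagation: gradient of the mean-squared loss w.r.t. each weight.
        g = 2 * (pred - y) / len(X)
        gW2 = h.T @ g;  gb2 = g.sum(0)
        gh = g @ W2.T * (1 - h**2)   # chain rule through tanh
        gW1 = X.T @ gh; gb1 = gh.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    final_loss = mse(pred, y)
    # The loss approaches the noise variance (~0.01) but cannot go below it:
    # the noise is irreducible, exactly as in the stock-market analogy.
    print(f"initial loss {loss0:.3f} -> final loss {final_loss:.3f}")
    ```

    Training drives the loss down by orders of magnitude, but the floor set by the noise variance remains.
    
    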

  8. 57:03 – 1:05:13

    What an LLM is: pretraining next-token prediction, transformers, and post-training

    They define large language models as giant neural nets trained to predict the next token using internet-scale text. Aravind outlines the two-stage pipeline: pretraining (most compute) and post-training/fine-tuning to become a helpful chatbot, plus multimodal capabilities often added later.
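    Next-token prediction at its simplest can be illustrated with a toy bigram counter. This is a deliberate simplification (an assumption for illustration): real LLMs use transformers over internet-scale text, not frequency counts, but the objective, predicting the most likely continuation of a context, is the same.

    ```python
    from collections import Counter, defaultdict

    # Tiny corpus standing in for "internet-scale text".
    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each token follows each preceding token:
    # the simplest possible next-token predictor.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(token):
        """Return the most frequent next token and its estimated probability."""
        c = counts[token]
        word, n = c.most_common(1)[0]
        return word, n / sum(c.values())

    word, p = predict_next("the")
    print(f"after 'the' -> '{word}' (p={p:.2f})")
    ```

    Pretraining an LLM is conceptually this, scaled up: learn a distribution over next tokens, then post-train the result into a helpful chatbot.
    
    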

  9. 1:05:13 – 1:22:00

    Why LLMs may not equal AGI: physical common sense, robotics, and reasoning

    Prompted by Yann LeCun’s critiques, Aravind discusses what’s missing: physical reasoning and embodied common sense. He explains why dexterous tasks remain hard and argues that stronger reasoning/planning plus better world models are needed for physical generalization.

  10. 1:22:00 – 1:34:44

    Big AI players and the coming differentiation: from chat commodities to agents

    Aravind argues most chatbots are converging—benchmarks and similar training lead to similar answers. Differentiation will shift toward richer UI and, more importantly, agentic systems that take actions (book, email, schedule) by integrating tools and personal context.

  11. 1:34:44 – 1:39:15

    How Perplexity works under the hood: multi-model pipelines, speed, and cost dynamics

    Nikhil probes Perplexity’s architecture and business tradeoffs. Aravind explains a multi-model workflow per query (rewrite, retrieval/chunking, summarization, suggestions), infrastructure optimizations for tail latency, and why per-query costs and margins are moving targets.
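    The per-query stages Aravind describes (query rewrite, retrieval over chunks, summarization) can be sketched as a toy pipeline. Every stage below is a stand-in heuristic of my own (lowercasing, keyword overlap, string joining); in a real system each stage would be a search index or model call, and this is in no way Perplexity's actual implementation.

    ```python
    # Toy corpus of retrievable chunks.
    DOCS = [
        "Perplexity is an AI-powered answer engine.",
        "Transformers predict the next token from context.",
        "Data centers house the GPUs used to train models.",
    ]

    def rewrite(query: str) -> str:
        # Stage 1: normalize/rewrite the user's query (toy: lowercase, strip).
        return query.lower().strip("?! ")

    def retrieve(query: str, docs, k: int = 2):
        # Stage 2: score each chunk by keyword overlap and keep the top k.
        q = set(query.split())
        scored = sorted(docs,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def summarize(query: str, chunks) -> str:
        # Stage 3: compose an answer from the retrieved chunks (toy: join them).
        return f"Q: {query}\nA (from {len(chunks)} sources): " + " ".join(chunks)

    q = rewrite("What is Perplexity?")
    chunks = retrieve(q, DOCS)
    answer = summarize(q, chunks)
    print(answer)
    ```

    The cost and latency tradeoffs in the episode follow directly from this shape: every user query fans out into several model and retrieval calls, so tail latency and per-stage cost compound.
    
    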

  12. 1:39:15 – 1:49:42

    Platforms, distribution, and ads: Meta vs Google; why ‘the search bar’ still wins

    Discussion shifts to strategy and moats: Aravind picks Meta as the best public-market bet due to social network effects and ads in an AI world. They examine Google’s distribution advantages (defaults, Android, Play Store leverage) and why AI search must also enable transactions to truly disrupt Google.

  13. 1:49:42 – 2:02:47

    India’s opportunity stack: voice, models, data centers, and entrepreneurship paths

    Aravind argues India should train competitive global models and build the infrastructure (chips/data centers) to avoid dependency, even if outputs converge. For founders without massive resources, he suggests starting with products on open models, then moving into post-training, pretraining, and infrastructure; he flags voice interfaces for Indian languages as a near-term wedge.

  14. 2:02:47 – 2:05:26

    Compute economics and hardware moats: data centers, NVIDIA, and full-stack challengers

    Nikhil asks about data center investing, hyperscaler risk, and whether structural shifts could undermine the thesis. Aravind expects commoditization unless paired with strong software layers; he explains NVIDIA’s durability via general-purpose performance, CUDA lock-in, and the difficulty of replicating a full stack—crediting Google as a rare end-to-end alternative.

  15. 2:05:26 – 2:08:43

    Where the next opportunities are: personal software, ‘build your own app’ platforms, and tools to try

    Aravind predicts a wave of personalized apps where users generate software tailored to their needs, reducing reliance on one-size-fits-all SaaS. He points Nikhil to tools like Cursor, Replit, and Bolt as early signs, while noting that top-tier engineering still matters—especially infrastructure and reliability.

  16. 2:08:43 – 2:16:17

    Five-year outlook and regulation: assistants everywhere, displacement risk, and app-level guardrails

    Aravind forecasts ubiquitous personal assistants—affordable and widely accessible—plus more creative output and easier software creation. He warns about labor displacement and suggests regulation should focus on harmful applications (especially kids/companionship dynamics) rather than trying to regulate model weights directly.

  17. 2:16:17 – 2:16:30

    Closing: paywalls, attribution, and Nikhil’s internship offer at Perplexity

    They touch on the uncertain future of paywalled data and whether training should require compensation, contrasting human reading with model distillation. The episode ends with Nikhil asking—seriously—to intern at Perplexity to learn firsthand, and Aravind welcoming the idea.
