No Priors Ep. 18 | With Kevin Scott, CTO of Microsoft
Sarah Guo interviews Microsoft CTO Kevin Scott on AI platforms, partnerships, and purpose.
At a glance
WHAT IT’S REALLY ABOUT
Microsoft CTO Kevin Scott on AI Platforms, Partnerships, and Purpose
- Kevin Scott traces his unlikely path from rural Virginia to Microsoft CTO and explains how that background shapes his conviction that AI must broadly benefit society. He details Microsoft’s strategic bet on large-scale AI: consolidating GPU resources, partnering deeply with OpenAI, and co-building supercomputing infrastructure with NVIDIA to enable platform-scale models. Scott argues that AI’s real impact comes from assistive, “copilot” products and a stack that blends large models, orchestration, retrieval, and safety layers, rather than models as standalone products. He also emphasizes balancing optimism about AI’s potential in areas like education and healthcare with serious, proactive work on safety, regulation, and responsible deployment.
IDEAS WORTH REMEMBERING
Models and infrastructure are not products; hard problems are where value lies.
Scott stresses that simply “adding an LLM” is not enough; the most important products will be those that turn previously impossible tasks into hard but feasible ones, in the same way smartphones enabled non-obvious apps like TikTok and DoorDash rather than just early novelty apps.
Concentrated, conviction-driven investment in compute is a strategic advantage.
Microsoft stopped “peanut butter spreading” GPUs and instead centralized capital-intensive compute around high-conviction AI efforts, enabling the scale necessary for frontier models like GPT‑3 and beyond.
AI is evolving into a platform best delivered through assistive copilots.
From GitHub Copilot to Microsoft 365 and Bing Chat, Scott describes a generalized copilot pattern: LLMs orchestrated with tools, retrieval, prompts, plugins, and safety filters to assist humans in domain-specific workflows rather than replace them.
Open source and closed models will coexist in a portfolio of systems.
Real-world deployments already use multiple models for cost, latency, and quality tradeoffs; Scott is excited by open-source innovation but notes that robust safety and responsible AI practices must evolve alongside it.
AI can dramatically widen who can build with advanced tools.
Tasks that once required deep ML expertise and months of work can now be done in hours by far less specialized users; Scott sees this accessibility as a path to more equitable opportunity for people far from traditional tech hubs.
Optimism about AI’s benefits must coexist with serious safety and regulation.
He argues that regulation is a sign the technology matters, likening it to electricity standards, and calls for industry-wide norms and safeguards that enable widespread, trusted deployment while deterring harmful uses.
Human-centric roles and creativity will remain essential despite cognitive automation.
Scott expects continued demand for physically grounded jobs (e.g., surgeons, nurses, technicians) and human-driven creative work, noting that audiences care about human stories and agency even when machines surpass us in narrow capabilities.
WORDS WORTH SAVING
Models aren’t products, and infrastructure isn’t a product.
— Kevin Scott
Probably the place where the most interesting products are, are where you’ve made the phase change from impossible to hard.
— Kevin Scott
We will no longer peanut butter these resources around.
— Kevin Scott, on centralizing Microsoft’s GPU budget
There’s no historical precedent where you get all of these beneficial things by starting from pessimism first. Pessimism doesn’t get you to optimistic outcomes.
— Kevin Scott
Nobody’s trying to regulate frivolous things.
— Kevin Scott, on why regulation signals AI’s real importance
QUESTIONS ANSWERED IN THIS EPISODE
How should a non-tech company decide which ‘impossible-to-hard’ problems in its domain are worth building AI products around, rather than chasing shallow LLM features?
What concrete practices has Microsoft found most effective for aligning large internal teams around high-conviction AI bets while avoiding wasteful experimentation?
How can the open-source community realistically tackle responsible AI and safety for powerful models without the resources of hyperscalers?
In education and healthcare, what are the most promising near-term AI deployments that could achieve “two-sigma” style gains without exacerbating inequality?
What kinds of regulatory frameworks would best balance rapid AI innovation with the need for safety, accountability, and public trust at the foundation-model level?