No Priors

No Priors Ep. 53 | With AMD CTO Mark Papermaster

Compute is the fuel for the AI revolution, and customers want more chip vendors. AMD CTO Mark Papermaster joins Sarah and Elad on No Priors to discuss AMD’s strategy, their newest GPUs, where inference workloads will live, the chip software stack, how they are thinking about supply chain issues, and what we can expect from AMD in 2024.

Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Show Notes:
0:00 Introduction and Mark’s background
2:35 AMD background and current markets
4:40 AMD shifting to AI space
8:54 AI applications coming out of AMD
10:57 Software investment
15:15 The benefits of open-source stacks
16:58 Evolving GPU market
20:21 Constraints on GPU production
24:11 Innovations in chip technology
27:57 Chip supply chain
30:18 Future of innovative hardware products
35:42 What’s next for AMD

Sarah Guo (host) · Mark Papermaster (guest) · Elad Gil (host)
Feb 28, 2024 · 39 min · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AMD CTO Reveals Strategy Powering Next-Generation AI Chips and Computing

AMD CTO Mark Papermaster discusses AMD’s decade-long transformation from a struggling PC-focused company into a central player in high-performance computing and AI. He explains how AMD built a competitive CPU and GPU portfolio, culminating in the MI300 accelerator for large-scale AI training and inference. The conversation covers open-source software strategy, supply-chain and packaging constraints, energy efficiency, and the impact of Moore’s Law slowing. Papermaster also outlines how AI will increasingly span cloud, edge, and end-user devices, making 2024 a major deployment year for AMD’s AI-enabled portfolio.

IDEAS WORTH REMEMBERING

5 ideas

AMD’s AI strategy is built on a long-term CPU and GPU roadmap.

Before attacking AI head-on, AMD rebuilt its Zen CPU line and strengthened GPUs, enabling it to offer competitive heterogeneous systems that pair high-performance CPUs with massively parallel GPUs.

The MI300 targets leading-edge LLM training and inference workloads.

MI300 variants are designed for both high-performance computing and AI, pairing strong training performance with leading FP16 LLM inference efficiency by tightly coupling math engines with high-bandwidth memory and advanced packaging.

Software ecosystem and openness are critical competitive levers against incumbents.

AMD’s ROCm stack is open source, tightly integrated with PyTorch, ONNX, TensorFlow, and platforms like Hugging Face, aiming to make porting workloads from CUDA-like environments straightforward and avoid vendor lock-in.

GPU supply constraints are easing, but power and packaging are rising bottlenecks.

Wafer capacity, advanced packaging, and substrates have been key constraints, but industry expansion—especially via partners like TSMC—is addressing them; long term, data center power availability and energy efficiency become the dominant challenges.

Innovation beyond Moore’s Law requires holistic, system-level design.

With node shrinks delivering less automatic benefit and higher cost, AMD leans on chiplets, heterogeneous compute engines, advanced 2D/3D packaging, and co-designed software stacks to keep performance-per-watt and capability improving.

WORDS WORTH SAVING

5 quotes

We’re not about locking in someone with a proprietary walled garden software stack. We want to win with the best solution.

Mark Papermaster

It was clear that the industry needed that powerful combination of the serial computing of CPUs and the massive parallelization you get from a GPU.

Mark Papermaster

With Moore’s Law slowing, it demands what I call holistic design—from transistor design all the way up through packaging and the software stack.

Mark Papermaster

This is a huge year for us because we’ve just completed AI-enabling our entire portfolio—cloud, edge, PCs, embedded, and gaming.

Mark Papermaster

The devices that are successful really serve a need… it’s got to be something that you love, and that creates a new category.

Mark Papermaster

Mark Papermaster’s career and AMD’s corporate turnaround
AMD’s CPU/GPU portfolio, MI300, and heterogeneous computing
AI workloads, large language models, and inference at scale
Software ecosystem, ROCm, and open-source philosophy
Supply constraints, advanced packaging, and fab/geopolitical strategy
Innovation beyond Moore’s Law: chiplets, 3D stacking, holistic design
AI across cloud, edge, PCs, and emerging hardware devices

High-quality AI-generated summary created from a speaker-labeled transcript.
