No Priors Ep. 53 | With AMD CTO Mark Papermaster
At a glance
WHAT IT’S REALLY ABOUT
AMD CTO Reveals Strategy Powering Next-Generation AI Chips and Computing
- AMD CTO Mark Papermaster discusses AMD’s decade-long transformation from a struggling PC-focused company into a central player in high-performance computing and AI. He explains how AMD built a competitive CPU and GPU portfolio, culminating in the MI300 accelerator for large-scale AI training and inference. The conversation covers open-source software strategy, supply-chain and packaging constraints, energy efficiency, and the impact of Moore’s Law slowing. Papermaster also outlines how AI will increasingly span cloud, edge, and end-user devices, making 2024 a major deployment year for AMD’s AI-enabled portfolio.
IDEAS WORTH REMEMBERING
5 ideas

AMD’s AI strategy is built on a long-term CPU and GPU roadmap.
Before attacking AI head-on, AMD rebuilt its Zen CPU line and strengthened GPUs, enabling it to offer competitive heterogeneous systems that pair high-performance CPUs with massively parallel GPUs.
The MI300 targets leading-edge LLM training and inference workloads.
MI300 variants are designed for both high-performance computing and AI, with strong training performance and leading FP16 inference efficiency (e.g., under vLLM), achieved by tightly coupling math engines with high-bandwidth memory and advanced packaging.
Software ecosystem and openness are critical competitive levers against incumbents.
AMD’s ROCm stack is open source, tightly integrated with PyTorch, ONNX, TensorFlow, and platforms like Hugging Face, aiming to make porting workloads from CUDA-like environments straightforward and avoid vendor lock-in.
GPU supply constraints are easing, but power and packaging are rising bottlenecks.
Wafer capacity, advanced packaging, and substrates have been key constraints, but industry expansion—especially via partners like TSMC—is addressing them; long term, data center power availability and energy efficiency become the dominant challenges.
Innovation beyond Moore’s Law requires holistic, system-level design.
With node shrinks delivering less automatic benefit and higher cost, AMD leans on chiplets, heterogeneous compute engines, advanced 2D/3D packaging, and co-designed software stacks to keep performance-per-watt and capability improving.
WORDS WORTH SAVING
5 quotes

We’re not about locking in someone with a proprietary walled garden software stack. We want to win with the best solution.
— Mark Papermaster
It was clear that the industry needed that powerful combination of the serial computing of CPUs and the massive parallelization you get from a GPU.
— Mark Papermaster
With Moore’s Law slowing, it demands what I call holistic design—from transistor design all the way up through packaging and the software stack.
— Mark Papermaster
This is a huge year for us because we’ve just completed AI-enabling our entire portfolio—cloud, edge, PCs, embedded, and gaming.
— Mark Papermaster
The devices that are successful really serve a need… it’s got to be something that you love, and that creates a new category.
— Mark Papermaster
High-quality AI-generated summary created from a speaker-labeled transcript.