No Priors

No Priors Ep. 53 | With AMD CTO Mark Papermaster

Compute is the fuel for the AI revolution, and customers want more chip vendors. AMD CTO Mark Papermaster joins Sarah and Elad on No Priors to discuss AMD's strategy, their newest GPUs, where inference workloads will live, the chip software stack, how they are thinking about supply chain issues, and what we can expect from AMD in 2024.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Show Notes:
0:00 Introduction and Mark's background
2:35 AMD background and current markets
4:40 AMD shifting to AI space
8:54 AI applications coming out of AMD
10:57 Software investment
15:15 The benefits of open-source stacks
16:58 Evolving GPU market
20:21 Constraints on GPU production
24:11 Innovations in chip technology
27:57 Chip supply chain
30:18 Future of innovative hardware products
35:42 What's next for AMD

Sarah Guo (host) | Mark Papermaster (guest) | Elad Gil (host)
Feb 29, 2024 · 39m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–2:35

    Introduction and Mark’s background

    1. SG

      (techno music) Hi, listeners. For potential AI founders, my early stage AI fund, Conviction, is accepting applications for its Embed Accelerator for two more days. Embed offers $150,000 in an uncapped safe, more than half a million of free compute and API credits, a hand-selected set of peers, and access to leading founder and research mentors. Apply at embed.conviction.com by March 1st. Hi, listeners, and welcome to another episode of No Priors. Today, we're excited to be talking to the CTO of AMD, Mark Papermaster. Mark has had a storied career in chips and hardware with previous leadership positions at IBM, Apple, and Cisco. We're excited to have Mark on to get into GPUs and the competition that's been driving this industry. Welcome, Mark.

    2. MP

      Thanks. Glad to be here with you and Elad.

    3. SG

      Can you start by telling us a bit about your background? You've worked on all sorts of interesting things, from the iPhone and the iPad to, like, the latest generation of AMD supercomputing chips.

    4. MP

      Well, sure. I've been around a while, so what's really fun is my timing was pretty good getting into the industry as an electrical and computer engineering grad from the University of Texas, and I got really interested in chip design. It was back at a time when chip design was radically changing. The kind of technology everyone uses today, CMOS, was just coming into production usage, so I got on IBM's very first CMOS projects and created some of the first designs. I got to get my hands dirty and do just about every facet of chip design, and had a number of years at IBM taking on different roles, including driving microprocessor development, first across their PowerPCs, which meant working with Apple and Motorola, as well as the big iron: the big computing chips we had in the mainframe and the big RISC servers. So I really got all facets of technology there, including working on some of their server development. Then I shifted over to Apple, where Steve Jobs hired me to run the iPhone and iPod, and I was there for a couple of years. But it was a time of great transition in the industry, and for me it was a great opportunity, because in the fall of 2011 I ended up taking the role here at AMD of being both CTO,

  2. 2:35–4:40

    AMD background and current markets

    1. MP

      and really running technology and engineering, right at a point where Moore's Law was starting to slow down, so tremendous innovation was needed.

    2. SG

      Yeah. I wanna get into that, and sort of what we can expect in terms of computing innovation if we're not just cramming more transistors onto chips, or we're unable to do that. Every one of our listeners, I think, has heard of AMD, but can you give, like, a very brief overview of the major markets you serve there?

    3. MP

      Sure. So AMD is a storied company. It's been around well over 50 years, and it started out really as a second-source company, bringing second sources of key components and x86 microprocessors. But fast-forward to where we are today, and it's a very, very broad portfolio. When Lisa Su, our CEO, and I were brought into the company just over 10 years ago, it was with a mandate to get AMD back to very strong competitiveness. So we started with the CPU line, made the CPU very competitive, and then worked across the portfolio, and in February of 2022 we acquired Xilinx, which expanded the portfolio further. AMD creates the world's largest supercomputers. It has a massive installed base now in the cloud, so many of the cloud operations you're running are running on AMD EPYC x86 CPUs. In gaming we're huge: we're underneath all the Xbox and PlayStation consoles, as well as many of the gaming devices you buy when you buy add-in boards. Then there are embedded devices, with all of that rich Xilinx portfolio as well as embedded x86. And we acquired Pensando, which extends the portfolio right into the networking interconnect we need as we scale out these workloads. So

  3. 4:40–8:54

    AMD shifting to AI space

    1. MP

      very, very broad portfolio.

    2. EG

      Yeah. AMD has had a pretty amazing run over the last decade-plus since you joined. One of the things you folks have really emphasized over the last couple of years as well is AI, and there's been a big shift, both in terms of the adoption of AI over the last decade or so with the traditional CNN, RNN, and other neural network architectures, and also in terms of the shift to transformers and diffusion models and everything else. Can you tell us a little more about what initially caught your attention in the AI landscape, how AMD started to focus more and more on that over time, and what sort of solutions you've come up with?

    3. MP

      You bet. Well, we all know the AI journey has been going on for a while; the race really began when the application space for AI opened up, and GPUs were obviously pivotal there. Look at the key work Hinton had done showing how GPUs could drastically improve the accuracy of image recognition and natural language processing; that's been known for some time. So at AMD we saw the opportunity right away. The question was plotting our course to be a strong player in AI. It was a very thoughtful and deliberate strategy, because at AMD we had to turn around the company. If you look at where AMD was from 2012 through really 2017, almost all of the revenue was based on PCs and then gaming. So it was about making sure the portfolio, the building blocks, were competitive. Those building blocks had to be leadership products; they had to attract people to the AMD platform for high-performance applications. So first we actually had to rebuild the CPU roadmap, and that was the Zen microprocessors we released in 2017, in both PCs with our Ryzen line and servers with EPYC, our x86 line. That started the revenue ramp (laughs) for the company and started extending our portfolio. Right about that time, in parallel, we saw where heterogeneous computing was going. We had called the ball on heterogeneous computing before I or Lisa ever joined the company: AMD had made a great acquisition of ATI that brought GPUs into the portfolio.
It's one of the big reasons I was attracted to AMD in this role: it was really the only company that had both a very strong CPU portfolio and a very strong GPU portfolio. And to me it was clear the industry needed that powerful combination of the serial, scalar computing of traditional CPU workloads and the massive parallelization you get from a GPU. So we started with that heterogeneous compute and created an architecture around it. We've been shipping combined CPUs and GPUs for PC applications longer than anyone; we started shipping those in 2011 with what we call APUs, accelerated processing units. Then for big-data applications we started with HPC, the kind of high-performance compute technology that's in national labs and oil-exploration companies. We focused first on big government bids that ended up leading to supercomputer wins, and we now have AMD CPUs and AMD GPUs under the world's largest supercomputers. But that work started years ago, and it was equally a hardware and a software effort. We've been building that hardware and software capability, and it really culminated on December 6th of 2023, last year, when we announced our flagship, the MI300, which is just a beast for both high-performance

  4. 8:54–10:57

    AI applications coming out of AMD

    1. MP

      compute with one variant we have, and takes on high-performance AI for both training and inference head-on with a variant optimized for those AI applications. So it's been a long journey, and we're really pleased to be where we are, with our sales taking off.
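The CPU/GPU pairing Mark describes maps onto Amdahl's law: only the parallel fraction of a workload benefits from a GPU's many lanes, while the serial remainder stays CPU-bound and caps the total gain. A minimal sketch (the 95%-parallel figure is an illustrative assumption, not an AMD number):

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Upper bound on overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# A workload that is 95% parallelizable: adding compute units helps,
# but the serial 5% (the CPU's territory) caps the total gain at 20x.
for n in (8, 1024, 65536):
    print(f"{n:>6} units -> {amdahl_speedup(0.95, n):5.1f}x")
```

This is why a strong CPU next to the GPU matters: the serial slice sets the ceiling no matter how wide the parallel engine gets.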

    2. EG

      No, it's fantastic. I guess when you launched the MI300, you had public commitments from Meta and Microsoft, for example, to purchase it. And you just mentioned there's a series of applications you're pretty excited about. Could you tell us more about which AI applications and workloads you're most excited about or most bullish on today?

    3. MP

      Sure. So if you think about where the bulk of AI is today, you're still seeing tremendous capital expenditures building up the accuracy and capabilities of large language model training and inference. It's the likes of ChatGPT, of Bard, and the other LLMs that you can ask anything, because they're trying to ingest the vast amount of data that's out there to be trained upon, with really an ultimate goal of artificial general intelligence, an AGI type of capability. So that is where we focused the MI300: to start with that halo product that could take on the industry leader. And in fact, MI300 has done that. It's competitive on training, and it leads in inferencing, by over 2X. If you look at FP16 with vLLM, which is a metric that generally everyone in the industry can run, it has a tremendous performance advantage, and we did that very purposely. We created very efficient engines for the math processing you need for training or inference, but we also brought the memory you need for more efficient computing. So that's more computing at less power,

  5. 10:57–15:15

    Software investment

    1. MP

      less rack space than you need with the competition.
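The FP16 metric matters because half-precision values take two bytes instead of four, doubling effective memory bandwidth and capacity at the cost of precision and range. Python's struct module can round-trip IEEE 754 half precision (the 'e' format), which makes the trade-off easy to see; a minimal sketch:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(0.1))      # 0.0999755859375 -- only ~3 decimal digits survive
print(to_fp16(65504.0))  # 65504.0 is the largest finite fp16 value
print(struct.calcsize('e'), "bytes per value, vs", struct.calcsize('f'), "for fp32")
```

Halving the bytes per weight is exactly why high-bandwidth memory capacity, which Mark highlights on the MI300, translates directly into larger models served per accelerator.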

    2. SG

      A big front of competition, as you just pointed out: there's overall performance, there's efficiency, and then there's the software platform, like CUDA, ROCm, et cetera. How do you think about the investment in optimized math libraries, and how do you want developers to understand your approach versus competitors'?

    3. MP

      Yeah, you're so right, Sarah. It's multifaceted to be able to compete in this arena. You see many startups going after the space, but the fact is, the bulk of inferencing done today is done on general-purpose CPUs: not the huge LLM inferencing, but general inferencing for AI applications. For large language model applications, it's almost all on GPUs, because that's where the software and developer ecosystems are. We've been competitive on CPUs; we've been gaining share at a rapid clip because we have a very strong CPU generation after generation, released on schedules we've laid out for the industry. But for GPUs, it did take us until now to develop really world-class hardware and world-class software. What we've done is ensure that, because we're a GPU, it should be easy to deploy. We really make sure we leverage the fact that we have all the GPU semantics: if you're a coder, it's easy to code if you're using the lower-level semantics. But we also support all of the key software libraries out there. Think about the frameworks: PyTorch, where we're a founding member of the PyTorch Foundation; ONNX; TensorFlow. We are out there working very closely with developers. And now that we have a competitive, leadership offering, what you'll see is that deploying with AMD is very facile. Let's say you're using Hugging Face, with any of the thousands and thousands of open-source LLMs out there. Well, we partnered with Clem and his team.
As they release any of those language models, they're testing on AMD with our Instinct GPUs equally as they're testing on NVIDIA. We've done the same thing with PyTorch, where we're one of two qualified offerings, so all of that testing is done routinely with the regression testing that's run literally every night on any software release. The other thing that's key is to learn from deployments. We've had early engagements like Lamini, who's running on AMD; they've been offering services for getting on AMD and running your LLMs on their cloud, on the rack configurations they have, and they've already been working with customers. And as you saw from the other people on stage with us at our December event, we're in there with a key hyperscaler, we're being sold through many OEM channels, and we're working directly with customers. There's nothing like that feedback from key customers running on your platform to speed us in ensuring that we can be easily deployed and that it's a seamless process.

    4. SG

      Yeah. Lamini is a portfolio company for me, and Sharon and Greg are great. I think it's an indication of you guys having a big ecosystem of software developers and machine learning people who want to see competition and more heterogeneous compute out there for these AI applications.

    5. MP

      Sarah, you cannot underestimate that. It tells you that it was a very constrained environment. There was a lack of competition, which is bad for everybody, by the way, because you

  6. 15:15–16:58

    The benefits of open-source stacks

    1. MP

      really end up with a stagnant industry. You can look at the CPU industry before we brought competitive, leadership products: it was really getting stagnant, with just incremental improvements. The industry knows that, and we've had tremendous pull and partnership, and we're very appreciative of that. In return, we're going to keep providing generation after generation of competitive product.

    2. SG

      For such a huge software stack like ROCm to be open source: talk about that philosophy.

    3. MP

      Oh, it's a great question. It's very near and dear to us, because, as I mentioned, we are all about collaboration; it's such a strong part of our culture. What open source does is open up technology to the community. If you look at the history of AMD, it's been very focused on open source. The compiler for our CPUs is LLVM, which is open source, and LLVM is underneath the compilers on our GPUs. But more than just the compiler on the GPU, we've opened up the ROCm stack. It is our enabling stack, and it was a huge piece of winning supercomputing, with the large installations we have. Why is it our philosophy? By the way, Xilinx had exactly the same philosophy, so bringing Xilinx and AMD together in 2022 did nothing but deepen that commitment to open source. But Sarah, the point is, we're not about locking someone in with a proprietary walled-garden software stack. What

  7. 16:58–20:21

    Evolving GPU market

    1. MP

      we want is to win with the best solution. We're committed to open source, and we're committed to giving our customers choice. We expect to win by having the best solution, but we are not going to lock our customers in. We're going to win on merit, generation in and generation out.

    2. EG

      I guess one of the areas that I think is evolving very rapidly right now is sort of the clouds for AI compute. There's obviously the hyperscalers: Azure from Microsoft, AWS from Amazon, and GCP from Google. But there are also other players emerging: Baseten, Together, Modal, Replicate, et cetera. One could argue that they're providing differentiated services, in terms of tooling, API endpoints, et cetera, that the hyperscalers don't currently have, but also that, in part, they have access to GPUs during a GPU shortage, and that's driving part of their utilization. How do you think about that market as it evolves over the next three or four years, as GPUs perhaps become more accessible and the shortages or constraints fall away?

    3. MP

      Well, that's definitely happening. I mean, the supply constraint will go away, and we'll be part of that. We're ramping up and shipping our Instinct line as we speak, and it's going quite well, according to plan. But moreover, to answer your question, I think the way to think about it is that it's just breathtaking how rapidly the market's expanding. I said earlier that most of the applications that started generative AI with these LLMs have been largely cloud-based, and not just cloud-based but hyperscaler-based, because such a massive cluster is required, not just for the training but frankly for quite a bit of that generative AI LLM inferencing, which also runs on these massive clusters. But what's happening now is that application after application is taking off nonlinearly. What we're seeing is a proliferation as people understand how they can tailor their models, how they can fine-tune them, how they can have smaller models that don't have to answer any question you have or support any application you need: it might be just for your business and your area of exploration. That allows a tremendous variety in the size of compute and how you need to configure that cluster. So: a rapidly expanding market, application-specific configurations for your compute cluster, and it's moving even further, not just from these massive hyperscalers to what I'll call tier-two data centers, but it just keeps on going.
Because when you think about applications which are really bespoke, they can run on the edge, right on your factory floor, where you want very low latency and you put the inferencing right at the source of data creation, and right on end-user devices. We've added our AI inference accelerators right onto our PCs. We've been shipping them throughout all of 2023, and at CES this year we already announced (laughs) our next generation of AI-accelerated PCs. And then, of course, with our
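The move Mark describes, from one giant cloud model to smaller tailored models at the edge, is largely memory arithmetic: weight storage scales with parameter count times bytes per parameter. A back-of-the-envelope sketch (the 7B-parameter size is an illustrative assumption; activations and KV cache are extra):

```python
def weight_footprint_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage only; activations and KV cache are extra."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# Quantizing from fp32 down to int4 shrinks the same model ~8x,
# which is what makes laptop- and embedded-class inference plausible.
for fmt, b in (("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)):
    print(f"7B weights in {fmt}: {weight_footprint_gib(7, b):5.1f} GiB")
```

A model that needs ~26 GiB in fp32 fits in roughly 3–4 GiB at int4, which is the kind of budget a PC NPU or embedded accelerator can actually serve.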

  8. 20:21–24:11

    Constraints on GPU production

    1. MP

      Xilinx portfolio across embedded devices, we're getting a lot of pull from industry for bespoke inference in a plethora of embedded applications. So with that trend, we're going to see more of that: more tailored compute installations, attempting to service this ballooning demand.

    2. EG

      Yeah, that makes a lot of sense. I guess a subset of inference is going to push to the edge, and obviously we'll have things on-device, both on laptops and phones, where certain small models will be running. And then it seems like there may be some ongoing constraints for larger models or larger data centers, at least in the short run. What are the main drivers of the constraints on the GPU supply side? I've heard things about packaging, I've heard things about TSMC capacity, a mix of potential drivers. Some people say the next constraint after that is whether you have enough power in the data centers to actually run these. I just don't know what's real in all of this (laughs). So I'm a little curious: what are the constraints, and how do we think about when supply and demand come more into balance?

    3. MP

      You know, supply and demand is frankly something any chip manufacturer has to manage; you have to secure your supply. Look at the pandemic: we had a tremendous run on our devices that stretched our supply chain, because demand for PCs went way up with people working from home, and demand for our x86 servers went way up. So we were in scramble mode (laughs) during the pandemic, and we did very well. We had shortages of substrates, and we secured more substrate manufacturing capacity. We worked closely with our primary wafer foundry supplier, TSMC; we have such a deep partnership with them, and have for decades, that if we get out ahead of it and understand the signals, we're generally able to meet the supply, or if there's a shortage, it's generally well contained. What's happening with AI is, yes, it's clear we're seeing this massive increase in demand, and the fabs are responding. And you shouldn't think of it just as the wafer fabs; you're absolutely right, it's also the packaging. We and our GPU competitor both use advanced packaging. I mean, I'll show you; I don't know if it'll come across on camera, but this is our MI300. What you see is a whole set of chiplets: smaller chips with either a CPU function, or I/O and a memory controller. For the version that focuses on high-performance compute, we literally drop our CPU chiplets right into that same integration, with all the high-bandwidth memory around it to feed those engines. And those are connected laterally.
And on the MI300 we connect those devices vertically as well. So it's a complex supply chain, but it's one we are very, very good at. We're a fabless company; we've been fabless for coming on 18 years now, so we've got it down. Hats off to the AMD supply chain team. And I think, overall, as an industry, you'll hear that we're generally going to move beyond those types of supply constraints. Now, you mentioned power. This, I think, is ultimately going to be a key constraint, and you see all the major operators looking for sources of power. And for us, as

  9. 24:11–27:57

    Innovations in chip technology

    1. MP

      a developer of the engines which are consuming that power, it brings tremendous focus on energy efficiency that we can drive into each generation of our design. And we are committed to that; it's certainly a very top priority.

    2. SG

      One thing you said before, Mark, is that you were actually excited about the innovation at the end of Moore's Law, and that being a reason you wanted to go to AMD. What directions of innovation should we expect investment in? I don't know if it's too deep to ask you to give us a layman's understanding of, say, 3D stacking, but I think it's really interesting to think about at a time when it's not obvious where to go.

    3. MP

      Well, no, Sarah, it's a great question. The reason I was so attracted to AMD is, one, it had a history of being a disruptor in the industry, and I certainly felt very strongly that AMD could disrupt with a very strong CPU and GPU, but more importantly by putting the pieces together. The idea of chiplets was just coming together; there was early exploration of that around that time. And the engineering team here at AMD, we were able to get the team and the key leadership rallied around it, and we drove that innovation. The reason it's so important is that when Moore's Law slows down, the easy way to think about it is this: it used to be that the chip technology itself, the foundry, going from one generation to the next, did most of the heavy lifting. You could just bank on that new semiconductor technology node shrinking your devices, giving you more performance, with less power and at the same cost. That's what Moore's Law was about. With Moore's Law slowing, you still get those device improvements, but it costs more, your power isn't coming down as much as it used to, and while you're still getting that integration, still able to pack in more devices, it demands more innovation. It demands what I call holistic design. You're still going to rely on those new transistor devices and new foundry nodes, but it's about how you use heterogeneous computing, meaning bringing the right compute engine to the right application: a CPU, a GPU, or a dedicated engine, like the super-low-power AI acceleration we have in our PC and embedded devices.
So it's about getting tailored engines for the right application, and leveraging chiplets so you can combine them, putting each of those functions on the best technology node for it. And frankly, holistic design means you've got to keep going right up through the packaging: how you package it together, how you interconnect it, and how you think about the software stack. The optimization has to be the full circle, from transistor design all the way up through the integration of your computing devices, equally with a view of the software stack and applications. What I'm thrilled about, along with all the engineers I work with at AMD, is that we have that opportunity. We have the building blocks, and we are built on collaboration. It's just such a part of our culture that we don't need to develop the entire system. We don't need to be the ones developing the application stack and the end applications.
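Mark's point about the foundry no longer doing "most of the heavy lifting" can be felt with a toy compound-growth calculation: stretch the doubling cadence and a decade of scaling shrinks dramatically (the cadences below are illustrative assumptions, not process-roadmap data):

```python
def transistor_growth(years: float, doubling_period_years: float) -> float:
    """Multiplier on transistor count after `years` at a given doubling cadence."""
    return 2.0 ** (years / doubling_period_years)

# Classic 2-year doubling vs. a slowed 3-year cadence, over one decade:
print(f"2-year cadence: {transistor_growth(10, 2):6.1f}x")  # 32.0x
print(f"3-year cadence: {transistor_growth(10, 3):6.1f}x")  # ~10.1x
```

A 3x gap from cadence alone over one decade is the shortfall that chiplets, packaging, and heterogeneous engines have to make up through design rather than process.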

  10. 27:57–30:18

    Chip supply chain

    1. MP

      What we do is partner incredibly deeply and ensure that the solution is optimized end to end.

    2. SG

      I think everybody is suddenly very interested in the chip industry from a strategic perspective as well. Everybody's thinking more about the supply chain, from the TSMC near-monopoly to the idea of fab security in an increasingly complex geopolitical environment. How does AMD prepare for or think about these issues?

    3. MP

      Well, you know, you have to think about these things. We are very supportive of working with the US government and other governments across the world, which have exactly that question. Our country now runs on chip design that powers such essential systems that it becomes a matter of national security to make sure there will be continuity of supply. So we build that into our strategy, and we build it in with our partners: we've been supportive of fab expansion. You see TSMC building fabs in Arizona, where we're partnering with them. You see Samsung building fabs in Texas. And it's not just in the US; they're also expanding global facilities in Europe and other parts of Asia. It goes beyond the foundry, too; it's the same thing with packaging. As you put those chips onto carriers and interconnect them, you need that ecosystem to have geographic diversity as well. So the way we think about it is that it's a matter of importance for everybody to know there will be geographic diversity, and we are heavily engaged. Actually, I'm quite pleased with the progress we're making. It doesn't happen overnight; that's the difference between chip design and software. With software, you can come up with a new idea and get that product out very quickly: get that MVP designed, get it out there, and it can go viral. But it takes years of preparation to expand the supply chain. The whole semiconductor industry was built up historically as a global industry, and we'll create

  11. 30:18–35:42

    Future of innovative hardware products

    1. MP

      geographic pockets of expertise. That's how we got to where we are today. But when you have the more volatile macro environment we're facing today, with political tensions and economic tensions, it's just imperative that we spread out that manufacturing capability, and it's well underway.

    2. EG

      I guess one of the other things that's been happening a lot recently... you've been involved with some of the most interesting and exciting new consumer hardware platforms, like the iPhone and iPad, and obviously AMD now powers many interesting types of devices and applications. What's your point of view on the new hardware things people are building today? There's the Vision Pro; there's Rabbit, which is sort of an AI-first device; there's Humane; there's Figure. It seems like there's suddenly an explosion of new hardware devices, and I'm curious to get your perspective: what tends to predict success for those types of products, what tends to predict failure, and how should we think about this whole suite of new things and devices coming our way?

    3. MP

      Well, that's a great question. I'll start with a technological point of view. I'm proud of the fact that chip design is part of the reason you're seeing all these different types of applications, because you're getting more and more compute capability that has shrunk down and draws such low power that you can see more and more of these devices with simply incredible computing and audiovisual capabilities. You look at Meta Quest and Vision Pro and things like that; this didn't happen overnight. The earlier versions were simply too heavy, too big, without enough computing oomph, because if the lag between seeing a photon on the screen of your head-mounted device and actually being able to process it is too high, you get physically ill wearing it while trying to watch a movie or play a game. So one, I'm very proud of the technology advances we've been able to make as an industry, and we're certainly proud of the aspects we drive from AMD. But the broader question you've asked is, "Well, how do you know what's going to be successful?" The technology is an enabler, but if there's one thing I learned at Apple, it's that the devices that succeed really serve a need. They give you a capability that you love. It's not just, "Oh, it's incremental; I can do this a little better than something I did before." It's got to be something you love, and that creates a new category. It's enabled by technology, but it is the product itself that has to really excite you and give you new capabilities. I will mention one thing.
I mentioned the AI enablement in PCs. I think that's almost going to make PCs a new category, because of the kinds of applications you're going to be able to run with super-high-performance yet low-power inferencing. Imagine right now that I don't speak English at all and I'm watching this podcast. Say it's broadcast live, and I click my live-translation button; I could just have it translated into my spoken language with no perceptible delay. And that's just one of a myriad of new applications that will be enabled.
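The live-translation flow described above can be sketched as a three-stage pipeline: transcribe the audio, translate the text, then synthesize speech, with every stage running locally so there is no network round trip to add lag. This is a minimal illustrative sketch; the model classes are trivial stand-ins, not a real on-device NPU SDK.

```python
# Hypothetical stand-ins for small on-device models. A real deployment would
# load quantized ASR/MT/TTS models onto the PC's AI accelerator; here each
# stage is stubbed out just to show the shape of the pipeline.

class ASR:
    def transcribe(self, audio: bytes) -> str:
        return "hello world"          # stand-in for speech recognition

class MT:
    def translate(self, text: str, lang: str) -> str:
        return f"[{lang}] {text}"     # stand-in for machine translation

class TTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode()          # stand-in for speech synthesis

def live_translate(chunk: bytes, asr=ASR(), mt=MT(), tts=TTS(), lang: str = "es") -> bytes:
    # Each stage runs on-device, so the only delay is model compute time,
    # not a round trip to a data center.
    return tts.synthesize(mt.translate(asr.transcribe(chunk), lang))

print(live_translate(b"raw-audio-chunk"))
```

The point of the structure is that per-chunk latency stays bounded by local compute, which is what makes "no perceptible delay" plausible on a laptop-class NPU.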

    4. SG

      Yeah, I think it's a really interesting time, because for many years so much compute load was moving to servers, and AMD benefited from some of this, right? You're also in the data center; there was the era of cloud, the era of all these complex consumer social applications. In the new era of trying to create experiences, all these new application companies are fighting latency as a primary consideration, because you have the network, the models are slow, you're trying to chain models, and you have things you want to do on-device once again. And I just think that hasn't been a real design consideration for a while.

    5. MP

      Sarah, I agree with you, and I think it's one of the next set of challenges: really tackling the idea of not just enabling high-performance AI applications in the cloud, on the edge, and in end-user devices, but thinking about how they work together synergistically, writing applications where you don't have that latency, that dependency on a lag in computing. Where you can, just run it in the cloud; that's going to be the most efficient, because you're optimizing this massive data center with the most efficient computing. But write the algorithm such that where you do have that need for super-low latency, where you need an instant response, those aspects of the algorithm run at the edge, or in fact on your end-user device. And often when you need

  12. 35:42–39:03

    What’s next for AMD

    1. MP

      to react quickly, it just has to be that way. I mean, do you want to be in a vehicle being driven with a high degree of autonomy that suddenly loses its signal back to the cloud and just stops because it says, "I don't have a signal"? You wouldn't stand for that.
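The cloud/edge partitioning Mark describes can be sketched as a simple placement rule: if the network round trip alone would blow a task's latency budget, the task must run on the device; otherwise it can go to the cloud, where the large, efficient accelerators live. The task names and thresholds below are illustrative assumptions, not AMD's actual scheduling logic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_budget_ms: float   # hard deadline for a response
    network_rtt_ms: float      # measured round trip to the cloud endpoint

def place(task: Task) -> str:
    """Run locally when the cloud round trip alone would exceed the budget;
    otherwise prefer the cloud for its efficiency at scale."""
    if task.network_rtt_ms >= task.latency_budget_ms:
        return "device"
    return "cloud"

# Illustrative workloads: a safety-critical control loop must stay local,
# while a slow batch job tolerates the round trip.
tasks = [
    Task("brake-decision", latency_budget_ms=10, network_rtt_ms=40),
    Task("live-translation", latency_budget_ms=200, network_rtt_ms=40),
    Task("batch-summarize", latency_budget_ms=5000, network_rtt_ms=40),
]
for t in tasks:
    print(t.name, "->", place(t))
```

A real system would also weigh device battery, model size, and privacy, but the latency-budget test captures the core of why the autonomous-driving example cannot depend on a signal to the cloud.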

    2. SG

      So our audience is lots of engineers, founders, tech executives, and consumers too. What do you want people to know about what AMD is focused on in 2024?

    3. MP

      This is a huge year for us, because we've spent so many years developing our hardware and software capabilities for AI. We've just completed AI-enabling our entire portfolio: cloud, edge, our PCs, our embedded devices, our gaming devices. We're enabling our gaming devices to upscale using AI. And 2024 is really a huge deployment year for us. The bedrock is there, the capability is there, and I talked about all the partners we're working with. I think we're often unknown in the AI space; everyone knows our competitor. But we don't just want to be known in the AI space; based on the results, the capabilities, and the value we provide, we want to be known over the course of 2024 as the company that really enabled and brought AI across that breadth of applications. Yes, in the cloud, in massive LLM training and inference for generative AI, but equally across the entire compute space. And I think this is also the year that expanded portfolio of applications comes to life. I look at what Microsoft is talking about in terms of the enablement they're doing of capabilities, cloud to client, and it's incredibly exciting. Many, many ISVs I've talked to are doing the same thing. And frankly, Sarah, they're addressing the very question you asked: how do I write my application so I give you the best experience, tapping both the cloud and the device in your hand or in your laptop as you're running it? So it will be a transformational year, and we're so excited at AMD to be right in the middle of it.

    4. SG

      Awesome. Looking forward to the year ahead and seeing great things. Thank you so much for joining us.

    5. MP

      Thank you both. Like I said, you guys have done a wonderful job here with No Priors, and I'm very happy and appreciative that you invited us on. I loved the time with you. It's a real pleasure.

    6. SG

      Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 39:03


Transcript of episode EtqTnLoiXUo
