Aakash Gupta
AI Agents for PMs in 69 Minutes — Masterclass with IBM VP
EVERY SPOKEN WORD
60 min read · 12,356 words
- 0:00 – 2:39
Intro
- AGAakash Gupta
Can you break down for us what is an AI agent? Because we've all experienced ChatGPT, but what makes an agent so special?
- ARArmand Ruiz
Well, for me, agents is really delivering on the promise of AI. Now we got into this chatbot era, but, like, agents, they really deliver the wave of automation.
- AGAakash Gupta
You have this tremendous handwritten drawing demonstrating what are agents. So can you walk us through the steps?
- ARArmand Ruiz
Yeah, four simple steps. The first one is thinking. Second step, the planning, and then the third component is action. Fourth step is reflection.
- AGAakash Gupta
Armand Ruiz is the VP of AI Platform at IBM, and he has amassed over 200,000 LinkedIn followers in just two years for his takes on AI. So you've been in AI for sixteen years. What have been the biggest open source releases over time?
- ARArmand Ruiz
LLMs. I think Mistral did, uh, massive things when they came into the market. They were also, I think, the first to provide a mixture of experts. There are hundreds of thousands of open source models.
- AGAakash Gupta
So let's get to IBM. How is IBM gonna make big waves in the AI space?
- ARArmand Ruiz
One of the things I'm very bullish is about providing customers flexibility to deploy AI anywhere and to tap into any AI engine they want.
- AGAakash Gupta
It's about jumping in, using the tools. What's, like, a good roadmap if you had to give somebody, if they're going from zero to one to ramp up on all these tools? Like which tools should they try first, in what order?
- ARArmand Ruiz
I think first-
- AGAakash Gupta
Really quickly, I think a crazy stat is that more than 50% of you listening are not subscribed. If you can subscribe on YouTube, follow on Apple or Spotify podcasts, my commitment to you is that we'll continue to make this content better and better. And now on to today's episode. Armand Ruiz is the VP of AI Platform at IBM and has amassed over 200,000 followers on LinkedIn in less than two years for his takes on AI. In today's episode, we're gonna break down everything you need to know about AI agents and open source AI. We also cover his path from intern to VP in less than fourteen years and his takes on the future of the product management role. Armand, welcome to the podcast.
- ARArmand Ruiz
So happy to be here. Finally, we make it.
- AGAakash Gupta
Yeah, I think we both were talking off-air that we mutually have been reading each other's work on LinkedIn.
- ARArmand Ruiz
Absolutely.
- AGAakash Gupta
So it's really exciting to chat. I think it has this cool effect, which I can see as a reader, which is I almost feel like I understand how you think. [laughs]
- ARArmand Ruiz
Same. Same here. I've been following your journey, your newsletter, uh, listening to your podcast. So yeah, very impressive work.
- AGAakash Gupta
Thank you. Likewise. Daily LinkedIn posting-
- ARArmand Ruiz
Daily
- AGAakash Gupta
... for you for two years, and I think this thread you've been on probably for almost the last year, so a lot earlier than other people, is AI
- 2:39 – 4:40
What Makes AI Agents Special
- AGAakash Gupta
agents.
- ARArmand Ruiz
Mm-hmm.
- AGAakash Gupta
So can you break down for us what is an AI agent? Because we've all experienced ChatGPT, but what makes an agent so special?
- ARArmand Ruiz
Well, for me, agents is really delivering on the, on the promise of AI. So, uh, we've been through this journey where I- I've been working in AI for fourteen years now, and at first it was just predictive analytics, doing predictions, giving, uh, just, uh, rough numbers for forecasting and, and things like that. Now we got into, into this chatbot era, but, like, agents, they really deliver, uh, the wave of automation that is gonna unlock, uh, everyone, uh, and people and businesses to g- generate way more output. So that's why I'm extremely bullish and very excited about it. And, um, yeah, I've been talking about agents from the very, very beginning, and there were already some early projects showing the, the, the potential, like BabyAGI or AutoGPT. Uh, those were amazing. And, and then, yeah, the whole world now is, uh, prioritizing agents.
- AGAakash Gupta
Yeah. Those are, I think, almost 2023 news.
- ARArmand Ruiz
Right.
- AGAakash Gupta
It took two years for the world to really catch up.
- ARArmand Ruiz
Yeah. Yeah, absolutely. Absolutely. So, um, uh, part of my job right now at IBM is to lead AI platform, so really building the, the blocks, building blocks for enterprises to build securely AI agents and embed them into different business functions. Uh, I just came yesterday night from meeting a CIO. I had met another CIO last week, so I'm meeting some of the biggest customers from the biggest brands. They all have AI as their number one priority in their agenda, and, and agents is one of their core components. So there are a lot of different factors on how they empower employees to experiment with the technology, but then at the same time, how they, uh, can take it into production in a very safe and secure way. Um, and there's everything in between in different levels of, uh, risk and innovation appetite that they have.
- AGAakash Gupta
Mm. So as someone who's educating people about AI agents, when I saw your handwritten drawing, I was just
- 4:40 – 7:14
The Four Steps of AI Agents
- AGAakash Gupta
like, "That is a piece of art."
- ARArmand Ruiz
[chuckles]
- AGAakash Gupta
That is something that really helps people understand AI agents. So can you walk us through this? Like what are the building blocks of an AI agent?
- ARArmand Ruiz
Yeah. So, uh, the first phase is, uh, thinking. Uh, we've been in this world of, uh, uh, LLMs, and we've seen LLMs at first that were just spitting text, and now we see that step of, like, thinking. You know? Thinking is, uh, is number one step. That's why we hear about LLMs being very good at reasoning. So, uh, and, and, um, also we see-- we hear Jensen saying, "Hey, with agentic systems and new LLMs, we're gonna use more tokens, more inference," because that reasoning step takes extra compute but gives, uh, gives you, like, the kind of like chain of thought process that we used to do manually and now is built in, into, into the LLM. And to go to step two, which is planning. So you ask for a, for a task, and, and, uh, the LLM is gonna break down that task into subtasks. And for each of them, uh, we'll go and execute, and in some cases we'll challenge the output from the previous, uh, from the previous step. But it's gonna be able to create, uh, multiple, um, uh, subtasks and goals, and then it's gonna go into, into act, which is, uh, step number three. And act is maybe one of the most fascinating steps because, uh, it's gonna allow you to, um, tap into execution of actions. You know? So if you need to, uh, input something to a CRM or send an email all the way to sending, not just to write the copy or, um, whatever action it can be in, in, in, for example, in a system like Workday can, uh, go and, and, uh, interact with information for, for a specific employee. There are so many different things that you can do in the act phase that is being opened up by, uh, protocols like MCP. And then reflection. Uh, that's the, the step number four. Reflection is really, uh, what is gonna make agents really good, because maybe at first they're a little bit raw, but then with human input, uh, they will iterate and become better and better and better over time.
Uh, so that reflect, uh, step comes with, like, some technical implementations that you need to do, but you will be able to tap into all the past history of interactions and learn from it, and feed it back into the agent so next time it executes, it does it better.
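The four steps Armand describes can be sketched as a toy loop. This is a hedged illustration, not IBM's implementation or any particular framework; every function here is a stand-in for a real LLM or tool call.

```python
# Toy sketch of the four-step agent loop: think, plan, act, reflect.
# All components are stubs, purely to show how the phases hand off.

def think(task: str) -> str:
    # Reasoning step: restate the goal (a real agent would call an LLM here).
    return f"Goal: {task}"

def plan(goal: str) -> list[str]:
    # Planning step: break the goal into subtasks.
    return [f"{goal} / subtask {i}" for i in range(1, 3)]

def act(subtask: str) -> str:
    # Action step: execute a tool call (CRM update, email send, MCP tool, ...).
    return f"done: {subtask}"

def reflect(results: list[str], memory: list[str]) -> list[str]:
    # Reflection step: store outcomes so the next run can build on them.
    memory.extend(results)
    return memory

def run_agent(task: str, memory: list[str]) -> list[str]:
    goal = think(task)
    results = [act(s) for s in plan(goal)]
    return reflect(results, memory)

memory: list[str] = []
run_agent("update the CRM record", memory)  # memory now holds two completed subtasks
```

Each pass through `run_agent` appends to `memory`, which is the hook the reflection step uses to make the next execution better.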
- AGAakash Gupta
Mm. So there's so many different terms and frameworks out there that people have heard about AI agents. What are the
- 7:14 – 12:59
AI Agent Development Frameworks
- AGAakash Gupta
frameworks they should understand, and how do they fit into these steps?
- ARArmand Ruiz
Y- frameworks, uh, you mean developing, development frameworks?
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
Yeah. I think I would like to, um, classify them into two categories. There is, like, the coding, uh, frameworks, and these coding frameworks are becoming simpler and simpler, but you still need to know, in most of the cases, uh, Python. Um, but you have, uh, frameworks like LangGraph or CrewAI or LlamaIndex or AutoGen. Um, those are excellent frameworks, open source, uh, widely popular, and, and, uh, with a lot of information, documentation, and, and courses online. You can go and, and try them out. But then you have, on the flip side, you have, like, um, low-code, no-code tools. One we, we have at IBM that is very popular is called Langflow. Um, I saw the new announcement from Lindy. There's, um, there is N-A8, N-A8-
- AGAakash Gupta
n8n
- ARArmand Ruiz
... yeah, uh, which is also g- getting, uh, very popular. Uh, Stack AI, uh, from a fellow Spaniard here in, in San Francisco as well, also taking off. Flowise. There are a lot of, uh, tools that help you build these agents, uh, in a, in a very simple way. Still you need to understand the concepts-
- AGAakash Gupta
Mm
- ARArmand Ruiz
... but it helps a lot with, uh, development.
- AGAakash Gupta
Okay. So if you're not a very technical person, you can use some of these no-code tools, Lindy, n8n, Make.com, Zapier, you name it, they're all becoming huge. Or if you're trying to develop a more robust internal system, you're gonna work with developers to build on top of a CrewAI or a LangGraph. Is that right?
- ARArmand Ruiz
That's, that's right. I think those, um, programming frameworks, uh, give you, um, needed control and flexibility that for very complex agentic implementations you, you still need. Um, and, and those are, those are, um, evolving quickly, and the, I think also the exciting piece of that, those are open source projects. You can go to the GitHub. Uh, I... Something I do, uh, sometimes is I go to the repos, I see the, the, the PRs and the issues and the conversation in the repo itself and what people are asking, and, and, uh, anyone can contribute to those frameworks and make them even better. So there is, there is a lot of fast innovation happening at that space because of the power of the community and the ecosystem.
- AGAakash Gupta
Mm. You just mentioned building things, and that's when I really figured out, you know, what is RAG versus what is fine-tuning versus what are the other elements of context engineering. But for someone who hasn't gone through that, what is RAG useful for?
- ARArmand Ruiz
So RAG is useful to give additional context to an LLM. So if we, uh, step back for a second, uh, LLMs are trained with data, uh, at a point in time, and, uh, obviously for most use cases, uh, you need to, uh, inject new, updated data, um, in order to get the output you're looking for. In order to do that, the, the most popular technique is called RAG. You can use fine-tuning, but fine-tuning is not really to, to inject new, updated data that is changing all the time. Actually, one of my most viral posts is a guide on when you should be doing fine-tuning versus RAG. Um, but, uh, RAG is basically, uh, great in order to connect directly to a, to a knowledge base, to a database, and, and, um, it's a space that is evolving extremely quickly. Uh, my first year, uh, after the release of ChatGPT, uh, 90% of the use cases we were doing for enterprises were RAG use cases, um, because it's one of the most powerful, um, methodologies. You can tap into all this, uh, structured data, but also unstructured data, and, and just feed it directly into an LLM. So that's, I, I think it's a goldmine for most traditional companies that are sitting on, on a lot of valuable data.
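The retrieval idea behind RAG fits in a few lines. This is a deliberately toy sketch: the bag-of-words cosine score stands in for a real embedding model, and the document list stands in for a vector database; the knowledge-base contents are invented.

```python
# Minimal RAG sketch: retrieve the most relevant snippet from a small
# knowledge base and inject it into the prompt as fresh context.
from collections import Counter
import math

DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "The API rate limit is 100 requests per minute.",
]

def score(query: str, doc: str) -> float:
    # Cosine similarity over word counts (toy stand-in for embeddings).
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank the knowledge base against the query and keep the top k hits.
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Augment the prompt with retrieved context before it reaches the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How fast are refunds processed?")
```

In a production pipeline the `score`/`retrieve` pair is replaced by an embedding model plus a vector database, but the shape of the flow stays the same: retrieve, then augment, then generate.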
- AGAakash Gupta
Wow, 90%. So tell me a little bit more. How are enterprises using RAG systems? What are employees using RAG for? How does that help them build a better agent?
- ARArmand Ruiz
Yeah. Uh, think of RAG as one of the core components of a, of an agentic system. You can use RAG just simply on a chatbot, but if you're taking it into a, an agentic, uh, application, RAG is basically the component that is gonna, uh, let's say, uh, w- just stepping back, we have think, we have planning, and most likely in the planning phase you will have a step of fetching some data from somewhere.
- AGAakash Gupta
Yep.
- ARArmand Ruiz
Um, so that's, that's a RAG pipeline right there. You know? So, um, users and my customers, they are just looking at ways to, uh, tap into massive amounts of data instantly. Uh, we've been very big on enterprise search, but most of that enterprise search has always been at the metadata l- level, and this is taking it to a whole new level. This is, uh, allowing us to tap directly into the information on, on those, uh, documents and structured data. So you can go tap, uh, give me, like, the, the top use cases for, for a specific, uh, for my top 10% customers, and, and then you can export directly from whatever reports or documents you have and get that directly into, for example, a product manager, and then they can start making some assumptions on which feature they should be developing, for example, to go and accelerate development in certain areas. So, um, yeah... Uh, some, some say we're in the age of ideas, so with these new tools and new access to intelligence, tapping into, um, enterprise data... uh, my, my area is really on, on the enterprise side. Um, that's where we see an explosion of new use cases.
- AGAakash Gupta
Mm-hmm. Are
- 12:59 – 16:55
RAG Explained
- AGAakash Gupta
there particular technical frameworks or things people should know about when it comes to RAG, like different options to implement it?
- ARArmand Ruiz
Many, many different options, uh, and many different building blocks and ways you, you can do that. Um, at the end of the day, we are, we are building these, uh, pipelines that do a lot of different things, so, uh, seems like all the hype is on the LLMs. Uh, but then you need, you need good embeddings models. Uh, embedding models are those that are gonna convert text into vectors, and, and those need to be really good, and those need to be, uh... They can be good in different languages. They can be faster. They can be slower. Uh, it depends on your, on your application. You need vector databases. Uh, you need ways to do, like, f- uh, filtering, search, ranking, so all that. Uh, and we have folks, uh, killing it out there with, like, data engineering, uh, education-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... uh, like Zach, for example, um, because a lot of the problems in AI are s- are data engineering problems. They are connecting the LLMs to all these, uh, very complex data systems, and i- in order to do that at scale is very, very complex. And for that, you have, uh, a lot of different technologies. Uh, if you are more on the, uh, AI application layer, we have frameworks like LangChain that we did mention and, and so on, LlamaIndex. Uh, and if you are more on the heavy data side, then you, you, you have things like Spark or Airflow or things like that.
- AGAakash Gupta
Hmm. Okay, so that's where all those terms fit in. And then there's this concept of vision RAG. What does that help you do?
- ARArmand Ruiz
Yeah, so as we, as we mentioned, um, RAG extracts information, um, uh, adds, add- adds context into the LLMs, right? And a lot of that information, it's in the, it's in the, um, it's in unstructured documents, so, uh, very rich PDFs with, um, very complex tables or charts with a lot of valuable information. So vision RAG is taking the classic RAG that is more just based on text, and it's opening up to more multi- multimodal, uh, scenarios and, and is adding that component. Uh, there are some LLMs that are great for multimodality, um, nowadays. Uh, but then also you have, like, open source projects, one from my colleagues from IBM called Do- Docling, which is available on GitHub, is a free framework that you can go, uh, grab, and it's g- gonna be really good at, um, getting info from, like, um, Word documents, PDFs, PowerPoint, a set of different file formats, and extract all that, uh, information visually. And then you can fit it into a RAG pipeline.
- AGAakash Gupta
Hmm.
- ARArmand Ruiz
Uh, so that's what we call, um, that's what we call vi- vision RAG. Uh, it's also kind of, like, very popular and, and is opening up new, new use cases.
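One way to picture a vision-RAG ingestion step: route each document element through a type-specific extractor before it reaches the text index. The extractors below are stubs standing in for a multimodal model or a converter like Docling, and the element schema is invented purely for illustration.

```python
# Hedged sketch of vision-RAG ingestion: text passes through as-is, while
# charts and tables get converted into searchable text representations.

def extract_text(el: dict) -> str:
    return el["content"]

def describe_chart(el: dict) -> str:
    # A multimodal model would caption the chart image here; we use the
    # caption field as a stand-in.
    return f"[chart] {el['caption']}"

def flatten_table(el: dict) -> str:
    # Serialize rows so the table contents become searchable as text.
    return "; ".join(", ".join(row) for row in el["rows"])

EXTRACTORS = {"text": extract_text, "chart": describe_chart, "table": flatten_table}

def ingest(elements: list[dict]) -> list[str]:
    # Route each element to the extractor for its type.
    return [EXTRACTORS[el["type"]](el) for el in elements]

chunks = ingest([
    {"type": "text", "content": "Q3 revenue grew 12%."},
    {"type": "chart", "caption": "Revenue by quarter, 2023-2024"},
    {"type": "table", "rows": [["region", "growth"], ["EMEA", "8%"]]},
])
```

Once charts and tables are flattened into text chunks like these, they can be embedded and retrieved with the same pipeline as ordinary prose.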
- AGAakash Gupta
And I think vision RAG is really important for charts, right?
- ARArmand Ruiz
Yeah.
- AGAakash Gupta
Because there's so much rich data that lives in charts.
- ARArmand Ruiz
Yeah, yeah. If, if you're in some industries, uh, like for example, in healthcare, that you need to read, uh, charts that come from, from, um, very advanced equipment, or you're in finance, and you have a lot of charts for, like, the markets, right? Um, or tables as well. Tables, uh, many times come, they are, uh, exported from, from a spreadsheet and put together in a nice report on a PDF. Uh, uh, there is, there is so much you, uh... y- you need to do in order to understand what the chart is saying, what the table is saying, what are the conclusions. So, um, yeah, that's why vision RAG is becoming super critical, and, and, uh, there are a lot of different ways to, to build that. Uh, I think the right combination here is to, again, build the right pipeline, so with the right components, like Docling that I mentioned, and then have very good multimodal models that are able to also understand, um, like, um, images, yeah, as input for, for the prompt.
- 16:55 – 18:46
Ads
- AGAakash Gupta
[upbeat music] Today's episode is brought to you by the experimentation platform Kameleoon. Nine out of 10 companies that see themselves as industry leaders and expect to grow this year say experimentation is critical to their business, but most companies still fail at it. Why? Because most experiments require too much developer involvement. Kameleoon handles experimentation differently. It enables product and growth teams to create and test prototypes in minutes with prompt-based experimentation. You describe what you want, Kameleoon builds a variation of your webpage, lets you target a cohort of users, choose KPIs, and runs the experiment for you. Prompt-based experimentation makes what used to take days of developer time turn into minutes. Try prompt-based experimentation on your own web apps. Visit kameleoon.com/prompt to join the wait list. That's K-A-M-E-L-E-O-O-N.com/prompt. AI evals are one of the most important skills for PMs, and I know you know they matter. The question is, are you doing them right? Most teams are winging it with basic metrics and hoping for the best. Meanwhile, the teams that actually ship reliable AI, they've cracked the code on systematic evaluation. Today's episode is brought to you by the AI Evals for Engineers & PMs course by Hamel Husain and Shreya Shankar. This live Maven course will teach you the battle-tested frameworks from Hamel and Shreya, who are the engineers behind GitHub Copilot's evaluation system and 25-plus production AI implementations. Four weeks, live instruction. Next cohort starts July 21st. Start shipping AI that actually works. Enroll at maven.com with my code AG-PRODUCT-GROWTH for over $800 off. That's A-G-P-R-O-D-U-C-T-G-R-O-W-T-H.
- 18:46 – 26:48
Common RAG Mistakes
- AGAakash Gupta
Hmm. What do most teams get wrong implementing RAG systems?
- ARArmand Ruiz
Um, I think at the end of the day, like, a lot of the conversations that I have with customers are, um, frustrations on, on accuracy. I think the, in the ent- in the consumer space, a little, a little bit of lack of accuracy is acceptable. Uh, you can keep iterating and, and, uh, it's not like a big, uh, um, system is gonna go down and affect millions of customers. But when we are talking about, um, for example, putting, uh, a customer service, uh, chatbot that needs to connect to RAG, it needs to be very, very accurate. Like-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... 70% accuracy is not acceptable. Or we have use cases where the, the human is not interacting with a, with a system. It's like machine to machine, and then you need to have, like, the right, uh, humans in the middle. So you need to build very trustworthy, uh, systems, and what many people are, uh, getting frustrated about is the accuracy of the RAG because they are just applying some vanilla, uh, off-the-shelf, uh, templates and implementations. So you, you need to really build a strong practice to, to properly, uh, evaluate the outputs. And, and at the end of, of the day, it's really a, a data problem. So-
- AGAakash Gupta
Mm.
- ARArmand Ruiz
So yeah, you, you need to build that practice to properly evaluate and, and, and understand, uh, what is an acceptable business accuracy for the use case, and then, um, just keep iterating in the architecture, in the different, uh, configurations that you need to do in your pipeline in order to build that accurately. So that's, that's one. I think they are also, uh, underestimating the power of RAG, um, because, uh, RAG is providing, uh, like if Google was unbelievable to provide access to information for everyone, uh, at the end of the day, they were provid- they are providing like, uh, set of links, and then you need to go and find the information. RAG is giving that superpower to every single company to build that at scale, to tap into all the company's information, uh, for every single employee, you know? And, and there is, th- there is so much that can be done, uh, in that space. Um, but to, in order to do it right, um, it's, it's, it needs, um, very heavy, uh, engineering at this point.
- AGAakash Gupta
Hmm. So if you're thinking about evals, that's usually an area that people talk about just in the context of the final output. But it sounds like you're saying evals are really important in the RAG system itself.
- ARArmand Ruiz
Yeah, and, um, yeah, I mean, the, the evals, the RAG space and the eval space, um, keeps evolving super, super fast. And, uh, there are, there are, there are new techniques, new, um, new companies innovating in that space. Uh, y- I think evals, uh, in agentic workflows should be almost, um, put, uh, at, like, every single step if you're really serious about developing something, um, um, a critical system-
- AGAakash Gupta
Hmm
- ARArmand Ruiz
... you know? And then you need to, to evaluate, um, at different points before you put something, uh, into production. At the end of the day, evals is basically adding that kn- um, human expertise to validate the output of what the AI system is giving you, you know?
- AGAakash Gupta
Yep.
- ARArmand Ruiz
So if you have a system that has multiple steps and you are only checking the output at the, at the end of the spectrum, I think you're, you're missing something. I think in, like, classic software development, you will have ev- evaluation at different points. So-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... it doesn't change that much in that front. It's just more about the methodology on how you do it.
- AGAakash Gupta
Okay, and how do you do good evals for a RAG system?
- ARArmand Ruiz
There are, again, this is an area where there, there are a lot of papers talking about good techniques, and then it's pretty cool that, uh, the frameworks and the open source community are coming with projects to help, uh, to help customers or users to, to do that. At IBM, we have something called the Eval Studio that basically allows, um, either the developer or the business user to, uh, do, like, proper evaluation of the outputs. And, and there are different ways. There are ways that mix synthetic data with, like, human, uh, check-ins, with, like, having a data set that has the ground truth. There, there are different, different tools, and, um, yeah, we, we've been pushing one that is called Evaluation Studio, which is, uh, GUI-based because we also understand a lot of the use cases, the knowledge and the expertise is in the, in the SMEs and-
- AGAakash Gupta
Mm-hmm
- ARArmand Ruiz
... the business users, and they are, they know very well what is good and bad.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
And they need to be able to, um, uh, assess the outputs of an AI system.
- AGAakash Gupta
Hmm.
- ARArmand Ruiz
And, and this also, it's not a one-off that you just do once and then move on to the next agent. These systems need to continuously be checked and improved, and that's the, that's really the, the power.
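The continuous-checking idea can be sketched as a small eval harness: a ground-truth set, an accuracy score, and a pass/fail gate at the business threshold. The pipeline and data here are invented stand-ins, not IBM's Evaluation Studio or any real tool.

```python
# Toy eval harness: score a pipeline against ground truth and gate on a
# business accuracy threshold. Rerunning it on every change is the
# "continuous checking" loop described above.

GROUND_TRUTH = [
    ("2 + 2", "4"),
    ("capital of France", "Paris"),
    ("3 * 3", "9"),
]

def pipeline(query: str) -> str:
    # Stand-in for the real RAG/agent pipeline under test; note the
    # deliberate wrong answer for "3 * 3".
    answers = {"2 + 2": "4", "capital of France": "Paris", "3 * 3": "6"}
    return answers.get(query, "")

def evaluate(threshold: float = 0.9) -> tuple[float, bool]:
    # Compare pipeline outputs to ground truth and gate on the threshold.
    correct = sum(pipeline(q) == expected for q, expected in GROUND_TRUTH)
    accuracy = correct / len(GROUND_TRUTH)
    return accuracy, accuracy >= threshold

accuracy, passed = evaluate()
```

The same gate can be placed after each step of a multi-step agent, not just at the end, which is the point Armand makes about evaluating at different points in the workflow.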
- AGAakash Gupta
Yeah, it's like any internal tool you're gonna build. There's gonna be ongoing maintenance.
- ARArmand Ruiz
Absolutely.
- AGAakash Gupta
And with an AI agent, a lot of it is in the evals phase.
- ARArmand Ruiz
Yeah.
- AGAakash Gupta
I think that's a really interesting insight that you want to equip your SMEs to be able to help you with those evals. You don't wanna just be doing those in some engineering silo.
- ARArmand Ruiz
Yeah. That said, there, there needs to be some framework on how you do that. Um, when we're talking to companies like my set of customers that are, like, hundreds of thousands of employees, you need to put some best practices and frameworks on how you do that at scale in a, in a, in a company. Um, so I think that's a little bit the, the, um, the challenge in a lot of these companies. They see, they, they, everyone has a lot of ideas, and it's how they can experiment in a safe way. And then also, uh, a customer told me recently, they don't wanna, they don't wanna, um, spend, like, $20,000 in compute to get a benefit of, like, 100 bucks.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
You know? Uh, so this false illusion of, like, I'm using AI, I'm being more productive, but what's happening underneath is, like, uh, you're spending a lot on, on execution, either on the pipelines or on the AI compute. So th- th- there needs to be, like, frameworks. Some are talking about, like, AI hubs or an AI office that tracks all the, all those projects, and, uh, you wanna encourage innovation and use case creation, but then at the same time you need to have a, a, a way to, to assess those projects.
- 26:48 – 31:39
Managing Multiple AI Agents
- AGAakash Gupta
So it's almost like managing a team under you, and you need to figure out what are the right stages of human-in-the-loop or human approval so that it doesn't just go out and misrepresent our brand or something in marketing, but at the same time, it is reducing the work we have to do.
- ARArmand Ruiz
Yeah. There is this concept of orchestration that everyone has been, um, talking a, a lot about, uh, this year, um, which is this need of orchestrating agents. And a lot of our, our job is gonna be on the judgment of the output of, uh, some of those agents. Uh, again, I think we're far from that reality, um, three to five years, not because of the state of technology. We're here in Silicon Valley, and we see these AI-first, AI-native companies that are built from day zero with this agentic mindset. But then when you, when you talk to more traditional companies, there is a long journey in order to get there. Uh, but, uh, yeah, um, orchestration of agents and, um, being able to quickly iterate on them and, um, orchestrate them and, uh, check that the outputs they are generating are good, because ultimately it's gonna be the responsibility, uh, of, of the humans. Um, that's, that, that's a new, um, skill we all need to go and get used to and learn.
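A minimal sketch of the orchestration-plus-judgment pattern: an orchestrator fans tasks out to specialized agents, and a judge step routes weak outputs to human review. The agents and the judge here are toy stubs, not any real framework.

```python
# Toy orchestration sketch: dispatch tasks to agents, then judge outputs.
# One agent deliberately returns an empty result to show the review path.

def research_agent(task: str) -> str:
    return f"report on {task}"

def copy_agent(task: str) -> str:
    return ""  # simulates a bad/empty output that should not ship

AGENTS = {"research": research_agent, "copy": copy_agent}

def judge(output: str) -> bool:
    # Stand-in for a human-in-the-loop or LLM-as-judge quality check.
    return len(output) > 0

def orchestrate(tasks: list[tuple[str, str]]):
    approved, needs_review = [], []
    for agent_name, task in tasks:
        output = AGENTS[agent_name](task)
        bucket = approved if judge(output) else needs_review
        bucket.append((agent_name, task, output))
    return approved, needs_review

approved, needs_review = orchestrate([
    ("research", "competitor pricing"),
    ("copy", "launch email"),
])
```

The judgment step is where the human responsibility Armand mentions lives: approved work flows on, and everything else queues for a person.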
- AGAakash Gupta
How do people get better at orchestration?
- ARArmand Ruiz
Um, I think, I-- honestly, I don't have a very clear answer. Um, a lot of, um, companies, they are working on AI literacy, so learning. Um, so I think, uh, getting hands-on with the technology is really important. Um, if you look at the early days of the internet, you were on a, on a console, and then we moved into easier and easier interfaces, right? So, um, I'm sure the technology, it's being already democratized, so it's gonna be accessible for everyone. But then at the end of the day, it's good, good education content like the one you are creating and, and good courses, and then, um, yeah, targeted for specific functions is gonna be really critical.
- AGAakash Gupta
Mm-hmm. So if you're a product manager, what AI agents should you build first?
- ARArmand Ruiz
Um, I think-- Well, first of all, product management, I think, is also one of those functions that is changing. Uh, uh, I, I lead a team of product managers, and I think usually the ratio, the standard ratio that I've seen in the industry, which we don't always follow, is, like, um, a product manager for, uh, six to 10 developers. So you tend to have, like, product managers that are really focused on one specific area of your product. Um, so for example, in my, in my AI platform, I have a PM that is really focused on, on, uh, tuning techniques, another PM that is really focused on inference and serving models, and you have these PMs that are really focused on, on certain areas. I think with AI agents, we are, um, w- we can get into a different ratio. Instead of 1:6-10, maybe we can get one every, like, 20 or 30 developers.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
Um, because, um-- And, and, and these PMs, they might be able to cover multiple areas, uh, all at once because they will have, uh, they have agents that can do, like, competitive. You can have a, a, an agent that is doing competitive analysis. Uh, it's a very-- It's a crazy market. Like, I mean, the AI space is a crazy market. Every big vendor, small startup, YC, there is so much action-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... you cannot keep up, you know? So you can have a, an agent that is doing, uh, research, another one that is building out reports for competitive. That's-- Those competitive analyses then need to be polished, and the, uh, the salespeople need to be equipped in order to have a good conversation, uh, in front of a customer defending the, the, your product versus the competition. Uh, then you have a lot of, um-- You need to prioritize all the user feedback. So, um, w- with very powerful, uh, AI agents, you can match-- you can have an agent that can check your usage data from, uh, SaaS metrics directly with maybe user feedback that comes from social media, from other systems that you have to collect feedback, NPS systems. And, and then you can start, uh, gathering more inputs to prioritize better your roadmap. Um, and then when you prioritize the roadmap and you come with a new feature, uh, you need to write the PRD. The AI can do, like, 80, 90% of the work, and then, um, before even you validate and you prioritize the, the feature, uh, that's where we can get into more details, but you can even prototype it.
- AGAakash Gupta
Yep.
- ARArmand Ruiz
You know? And, and then work with a select group of users to get some, some feedback. So that's, I think, where we are going with, um, with product management.
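The usage-plus-feedback prioritization Armand describes could be sketched as a simple scorer. The weights, metrics, and feature names are all illustrative assumptions, not a real prioritization methodology.

```python
# Toy roadmap scorer: blend normalized usage data with feedback mentions
# (from NPS, social media, etc.) into a single score per feature.

USAGE = {"export": 0.9, "dark_mode": 0.2, "sso": 0.6}   # normalized SaaS usage
FEEDBACK = ["sso", "sso", "export", "dark_mode"]          # raw feedback mentions

def prioritize(usage: dict, feedback: list, w_usage=0.5, w_feedback=0.5) -> list:
    # Normalize mention counts so neither signal dominates by scale alone.
    max_mentions = max(feedback.count(f) for f in usage) or 1
    scores = {
        f: w_usage * usage[f] + w_feedback * feedback.count(f) / max_mentions
        for f in usage
    }
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

ranking = prioritize(USAGE, FEEDBACK)
```

An agent doing this for real would pull the usage numbers from product analytics and the mention counts from a feedback pipeline, but the blending step stays this simple.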
- AGAakash Gupta
And I think the final step too, right?
- 31:39 – 33:57
Ads
- AGAakash Gupta
Today's episode is brought to you by Vanta. As a founder, you're moving fast toward product market fit, your next round, or your first big enterprise deal. But with AI accelerating how quickly startups build and ship, security expectations are higher earlier than ever. Getting security and compliance right can unlock growth or stall it if you wait too long. With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infra, and customers evolve. Fast-growing startups like LangChain, Writer, and Cursor trusted Vanta to build a scalable foundation from the start. So go to vanta.com/aakash. That's V-A-N-T-A.com/A-A-K-A-S-H to save $1,000 and join over 10,000 ambitious companies already scaling with Vanta. Today's episode is brought to you by Amplitude. Replays of mobile user engagement are critical to building better products and experiences, but many session replay tools don't capture the full picture. Some tools take screenshots every second, leading to choppy replays and high storage costs from enormous capture sizes. Others use wireframes, but key moments go missing, creating gaps in your understanding. Neither approach gives you a truly mobile experience. Amplitude does things differently. Their mobile replays capture the full experience, every tap, every scroll, and every gesture with no lag and no performance hit. It's the most accurate way to understand mobile behavior. See the full story with Amplitude. Today's episode is brought to you by the AI PM Certification on Maven. Run by Miqdad Jaffer, who is a product leader at OpenAI, this is not your typical course. It's eight weeks of live cohort-based learning with a leader at one of the top companies in tech. OpenAI just doesn't stop shipping, and this is your chance to learn how. Run along with product faculty and Mo Ali, the course has a 4.9 rating with 133 reviews. 
Former students come from companies like OpenAI, Shopify, Stripe, Google, and Meta. The best part? Your company can probably cover the cost. So if you want to get $500 off, use my code AAKASH25 and head to maven.com/product-faculty. That's M-A-V-E-N.com/P-R-O-D-U-C-T-F-A-C-U-L-T-Y.
- 33:57 – 37:43
How AI Changes Product Management
- AGAakash Gupta
Once you release it into production, it can monitor if all of the sudden users are getting some corner case you didn't realize, and then two or three weeks later it can tell you, "Hey, here's what the statistically significant results were." So it's, like, across every step of the PM life cycle.
- ARArmand Ruiz
Uh, absolutely. Absolutely. It's, it's, um, it's, uh, one of the most exciting, um, functions because as I mentioned before, we are in the, in this wall of i- ideas, and I think PMs have the -- as part of the job description is to bring some of these ideas to life.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
You know? And if you, uh, you, you work in, in product management-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... uh, you know, like, it's been always, uh, kind of like a frustration to turn ideas into not features, but sometimes even validating the idea, right? You need to work with design, maybe get the mock up, have user feedback, and then work with engineering, and engineering is overwhelmed with, uh, production tickets on support and things that need to be fixed and new feature development. So, um, yeah, it's, it's -- that's why I think also, uh, PMs, all the PMs that I usually hire are really good technically, and now with AI, they will be able to take it to, like, three, four, uh, steps further, uh, by themselves.
- AGAakash Gupta
Mm. So you recently commented on this idea of writing first versus prototype first cultures. Talk to me a little bit more about what the future of the role looks like, the future of the PRD in this world.
- ARArmand Ruiz
Um, yeah. I'll tell you a story, a personal story about that, um, and I think that, that built, um, a lot of success in my career. Uh, that's more than 10 years ago, but, uh, I ju- I'm from Spain. I moved to the US. My English was very, very rough, and we had this kind of, like, big meeting with a lot of big executives, how to reimagine how the next machine learning platform is gonna be. And at that time, I, I still remember I had the meeting, uh, in two days, and I was really struggling to articulate all the ideas that I have. So I s- I said to myself, "I'm gonna build a prototype." And, and to my surprise, in that meeting, everyone was just talking and showing slides, and I was the only one that was showing. I was showing the product. You can touch it. You could s- uh, see it. It was not- nothing close to production. It was all kind of like, um, uh, fake, but giving the ideas and the art of the possible. And, and guess what? I, I, I got to lead the project. That became the, kind of like the default and the path forward. So I think that's what is happening right now. Before, I had to just get hands-on, start coding, and, and, and do a lot of the work. I could have done that now in maybe, like, three or, or four hours, you know?
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
So, um, now with all the tools, I think every single PM should have access to, to, um, vibe coding tools, uh, different, different options out there in the market, and to just kind of like skip ahead and show some of those ideas directly into working prototypes. And, and that also helps a lot with, with communication. I-- the teams I work as well, they are, um, they are worldwide. So there's also, um, language barriers. Uh, a lot of the work in big corporations is all about communication, and that communication is either in meetings or written or in GitHub, uh, repos, you know? So a lot of things are missed in translation, so, um, I'm big fan of showing and, and skipping a lot... And even if you write the most beautiful detailed PRD, still, uh, a lot of information is, is lost, uh, in translation, and just there is nothing that speaks better than just a working, working prototype.
- AGAakash Gupta
Mm-hmm. There
- 37:43 – 41:22
Problem Investigation vs Feature Factory
- AGAakash Gupta
is some worry out there that we're gonna start to get into a more feature factory solutions-focused world. We're not gonna heavily investigate the problem space if we just jump into AI prototypes. What's the right step, the right life cycle to make sure that you are investigating the problem space, but you're also taking advantage of this new prototyping technology?
- ARArmand Ruiz
Yeah, that's, that's a very valid concern. I, I spend a lot of time with customers. I think that's-- and a- again, that's also part of how I built my career. Like, my first two years, I was tr- I was traveling every single week to visit customers. I spent two years tra-traveling. I had no, no wife, no kids, so I was-
- AGAakash Gupta
[chuckles]
- ARArmand Ruiz
... free to meet a lot of customers all over the world, and it was, it was unbelievable. Like, not only network, but just going deep into what they were doing, trying to figure out the problems. And I didn't have LLMs, but I [chuckles] I, I kind of like started framing my own ideas, hypothesis, and checking what was going on in the market to build solutions. So I think you always need to start customer first, and, um, yeah, PMs, in my opinion, they need to spend a lot of time talking to, to customers and ga- going deep, not at a high level, but just really trying to understand what are, what are their, um, challenges, and then figure out how your product can solve.
- AGAakash Gupta
All right. So let's zoom back out of product management for a second and just talk about general tech workers. You've encouraged people to learn Python, get technical. Even you've asked leadership to get more technical in the AI era. Why is that so important?
- ARArmand Ruiz
Yeah. And, uh, I, I think everyone should have technical literacy in, in this day and age. Um, otherwise, I think you are completely gonna miss out on the opportunities of AI. Um, the w-- uh, one that articulates this very well is, uh, Aaron Levie, uh, the Box founder and CEO. He, he says you can-- you have two ways, two ways to approach AI, either as a cost savings tool, that's completely fine, or, or you, you can, uh, just go do w-way more with AI, you know? So in order to do way m-- and I think that's the right approach. I think that's how, uh, companies will, will grow, will expand. Work is gonna be more fun because we're gonna be able to accomplish, uh, new use cases and new, and new work. In order to do that, you need to understand the art of the possible of AI. And there is no, there is no document, white paper, LinkedIn post, video that is gonna teach you the art of the possible unless you actually try the technology. So, um, luckily, y- uh, I mean, if you learn how to code in Python or do the basics in Python, that's, that's completely cool. Luckily, the technology's getting democratized, so you can still touch the technology, uh, and, and not code, um, at all. But you need to understand the concepts. And, and, um, a lot of the, a lot of the l-leaders in the space, they have a lot of ideas on things they, they should be doing and they can do, and, and they need to understand how the technology, uh, bridges that gap. So yeah, I'm, I'm, um, I'm spending a lot of time learning myself. Uh, many people ask me how I'm so up-to-date or I write about this content, uh, so often. It's just-- I mean, number one is I'm obsessed with it, so it happen-- it comes to me naturally. So every time there is something new, I just jump and I try it out, um, in the, in the evenings usually. But then, um, and then I start to form my own opinions based on my, uh, professional experience.
- 41:22 – 43:30
Roadmap to Build AI Agents
- AGAakash Gupta
Mm-hmm. So it's about jumping in, using the tools. What's, like, a good roadmap if you had to give somebody? If they're going from zero to one to ramp up on all these tools, like, which tools should they try first, in what order?
- ARArmand Ruiz
Yeah. I think first just under- understanding the, the, the concepts, and then tools. Um, I think everyone should just develop one AI agent. Um, and, and there, there are a lot of different tools. You can do that with, uh, no-code flow builder. There are many out there. Um, I, I, I was trying, I was actually trying yesterday, uh, the new Lindy-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... AI. Very, very, very impressive. At IBM, we have a tool called Langflow as well, which is a low-code flow builder experience as well. Um, at the end of the day, if you see each of those tools, they still require you to understand the concept of AI, so I always recommend start with the concepts and understand what is an LLM, uh, understand what is reasoning, understand what is RAG and, and, and some of those things. And then, and then use any of those tools that give you, like, the building blocks and just think about one, one use case that you have, um, in your own personal, um, um, job or life, and then try to solve it, you know?
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
And, and try any of those tools. And then if you wanna go deeper and deeper, um, I think f- it depends on the, on the, on the role. If you are into leadership, I think there is a lot of education about, um, how you inject AI into an organization and change management and, and, and so on. If you are more on, on a practitioner, um, practice in, in, in different domains, uh, I think you will have a lot of, um, agentic solutions that can help you speed up your, your work. So there are a lot of different options.
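The "just develop one AI agent" advice above maps onto the four steps from the intro (thinking, planning, action, reflection). A minimal sketch, assuming a stubbed model call and an illustrative tool dictionary — none of these names come from a real framework:

```python
# Minimal sketch of the four-step agent loop: thinking, planning,
# action, reflection. The "llm" function is a stub standing in for
# a real model API call; tool names are purely illustrative.

def llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"response to: {prompt}"

def run_agent(goal: str, tools: dict, max_steps: int = 3) -> list:
    history = []
    for _ in range(max_steps):
        thought = llm(f"Think about the goal: {goal}")              # 1. thinking
        plan = llm(f"Plan the next step given: {thought}")          # 2. planning
        tool_name = next(iter(tools))                               # trivially pick a tool
        result = tools[tool_name](plan)                             # 3. action
        reflection = llm(f"Did '{result}' move us toward {goal}?")  # 4. reflection
        history.append((thought, plan, result, reflection))
        if "done" in result:                                        # stop when a tool reports done
            break
    return history

# Usage: a single fake "search" tool that immediately reports done.
steps = run_agent("find pricing data", {"search": lambda q: "done: found 3 results"})
```

A real agent would let the model choose among several tools and decide when to stop; this sketch only shows the loop shape.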
- AGAakash Gupta
Okay. So build with a no-code tool, then what's the next step? Do you go into, like, a Cursor or something like that? Do you learn to program? Where should people go after?
- ARArmand Ruiz
Yeah, like pro-programming. Uh, I think also vibe coding is, is a, uh, is a big one, um, because it-- I think you need to understand the different, the different levels. I, I, I don't
- 43:30 – 51:39
Can Open Source AI Win?
- ARArmand Ruiz
expect everyone right now to just create something and put it into production in an enterprise, uh, setup. I think that's, um, that, that needs to be, uh, a little bit somehow controlled, um, depending on, on, on data access and tool access. But yeah, start with, uh, developing some, some agents, um, try vibe coding, um- If you're, if you are, um, curious, try different things that are more advanced using Python, depending on your level of expertise there. Um, I mentioned earlier DeepLearning.AI, amazing short courses, and, uh, yeah, w- it's, it's interesting because you, you can-- If you want to, for example, hey, I heard about RAG. Every day I see it on my timeline every single time, uh, I log into LinkedIn or X, just do a quick course on RAG. You will really understand that it takes, like, three, four hours, and then you can understand, hey, uh, basically my entire organization can access all the information if we build these RAG pipelines really well.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
So maybe something we should invest. And you, you can do it in-house, you can do it through a vendor, you can have, like, a third party help you build those. Uh, but you need to understand those concepts because, again, people in the business need h- they have the ideas and the use cases in, in, in their heads, you know?
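The RAG pipeline idea discussed here can be sketched very simply: retrieve the most relevant chunks from an internal corpus and prepend them to the question. Real pipelines use embeddings and a vector store; plain word overlap is used below only to keep the example dependency-free, and the sample policy documents are made up for illustration:

```python
# Toy RAG sketch: score documents against a query, keep the top-k,
# and build an augmented prompt. Word overlap stands in for a real
# embedding-based retriever.

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Return the k highest-scoring documents."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list) -> str:
    """Prepend retrieved context to the question for the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative internal documents (entirely made up).
corpus = [
    "Vacation policy: employees get 20 days per year.",
    "Expense policy: submit receipts within 30 days.",
    "Office hours are 9 to 5 on weekdays.",
]
prompt = build_prompt("how many vacation days per year", corpus)
```

The point of the sketch is the shape of the pipeline: retrieval quality, not the model, decides whether the organization's information is actually accessible.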
- AGAakash Gupta
Mm-hmm. Mm. Let's shift focus to open source AI. IBM, and you in particular, have had a lot of focus on open source AI.
- ARArmand Ruiz
Mm-hmm.
- AGAakash Gupta
Can open source really win? It feels like it's always a cycle behind closed source.
- ARArmand Ruiz
Yeah, I think, um, you need to understand the enterprise context that I'm coming from, and I think in that enterprise context, open source, I would say, always wins.
- AGAakash Gupta
Oh.
- ARArmand Ruiz
Yeah. I think, um, it's-- If, if we are, um... Like, let's take the latest OpenAI model, right? Why everyone is so excited, especially in the enterprise. First, the license is unbelievable. It's an Apache 2 license, which is really good. Um, then it's a very good reasoning model. Like, the-- We, we, I think we've been lacking some very good reasoning models, uh, in the open source space. And then, and then you can just take them and deploy them anywhere. So you don't have to rely on a third-party API call where you most likely, most-- for most of my customers, they cannot just send a lot of, uh, confidential information there, or they cannot connect it with their own tools. So right now you can take that model, make it your own, deploy it anywhere in your own infrastructure. A lot of customers are still running on their own infrastructure. They are buying their, they are creating their own, um, AI factories with different, um, AI accelerator providers like NVIDIA, AMD, uh, so that you can deploy it on your own machine. So open source provides a lot of, a lot of control, um, for enterprises, which is a great thing. Then I think the pace of innovation of the community, even though sometimes it's a little bit slow at the beginning, at the end of the day, in the long term, it shows up. And I, I think we've been a little bit through a cycle. I think last year on-- we were-- w-we could see open source models getting closer and closer to closed source, and then I think this year it's been a little bit different. Anthropic, uh, Google with Gemini, and OpenAI with GPT-5, they, they show that they are still ahead.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
But you will see again the open source community running behind and, and, and pushing that forward. And lastly is developer ecosystems. Uh, when we were talking at the beginning on all these different frameworks that actually enable companies to develop applications, um, they are built on open source as well, and they are deployed on open source systems, uh, like Kubernetes and vLLM. vLLM is a, uh, is the, basically the, the engine to run models, you know? So yeah, I think on the, the-- It's not just the LLMs itself, it's the entire, um, AI ecosystem. PyTorch is another great example. Like, everyone is building on PyTorch-
- AGAakash Gupta
Mm
- ARArmand Ruiz
... which is also open source. So it's, it's-- I'm very passionate about open source, and I think in the long term it always wins.
- AGAakash Gupta
What's PyTorch, for people who don't know?
- ARArmand Ruiz
PyTorch is basically the, the framework that allows, uh, you to, uh, create very complex deep learning algorithm, algorithms and, and run them. Yeah.
- AGAakash Gupta
And just about all of the closed and open source foundation companies are building with PyTorch, right?
- ARArmand Ruiz
Pretty much all of them, and that's a project that came out of Meta.
- AGAakash Gupta
Mm.
- ARArmand Ruiz
Um, and, uh, yeah, it's, it's open governance and everyone is contributing to, to it, and it's being used by every single major AI lab in, in the market.
- AGAakash Gupta
So you've been in AI for 16 years. What have the-- been the biggest open source releases over time?
- ARArmand Ruiz
Um, it's been a wild, a wild journey, um, because w-when we talk about, uh, open source, a lot of the conversation right now is with LLMs. So, um, I think, um, i-if we, if we just focus for l- on open source LLMs for a moment, um, I think Mistral did massive things when they came into the market. They, they were also, I think, the first to provide a mixture-of-experts open source model. Um, also they did it, they did it in a very funny way with a torrent link. You had to find your way a little bit to go download it. And then, um, we had Llama building a fantastic ecosystem around, around open source models. Uh, IBM, we open source our models. I can speak to that. And then, uh, now OpenAI and others. So there is innovation in the model space. If you go check Hugging Face, which is kind of like the repository of all these open source models that Clem and the team is building, like, there are hundreds of thousands of open source models. Then there is data as well, also available on Hugging Face. There are a lot of datasets. Datasets for many different things. For pre-training, for post-training, for alignment. So these are also components. Um, then frameworks: PyTorch, massive. Uh, TensorFlow, uh, looked promising. Finally, um, PyTorch kind of took over. Um, um, there is always kind of like at the beginning you don't know which one is gonna win, and then you let the community and the ecosystem, uh, um, move that forward. And, and mo- in most of the cases they are very technical decisions that are very critical, or user simplicity. Um, and then a lot of the conversation as well is happening on, um, potential alternatives to CUDA, uh, that provides a lot of control for NVIDIA. So, uh, it's a, it's a very exciting, uh, ecosystem, and yeah, it is not stopping.
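The mixture-of-experts design mentioned for Mistral can be sketched in its simplest form: a gate scores each expert for an input and only the top-k experts actually run, so compute stays low while total capacity stays high. The experts and gate scores below are toy functions, not learned networks:

```python
# Toy mixture-of-experts routing: score experts, keep the top-k,
# and return a gate-weighted sum of only those experts' outputs.

def top_k_experts(gate_scores: dict, k: int = 2) -> list:
    """Keep only the k highest-scoring expert names."""
    return sorted(gate_scores, key=gate_scores.get, reverse=True)[:k]

def moe_forward(x: float, experts: dict, gate_scores: dict, k: int = 2) -> float:
    """Run only the chosen experts and combine them with normalized gate weights."""
    chosen = top_k_experts(gate_scores, k)
    total = sum(gate_scores[name] for name in chosen)
    return sum(gate_scores[name] / total * experts[name](x) for name in chosen)

# Illustrative experts and routing scores.
experts = {"a": lambda x: x + 1, "b": lambda x: x * 2, "c": lambda x: x - 5}
gate = {"a": 0.6, "b": 0.3, "c": 0.1}
y = moe_forward(10.0, experts, gate, k=2)  # only experts "a" and "b" run
```

In a real model the gate is itself a learned layer and the experts are feed-forward sub-networks, but the routing idea is the same: expert "c" never executes here.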
- AGAakash Gupta
And it's operating at every layer, really.
- ARArmand Ruiz
Every layer. In, in every layer of the AI stack, you have like three or four projects and new incumbents coming with new alternatives, and, and I think the beautiful thing about open source is let the best one win, you know?
- AGAakash Gupta
Yep. You briefly mentioned pre-training, post-training, and alignment. For people who don't understand that, those are the steps in model building. What, what's one layer deeper? What's happening in each of those?
- ARArmand Ruiz
Yeah, s- and, and for 99.999% of the people, they won't... They, they don't really have to touch that. Uh, this is really done by the m- frontier AI labs-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... where they train the models. And basically, pre-training is when you basically, uh, gather all that, uh, clean data set, and you, and you train; th- this is what you use to train your, your model, and then, uh, you have the post-training phase and the alignment to, to make sure it, it performs, uh, properly. So these are, uh, different data sets that you use for that. Uh, obviously we're running out of data, so then you have new methods to create synthetic data, high-quality synthetic data, which is, uh, synthetic data is data generated by AI algorithms-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... and then supervised by humans to make sure that it's high quality. And, and yeah, s- like all that process, uh, is what is used by the, all these frontier labs to train, um, to train all these magic LLMs that we're using.
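The synthetic-data idea described here — AI generates candidate examples, humans supervise for quality — can be sketched as a simple generate-then-filter pipeline. Both the generator and the review check below are stubs standing in for an LLM call and a human reviewer; all names are illustrative:

```python
# Sketch of a synthetic-data pipeline: generate candidate training
# examples, then keep only those that pass a quality review step.

def generate_candidates(topic: str, n: int = 5) -> list:
    """Stub generator; a real pipeline would call an LLM here."""
    return [f"Q: question {i} about {topic}? A: answer {i}." for i in range(n)]

def passes_review(example: str) -> bool:
    """Stand-in for human supervision: require a question, an answer, and some length."""
    return "Q:" in example and "A:" in example and len(example) > 20

def build_synthetic_dataset(topics: list) -> list:
    """Collect only the candidates that survive review."""
    dataset = []
    for topic in topics:
        for candidate in generate_candidates(topic):
            if passes_review(candidate):
                dataset.append(candidate)
    return dataset

data = build_synthetic_dataset(["contracts", "invoices"])
```

The structure is what matters: quality control sits between generation and the training set, which is the "supervised by humans" step Armand describes.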
- 51:39 – 59:32
IBM's AI Strategy
- AGAakash Gupta
So let's get to IBM. How is IBM gonna make big waves in the AI space?
- ARArmand Ruiz
Yeah. So, um, I joined IBM when we announced IBM Watson, you know, and it's been a while, uh, decade, uh, with a lot of lessons learn, learned and, and, uh, also things that got right, we got right and things that, that we got wrong. Um, right now, the, one of the things I'm very bullish is about providing customers, uh, flexibility to tap into, to deploy AI anywhere, and to tap into any AI, um, engine they want. So, so what, what does that mean? That means, um, um, our customers, uh, they, they sit on a lot of data, as I mentioned before. It's a goldmine of data. Uh, they need to execute the, that AI close to that data. Uh, cost per token is extremely important, you know? So, uh, I think we are one of the only providers that provide all this flexibility to deploy the AI very close to, in the infrastructure they, they want, whether it's a hyperscaler, whether it's on-prem, on a private cloud, uh, setup, or a combination of all of those. So that's number one. Uh, then we've been developing also, uh, our own AI, uh, models. It's a family of models called Granite. And then, uh, providing-- But our customers, they want everything. I mean, I, the customer I was yesterday, and it's a trend I see with every single customer, they have all the options.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
You know? So okay, so how you provide and govern access to all these, uh, AI engines, no matter where they run, uh, so you make sure you have some way to understand the, the overall cost, the access control and, and things like that. And then we provide a lot of, um, tooling on top to make sure, uh, we give productivity to developers. And, and at the end of the day, these, these systems are built for scale, massive scale, so we are working on different projects to, uh, help scale inference throughout multiple clusters in different environments. And, and, and then, um, a- an area that I've been responsible as well is, uh, the governance piece, and the governance piece is a, is a, a one that m- many people are thinking, uh, after the fact, and it should be thought before, especially in a, in an en- enterprise setup. So, uh, I'm sure you heard about a lot of the AI regulation that is coming to the market, and that regulation is being updated and is, uh, different in, uh, at sometimes by industry, by state, by country.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
And so you need to have like an inventory of use cases at different stages, and for those that are in production, they need to be compliant, uh, to a certain regulation. So, um, we, we have, um, very good tools in order to, to do that at scale.
- AGAakash Gupta
Why is the Granite model important?
- ARArmand Ruiz
I think the, the Granite m- model has two, I would say, maybe, uh, the research team will disagree with me, uh, there, there are more, but I think it has, we have two major components. One is, um, uh, cost per token. So these are very small models. Um, uh, uh, one of, one of the most popular ones is a two billion model that performs really, really well.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
So if you, if you see what OpenAI released, uh, last week, the smallest one is 20 billion.
- AGAakash Gupta
Okay.
- ARArmand Ruiz
Um, these are different kinds of models, like that, that model is, OpenAI model is very good at reasoning. Uh, but like for certain use cases, what we see is, uh, cost per token is extremely critical, and for some, uh, use cases, you don't need generic models that know how to do everything. In enterprise setups, you need models that do one thing and do it really, really well. So, um, these very small models are extremely good. They are very cheap, and they run in, in hardware, um, that in some cases is even commodity hardware. And then, um, the second thing-- I, I will add two more. The second thing is easy to customize, so we're talking about, uh, tuning or RAG or things like that. The larger the model, the more complex every single thing you're trying to do is. So if it's a very small model, uh, it's easier to tweak it, to tune it, to, to change the weights, and, and to, uh, embed it into, into different, um, uh, customization setups. And then the last one is, um, one that was talked about a lot at the beginning of this AI, uh, um, craziness that we have going on. It was about the copyright of the data. Uh, for most of the models that we use today, we have no idea which data was used to train. Um, all our data is actually even disclosed in our white paper. Uh, our legal teams, they went through it; it, uh, has the proper copyrights and, and so on. So, uh, and we provide that information very transparently to our customers. So th- that builds a level of comfort for some specific use cases in certain industries that has been, that's been really good.
- AGAakash Gupta
Mm. So the AI talent wars have gotten insane. People have heard about $1 billion for four years at Meta for some of the highest-paid AI researchers. Yet you're st- saying that AI talent may still be underpaid.
- ARArmand Ruiz
[chuckles]
- AGAakash Gupta
What's your take on this? Like, what's going on with these AI talent wars? Why is AI talent underpaid?
- ARArmand Ruiz
I, I got in trouble [laughs] for that, that post, but I think, um, I think th- th- there are two things here. One is, like, kind of like the ethical piece. It seems completely unethical that someone is making that absurd amount of money. But then you need to put things also in- into context, right? We're in a, a capital allocation market, and we're talking about talent that is very unique. Uh, I would say maybe there are, like, 200 of those, um, folks, uh, worldwide.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
And those folks are the ones that are making-- They are actually using all these CapEx being, uh, being spent by the biggest and strongest companies in the world. So when you have a, uh, uh, one of those companies spending billions on AI clusters, and they are even talking about building nuclear, uh, nu-nuclear plants to power those, those AI clusters, uh, and those clusters, th- those clusters are, uh, they are for, uh, training new models, um, tuning new models, serving new models. So you need the right talent, uh, to leverage that. And, and literally, like, one architecture decision can, can, um, use capacity on those clusters for weeks and months.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
Uh, so if you put the numbers into context, that's massive. So that's number one. Like, those, those folks are really capital allocators right now, um, not only just employees.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
And, and second one is, I mean, you need to see the state of the market. Like, a lot of these people, they are either in-- they are founders or, or first, uh, employees of, um, very well-funded, uh, companies, so they have, like, very sweet equities, uh, uh, at e-extremely high valuations, right? So if you see what they are-- w-what, what is their opportunity, uh, maybe, um, they, they have two or three hundred million on equity in some, in some company-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... you know. So, uh, if you wanna hire them, you need to put a very sweet package. So I think th- those are the two motions, like the, the capital allocation motion and, like, the state of the market and where these technical folks, that they are, like, founders of some of the most promising companies in the world.
- AGAakash Gupta
Well, I just love to see the nerds get paid like athletes. [laughs]
- ARArmand Ruiz
It is. I mean, I, I heard there are, uh, AI, um, agents, but not agents in the g- like, NBA agents, uh, for players-
- AGAakash Gupta
Yeah
- ARArmand Ruiz
... that are helping you negotiate those, uh-
- AGAakash Gupta
Oh
- ARArmand Ruiz
... those contracts and things like-- It's just, it's, it's wild. It's the, it's the nerds are, are taking over. [laughs]
- 59:32 – 1:02:36
Career Journey: Intern to VP
- AGAakash Gupta
So you have an amazing career story. You rose from intern to VP of AI. A lot of people would wanna follow your trajectory. Of course, you know, you did good work, you made good connections, but are there things about the way you work or the way you manage your career that really helped you propel so quickly?
- ARArmand Ruiz
Yeah. I think this is one of those that be careful what you dream, because it might become true. Uh, I was, I was a kid, and I was just obsessed with being here in Silicon Valley, watching all the keynotes, uh, the action happening with Apple and, or Oracle, and all these companies, and I really, really, really wanted to, to be here. So every single thing I did, either very intentionally or unintentionally, took me here.
- AGAakash Gupta
Mm.
- ARArmand Ruiz
Um, and for some things, I was, like, very, very aggressive trying to get there. I, I w- when I was in Europe, I wanted to be in a US company, to get the visas, to, to get here and get sponsored-
- AGAakash Gupta
Mm-hmm
- ARArmand Ruiz
... and, and so on. Um, so that was one factor. I think the other factor is, is I, um, there is a component of, of luck, you know. Uh, I joined IBM when IBM Watson was announced.
- AGAakash Gupta
Mm-hmm.
- ARArmand Ruiz
And then I've been very consistent on that AI path. Even on these AI winters that we had in between, I've been always working in, in AI and machine learning. And, and yeah, like, we're now in this stage where AI is the biggest thing in, in the world. [chuckles] Uh, so there is that l-l-luck factor, but I had a good intuition, right?
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
Uh, I saw the promise of the technology, and I, and I was very connected with all the developments that were happening with, uh, for example, with NVIDIA and AlexNet and the promise of all this, uh, technology, but we didn't expect this to happen so, so quick. And then, and then it's also, uh, more about managing kind of like the corporate, uh, um, ladder, right? Something I always recommend is network is very important. Um, and network and add value and just, um, be humble and, uh, problem, problem-solving. So yeah, I've, I've been always kind of, um, lucky to be in these very hot projects, and then part of my nature, I'm very impatient. I want to build things very quick, and, um, that fits the narrative in corporate America, that they want-- people want to see results quickly and, and they s- wanna see innovation. So, um, yeah, I was always with this m- mindset of building and showing, not telling, and, and, and then I've been lucky to surround myself with un- unbelievable colleagues, um, that made things, uh, happen very fast. So that's, that's been kind of, like, my story. I started-- I'm from Spain. I started working at IBM in Belgium, in France. I moved to Chicago, and then I've been here in the Bay Area for 10 years.
- AGAakash Gupta
Wow. Very intentional journey to get-
- ARArmand Ruiz
Yeah
- AGAakash Gupta
... to Silicon Valley, and then seize the opportunity, stay grinding. A lot of people, they come to the Valley, they see the gray weather, they leave after three years.
- ARArmand Ruiz
[chuckles]
- AGAakash Gupta
You stuck it out.
- ARArmand Ruiz
Yeah.
- AGAakash Gupta
And that's really an incredible story. Another incredible story
- 1:02:36 – 1:08:18
Building 200K LinkedIn Followers
- AGAakash Gupta
that you have is you-- I believe June 2023, you shared-
- ARArmand Ruiz
Yeah
- AGAakash Gupta
... was when you started your content posting journey. We're talking in August 2025, you have nearly 200,000 followers. I think it's at, like, 194,000 today, right? By the time this episode publishes, it'll be 200,000. How-- And you've written about this a little bit. You use AI in your content creation process. How can other people use AI in their content creation process to grow like you did?
- ARArmand Ruiz
Yeah. Um, first, why I, I started doing it, I, I think I always, um, wanted to be a better communicator, and I think if you wanna get good at something, you just need to flex that muscle.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
So, um, and because of the work I was doing, every time I was posting something, either at the time on blogs on Medium or s-sometimes on LinkedIn, I have posts from, like, five years ago that were getting, like, thousands of views-
- AGAakash Gupta
Mm
- ARArmand Ruiz
... which is rare. And they were, they were resonating. So then I was like, okay, I'm gonna try to put consistency and a framework and is something I should have been doing before. And, and then I, I put a system just to collect ideas and, and, and, and, and just write every single day. Uh, like I have a post every day on, on LinkedIn at, uh, 7:00 AM Pacific. No, 4:00 AM Pacific, 7:00 AM Eastern.
- AGAakash Gupta
Oh.
- ARArmand Ruiz
Um, so kind of like when, when the US is waking up and it's still working day in Europe, uh, you, you get an update from me. Um, I will say so, though, that, um, I was using AI a lot more before. I barely use it these days.
- AGAakash Gupta
Oh.
- ARArmand Ruiz
And I think that's also part of the differentiation on, on good content. So, f- um, a-and I have posts where I had, like-- Uh, like two years ago, I had agents built with, um, Baby AGI and things like that. And I think at that time, I was always kind of, like, concerned that I didn't have enough, um, enough, um, ideas. So I had agents to do, like, research, and they will just scroll YouTube, X, or, uh, different sources and give me, like, what is the content that is getting more engagement. And then my, my content was really optimized to go viral, um, which was good. That, that helped grow a lot. Um, right now, my content is more targeted, uh, to the people I want to talk to, and, um, the audience I'm going after. And, and that's helping as well with my, my, uh, connections and, and professional experience. So when I sit in front of a customer, most of-- Like, 90% of the time, they already follow me. They know my content. They have questions about it. So, um, and it-- a-and that helped also inspire, um, um, other people at IBM to help promote, uh, IBM technology. So, um, I, I, I was very heavy on AI. I use it a little bit less these days.
- AGAakash Gupta
Mm.
- ARArmand Ruiz
Just because I'm trying to spend more time thinking, and that's a way to differentiate. Otherwise, content is also getting democratized these days.
- AGAakash Gupta
Yeah. It is, right? I also had a phase, like two years ago, when I was using AI, and I felt like it was giving me an edge.
- ARArmand Ruiz
Yeah.
- AGAakash Gupta
Now I don't use it at all-
- ARArmand Ruiz
Yeah
- AGAakash Gupta
... because it ends up leading you down the path of creating content like everybody else.
- ARArmand Ruiz
Yeah.
- AGAakash Gupta
And that's the content that doesn't work. [chuckles]
- ARArmand Ruiz
Yeah. And I think where I still use it is more on the ideation side.
- AGAakash Gupta
Okay.
- ARArmand Ruiz
So many people ask me, "How do you have time to write so much?" I think if I didn't write, my thoughts wouldn't be structured and in order, and I need that for my job. I was flying back yesterday from a customer visit. It was like a 24-hour visit to a customer. On the flight back, I just had so much information that I was structuring it and writing down my thoughts, kind of old school with pen and paper in a notebook. And I will use AI to expand on those ideas and help me structure them, but then I write something myself. And I usually have my own routine every day after the kids go to sleep. Just-
- AGAakash Gupta
[chuckles]
- ARArmand Ruiz
... spend some time writing. And the more you do it, the faster you get as well.
- AGAakash Gupta
Yeah.
- ARArmand Ruiz
At first, maybe it will take you 45 minutes to write something. Now it takes me maybe five to seven minutes, you know?
- AGAakash Gupta
Oh, wow.
- ARArmand Ruiz
Because I don't have this paralysis when I have to write. I already know what I'm gonna be talking about, and more or less I've learned about formatting content and so on. So yeah, it's one of those things where you just need to do it regularly, and also track metrics if you really care about growth.
- 1:08:18 – 1:09:10
Outro
- ARArmand Ruiz
more valuable.
- AGAakash Gupta
What an ending to the podcast.
- ARArmand Ruiz
[chuckles]
- AGAakash Gupta
Armand, thank you. This was, I think, your first deep, long-form-
- ARArmand Ruiz
Yeah
- AGAakash Gupta
... podcast. Cannot wait to share this with the world. Really enjoyed it.
- ARArmand Ruiz
Thank you for inviting me, and looking forward to more. Thank you.
- AGAakash Gupta
All right. So if you wanna learn more about how to shift to this way of working, check out our full conversation on Apple or Spotify Podcasts. And if you want the actual documents that we showed, the tools and frameworks and public links, be sure to check out my newsletter post with all of the details. Finally, thank you so much for watching. It would really mean a lot if you could make sure you are subscribed on YouTube, following on Apple or Spotify Podcasts, and leave us a review on those platforms. That really helps grow the podcast and support our work so that we can do bigger and better productions. I'll see you in the next one.
Episode duration: 1:09:19