Aakash Gupta
AI Agents for PMs in 69 Minutes — Masterclass with IBM VP
CHAPTERS
Why AI agents are “the next wave of automation” beyond chatbots
Aakash and Armand frame AI agents as the next step after predictive analytics and chatbots—systems that can automate real work end-to-end. Armand shares why enterprises (and CIOs) now prioritize agents, but also why safe, secure production deployment is still the hard part.
The 4-step mental model: Think → Plan → Act → Reflect
Armand walks through his simple four-step diagram that explains what an agent does internally. The model clarifies how agents reason, decompose tasks, take actions in real systems, and improve via reflection loops over time.
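The four-step loop can be sketched in code. This is a toy illustration of the mental model only, not any specific framework; every function name and the stop criterion below are illustrative assumptions.

```python
# A minimal sketch of the Think -> Plan -> Act -> Reflect loop.
# All function bodies are toy stand-ins: a real agent would call
# an LLM to reason/plan and real tools or APIs to act.

def think(goal: str) -> str:
    """Restate the goal as a concrete objective."""
    return f"objective: {goal}"

def plan(objective: str) -> list[str]:
    """Decompose the objective into ordered steps."""
    return [f"step 1 for {objective}", f"step 2 for {objective}"]

def act(step: str) -> str:
    """Execute one step against a (toy) external system."""
    return f"result of {step}"

def reflect(results: list[str]) -> bool:
    """Judge whether the accumulated results satisfy the objective."""
    return len(results) > 0  # toy success criterion

def run_agent(goal: str, max_iterations: int = 3) -> list[str]:
    objective = think(goal)
    results: list[str] = []
    for _ in range(max_iterations):
        for step in plan(objective):
            results.append(act(step))
        if reflect(results):  # the reflection loop: stop once satisfied
            break
    return results

print(run_agent("summarize quarterly churn"))
```

The point of the sketch is the control flow: reasoning and planning happen before any action, and the reflection check is what turns a one-shot pipeline into an improving loop.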
Choosing an agent-building approach: code frameworks vs no-code builders
They categorize agent development tooling into two camps: programming frameworks that provide maximum control and low/no-code tools that speed up experimentation. The discussion highlights popular options and when to use each.
RAG demystified: adding fresh enterprise context to LLMs
Armand explains Retrieval-Augmented Generation (RAG) as the dominant method for injecting up-to-date knowledge into LLM outputs. He contrasts RAG with fine-tuning and shares why RAG became the default enterprise pattern post-ChatGPT.
RAG inside agent workflows: enterprise search becomes ‘answer + action’
They position RAG as a core component of agentic systems, especially during planning where agents fetch needed data. Examples show how RAG turns traditional enterprise search into direct, usable intelligence for decisions and downstream work.
RAG architecture building blocks (and why it’s mostly data engineering)
Armand outlines the real components behind RAG pipelines—embeddings, vector databases, filtering/ranking, and orchestration. The key message: most RAG failures and successes are driven by data engineering complexity, not just the LLM choice.
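The building blocks named above can be shown end-to-end with a dependency-free sketch: embed documents, index them, retrieve and rank by similarity, and assemble the retrieved context into a prompt. The bag-of-words "embedding" and in-memory "vector store" are toy stand-ins for a real embedding model and vector database; the example documents are invented.

```python
import math

def tokenize(text: str) -> list[str]:
    return [t.strip(".,:?!").lower() for t in text.split()]

class VectorStore:
    """Toy in-memory stand-in for a vector database."""

    def __init__(self, docs: list[str]):
        self.docs = docs
        # Build a shared vocabulary so docs and queries map to the same axes.
        self.vocab = sorted({t for d in docs for t in tokenize(d)})
        self.vectors = [self._embed(d) for d in docs]

    def _embed(self, text: str) -> list[float]:
        """Toy bag-of-words embedding over the corpus vocabulary."""
        counts = [tokenize(text).count(term) for term in self.vocab]
        norm = math.sqrt(sum(c * c for c in counts)) or 1.0
        return [c / norm for c in counts]

    def search(self, query: str, k: int = 2) -> list[str]:
        """Rank documents by cosine similarity to the query."""
        q = self._embed(query)
        scored = sorted(
            zip(self.docs, self.vectors),
            key=lambda dv: sum(a * b for a, b in zip(q, dv[1])),
            reverse=True,
        )
        return [doc for doc, _ in scored[:k]]

store = VectorStore([
    "Refund policy: refunds are issued within 30 days of purchase.",
    "Shipping: orders ship within 2 business days.",
    "Security: all customer data is encrypted at rest.",
])
context = store.search("how long do refunds take?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```

Notice that most of the code is data handling: tokenization, vocabulary construction, normalization, and ranking. Swapping the toy embedding for a real model changes one method, which is why RAG quality tends to hinge on the data-engineering layers rather than the LLM call at the end.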
Vision RAG: extracting value from charts, tables, and rich PDFs
The conversation expands RAG from text-only to multimodal information retrieval. Vision RAG enables agents to understand charts/tables and visually dense documents, unlocking industries where critical data lives in non-text formats.
Common RAG mistakes: accuracy expectations, ‘vanilla’ pipelines, and weak evals
Armand focuses on the gap between consumer tolerance for imperfect answers and enterprise requirements for accuracy and trust. Teams often deploy generic templates without rigorous evaluation, leading to frustration and unreliable systems.
Evals everywhere: how to test agent/RAG systems like real software
They argue evaluation must happen at multiple steps in an agentic workflow, not only at the final answer. Armand explains evals as a way to inject human expertise, scale SME input, and continuously improve systems in production.
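The idea of evaluating each stage, not just the final answer, can be sketched as a set of independent checks. The stages, pass criteria, and labeled example below are illustrative assumptions, not a specific eval framework.

```python
# A minimal sketch of step-level evals for an agent/RAG workflow.
# Each stage gets its own check, so a failure can be localized to
# retrieval, planning, or answer generation.

def eval_retrieval(retrieved: list[str], must_contain: str) -> bool:
    """Did retrieval surface a document with the expected evidence?"""
    return any(must_contain in doc for doc in retrieved)

def eval_plan(steps: list[str], max_steps: int = 5) -> bool:
    """Is the plan non-empty and reasonably sized?"""
    return 0 < len(steps) <= max_steps

def eval_answer(answer: str, required_phrase: str) -> bool:
    """Does the final answer contain the expected fact?"""
    return required_phrase.lower() in answer.lower()

# One labeled example, as a subject-matter expert might author it.
case = {
    "retrieved": ["Refunds are issued within 30 days of purchase."],
    "plan": ["look up refund policy", "draft answer"],
    "answer": "Refunds are issued within 30 days.",
}

report = {
    "retrieval": eval_retrieval(case["retrieved"], "30 days"),
    "plan": eval_plan(case["plan"]),
    "answer": eval_answer(case["answer"], "30 days"),
}
print(report)
```

Encoding SME expectations as small programmatic checks like these is what lets expert judgment scale: the same labeled cases can be re-run against every pipeline change and against production traffic.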
Managing 10–20 agents: orchestration as a new knowledge-worker skill
Armand describes a near-future where employees supervise fleets of specialized agents. The challenge becomes orchestration—assigning tasks, setting approvals, and judging outputs—especially in traditional companies where adoption takes longer.
How AI reshapes product management: fewer PMs, broader scope, more leverage
They explore how agents can change PM-to-engineer ratios and expand a PM’s coverage area. Armand maps agents across the PM lifecycle—from competitive research to feedback synthesis to PRD drafting and prototyping.
Prototype-first vs write-first: avoiding ‘feature factory’ while moving faster
Armand shares a career story where a prototype beat slides and PRDs in an exec meeting, illustrating why prototypes communicate better. They also address the risk of rushing into solutions without deep problem investigation and customer understanding.
Roadmap for learning and building agents: concepts → one agent → deeper tooling
Armand gives a practical learning sequence: start with fundamentals, build a single useful agent, then progress toward more advanced tools as needed. He emphasizes hands-on exploration as the only way to learn the ‘art of the possible.’
Can open source AI win? Why enterprises default to open ecosystems
Armand argues open source tends to win in enterprise contexts due to deployability, control, and ecosystem momentum. He also acknowledges the reality that closed-source labs may stay ahead temporarily, but open source catches up over time.
IBM’s AI strategy: flexibility, Granite models, scaling inference, and governance
Armand describes IBM’s positioning around deployment flexibility (any cloud/on-prem), a family of models (Granite), and enterprise-grade governance. He emphasizes that compliance and policy management must be designed in from the start.
Career + creator playbook: intern-to-VP journey and building 200k followers
Armand closes by sharing how intentional moves, consistency in AI through ‘winters,’ and customer proximity accelerated his career. He also breaks down his daily LinkedIn system, why he now uses less AI in writing, and how targeting the right audience beats chasing virality.