
No Priors Ep. 37 | With Kawal Gandhi
Sarah Guo (host), Kawal Gandhi (guest), Elad Gil (host)
Inside Google Cloud’s Generative AI Strategy: Models, Trust, and Scale
Kawal Gandhi from Google Cloud’s Office of the CTO explains how generative AI has become a core differentiator for Google Cloud, evolving from internal tooling and Workspace experiments into broadly available platform capabilities. He describes how customers progress from quick API prototypes to production systems that demand governance, reliability, and secure data handling. Gandhi highlights Vertex AI’s model garden, domain-specific models like Med-PaLM, and support for open-source models, all underpinned by Google’s infrastructure, TPUs/GPUs, and operational tooling. A major theme is building trust: starting with efficiency and productivity gains, then expanding to creative and multimodal use cases as costs fall and organizational maturity rises.
Key Takeaways
Start with simple, high-leverage efficiency use cases before ambitious projects.
Enterprises that first apply generative AI to clear efficiency wins—like customer support, internal knowledge access, or basic content generation—build internal trust and measurable KPIs before moving to more creative or mission-critical applications.
Use off-the-shelf APIs and existing models to prototype before fine-tuning.
Many teams initially rush into training or fine-tuning, then backtrack to simply experiment with high-quality APIs; proving value with existing models reduces risk and clarifies whether custom models are truly needed.
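This prototype-first approach can be sketched in a few lines. The snippet below is a minimal illustration, not any provider's real client: `call_model_api` is a hypothetical stand-in for a hosted model endpoint (swap in your provider's SDK call), and the golden-set threshold is an assumed example value. The point is the decision logic: measure an off-the-shelf model against a small labeled set before committing to fine-tuning.

```python
# Sketch: prove value with an existing hosted model before fine-tuning.
# `call_model_api` is a hypothetical stand-in for any hosted LLM endpoint;
# here it returns canned answers so the example is self-contained.

def call_model_api(prompt: str) -> str:
    """Hypothetical hosted-model call; returns the model's answer."""
    canned = {
        "Classify sentiment: 'great product'": "positive",
        "Classify sentiment: 'arrived broken'": "negative",
    }
    return canned.get(prompt, "unknown")

def should_fine_tune(golden_set, accuracy_target=0.9):
    """Run a small golden set through the off-the-shelf model.

    Returns (needs_fine_tuning, measured_accuracy): only consider
    custom training if the existing model misses the target.
    """
    hits = sum(call_model_api(prompt) == expected
               for prompt, expected in golden_set)
    accuracy = hits / len(golden_set)
    return accuracy < accuracy_target, accuracy

golden = [
    ("Classify sentiment: 'great product'", "positive"),
    ("Classify sentiment: 'arrived broken'", "negative"),
]
needs_tuning, acc = should_fine_tune(golden)
print(needs_tuning, acc)  # False 1.0 -> off-the-shelf is good enough here
```

If the measured accuracy clears the bar, the question of custom models is settled cheaply; if not, you now have a concrete gap to justify the fine-tuning investment.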
Treat data security and governance as first-class design constraints.
Google emphasizes tenant isolation, strict access controls, logging, and no cross-customer data leakage; organizations should similarly define guardrails, ownership of model weights, and compliance requirements early in any AI initiative.
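These guardrails can be made concrete with a small sketch. This is an assumed toy design, not Google's implementation: a per-tenant store that refuses cross-tenant reads and logs every access attempt, so ownership of weights and auditability are enforced in code rather than by convention. `TenantModelStore` and all names here are hypothetical.

```python
# Sketch (assumed design, not a real platform's implementation):
# enforce per-tenant isolation and log every access so cross-customer
# leakage is both prevented and detectable.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

class TenantModelStore:
    """Each tenant's data and model weights live under that tenant's key;
    cross-tenant reads raise, and every access attempt is audit-logged."""

    def __init__(self):
        self._store = {}  # tenant_id -> {artifact_name: artifact}

    def put(self, tenant_id, name, artifact):
        self._store.setdefault(tenant_id, {})[name] = artifact

    def get(self, caller_tenant, owner_tenant, name):
        audit_log.info("access attempt: caller=%s owner=%s artifact=%s",
                       caller_tenant, owner_tenant, name)
        if caller_tenant != owner_tenant:
            raise PermissionError("cross-tenant access denied")
        return self._store[owner_tenant][name]

store = TenantModelStore()
store.put("acme", "weights-v1", b"\x00\x01")
store.get("acme", "acme", "weights-v1")   # allowed, logged
try:
    store.get("globex", "acme", "weights-v1")  # denied, logged
except PermissionError as err:
    print(err)  # cross-tenant access denied
```

Defining this boundary early, as the takeaway suggests, is far cheaper than retrofitting isolation onto a system that already mixes tenants' data.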
Build a staged trust model: efficiency → productivity → creativity.
Gandhi frames adoption as a progression: use AI to streamline workflows, then boost productivity (recommendations, promotions, coding assistance), and only then rely on it for creative, higher-impact tasks as confidence and reliability grow.
Plan for inference scalability, not just model training.
The hardest problems increasingly sit in scaling inference for fast-growing applications and future multimodal workloads; teams must architect for latency, cost control, and resilience from day one, regardless of TPU vs GPU choice.
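A back-of-envelope capacity model is a useful first step in that day-one architecture work. The sketch below uses purely illustrative numbers (the token price, per-token latency, and replica count are assumptions, not real provider figures) and a deliberately crude sequential-decode latency model, but it shows how request rate, token volume, and latency targets translate into daily cost and replica count.

```python
# Sketch: back-of-envelope inference capacity planning.
# All numbers are illustrative assumptions, not real provider prices.

def inference_budget(requests_per_sec, tokens_per_request,
                     cost_per_1k_tokens_usd, latency_per_token_ms,
                     replicas):
    """Estimate daily serving cost and whether the replica pool keeps up."""
    tokens_per_day = requests_per_sec * tokens_per_request * 86_400
    daily_cost = tokens_per_day / 1_000 * cost_per_1k_tokens_usd
    # Crude model: tokens decode sequentially on one replica.
    request_latency_ms = tokens_per_request * latency_per_token_ms
    # Rough requests/sec one replica sustains at that latency.
    per_replica_rps = 1_000 / request_latency_ms
    replicas_needed = requests_per_sec / per_replica_rps
    return {"daily_cost_usd": round(daily_cost, 2),
            "request_latency_ms": request_latency_ms,
            "replicas_ok": replicas >= replicas_needed}

print(inference_budget(requests_per_sec=50, tokens_per_request=400,
                       cost_per_1k_tokens_usd=0.002,
                       latency_per_token_ms=20, replicas=500))
# {'daily_cost_usd': 3456.0, 'request_latency_ms': 8000, 'replicas_ok': True}
```

Even this crude model makes the takeaway tangible: at a steady 50 requests/sec the token bill dominates, and latency per token, not training throughput, determines how many replicas you must run.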
Align AI ambition with organizational maturity and culture.
Success depends as much on where a company sits on the spectrum from “visionary first mover” to “fast follower” as on its technology; board-level strategy, reskilling, and cultural readiness are critical to sustaining AI investments.
Leverage ecosystems: model gardens, open source, and data partners.
Rather than building everything in-house, enterprises can use platforms like Vertex AI’s model garden, open-source models (e. ...
Notable Quotes
“It's not hype. It's real excitement because engineers, we love it... you wanna show the art of the possible.”
— Kawal Gandhi
“We are on that trust cycle of like, how do we trust these models? They do what we think, they don't go off, they don't hallucinate.”
— Kawal Gandhi
“The expensive part is now becoming cheap. It's the models, the availability, the usage of the platform.”
— Kawal Gandhi
“Models in my mind are like 50–60% of the work, and how do you leverage your current investment is 30–40%.”
— Kawal Gandhi
“From the beginning, we just made sure that all of their data, all of their models, all of their weights, that's like their IP, and we wanna safeguard that.”
— Kawal Gandhi
Questions Answered in This Episode
How should an enterprise objectively decide when to stick with off-the-shelf models versus investing in its own fine-tuned or proprietary models?
What concrete metrics or frameworks can organizations use to measure and increase ‘trust’ in generative AI systems over time?
How will multimodal AI (text, image, audio, video) practically change customer-facing experiences in the next 2–3 years, and what should teams build first?
In highly regulated industries, where is the line between safe synthetic data usage and overfitting to unrealistic simulations?
Given falling infrastructure costs, what new AI-powered products or workflows become viable now that were simply uneconomical two or three years ago?
Transcript Preview
(instrumental music) This week, Elad and I are joined by Kawal Gandhi. He works in the office of the CTO of Google Cloud, where he's the lead for generative AI. Gandhi comes from a long history of working on search and ads at Google before Cloud. Welcome, Gandhi.
Thank you.
How did you end up working on Cloud and then AI in particular from, um, from other projects at Google?
Sure. Um, I really worked deeply with a lot of our advertisers around search and ads for shopping and travel, uh, especially c- commercial-based queries. And as working with them, they required a lot of storage, compute, infrastructure constantly around making their ads perform better, which led us to Cloud and then Cloud solutions around using some of that to create smart analytics, machine learning pipelines, more around documentation AI, conversational AI, and here we are. Uh, we've been doing it for a while, but now it's generative AI, uh, where, how can you make that customer experience, uh, much better with the information that they have?
And, and just in terms of beginning to broadly, uh, Google incorporate AI into GCP, like, what was the origin story of that? Is it TPUs, APIs, some customer need that you specifically saw?
As we were, uh, getting into the Google Cloud, this goes back into, uh, how can we provide them with, uh, latency, high response, uh, better experience with data is how our customers started kind of leaning towards Google, in my mind. From the beginning, it was more around machine learning and AI as a differentiator to work with Google, and how could they use that data better in, on our platform was a constant ask. So as we were kind of on the journey of Google Cloud, it was all about data, AI storage, privacy, security, and kind of having that same deep technology that we used inside Google. How could we leverage that and offer that in market? Uh, so lots of learnings, because what we built internally, we had tools, frameworks, et cetera, that took us time to kind of make our platform rich for our customers, from regulated to non-regulated environment, and they... and how to leverage some of their current investments on our platform.
What are some of the internal use cases that have really driven that behavior in terms of the stuff that you ended up building for your customers? I know that, you know, a lot of what Google does is sort of dogfood its own APIs or products, and then it starts launching them externally as sort of a service that other people can use. What, what were some of those first applications of generative AI that occurred internally that then caused you to decide to do these things externally?
Yeah, the early ones, I think it's- it's all public now. Uh, was, was around Workspace. Just using our documentation, our email, you all have (laughs) , I'm sure, use it. So it was like, can you summarize this better? Can you personalize this better? Can you offer me a suggestion? And all this kind of gradually as we allow things, as we, you know, tested it internally and dogfood, we gradually launch it externally because we see a lot of, uh, progress that we make internally in terms of efficiency, productivity gains, folks can use it in their, you know, spreadsheet creation, et cetera, uh, was gradually launched, and now it's Duet AI, uh, part of Workspace. So, uh, these are constantly being dogfooded and tested. We call them experiments, and as the research team leans in and looks at some of these, we add it to the platform and bring it forward in our products.