- 0:00 – 9:05
Generative AI in Google Cloud
- SGSarah Guo
(instrumental music) This week, Elad and I are joined by Kawal Gandhi. He works in the office of the CTO of Google Cloud, where he's the lead for generative AI. Gandhi comes from a long history of working on search and ads at Google before Cloud. Welcome, Gandhi.
- KGKawal Gandhi
Thank you.
- SGSarah Guo
How did you end up working on Cloud and then AI in particular from, um, from other projects at Google?
- KGKawal Gandhi
Sure. I worked really deeply with a lot of our advertisers around search and ads for shopping and travel, especially commercial queries. And while working with them, they constantly required a lot of storage, compute, and infrastructure to make their ads perform better, which led us to Cloud, and then to Cloud solutions using some of that to create smart analytics and machine learning pipelines, more around Document AI and conversational AI. And here we are. We've been doing it for a while, but now it's generative AI: how can you make that customer experience much better with the information that they have?
- SGSarah Guo
And just in terms of how Google began to broadly incorporate AI into GCP — what was the origin story of that? Was it TPUs, APIs, some customer need that you specifically saw?
- KGKawal Gandhi
As we were getting into Google Cloud — and this goes back to how we could provide customers with low latency, fast responses, and a better experience with their data — that's how our customers started leaning towards Google, in my mind. From the beginning, it was machine learning and AI as a differentiator to work with Google, and how they could use their data better on our platform was a constant ask. So as we were on the journey of Google Cloud, it was all about data, AI, storage, privacy, security, and having that same deep technology that we used inside Google. How could we leverage that and offer it in market? So, lots of learnings, because what we built internally — we had tools, frameworks, et cetera — took us time to make our platform rich for our customers, from regulated to non-regulated environments, and to help them leverage some of their current investments on our platform.
- EGElad Gil
What are some of the internal use cases that have really driven that behavior in terms of the stuff that you ended up building for your customers? I know that, you know, a lot of what Google does is sort of dogfood its own APIs or products, and then it starts launching them externally as sort of a service that other people can use. What, what were some of those first applications of generative AI that occurred internally that then caused you to decide to do these things externally?
- KGKawal Gandhi
Yeah, the early ones — I think it's all public now — were around Workspace. Just using our documentation, our email; you all have (laughs), I'm sure, used it. So it was: can you summarize this better? Can you personalize this better? Can you offer me a suggestion? As we tested things internally and dogfooded them, we gradually launched them externally, because we see a lot of progress internally in terms of efficiency and productivity gains — folks can use it in, you know, spreadsheet creation, et cetera — and now it's Duet AI, part of Workspace. So these are constantly being dogfooded and tested. We call them experiments, and as the research team leans in and looks at some of these, we add them to the platform and bring them forward in our products.
- SGSarah Guo
Is there a single feature or product you launched within the internal Google version of Duet, the Workspace AI products, that has gotten the most uptake or that you're most proud of?
- KGKawal Gandhi
Yeah, I think we're seeing aha moments across the board. As we launch, we're seeing it in documents, in terms of generation and summarization. We're seeing it now in Slides, with suggestions on images and new image creation, which used to take time — someone had to go ask a studio or an agency, and now you have a prompt you can give and say, "Here's something I'm thinking about." So, Sarah, we've seen uptake on all those features. Also, email generation has been super helpful from a productivity perspective, not only for consumers but for enterprises as well. And security — we don't talk about it a lot, but it's super secure: how it's sent, how the links are used. Those are things we really take seriously.
- SGSarah Guo
How far, uh, do you think we can take it with email generation? 'Cause I maybe spend four hours of my workday just trying to keep up with my inbox, so this is of great personal importance to me.
- KGKawal Gandhi
I'm not gonna predict, because you predict something and something always comes back and surprises you, from a technology perspective, from a model perspective. So I'm looking forward to seeing what surprises us, and I'm sure we'll speak again in six months and you'll be like, "Gandhi, I saw this feature inside Gmail. It's really helpful." I like the translation feature for my mom, because she speaks Hindi — she's really fluent in it — and when I write my (laughs) emails now, I can just say, translate this when she reads it. Now it does it automatically because she's using that app, which is, like, fantastic. So I think it just bridges the gaps for a lot of our users, internally and externally.
- EGElad Gil
I know that there's a lot of different services that Google Cloud provides, particularly around generative AI. There's a series of models, and there's lots of domain-specific models that Google's really been forward-thinking on, things like Med-PaLM 2 or Sec-PaLM or other things like that. Which of those are currently available, and how do you think about that roadmap of things to expose over time?
- KGKawal Gandhi
Sure, and I think it helps a lot to know that we have invested in the AI infrastructure. So you've seen people training not only using our first-party models, but also training their own models and bringing them onto our platform. The next level up is Vertex, what we call the capabilities of models that you can use from our Model Garden. That's where we have the domain-specific models. So it's Med-PaLM, Sec-PaLM — these are early-days, domain-specific models. You can chat with them, you can get validation of things — the notes that are written out from a nurse to a patient, healthcare notes from a doctor — and it puts them in the right format that can be, you know, saved. So it's early days. We are seeing it go from foundational model capabilities, to models that are built by our users and deployed on our platform, to open-source models too. I'm really excited to see LLaMA, Stable Diffusion, and other variations coming onto the platform. But what's key, Elad, in all of this is that your data is secure, you wanna be fine-tuning it on the platform, and then all the model operations should be easy, because we've spent a lot of time really specializing around model drift, operations, tooling, and safety. Super early, but I think those are the elements that will differentiate the platform over the next few quarters.
- EGElad Gil
That's super interesting. Yeah. There's this guy Ankur Goyal, who runs a company called Braintrust, focused on evals and a couple of other things. And one of the points he makes, which I think is pretty interesting, is that there tends to be a very sequential adoption of LLMs by larger enterprises or people training models for the first time. So often they jump straight into fine-tuning, and then they suddenly realize, "Wait, wait, wait, let me just prove it out in GPT-4 or Bard or some other API, and then let me iterate towards something that actually works, and then maybe I go and train my own model." Do you see something similar in terms of the pattern of behavior that a lot of your partners adopt? Or do they just jump to your APIs? What's the most common sequencing of people really adopting this technology?
- KGKawal Gandhi
Yeah, I think it's always the case if you look at history: it's like, "Hey, I got an API, I can build a prototype," and it creates a lot of excitement. It's when you really start thinking about deploying it, managing it, monitoring it, using different models and chaining them, having responses that really deliver a capability, that you start thinking deeply about it inside an organization. So I see this as early excitement. It's not hype — so I wanna (laughs) decouple that. It's real excitement, because engineers, we love it. I've been one. You wanna grab something, you don't wanna have restrictions around it, and you wanna show the art of the possible. And I think from those experiments, we are gonna see, in the next year or so, capabilities available to users inside the enterprise where they will see differentiated output. That's how I see it. So I'm excited in all those phases, by the way, because it's like the best creative time for engineers to solve problems that they've been thinking about solving for a while. And now they have the tools that they can download — whether they grab an open-source model or a closed one — and we can then see how they're using it and evolve the platform along with it.
- SGSarah Guo
Do you advise, uh, your, your customers and, you know, major Google Cloud customers
- 9:05 – 13:31
AI Adoption for Enterprise
how to think about when they need to train their own models or when they should be using domain-specific models? Like what advice would you have for them?
- KGKawal Gandhi
Yeah, sure, Sarah. So we think about it deeply, because you have to be responsible once you're building and fine-tuning a model. You know, what are the guardrails? What are the use cases? What is the cost you wanna put behind that? Because it's a continuous learning process, and it also depends on the maturity of the organization. Do they have a team that understands how models are built, tuned, and then brought forward? It doesn't mean that you cannot invest in it; it's just that where you are in the cycle is very important. So we lean in and talk about that from a board-level perspective, from a strategy perspective, and then think through cultural transformations as well. So it's not limited to just one dimension, in my mind. And it really helps them think through how they see this transformation also happening inside their organization.
- EGElad Gil
Do you track the common use cases that your customers end up focusing on? Because it seems like there are almost three different things that people tend to do. There's, you know, let's go experiment and just see if this is interesting. There's, I'm gonna use it for specific internal tools — I wanna make customer success better, or I wanna make some ops workflow better. And then there are the people who are actually doing it for external products: I'm actually gonna launch a feature that includes generative AI. Do you have a sense of how that breaks out across your customer base, and the proportion that's doing each at this point in this early cycle?
- KGKawal Gandhi
Yeah, for me, it falls into efficiency gains, productivity gains, and also creativity. And I did it (laughs) in that order, because inside the enterprise, folks wanna be creative, but they always think about efficiency gains inside their workflows first. How can I make my workflow better so I can invest back into my group? Then, how can I make productivity better? Then I can make them creative. And I think we are seeing that flow inside organizations. So what you touched on, right? How can I make my customers successful? Let's start with that use case. Efficiency: how can I make my support better — and not drop my CSAT? That's the KPI I'm gonna look at. I see if that matches, then I go into productivity gains — offering promotions, next recommendations — and then the trust increases. So let's put another dimension there: trust. Now you have your efficiency, you're going into productivity, and you trust the system more and more. And the world we look at is: how are these things gonna work in a system of intelligence, or agents, in the future, so that trust goes up and you really free up your human capital to be creative, right? So I think we are on that trust cycle: how do we trust these models? They do what we think, they don't go off, they don't hallucinate. All those things are important. So that's how we think about it, and then we gradually make progress towards it, rather than getting super excited and then finding that the KPIs are not working, for example.
- SGSarah Guo
Are there particular verticals outside of technology you see the strongest adoption in so far or the strongest interest?
- KGKawal Gandhi
Yeah, I see a lot of interest. It started from sales and marketing — as soon as this came out, from a horizontal perspective. I think it's an intersection between horizontal and vertical for every department, whether regulated or unregulated. There are bottlenecks in content creation and distribution, and you could see the efficiency gains immediately, and the creative gains, and then that process and workflow getting shorter. So that was one. Customer care, which you all touched on, is another one that goes vertical and across horizontal. Even if you are a B2B, you're touching a supplier, and a supplier wants a good experience with their current, you know, provider. So how can that experience get better? And now we are seeing them verticalize all the internal experiences: how do I make my employees' experience better? So you're seeing that same technology applied internally. It's like, "Oh my God, I'm looking for my HR benefits and have to do this XYZ. Why can't we make that easy?" Right? So, Sarah, we are seeing the internal processes now, whether regulated or unregulated, and I think there are these huge areas where we can, you know, provide opportunity with AI. That's my belief.
- SGSarah Guo
I'm assuming that the vast majority of this starts with text, right? Text analysis, generation, et cetera. Where are you seeing, if at all, multimodality,
- 13:31 – 16:19
Multi-Modal AI Models
voice input, text-to-speech output, any sort of other... even within sales and marketing, perhaps, other modalities in people's use?
- KGKawal Gandhi
Yes. So I think the core right now is language and text, and conversation fits really well into it. Like, you know, what we're doing — can you convert it into a script? Then you do translations around it and you have a distribution mechanism. Sounds easy now, but it was really hard three, four years ago. Multimodal, I think we're in the early stages now, so you're gonna see the early models, which are audio-based. So I think about it like text, then images, then images and text with audio, and then you're combining these media together. As I said before, as the trust in these increases, you're going to see multimodal coming forward. There are things to think about, like identification of creation — deepfakes that get created, voices that are used out of band. How can we provide that layer of safety around it, especially when it's used internally and the data and the creation become an element of cost? How do we store that? How do we retrieve that? How do we scale that? If I give you an example, I think gaming is a really good industry that is scaling out multimodality, and we have a lot of customers who are using that. Now, think of that when you're going shopping, and then the latency and the progress bar — like, "Why can't I see the video coming?" Right? So that's why you see it centralized with some organizations who are experimenting, but we've got to scale it out, which will be fantastic to see — not only across the web, but internally as well.
- SGSarah Guo
Are there any particular patterns that you see, either organizationally or the types of projects that people start with, that make your customers more and less successful with their AI efforts?
- KGKawal Gandhi
Yeah, I think it goes back to how much investment, how much belief, how much, you know, they wanna be in the vision quadrant — first of a kind versus fast follower — all of that plays into it. It's less about the technology; it's more about where they see themselves in the industry, and whether they're saying, "Hey, we wanna really build something, then scale it out, then keep investing in it." I've spoken with a lot of customers, even governments, and the interest in AI is so high — I have not seen anything like this. We have just scratched the surface right now, so we can see a lot of these gains, which then can get invested back into the business. So I see a lot of positive signs around that.
- SGSarah Guo
What's the most expensive part of the investment cycle, do you think? I'm sure it varies dramatically from customer to customer, case to case, but when you say, you know, it depends on commitment level and investment — what's expensive?
- 16:19 – 24:43
AI Adoption, Investment Cost, Anti-patterns
- KGKawal Gandhi
The expensive part is now becoming cheap. It's (laughs) the models, the availability, the usage of the platform. Those were the things that were really expensive. If we draw a cost curve right now, with the investments you all are making into the ecosystem, I think we are seeing that come down phenomenally, which will allow people to adopt more. So I think, Sarah, we are really entering a growth phase of people just adopting this more and more. We are seeing signs of just, like, how fast can you go? How fast can we learn? We've had training classes and we've had people certified — I've never seen this. Within a year, we have more than 10,000 people certified, just on generative AI on GCP, globally, right? Ready to, you know, write code and take advantage of code capabilities — engineers' productivity going up, which used to take time. Migration of systems — can we make that easier? There's a ton of use cases that are helped by just bringing this forward on the platform.
- SGSarah Guo
Are there any anti-patterns or big mistakes you guys have made internally or that you see customers make when they're trying to get these, um, efforts into production or even choosing use cases?
- KGKawal Gandhi
Not mistakes, but we think deeply about the data that customers bring to our platform. They're not mistakes, but we worry about any of this, you know, being used in the wrong way, or something happening, so we take that very seriously. We run a lot of drills. We make sure that data is kept secure, and that the models that are trained, even if it's early days, have no leaks. The adapter model, the certification — everything stays in that tenant. So from the beginning, we just made sure that all of their data, all of their models, all of their weights — that's their IP, and we wanna safeguard it. So no mistakes as such; we just have to think deeply about how fast we move, and what checkpoints we need in between to make sure we don't move so fast that we get into mistakes. Those rollbacks are expensive, by the way. Then you have to re-educate the industry, make sure they understand. I'd rather not commit those — that's where I'm going.
- EGElad Gil
Yeah, Google's always been very good at data security and ensuring that, you know, customer data is well secured. I guess related to data, there are a number of verticals that are very data-intensive, or alternatively where the data can be quite sensitive. You know, healthcare is the most canonical example of that, where bespoke data can really help build a more interesting AI service — sort of like what you've done with Med-PaLM — but at the same time, you wanna secure it properly. Are there specific verticals or use cases that you see adopting AI soonest? For example, do you see healthcare moving ahead of education, or do you see fintech or financial services as sort of the early-adopter wave?
- KGKawal Gandhi
I think they're all kind of early in adopting, and all those areas I was mentioning are horizontal, like sales and marketing, creative approaches. I'm seeing engineering adopt it much faster internally now, with the coding models, with the open source. It's a surprise, by the way, if I had to share my personal take — I thought engineering was already so productive. Right? Just the design elements of UX and the discussion you have around that — I've been there (laughs), it takes a long time. But then having a bot in the room saying, "Here's how you can approach it," or, "Here's how the code could be written" — it's fantastic to see that. So a lack of resources was causing engineering projects to be bottlenecked; now the transformation is gonna be much faster. So bringing that across verticals is, like, fantastic. That's the IP that's getting created now inside those organizations.
- EGElad Gil
Yeah, I'm seeing something very similar in the startup world, where a lot of the early startups are basically technologists building stuff for themselves, or mid-market tech companies as sort of the earliest adopters outside of a Google or a Microsoft or the really cutting-edge large tech companies. And so it seems like, at least in the startup scene, there's a very similar pattern being mirrored relative to what's happening on your cloud: developers are building for themselves first, mid-market companies are kind of next, simply because they're often driven by a developer CEO, and then it's starting to creep into the other parts of the world in terms of adoption patterns. So it's interesting to see that parallel.
- KGKawal Gandhi
It's very interesting. And think about the investments you all make. Now I have a platform in GCP which gives me models. I have a coding model that helps me code, and now I just need capital allocation to go experiment. I think that's great. And then when I need to make it secure and scale it up, that's where we come in with our infra, and then make it cost-effective. I think we're in the early stage of a new era of even startups and big companies coming out of this, and different solutions getting built out.
- SGSarah Guo
What you said earlier was really interesting about having a bot in the room, so to speak, helping us with UX discussion or specing a product. Where does that live for a Google product team? Does it live in a chat interface? Does it live in documents? Does it live in, um, the IDE or, uh, in version control? I think a lot of people are trying to figure out how this should integrate into existing developer workflows.
- KGKawal Gandhi
Yeah, I think today, in my mind, it lives in docs. So you get a lot of benefit in the documentation, and that's shared across teams. I think you've seen it: any project is like, "Let's start a doc." Any hypothesis or experiment: "Let's start a doc." I really think there will be an evolution as the platforms mature, where the docs can make suggestions, and those suggestions could be, "Based on what you've specced out, here is output." Now, you can choose to ignore it. It could be a notebook as well. You said IDEs and UX — it could be just code written with, "Put the API key here." It's great to do that. But then when you're scaling up, that's where I get scared. How does it scale up? You don't want these things to become something that doesn't have guardrails around them. Who is entitled inside the organization to do this? So we have to make sure logging is on — simple things like logging that we might not think about, but it has to record who accessed it, when they accessed it, et cetera. But I think once we get over that, if you're writing a monitoring tool and it knows the schema, you don't need to go back and have an engineer work on it for, like, four weeks, right? Why can't it be generated out of, "I just need a monitoring tool for model drift"? So those benefits are gonna just accelerate adoption, in my mind.
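To make the "monitoring tool for model drift" idea concrete, here is a minimal sketch of one common drift check, the population stability index (PSI), comparing a training-time feature distribution against live serving traffic. This is an illustrative example, not a Google Cloud API; the `psi` helper, bin count, and alert thresholds are all hypothetical choices.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time)
    feature distribution and a live (serving-time) one.
    Hypothetical helper, not a Google Cloud API."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def hist(values):
        # Bucket each value, clamping the top edge into the last bin.
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        n = len(values)
        # Small floor so empty bins don't blow up the log term below.
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 drifting, > 0.25 retrain.
baseline = [0.1 * i for i in range(100)]        # training-time distribution
live = [0.1 * i + 3.0 for i in range(100)]      # shifted serving traffic
print(psi(baseline, baseline) < 0.1)   # identical data: no drift -> True
print(psi(baseline, live) > 0.25)      # shifted data: alert -> True
```

A real pipeline would run a check like this per feature on a schedule and trigger an alert or retraining job when the index crosses the threshold — exactly the kind of boilerplate Gandhi suggests could be generated from a one-line request.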
- EGElad Gil
One of the things that Google really pioneered on top of all the cloud services and APIs and everything else is actually the silicon layer. I think it was seven, eight years ago when TPUs were first really rolled forth. As something internal, uh, to Google, they said, "Let's invent silicon that's really good at dealing with AI and ML systems." And so that was my understanding of the origin of the TPU, and it obviously was, um, dramatically more performant than GPU for a long time. Could you talk a little bit about the TPU versus GPU trade-offs and how Google Cloud approaches that?
- KGKawal Gandhi
Yeah, the way we think about it is: can the platform be capable enough, and have features, as I call them, that take the infra piece of a project away from the developer or engineer or team? In the early days, you know, we used to write Windows apps — I've written Java applications — and it's always about abstracting the infrastructure away so you can manage it, scale it, and then roll it out faster. So I think we'll see that gradually, not just in training, but also once a model is trained and deployed. Is it abstracted from that layer in a way that gives you the fungibility, the adoption, and the scale-out that helps you do that? That'll speed up, in my mind. And then, to your point, innovation at that layer is also going to continue.
- EGElad Gil
Are there specific trade-offs you see in terms of when you should use TPUs versus GPUs, or do you think they're reasonably fungible?
- KGKawal Gandhi
They're reasonably fungible, in my mind; it's just more a question of how many folks know how to use them directly, which is, you know, top of mind for us, because GPUs have long been externally available and a lot of people are trained up on them. As more people get trained up,
- 24:43 – 31:00
Google's TPU and NVIDIA GPU shortage
we're seeing cost and benefits across the board, right? So having optionality is what we wanna kind of bring forward constantly.
- EGElad Gil
The people that I know who know how to use TPU very well are very comfortable in that environment, and to your point, that's often people who are at Google and therefore they're trained to know how to actually use that underlying silicon versus the GPU in terms of optimizations or, you know, little tricks that make things more efficient.
- KGKawal Gandhi
How can we make that more prevalent and available is something we should invest in for the future, and we're gonna see that. More than that, I — or the team as such — am thinking about inference, because as soon as you train, you get the application ready, and then the scale is where — you know what Character.AI is really thinking about: scaling it out. We have other examples, like Quora's Poe, thinking about scaling out the application. So now you're gonna see the shift. We have TPU v5e, et cetera. We are investing heavily — and, you know, others are too — in how you scale, and then keep the cost curve going in the other direction. That's where we're gonna put our effort.
- EGElad Gil
Yeah, and I guess since Google trains its own models, um, it may have an advantage in terms of thinking through how that scalability could work for others-
- KGKawal Gandhi
That's right.
- EGElad Gil
... and that may be an interesting differentiator in terms of that, um, that understanding.
- KGKawal Gandhi
Yeah, I think inference will be something we really need. You can see the numbers, right? An application launches and it goes from, like, five, 10, 15, to 100, right? And you're like, "How do I scale this further out?" And Sarah touched on it — what happens when we get into the multimodal world, right? If I'm an educator thinking about an education site, I don't have that kind of luxury. So that's really something we think about. We didn't talk about that vertical at all — education — and people wanna be learning, deploying, et cetera. How do we help those organizations come on board and take advantage of this as well?
- SGSarah Guo
Are you seeing the current NVIDIA GPU shortage change your customers' perspectives in training or inference processor choices today? Or how does that change your conversations at all?
- KGKawal Gandhi
It definitely gets mentioned in the conversations. It's always a discussion that comes up, but as you look at the applications and the usage and what they're really trying to solve, you then get to the next step. I think the conversation just shifts from capabilities and platform to, as we were discussing here: how do you keep the data secure? How do you make sure the right teams have access to it? How do you make sure the data is regionalized, for example? And those are not easy questions for some of the other providers to answer. And then it comes to: if models are gonna be available, do we really see that shortage playing out, right? We are seeing the concentration on the training side, but then the world focuses on a different problem, which is: let's focus on how we deploy it inside your organization and make it successful. That's where I think the narrative changes, and then it doesn't even come back to talking about the chip shortage and availability, et cetera. But having said that, we are absolutely focused on making sure that the platform is available for customers, and we see the demand coming towards us as well.
- SGSarah Guo
Yeah. I, I mean, there's different ways to look at the demand, uh, for training. So, you know, one point of view in the world is that most of the demand will end up quite concentrated in people who provide model services, right? Versus at least quantity of compute used in fine-tuning or training outside of a few large labs. Like how do you think about this and how should customers think about Google's model quality versus others?
- KGKawal Gandhi
Yeah, absolutely. So we think about it by going back to the principle that the customer should have availability and optionality. If there's a model that's coming out and it's been trained up, can we make it available, like LLaMA inside the Model Garden? Then they can use it inside their application, not worry about the platform capabilities at all, and it fits into and leverages their current investment. We think about that a lot. So, starting from the customer first and then making the technology available, rather than thinking, here's another set of models that you need to go take and build the scaffolding around. So absolutely — as these come out, Sarah, and, you know, new ones are launched with different capabilities, different ways of training and approaches, different optimizations — if we see our customers in a vertical leaning towards one, we wanna make sure it's available on our platform. And the models, in my mind, are like 50, 60% of the work; how you leverage your current investment is the other 30, 40% of the work that goes in from the groups, from our customers as well. And then the upkeep, maintenance, and all the operational elements are also important.
- SGSarah Guo
One or two last questions for you. Um, what is the, the thing you're working on right now within cloud AI you're most excited about that we should be looking forward to or, or more customers should know about?
- KGKawal Gandhi
I'm working across the board with multiple customers on making these user experiences next-generation and multimodal, and how we should think about that — and what the real first use cases are that we should deeply work on and partner together with them on. So that keeps me really excited: how can we bring these to our platform and make it capable? Second is just the sheer amount of data some of our customers want for their model training to be available on our platform. That's the next thing to think about: are there data sets that we can offer? And we're also looking at synthetic data. Can we use some of the synthetic stuff in regulated industries and recreate those simulation modes that can give them good insight into data that's missing?
- SGSarah Guo
Does Google Cloud have a data marketplace, a labeling offering, labeling
- 31:00 – 32:33
Data Marketplace and Model Training
tools, anything like that today?
- KGKawal Gandhi
So we have the partner ecosystem. Through that, we provide a lot of partners — exactly like a marketplace. And if customers wanna take advantage of some of those providers, they can absolutely do that. They're integrated into our platform, so you can say, "I want this service for RLHF," and, you know, reach out to them and then use it for your model training, et cetera. The most important thing, as I said, is to make sure that they're working in conjunction, and that whatever they are getting from the customer is secure, so it's not something used with another customer. So we make sure those pipelines and entitlements stay in the project.
- SGSarah Guo
Gandhi, is there anything that you wanted to cover today that we didn't? It's been a great conversation. I feel like we covered a lot of ground.
- KGKawal Gandhi
Yeah, thank you. I think we covered all of it. I would love to come back in six months and take a look back and see, you know, how we've moved forward.
- SGSarah Guo
I know making predictions in AI is very hard, but I heard from you that my, um, my Gmail suggestions are gonna get a lot better quickly, so I'll, I'll take that.
- KGKawal Gandhi
(laughs) Yes. Absolutely. Hold us to that. Yeah.
- SGSarah Guo
Yeah. (laughs) Great. Thanks so much for the time today. Really appreciate the conversation.
- KGKawal Gandhi
Thank you.
- SGSarah Guo
Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
Episode duration: 32:33
Transcript of episode QXxlu65fRGo