The Gist
- Kurian’s cloud comeback. Google Cloud CEO Thomas Kurian credits AI, open models, and enterprise partnerships for Google’s 30% growth surge.
- Agentic future in focus. Kurian details how Google is building agent-based systems that can reason, automate and collaborate across enterprise tasks.
- Enterprise before hype. Kurian insists that Google’s AI progress isn’t hype — it’s driving real change in sectors like retail, manufacturing, and healthcare.
In a revealing interview timed with Google Cloud Next, Google Cloud CEO Thomas Kurian reflects on how Google Cloud is gaining ground on AWS and Microsoft. He shares how Google’s AI infrastructure, open-source philosophy, and partnerships with enterprises like Wendy’s and Mayo Clinic are helping shift the cloud conversation. Kurian also weighs in on the model wars, agentic systems, and why Google Cloud’s sales team finally found its groove.
For years, Google’s cloud services offering ran a distant third behind Microsoft and Amazon. Then came generative AI.
While Google Cloud CEO Thomas Kurian doesn’t give the recent AI revolution all the credit for his division turning in 30% quarterly revenue growth and closing the gap, it’s certainly an important factor. And in an interview on Big Technology Podcast on Wednesday — as Google hosts its annual Google Cloud Next event in Las Vegas — Kurian made the case for Google’s offering.
Google, Kurian said, has DeepMind in house, still offers more than 200 other models — including DeepSeek, Llama and Mistral — and is embracing open source.
“We base it on what customers want,” he told me. “We track what’s on the leaderboards, what’s getting developer adoption, and put them in the platform.”
In a candid conversation touching on Google’s competition, how companies are putting generative AI into practice, and, yes, how tariffs might impact his business, Kurian shared how the current moment is shaking up the competitive balance in cloud, and what might come next.
You can read the Q&A below, edited for length and clarity, and listen to the full episode on Apple Podcasts, Spotify, or your podcast app of choice.
Editor’s note: Kurian sees the future of the call center as a blend of intelligent automation and human support — powered by agentic systems that can reason, take action and interact across platforms. He tells that story in this Q&A through Verizon. In his vision, AI doesn’t just transcribe or route calls — it becomes an active participant, helping customers troubleshoot, initiating workflows and even coordinating with other agents to complete tasks. These systems are designed to reduce friction, improve response times and empower both customers and service reps with real-time, context-aware insights. (More on these trends later).
Where AI Is Driving Real CX Results
Alex Kantrowitz: Google Cloud Platform has been surging, with growth rates of 30% per quarter recently. Is AI responsible for that?
Thomas Kurian: AI has definitely driven adoption of different parts of our platform. When people come in for AI, some of them say: I really want to do super-scaled training or inference of my own model. There’s a whole range of people doing that, all the way from foundation model companies, whether that’s Anthropic or Midjourney or others.
Also, traditional companies like Ford, for example, wanted to use our chips and our system called TPU, Tensor Processing Unit, to model airflow and wind tunnel simulation using computers rather than physical wind tunnels.
So one set comes and says: I’ll use your AI infrastructure. A second set comes in and says: I want to use your AI models, and that could be somebody building an advertising campaign using our image processing model, somebody wanting to write code using Gemini, somebody wanting to build an application using Gemini, or one of our newer models like Veo, which is our video processing model. In that case, they come in and use the platform.
The third is people coming in and saying, I want to use a packaged agent that you have. For example, we offer something for customer service. We offer something for food ordering. We offer something to help you in your vehicle. We offer our services for cybersecurity. Depending on which customers are coming in, they come in at different layers of our stack.
What Sets Google Apart From Microsoft and Amazon
You’re the CEO of Google Cloud Platform. So when it comes to Google Cloud Platform’s broad ability to compete, how important is AI across everything? Yes, of course, it varies for individual use cases, but broadly.
It’s going to be important going forward. We’ve been very measured in how we brought our AI message to the market to avoid people feeling like we’re overhyping things. We’ve always said: We’re going to build the best technology in the market. Right now, we’re super proud. We have over two million developers building every day, every morning, and every night using our AI platform.
You can see the strength of our models. Gemini 2.5 Pro is the world’s leading model. Gemini Flash has the best price performance. Imagen and Veo are considered state of the art for media processing.
I’m not a marketer, so I will tell you it’s an important factor. It will be an increasingly important factor, and our strength in it helps bring other products along with it.
Related Article: Inside the Crisis at Google
Google’s Model Menu: Why Openness Wins
You talked about a lot of models coming out of DeepMind. Here’s what Amazon Web Services might say about that: Google has its own models and it wants you to use them. Amazon, however, will let you pick whichever model you want, from Anthropic on down. What would you say to that?
We offer 200 models on our platform. In fact, we look every quarter at what’s driving popularity in the developer community and we offer them. We offer a variety of third-party models and partners, not just Anthropic but also AI21 Labs, the Allen Institute and others. We offer all the popular open-source models: Llama, Mistral, DeepSeek, a variety of them. We base it on what customers want. We track what’s on the leaderboards, what’s getting developer adoption and put them in the platform. People have been super pleased that we have an open platform. We always feel companies want to choose the best model for their needs. And there’s a range of them. We’re offering a platform. You can choose the model you want.
The only models we don’t offer today are OpenAI’s, and that’s not because we don’t want to offer their model. It’s because…
Would you welcome them on the platform?
Of course we would.
Any talks about that?
I don’t want to tell you that we won’t do it. We have always said we’re open to doing it. It’s their decision.
OpenAI on Google Cloud? Kurian Weighs In
Okay, but your competitors like Amazon might say that even though Google can offer everything, it might still push you to use DeepMind models. What do you think about that?
Our field is not compensated any differently. Our partner ecosystem is able to use all the models in the platform. Most importantly, we have very large Anthropic customers running on GCP. If you don’t have your own model — or you have a model of your own but it’s terrible — naturally you’re going to say something.
Are you saying that Amazon’s model is terrible?
No.
Is Microsoft’s OpenAI Bet Overhyped?
Okay. Why don’t we move to Microsoft then? Microsoft might tell you that they have this partnership with OpenAI, which is going to build the best-in-breed technology. What do you think about that?
They’ve done a good job, no question. OpenAI has done a good job. How much of the credit goes to Microsoft, outside of providing them a bunch of GPUs, time will tell.
The DeepMind Advantage
There is a pretty interesting difference between Google and Microsoft, and that is that Google does have DeepMind in-house. What does DeepMind being in-house provide you?
We work extraordinarily closely with DeepMind CEO Demis Hassabis and his team. When I say extraordinarily closely, our people sit in the same buildings. My team builds the infrastructure on which the models train and inference. We get models from Demis and his team every day. In fact, we’re staging models out to the developer ecosystem within a matter of a few hours after they’re finally built.
Then we take feedback from users and move it upstream into pre-training to optimize the models. One benefit we have at Google is all our services, whether that’s search or us or YouTube, we are inferencing off the same stack and same model series. The model learns very quickly from all that reinforcement learning feedback and gets better and better. There’s a lot of close collaboration.
Many times when we enter a new domain, for example: we built a solution for cyber intelligence using Gemini. There are a lot of threats happening in the world. You want to collect all that threat feed. We do that using a team we have called Mandiant and also from other intelligence signals we’re getting on what threats are emerging. You then want to compare it to your environment to see if you’re at risk. Most importantly, you want to compare it to what parts of your configuration somebody could use to try and get in. We used our Gemini system to help prioritize and to help people hunt faster; we call it faster threat hunting.
In that environment, the model has to learn how to find patterns in a large number of log files that people are ingesting and that requires specific tuning of the model to do that. There are things there that having a close working relationship with the DeepMind team has helped enormously.
There’s similar things when you look at, for example, customer engagement, customer service. We’ve got a project at Wendy’s to automate food ordering in the drive-thru. If you actually think of a drive-thru, it’s an extraordinarily complicated scenario because there’s a lot of background noise, kids screaming in a car, people changing their mind when they’re ordering something. “I didn’t mean that one. I wanted that one, changed to this one, and which one did you mean by that one and this one?”
There’s a lot of things that we needed the model to do to have ultra-low latency in being able to have that conversational interaction with the user. All those elements and the partnership we have with Demis has been super, super productive.
Related Article: Google DeepMind CEO Says Company’s AI Will Surpass ChatGPT
Scaling the Model Mountain: Cost vs. Performance
I was speaking with Mustafa Suleyman, the CEO of Microsoft AI, just a few days ago. He said that without spending the billions of dollars it takes to train new models, you can basically replicate what the frontier labs are doing with a lot less money and put it into action just a little bit more slowly. What do you think about that argument?
I can just tell you there’s a lot of debate on cost of training and inference. First and foremost, in the long run, if AI scales, the cost you really want to care about is inference cost, because that’s what’s integrated into serving. Any company that wants to recover the cost of training has to have a large scale inference footprint.
There are lots of things we’ve done with our Gemini Flash and Gemini Pro models that you can see and also other people using TPU for inferencing. For example, large companies are using it to allow them to optimize the cost of inference. The proof is in our numbers. If you look at our price performance, meaning quality performance of models and the unit price of tokens, we’re extraordinarily competitive. That’s number one.
Number two, on the training, there’s a bit of confusion that may exist in the market. There is frontier research exploration. Frontier research exploration, for example, could be: how do I teach a model a skill like mathematics? How do I teach a model a new skill like planning? How do I teach a model a new skill in a brand new area? Those are what we call frontier research that goes on.
Many experiments like that are done. And then, after you find the recipe, you then actually train a model. Training a model means you’re running the actual training. People are mixing up the total amount of money spent on research and breakthroughs with actual training. We wouldn’t be investing the way we are as a company without knowing the ratios between all of these. We’re very confident that we know how to run very efficient model training, what we’re investing in frontier research and, most importantly, how we’re handling model inferencing, and that we’re world-class at all three.
How Google Is Managing AI Model Growth
Do you think there are still gains to be had by scaling up the pre-training of models?
There are gains to be had. I don’t think they will be at the same ratio as earlier because there’s always a law of diminishing returns at some point. I don’t think we are at the point where there are no more gains but we won’t see the same ratio of gains we used to see.
What Reasoning Really Enables for Customers
With inference, how much of the cost is going toward reasoning and what have these new reasoning capabilities allowed your customers to do that they couldn’t do previously?
Reasoning is something we are starting to see customers using in different parts of our enterprise customer base. For example, in financial services, we’ve had people say: I want to understand what’s happening in financial markets, summarize the information coming off financial market indices and other financial information and tell me what’s happening. And, the model can not only build a plan for how it collects the information, but summarize it and then reason on the summary to say if there are conclusions to be derived.
We are starting to see people doing much more sophisticated, complicated reasoning. We have a travel company, for example, that’s working on this: you give it a very high-level description of what you want to travel for. I want to fly to New York. I’m taking my son. We’d like to see Coney Island and the following three things; build me a plan. And in that, it can have multiple choices, but it may say: if you’re traveling in June, it may be hot in the afternoon. Therefore, you should see Coney Island in the morning and go to the museum in the afternoon. Models are starting to be able to reason on those things. We are starting to see early adopter companies test in all these different dimensions.
Yes, but in the non-reasoning versions of large language models, I could say, build me a plan and it could do that. So what does reasoning do that allows customers to be able to do stuff they could not previously?
Historically, when LLMs were used, people were worried about hallucination. They gave a large language model a single-step task, meaning, do this and come back to me so that I can determine if your answer is hallucinatory or not.
Secondly, when I asked you a question, you gave me a single answer. You didn’t generate a variety of different options and then reason on it or critique them to say this might be the best answer.
That is the nature of some of the differences in why people are using reasoning now as opposed to before: you can increasingly trust that the model can actually reason across a set of options.
Whenever you have a multi-step chain of thought, you can have drift: early in that chain of thought you get an incorrect answer, the model steps down that incorrect path and reasons a lot more, and downstream you can end up way off relative to what the right path ought to be. As models have become more sophisticated, people have trusted them more. Part of it is that the accuracy can be higher. Part of it is that the model can evaluate a set of different choices and give you an answer based on that set, not just say: here’s a single answer.
Third, we also allow people to understand what the steps were in how it reasoned. They can look at it and say: yeah, maybe I agree with it, maybe I don’t.
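The pattern Kurian describes — sample several candidate answers, critique the set, and keep the reasoning steps inspectable — can be sketched in miniature. This is an illustrative stand-in, not any real Google API: `generate_candidates` fakes the model calls, and the scores stand in for a critique pass.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    steps: list[str]   # the reasoning trace, kept so a user can inspect it
    score: float       # critique score for this candidate; higher is better

def generate_candidates(question: str) -> list[Candidate]:
    # Stand-in for sampling several chains of thought from a model.
    return [
        Candidate("answer A", ["step 1", "step 2"], 0.62),
        Candidate("answer B", ["step 1", "step 2", "step 3"], 0.91),
        Candidate("answer C", ["step 1"], 0.40),
    ]

def best_answer(question: str) -> Candidate:
    # Instead of returning a single sample, critique the whole set
    # and keep the highest-scoring candidate.
    candidates = generate_candidates(question)
    return max(candidates, key=lambda c: c.score)

winner = best_answer("Plan my trip")
print(winner.answer)   # the selected answer
print(winner.steps)    # its inspectable reasoning trace
```

The key difference from single-shot prompting is structural: the caller gets back both a selected answer and the trace behind it, so a person can agree or disagree with the steps, as Kurian notes.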
Inference Costs and Optimization: The Hidden Battle
Jensen Huang at NVIDIA says reasoning costs 100 times more than non-reasoning inference. You also have your own compute, and you’re facilitating that. Is that in the ballpark?
It depends on how long it spends on a task. For instance, you could give it a very complicated problem, and a model that takes hours to reason over an extraordinarily large data set will be more expensive. At the same time, in the example I gave you on travel, given the number of trips that are made, that company is not going to spend millions of dollars to calculate the best choice of trip for me. Or in the financial markets area, given how much information is coming in all the time and how quickly you need to reason on it to present your equity traders or your private wealth managers an answer, you’re also going to time-bound the reasoning computation. These controls in the platform allow you to say what the breadth of the reasoning is: how large a cluster you want to reason across, how much data, and how long you want to reason.
All those factors are in the user’s control and therefore drive how much they want to spend.
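Those knobs — how broadly to reason, over how much data, and for how long — behave like a spending budget on inference. A minimal sketch of a time- and step-bound reasoning loop follows; the function and parameter names are illustrative, not Google Cloud’s actual API, and each "step" stands in for a model invocation.

```python
import time

def reason_with_budget(steps, max_seconds: float = 2.0, max_steps: int = 100):
    """Run refinement steps until the time or step budget is exhausted.

    `steps` is an iterable of callables, each one refinement pass;
    in a real system each call would be a (costly) model invocation.
    """
    deadline = time.monotonic() + max_seconds
    result = None
    for i, step in enumerate(steps):
        if i >= max_steps or time.monotonic() >= deadline:
            break  # budget exhausted: return the best answer so far
        result = step(result)
    return result

# Each "step" refines a running total; the budget caps it at 3 steps,
# so only the first three refinements (1 + 2 + 3) are applied.
refinements = [lambda r, k=k: (r or 0) + k for k in (1, 2, 3, 4, 5)]
print(reason_with_budget(refinements, max_seconds=5.0, max_steps=3))  # → 6
```

The fraud-analysis example Kurian gives later maps directly onto this shape: a real-time transaction check runs with a tight `max_seconds`, while a batch anti-money-laundering job can afford a much looser one.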
So a 100x cost increase for reasoning might be the most optimistic scenario if you’re selling compute. But there are plenty of other reasoning use cases that are much less expensive. Does that sound like a reasonable takeaway here?
What we’ve seen, if you look at models themselves: you would have needed a billion times more energy if you straight-line extrapolated 2023 inference costs. If you look at 2024, we’ve reduced the cost of inferencing by a factor of 20 times, and you can see it in the prices of our models.
There’s a lot of optimizations you can do in that. Same thing with reasoning. There will be a lot of optimizations that we will continue to make to lower the cost of reasoning. People will want to do more reasoning. As you make it more affordable, people will use it more widely. There will be a range of things all the way from relatively quick, short, time-bound reasoning to much longer things.
There’s a financial institution working with us to do fraud analysis on transactions that are happening on the payment network. By definition, they need to do that in real time. Their reasoning is time-bound because they have to flag a transaction within a certain period of time. Now, they also do anti-money laundering and other calculations. That reasoning is done in batch and can take a lot longer if they want to. That’s why there will be a range of these things. Saying it’s all one or all the other is not correct.
I just want to ask you about open source. There’s a view that if open source overtakes the proprietary models, it really won’t matter which cloud platform you use, and it levels the playing field. What do you think about that?
I think it’s very early to tell whether open source versus proprietary models are going to win or lose. We put out an open source model called Gemma, which is getting a lot of adoption among the developer community for people wanting to build a certain class of applications. We want to continue to see how open source and proprietary models evolve.
Historically, open source models were used because people wanted to fine-tune a model to have their own weights. When I say fine-tune a model, they would take an open source model and really tune it on their dataset to have their own weights. Now, as more and more sophisticated techniques for optimizing models have come in, where you don’t need to depend on fine-tuning with adjustment of all the model weights, that case has become less important. But there’s always going to be a need for a combination of these, and it’s very early to tell. Separate from that, let’s assume, to your question, Alex, that open source became the dominant one. How would we do? We have a history with that.
A couple of examples. First of all: Kubernetes is an open standard for people spinning up cloud workloads in computation. Many people would say Kubernetes is a standard and has become the dominant programming paradigm through which people stand up containerized workloads, which are the dominant way forward. We’ve got a great solution, something called Google Kubernetes Engine. People still take vanilla Kubernetes, but choose us because of performance, scale, reliability and all the other things. Even if open source models became popular, you still have to serve the model. You still have to optimize the performance of the model, and we’re confident we can do that better than others. Lastly, many people are coming in at other parts of the stack where they’re using a model as part of a service.
For instance, I gave you an example with cyber security. Inside the cyber security tool, they don’t really care if it’s Gemini or something else. What they’re looking for is a great cyber hunting capability. If you look at data science, where people are saying, I just want to ask a question to my data warehouse using English. Can you understand what I’m asking and show me the calculations? That’s actually a very complex technical problem.
For those cases, do they really care if it is Gemini? It works particularly well because it’s Gemini, but they’re just accessing our product. We have a new product called Agentspace. Agentspace is search, conversational chat and agentic technology for your enterprise. They really don’t see the model. They’re using an application or a platform and underneath we are providing the capability. There’s other ways to differentiate even if open source became extraordinarily popular.
And Agentspace, if I’m right, is your fastest growing product ever?
Yes. We’re very proud of it.
Workforce Transformation Through Embedded AI
Okay, last question on this topic and then we’re going to move on to product. You’ve made Gemini a free add-on for the $30 per seat option in Google Workspace. Can you talk through that decision?
Google Workspace is our collaboration tool. We made Gemini part of Google Workspace rather than requiring somebody to buy a separate subscription. Why did we do that? If you’re using Google Workspace, and for example, you’re using Gmail, people love the fact that when I receive a lot of email, it summarizes things for me. Or, I want to write an email to recommend somebody for a position. You can ask it to help write the email. If you’re doing slides in Google Slides, you want to have a great visual presentation of a set of information. I’m not very good at creating amazing slides, but now you can use our Imagen tool to create amazing images and put them into slides.
It requires people to change the way they work. We want to drive daily usage of AI, because if it needs to change the way they work, you want them to get used to using it. If one group of users in a company gets it, another group isn’t allowed to use it, and a third might be allowed but has to buy a subscription, you don’t let them get used to using AI as part of their daily life.
We learned this back in 2014 and 2015, when we added auto-complete and auto-suggest to Gmail, features a lot of people love. It was part of the product. That’s what got people used to using it. It helps us improve our AI, because with all the usage you notice patterns and the models get better and better. It also helps condition users to start using AI to assist them every day. That’s why we put it into the base product.
Now, there have been a number of articles recently calling generative AI technology “mid,” or shiny on the outside but just a bit disappointing when you use it. What type of applications have you seen that make you believe that’s wrong?
Any major technology shift takes a while for adoption to happen and for people to understand it. If you look at the internet, it went through a similar thing. If you look back at 1997 to 1999, there was a lot of hype that it was going to change things. In 2001, some of the hype fell apart, but over the long term it has definitely transformed the way people find information, buy things and even run their businesses.
Early on, people had maybe too rosy a view of AI. In the long term, we always say that technology is going to be really a fundamental transformation. How quickly it changes in the day-to-day, every day, time will tell. I’ll give you examples of things that we always say, let the customers tell the story. Let’s not tell the customer story on their behalf.
We’re super proud of the work we’ve done. At Seattle Children’s Hospital, they wanted their pediatricians, when they see a child, to be able to understand the guidelines for treatment. Guidelines are complicated. You need to be accurate in the information put in front of the person. We’ve helped them do that. At the Mayo Clinic, they wanted us to provide a system through which a doctor could find information from the electronic health record, from their clinical trial system and from the radiology imaging system, and synthesize it so a nurse, before she sees a patient, can see the information.
If you look at what we did with Verizon, Verizon has the largest consumer customer base in telecommunications in the United States. They have over a million calls a day going into the call center. We’ve helped them build something called a personal research assistant: if I am a call center person and you call me with a set of issues, the question is how long it takes to research that information and put it back in front of me, so I can handle customer service faster and better. They are very pleased; they have 96% accuracy in the information presented. That’s important because that number is better than a human.
We’ve had people do it in the consumer world, in retail. We’ve had people improve the way they shop for things, helping people change the accuracy of search results on their search page. We’ve improved the back office in a company called AES. It’s an energy utility; it builds and delivers energy to different parts of the world. It used to take them 14 days to run their end-of-quarter audit. They do it in one hour now. These are examples of people doing it right at the core of their business. Honeywell in industrial manufacturing has put our technology into the manufacturing control systems. Deutsche Bank is using it for their private wealth managers to summarize information for them. Are they transformative to the people doing the work and to those customers? It is transformative. They’ve seen the business results. Time will tell how transformative consumers experience it to be.
Related Article: Top Contact Center Trends to Watch in 2025
Why do you think enterprise has been so much quicker to adopt this than consumer? And is it going to be like the BlackBerry? Are we going to start to see some enterprise adoption, and then all of a sudden it will just shift over to consumer?
The enterprises find real value at the core of their business. It’s helping people like Wayfair write code faster and write better code. It’s helping people like Mattel find answers so that they can be much more quick and efficient in managing their supply chain and operations infrastructure. It’s helping people in the entertainment business build much better recommendations of titles for people to see.
There’s lots of companies using our recommendation system for it. It helps them decide: Do I want to improve my top line? Top line is to get people to buy more product, get people to use more of my services. For example, recommendations on movie titles. It helps them be much more efficient in their back office.
It also helps Home Depot. We help them build an employee help desk that answers employee questions about the benefits, medical insurance and lots of things. It also helps them improve the way their own employees experience the organization.
Enterprises are choosing it for a variety of reasons. Time will tell whether there will be many killer consumer apps based on generative AI, but we’re focused on making sure people have the best technology to build a great experience.
Bending Spoons, for example, is a company out of Italy handling 60 million photos a day. They’re using our tools to edit them and do magical stuff. Every Samsung Galaxy S24 smartphone has our Gemini AI on it. People are using it to create great images and do amazing stuff. There are lots and lots of examples of even enterprises now bringing these technologies to their consumer experience. Even the work we did with Mercedes helps me drive and gives me guidance just by talking to maps. Is it transformative? It’s up to the consumer to decide.
Agentic Platforms for End-to-End Customer Journeys
Agentic is one of the biggest buzzwords I’ve ever heard covering tech. What’s real, and how far do you think we are in the rollout?
It’s early on. But, let me start with what we mean by an agent. An agent is an intelligent system, software system, that has a set of skills. One of the set of skills is, for example, that it can reason. Another set is that it can use tools. Third, it can communicate with enterprise applications and systems and do that in order to, for example, automate, answer questions, or do something on your behalf.
Here’s a very simple example of a way to think about a single-agent and multi-agent scenario. I have a phone. I want to decide whether I want to upgrade that phone or not. So I call my telephone company, and a digital agent comes on and says: Thomas, I notice you’re calling from this number. Let me find out what you’re calling about. And I say: I’d like to figure out a trade-in. The agent then says: I notice you’re on your mobile. Can I text you a link? Please take a photograph of your phone and upload it. I notice you have X phone, Y model. You have a cracked screen, so you’re authorized for this much of a trade-in.
It’s handling that interaction with the customer. It’s looking at my plan and my profile and seeing: he’s a premium customer, so he’s eligible for a trade-in. It’s using a set of tools to calculate whether I have the right profile and am authorized for a trade-in. Then it’s looking up a system to understand how much that trade-in amount is worth. It’s automating that flow, rather than transcribing the customer’s trade-in request for a human, and then having the human ask what phone they have and whether the screen is cracked.
Now, where is agent-to-agent interaction? Agent-to-agent interaction is when this agent, as it’s functioning, may need to say, for example: hey, I’m going to send you the new phone, but you have to activate it. In order to activate it, I’m going to schedule you to go to our nearest retail store.
Yes, been there.
It may need to call a scheduling system to schedule an appointment for you. That scheduling system may be in some CRM, Salesforce or otherwise, where it needs to create a ticket for you so that when you go into the store, it says Friday morning, Thomas is showing up with his new phone. Let’s have people ready to activate it. There’s one agent talking to another agent and that needs an open protocol.
What we’ve done at Google is build an agent development kit, which has an API through which you can create agents. We provide you a tool set to do it, and a set of tools that these agents can use, but we also have an open agent-to-agent protocol supported by a lot of companies. It’s an open source project we’re running where you can connect our agents to any other agent.
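The flow Kurian walks through — one agent using tools to price a trade-in, then delegating the activation appointment to a scheduling agent over a shared protocol — can be sketched in miniature. Everything below is illustrative (the class names, message format and trade-in values are invented for this sketch); Google’s actual Agent Development Kit and agent-to-agent protocol define their own schemas.

```python
from dataclasses import dataclass

@dataclass
class A2AMessage:
    """A minimal stand-in for an agent-to-agent protocol message."""
    sender: str
    task: str
    payload: dict

class SchedulingAgent:
    name = "store-scheduler"

    def handle(self, msg: A2AMessage) -> dict:
        # In a real system this would create a ticket in a CRM so the
        # store has people ready when the customer shows up.
        return {"appointment": f"Friday 9am for {msg.payload['customer']}"}

class TradeInAgent:
    name = "trade-in"

    def __init__(self, peers):
        # Peer agents reachable over the (stand-in) protocol.
        self.peers = {p.name: p for p in peers}

    def quote(self, phone_model: str, screen_cracked: bool) -> int:
        # Tool use: look up the trade-in value for this device.
        base = {"X": 300, "Y": 200}.get(phone_model, 100)
        return base - 150 if screen_cracked else base

    def process(self, customer: str, phone_model: str, screen_cracked: bool) -> dict:
        value = self.quote(phone_model, screen_cracked)
        # Agent-to-agent: delegate activation scheduling to a peer agent.
        msg = A2AMessage(self.name, "schedule_activation", {"customer": customer})
        booking = self.peers["store-scheduler"].handle(msg)
        return {"trade_in_value": value, **booking}

agent = TradeInAgent(peers=[SchedulingAgent()])
print(agent.process("Thomas", "X", screen_cracked=True))
# → {'trade_in_value': 150, 'appointment': 'Friday 9am for Thomas'}
```

The point of the open protocol is the `A2AMessage` boundary: the trade-in agent never needs to know whether the scheduler lives in Salesforce, another CRM or another vendor’s stack, only how to address it and what message shape it accepts.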
Related Article: Your Dashboard Is Outdated: Welcome to the Agentic AI Era
Now we get to tariffs. This is a tweet from investor Gavin Baker who effectively said, “Geopolitically, nothing matters more than winning AI. These tariffs as constructed essentially guarantee that America will lose AI by making America the most expensive place on earth to build AI data centers.” Do you agree with that? And how do you think these tariffs will impact your business?
I’m not going to comment on policy. We do have a global footprint: data centers, machines, networks and subsea cables in many, many different parts of the world. That’s part of Google’s infrastructure, and I am responsible for it along with the team. We have lots of places where we manufacture things, lots of places where we deliver things, and we are working through the implications of the tariffs for our part of the business. We’re confident we can work through it, and we have lots of people way smarter than me working on solutions for how we manage through this environment, which is uncertain.
How Google Cloud CEO Thomas Kurian’s Strategy Impacts CX and Marketing Leaders
Editor’s note: Kurian’s insights reveal how Google Cloud is using AI to reshape the customer and employee experience, power enterprise applications and support marketing teams through open platforms and agent-based tools. Here’s our breakdown highlighting CX and marketing takeaways from this Q&A:
CX/Marketing Insight | What Kurian Said | Why It Matters |
---|---|---|
Prebuilt AI for customer-facing use cases | Google Cloud offers AI-powered solutions for industries like food service, automotive, and cybersecurity, allowing organizations to deploy intelligent agents out-of-the-box. | These ready-to-use AI agents streamline implementation for customer-facing scenarios, reducing development time and enhancing digital experiences quickly. |
Conversational AI for real-world environments | Kurian cited Wendy’s drive-thru project, where AI handles noisy, dynamic conversations with high responsiveness and accuracy. | AI that works reliably in complex environments like drive-thrus shows the tech is mature enough to support high-stakes customer interactions. |
Enhanced call center intelligence | Verizon uses Google Cloud to support call center agents with AI-powered assistants that deliver accurate, context-specific information. | Improves both agent productivity and customer satisfaction by speeding up issue resolution and reducing errors. |
Internal service improvements that boost EX and CX | Home Depot implemented an internal help desk using Google Cloud AI to support employees with benefits and HR-related questions. | A better employee experience creates more engaged workers — and that translates to more consistent, helpful customer interactions. |
Flexible model ecosystem based on customer needs | Rather than forcing customers to use Google’s in-house models, the platform supports 200+ options, including open source and third-party models. | Marketers and CX teams benefit from choice and adaptability — reducing vendor lock-in and enabling use of best-fit models for different applications. |
Everyday AI integration inside productivity tools | Gemini is embedded in Workspace apps like Gmail and Slides, helping users summarize messages and generate creative visuals more easily. | Marketers gain everyday productivity boosts from AI-assisted email writing and presentation building — driving content velocity. |
Agentic platforms for end-to-end customer journeys | Google is developing agents that reason, use tools, and automate customer tasks — and can even interact with each other via open protocols. | These agent-based systems could eventually power entire digital journeys, reducing friction across channels and unlocking new service models. |
But what about all the raw materials that come in? This is continuing on from Baker, who says, “The semiconductor exemption was irrelevant for AI. Data center semiconductors come into America in finished goods from Taiwan and other Asian countries, which includes servers, storage systems, and networking switches. By the time we have developed the capacity to domestically produce these systems, we will have lost the AI race.” You’re buying this stuff, what do you think about that?
Some parts of our manufacturing, significant parts, are here and we have solutions to some of this and I’ll leave it at that because the rest of it is confidential on how we’re managing through this environment.
Do you rely on suppliers outside of the US? Does that mean your costs will have to increase if they go into effect?
We have mitigations in lots of other ways to protect our infrastructure and our costs. I don’t want to give more details than that because it can lead to speculation on financial results, and I’m not going to get into that. We’ve run a global infrastructure for Alphabet for many, many years. Part of our success at Google has been having good, low-cost, highly scalable training and serving infrastructure for all our services: YouTube, search, advertising, Waymo, etc. I always tell people to trust that we know how to run a large global supply chain. We’ve been working on contingency plans for quite a while.
As we round out this interview, I want to share that the conventional wisdom on Google Cloud has been that Google had all the technology in the world to compete in cloud, but none of the sales muscle; that Google got used to selling in an automated fashion and didn’t know how to sell to people. When you came in, Google Cloud revenue was a billion dollars a year. Now it’s in the 40s of billions, and it’s expected to be in the 50s in 2025. How did Google Cloud learn how to sell?
We’ve learned how to sell by listening to customers and building a great, great, great sales team. In order to do cloud well, you have to do three really basic things. First, you have to anticipate customer problems in ways that other people didn’t. We are very proud of our ability to identify where the next customer pain point is going to be.
Second, we built a global sales team, and credit to our go-to-market organization for doing it. It’s a grind to build such a thing, which is why very few companies have done it successfully, and to grow from the scale we were at in 2019 to where we are now, no other enterprise software company has grown that fast. That’s a credit to our sales organization. We had to bring discipline: start with a certain set of countries, get critical mass there, then expand. We had to find the right mixture of sales reps, technical customer engineers, and people who do customer service and customer support. We had to ensure that our contracting, our legal framework, and all the other things that sit behind the sales organization were world class. I’m super proud of that. Third, we have always believed that cloud is a platform business, and the way that you grow is you provide a platform.
That lets other people grow on top of you, whether that’s independent software vendors like Salesforce, ServiceNow, Workday and SAP, all of whom have great relationships with us. You work with partners, for example, the relationship we have with Oracle and many other independent software vendors, Palo Alto Networks, etc., bringing them to our customer base jointly.
Lastly, for every customer who has in-house staff, there are many who don’t, and they want partners to help them deliver solutions. We made a decision early on not to have a big professional services organization, specifically so that we could attract the partner community. One stat we are super proud of: in 2019 we had about a thousand partners; today we have a hundred thousand. It’s that, allowing people to grow with you and building a great sales organization, that has transformed our business. When we talk to customers, and when you see them at the show next week, you’ll see how proud they are of the difference in the way Google works with them: that we listen to them, that we help them innovate their business, and that it’s not an IT vendor relationship for the vast majority of them.
Okay, last question for you. Right now, cloud makes up something like 15 to 20% of total overall workloads; most hosting is still done on-prem. So where do you think it can get to in the future? Can it go up to 100%?
We definitely see it getting north of 50%. The historical reluctance rested on arguments like: I can do it cheaper, I can do it better, my cybersecurity controls on premises are better. There were lots of those arguments, and people are seeing that they don’t make sense. As the breadth of technology you get in the cloud continues to mature (the cybersecurity tools, the AI platforms, the analytical tools, how fast you can do something), it’s helping people move. As an example, last year we had Walmart speaking at a conference. Every transaction that happens at a Walmart gets into our cloud to allow them to analyze how much inventory they need to replace, which customers are buying, and what products are selling.
If you look at the volume of transactions, the accuracy, and how quickly they can get analysis into the hands of their store managers and retail store people, it’s an order of magnitude faster. Our job is not to criticize customers who run stuff on their premises; there are always some reasons for it.
Increasingly, we’ve also built technology to take our cloud into their data centers if they want to. For example, for people who have classified and highly sensitive workloads, we’ve taken our cloud into their data centers and that’s also a new way to deliver the cloud. If you look at the work we’re doing with McDonald’s, we’re putting our cloud into the restaurants. When people think about the cloud, they used to think it’s one definition, it’s these big cloud regions that we have. Increasingly, the cloud also means the same technology can come into your premises. That’s also changing this definition of what percentage of workloads you can reach.
Thomas, thank you so much for coming on. It’s great to meet you.
It’s such a pleasure to speak with you, Alex. Thanks again for having me.