Decoding Google Cloud Next 2025: Key AI Innovations & Business Impact
Join Brandon Carter, Marketing Director at Promevo, as he hosts a deep dive into the key takeaways from Google Cloud Next. With over 220 significant announcements, experts John Pettit, Aaron Gutierrez, and Mark Baquirin review the most impactful news in AI, data analytics, and Google Workspace.
Featuring insights on Gemini, Vertex AI, Looker, and more, this talk explores how these advancements can boost productivity and integrate seamlessly into your business. The team also discusses the exciting developments in agentic workflows and practical AI applications.
Topics & Timeline
00:00 Introduction and Conference Overview
01:22 Meet the Experts
01:59 Promevo and Google Cloud Next Highlights
05:10 Key Takeaways from Google Cloud Next
10:03 Gemini 2.5 Pro and AI Advancements
16:30 Imagen and Chirp: New AI Tools
22:26 Open Source Models and Vertex AI
26:06 Real-Time Multimodal Capabilities
26:34 Ensuring AI Reliability with Global Endpoints
27:07 Understanding Vertex AI and Gemini
27:45 Exploring Agentic Workflows
28:19 Google's Agent Development Kit
31:18 Agent to Agent Protocol
33:05 Introduction to Agentspace
38:55 Google Workspace Flows
41:10 Enhancements in Google Docs and Sheets
46:16 Conversational Analytics in Looker
49:26 Object Table Querying in BigQuery
52:22 Conclusion and Contact Information
Transcript
Brandon Carter: We're super excited to talk through all the things that we learned last week at Google Cloud Next.
Everyone here is still recovering. It's a super busy conference, uh, with a ton of stuff going on. A lot of walking, a lot of conversation. Some of us, the voices are still scratchy.
I know that, like, looking at my steps on my phone, I averaged about 18,000 steps a day, uh, which is good. Like it is, uh, it's a great workout. But more than that, it's just a lot of exciting stuff happening, a lot of stuff to learn, a lot of sharing.
So my name is Brandon Carter. I'm the Marketing Director at Promevo, and I'm very glad to be your host today.
Here's a quick glance at our agenda. Uh, Google Cloud had over 220 significant announcements last week, and we've sorted through the ones that we think are most significant and impactful to you, the end user who utilizes Google Cloud tools like Gemini and Vertex AI, Google [00:01:00] Cloud Platform, Looker, BigQuery, Google Workspace.
So as such, we, we've split the agenda into a few parts. Uh, we'll start off with the highest impact announcements across the board, just sort of like, Hey, what's the big picture here?
And then, uh, each of our experts is gonna dive into what we think are the most significant announcements across cloud, AI, data analytics, and Workspace.
Let's meet our experts for today.
Joining us live from Hawaii where it's very early is our Chief Technology Officer, John Pettit. John, good morning.
We're also joined by Aaron Gutierrez, who runs our data analytics practice and resident guru on all things Looker and BigQuery. Good morning, Aaron.
Aaron Gutierrez: Morning.
Brandon Carter: And of course, those of you who attend our gPanel webinars will recognize Mark Baquirin, who is also an expert on all things Google Workspace and all the announcements that they made around that particular product.
So, good morning, Mark.
Mark Baquirin: Good morning.
Brandon Carter: All of us were [00:02:00] on site for the duration of the event last week. We all came away with a ton of, you know, new things that we've learned. A lot of enthusiasm and real excitement about where things are going. Uh, and you might be wondering, well, who is this "we"?
All of you out here, some of you're clients and partners, some of you have no idea who we are. You're just wanting to learn about Google Cloud Next.
So, Promevo, for those of you that are uninitiated, is a Google Cloud Premier partner, which means we help companies like yours with all of your Google needs, ChromeOS devices like Chromebooks and Meet kits, Google Cloud platform, data analytics tools like BigQuery and Looker, Google Workspace, and of course, you know, all of these different AI tools that they've built out.
We'll sell it to you, we'll service it for you. We provide support. We give you someone to call.
Uh, we're probably best known for gPanel, which is our Google Workspace Management platform that unlocks a ton of functions for your Google Workspace, including things like batch policy updates, uh, automation, decommissioning recipes, integrations, and [00:03:00] more.
If you're not familiar with that one, uh, be sure to check out our new website at gpanel.io to schedule your demo there.
Let's talk a little bit about just Google Cloud Next from Promevo's perspective.
So, I mean, what is Google Cloud Next? For those of you that don't know, it's Google's annual conference where, at least this year, 45,000 Google customers, partners, developers, employees, they all converge on the Mandalay Bay in Las Vegas for four days of learning, innovation, announcements.
These are some of our highlights. Those of you that were there, hopefully you got one of our little llama plushies. We went through about 500 in about two days, and people were still coming up to me in the airport getting ready to leave Las Vegas, asking if they could have a llama, like, so a big hit.
This thing on the right, that was obviously the Google CEO at the Sphere, talking about how Google Cloud partnered with, uh, like a film production company to [00:04:00] recreate the Wizard of Oz for the Sphere using generative AI. Totally mind blowing experience. It's interesting to see how they use technology to recreate, you know, what was meant for a screen that was a rectangle and turn it into this, like, immersive experience. Uh, that's probably up on YouTube. I highly recommend checking that out.
Uh, you know, of course, Promevo, we hosted a happy hour with a couple hundred of our closest friends, and over the course of the few days, I guess we probably had about a thousand people that stopped by the booth, uh, and picked up llamas or registered for our Lego giveaway.
Uh, so again, if you were there, it was really great seeing all of you. A lot of energy, a lot of enthusiasm. Culminated Thursday night with a, uh, a concert at Allegiant Stadium where the Raiders play, headlined by the Killers. But for me, the highlight was Wyclef Jean getting so into his set that he went over his allotted time and eventually they had to pull him off the stage, like Oscar style, uh, just literally cutting his music [00:05:00] and cutting the lights.
If you didn't go this year and you're interested in going next year, reach out to us. We can help you, uh, we can help you make your way to Google Next.
All right, that's enough of me talking. Let's get into the meat of this thing. I wanted to start with a big, high level takeaway.
And we'll start with you, John. Like, what would you say was your, like, large overall impression from Google Cloud Next this year?
John Pettit: Yeah, I, I think, you know, my walkaway impression was, um, this was all about AI. It was an AI conference more than a cloud conference. In the past, we were talking about, like, cloud migrations and things like that, and, you know, web 2.0, and this was all about AI.
Uh, Google, you know, has done a tremendous job, I think, getting into and being known for AI over the last year. The inflection point was maybe October, November, when we had Gemini 2.0 come out, and you had a very powerful model, comparable to everything else that was out there, that, you know, is more cost effective.
The things they showed with Gemini 2.5 at this conference are equally impressive. Also, it's not just the [00:06:00] models that they're focusing on and the APIs, it's the whole platform, right? They're doing the best job at integrating AI across their entire tech stack, their products, their cloud environment.
Um, more than anything out there. But, um, super impressive stuff to see coming out of this conference: products, AI models, tools, everything that you need to be in the AI space.
Brandon Carter: Sure. We're definitely gonna get into some of those here in just a second, but I'm curious, Mark or Aaron, do you, what were your takeaways? Do you wanna chime in?
Aaron Gutierrez: Um, yeah, I, I just wanted to mention how I, I think Google is really getting better at telling the story of how AI is even useful to people. Um, especially with the Wizard of Oz thing, like, you wonder how that's done, right? Like, what kind of technologies are being used? In the presentation, they showed us how they're using Google tools and Google AI.
So last year and the year before, I think a lot of people, [00:07:00] you know, they boiled AI down to just fancy auto complete, right?
Brandon Carter: Mm-hmm.
Aaron Gutierrez: Like something that can just build text: you put in a text prompt, it gives you a response. But the stories are getting better. The examples are getting better with how they're using AI to do other things.
I think they showed us some video tool, what is it called now?
John Pettit: Veo.
Aaron Gutierrez: Yeah. That was really cool how that's improved so much over the initial tiny glimpse they showed us last year. It's actually pretty useful. And now I, I kind of caught myself watching like some of the presentations and being like, uh, was that built with Veo or is this real?
Um, and I guess the other big takeaway for me was, you know how last year was just a rush to get to AI, or in years past it's always been a rush. Just everyone wants to get their foot in the AI door. And it reminded me of that classic meme [00:08:00] that you see sometimes when people skip steps on stuff, the one where the kid's making a huge stretch over four steps going up the staircase.
And the one I saw that really made me laugh was a CEO taking a big step just trying to get to AI and skipping a ton of important steps like...
Brandon Carter: Sure.
Aaron Gutierrez: Data preparation and security of your data and infrastructure, even things like that, that are necessary to do this well. I feel like, uh, the second takeaway for me was they really went back and addressed all of those intermediate steps, built in a lot of cool tooling to even prepare you to use the AI tools that are the eventual finish line of the race.
So I, I think they did a great job and it's making using it a lot easier for not just us developers, but you know...
Brandon Carter: Right.
Aaron Gutierrez: Um, people in the Workspace side as well.
Brandon Carter: The practical application of AI for sure, like definitely came away with that. And John's gonna show you a bunch of that stuff here in a second.
Mark, did you [00:09:00] want to add anything?
Mark Baquirin: Uh, yeah, speaking of the Workspace side, as of January, Google had started to bake Gemini into the different Google applications, and what they released here at Next shows that they've continued to evolve and add more functionality, just giving Workspace users even more tools and more ways to use, uh, Gemini and AI.
So, yeah, it, it was just an amazing thing to see and, uh, yeah, to be able to take part in all of that.
Brandon Carter: Yeah, some great exciting stuff for Google Workspace that Mark will share everybody here in just a minute. But obviously, AI, it's all about AI.
I mean, most things are about AI, but I think a good recap is: this is AI in use, in ways that are gonna improve the lives of your employees and your clients, your customers, with, you know, increased productivity and a wider set of knowledge to pull from.
Well, let's not talk about it. Let's get into it. Um, I'm gonna hand it over to John. You know, we'll be chiming in on occasion, but John, if you could walk us through [00:10:00] what are the big AI things that they're talking about?
John Pettit: Yeah, so, so first and foremost, Gemini 2.5 Pro, which is now available in public preview, is Google's, uh, deep reasoning model.
So it can handle very complex tasks with high accuracy. It's not focused on speed of response, but quality of response. And interestingly enough, they've claimed the number one spot on Chatbot Arena. Chatbot Arena is where users are rating models based on accuracy and quality of response.
It's not just an "am I talking to a human" test, or some of the other benchmarks out there. And they're doing it for things like code. They're doing it for things like, could I use this in a complex reasoning scenario, where maybe it's legal or medical or other fields. And Google, with their new models, claimed the number one spot.
On top of that, there's the million-token context window, allowing you to pass in a ton of, um, video, images, audio, uh, textual data, and some file formats that it [00:11:00] supports. Uh, definitely sets it apart, right? It makes it so you can, out of the box, start building some very powerful systems, or begin to swap some of your systems into it in a more cost effective way.
So if you compare it against the cost of some of the other models, Gemini is, again, gonna be the number one in terms of response and the number one in terms of cost for you to be able to build your system. So super impressive to see some of the things they've done. And there's some things in there that I can talk about as we go.
They have a live API, so if you have a streaming application, maybe you're sending in samples of screenshots of live video, or you're sending in things like that, and you want it to continuously analyze and give you feedback. So applications where maybe you're out there saying, like, what's in this room?
Or, if you're somebody doing a home assessment, you know, categorize everything in here and help me build an appraisal, right? There's a lot of practical applications that'll come out of a model that powerful. So super neat to see Google leading the way, again setting themselves apart on both efficiency and cost.
It's in public preview, so if you need access to it, you can work with Promevo or your Google sales rep or your connections there [00:12:00] and get access, if you wanted to start building or testing things around it.
Also interesting is that they're experimenting with a 2 million token context window, which again is, um, you know, thousands of pages of text, thousands of images. It's a lot of data.
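For anyone who wants to kick the tires on what John is describing, here is a minimal sketch of calling a Gemini 2.5 model on Vertex AI through the google-genai Python SDK. Treat the project ID and the exact preview model name as assumptions to verify in your console; preview identifiers change between releases.

```python
from google import genai

# Placeholder project; the 2.5 Pro preview identifier below is an
# assumption -- check the Vertex AI Model Garden for the current name.
client = genai.Client(
    vertexai=True,
    project="your-gcp-project",
    location="us-central1",
)

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-03-25",
    contents="Summarize the key obligations in this contract: ...",
)
print(response.text)
```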
Aaron Gutierrez: I, I think you mentioned something really important that has advanced AI. Um, that second bullet point, you mentioned reasoning. And to me that's a huge step forward in AI.
I remember even a year or so ago, if you looked something up with AI and then tried to use that as proof, people would be like, you just used ChatGPT or whatever, I don't believe that, that's fake.
Um, this is really similar to, to what happened with Wikipedia when it first came out. It was, it was just a tool that yeah, it existed. It was cool and aggregated everything, but is it trustworthy?
I think some of the cool things that Gemini's doing now in line with the deep research, uh, mode [00:13:00] is that it's showing its work. It's not just giving you blobs of text back. It's showing you where they came from. It's giving you like direct links to sources. So if you click on it, you can actually go see, you know, the study that, that it referenced in the answer.
Brandon Carter: Yeah.
Aaron Gutierrez: And to me that's super important, because now, instead of being an untrustworthy, you know, "fake news" source, Gemini is actually a perfect research tool that gives you lots of things that would take you a lot of hours to do. It just kind of does that instantly. So even playing with it on my phone lately has been really nice, because it has that deep research, uh, feature built into just Gemini on the app.
So it's just, it's just reliable and, and more useful as a tool now than it was when it first came out.
Brandon Carter: Definitely a game changer and, and you know, definitely puts Google not just at par, but over a lot of the common, uh, AI chat bots that people are using.
Aaron Gutierrez: Yeah. Credibility. It's, it's important with data. [00:14:00] It's huge stuff.
So the fact that they put it inline in the responses is pretty nice.
John Pettit: Yeah. Yeah. That grounding through the API, to be able to ground on the search results, where Google can shine with its, like, huge repository of website data, is pretty impressive. And how quick it still responds.
Aaron Gutierrez: Yeah, I think that's cool. The Google search ability, I forgot about that part.
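As a rough illustration of the grounding John mentions, the google-genai SDK lets you attach Google Search as a tool so responses come back with source links. A hedged sketch, reusing the same placeholder project as above:

```python
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-gcp-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What did Google announce around Gemini at Cloud Next 2025?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # ground on search results
    ),
)
print(response.text)
# The citation links Aaron describes ride along in the grounding metadata.
print(response.candidates[0].grounding_metadata)
```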
John Pettit: Yeah. And then speaking of quickness: Flash, right? So that's the model that's your everyday workhorse when you want very quick responses. And as you think about agentic workflows, you have some tasks that require reasoning.
Like, tell me the next set of actions, or, you know, I'm gonna build something that's gonna think forward through the whole process, or analyze a large set of data that's been accumulated. But Flash is good for the quick decisions, right? Is this relevant or not? Um, give me quick responses. And the quality of that, even in Gemini 2.0 Flash, was super impressive.
Um, they've also given people the ability to [00:15:00] tune how much thinking it does. The cost-efficient thinking is interesting: you can tune the thinking that you want it to do to speed up the response, or improve quality, and balance that out.
So that's, that's a neat new feature, uh, that's coming out. And this is coming soon to AI Studio and Vertex AI and Gemini app.
Uh, but it's focused on low latency and cost efficiency, and when I say workhorse model, this is probably the one that's gonna be used for 80 to 90% of the things that you're doing in your task or your workflow. Um, but 2.5 Pro will be used for the things that require high accuracy and high performance, like processing and coding tasks.
Yeah. Things like that. Um, but this will be great for like fitting in like virtual assistants, realtime summarization, anything that you wanna integrate into apps. That'll be a powerful model. So excited to see this coming. Again, the Flash models are like 10 times cheaper than the other competitors out there. So very cost effective to run.
Aaron Gutierrez: Should be super useful, um, for chatbots and stuff. I think that's just gonna make the UX experience a lot [00:16:00] better, um, just due to that latency; it's gonna be fast. Like, a lot of times you get frustrated using these, these tools because they're slow and whatnot, but the fact that they're focusing on the low latency part as well...
John Pettit: Yeah.
Aaron Gutierrez: Pretty awesome.
John Pettit: I would recommend anybody to start with Gemini 2.0 right now, right? Like the Flash model that's out there for 2.0 is great and it's available. I would recommend like start building around that, knowing that 2.5 Flash is gonna be better, more robust for again, quick, efficient tasks.
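The tunable thinking John describes surfaces in the SDK as a thinking budget on the 2.5 Flash preview. A sketch, assuming the placeholder preview model name and that the config keeps this shape:

```python
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-gcp-project", location="us-central1")

# thinking_budget=0 skips deep reasoning entirely for the fastest, cheapest
# answer; raise it for harder questions. Model name is a placeholder preview id.
response = client.models.generate_content(
    model="gemini-2.5-flash-preview",
    contents="Is this support ticket urgent? Reply yes or no with one reason.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```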
Um, Imagen was interesting, to see how Google's continuing to push forward, um, image editing. So simple things like, I wanna go into GCP and use Imagen: I can remove things from photos, right? It can do inpainting, so it can remove items, it can add items back in, and it's significantly improved from what it did before.
It also has the ability for you to continue to iterate on your prompts to get what you want. So they're spending a lot of time not just on the tooling, but on the quality of the tools that they're putting out there. So from a first party tool [00:17:00] standpoint, super impressive to see what you can create.
I know we played around with this a little bit with our MLB hackathon, and we were generating, you know, photorealistic images of ballparks from different perspectives so you could feel immersed. And, you know, it's a really powerful API for generating all kinds of images, and I think it's gonna power the next generation of image editing capabilities. A lot of neat apps are gonna come outta this.
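For a sense of scale, generating an image like the ballpark shots John describes is a few lines against the Imagen API via the same SDK. The model identifier below is an assumption (the Imagen 3 name current at the time of writing), and the project is a placeholder:

```python
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-gcp-project", location="us-central1")

result = client.models.generate_images(
    model="imagen-3.0-generate-002",  # assumption: current Imagen 3 identifier
    prompt="A photorealistic hummingbird sculpted from sliced strawberries, macro detail",
    config=types.GenerateImagesConfig(number_of_images=1),
)
result.generated_images[0].image.save("hummingbird.png")
```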
Brandon Carter: Yeah. Just for a moment, like, I know I can't make it too big on the webinar screen, but the little strawberry hummingbird is incredible. The detail.
John Pettit: Yeah.
Brandon Carter: I mean, down to like, the little, like hairs that show up on it is just mind blowing. Super cool.
Mark Baquirin: Yeah. And also the depth of it. Like you can really tell, uh, what's in the foreground and the background.
Brandon Carter: Yeah.
Mark Baquirin: It really adds a lot.
Brandon Carter: It's got a great prompt too, which you can see down here in the corner. Yeah. That's major advancement. And for marketing people like me, it's nice to see something that's like, this looks really cool and I can't tell that it's AI, which, you know, has [00:18:00] not been the case for a while.
John Pettit: Yeah. And you think about like some of the, like your content generation for like digital assets, um, where people are doing photo shoots or trying to do all kinds of different things to get images that they want for marketing content or even buying stock photos. This allows you to be way more creative than stock photos.
Brandon Carter: For sure.
Aaron Gutierrez: I was impressed by the inpainting demo they showed with a guitar.
John Pettit: Yeah.
Mark Baquirin: Oh, that was amazing.
Aaron Gutierrez: Yeah. That's, it's really high tech stuff. Outta my world, but I was very impressed in my head.
John Pettit: So, so Chirp was interesting because, you know, Google had their voice capabilities, and again, we used this in the MLB hackathon, where we were generating announcer-style voices using their different voice models.
They're now able to let you create your own custom voice with just 10 seconds of input. So you can create synthetic voices that model whatever you want.
Plus they've added in, um, which we saw before, HD voices in multiple languages, which is interesting if you want to deploy [00:19:00] content in an accessible, culturally friendly format where you have people from different regions or spoken languages.
Not just having it say the words in Spanish, but having it say it with the Spanish accent, right? Being able to really reach your audiences with this. It's great for voice assistants, great for, you know, any kind of accessibility options. So really excited about the Chirp API, and seeing Google make this, again, a competitive first party service or tool that you can use, uh, inside of Google Cloud to generate all kinds of rich content.
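Synthesizing speech with one of the HD voices John mentions goes through the Cloud Text-to-Speech client. A hedged sketch; the specific Chirp voice name below is a placeholder, so list the available voices first to find a real one:

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Bienvenidos al partido de hoy."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="es-ES",
        name="es-ES-Chirp3-HD-Charon",  # placeholder; use client.list_voices() for real names
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
    ),
)
with open("announcer.mp3", "wb") as f:
    f.write(response.audio_content)
```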
Brandon Carter: Again, another one that's really useful for marketing people. Like, you know, we're handling podcasts, or we want to go back and take our webinar that we're doing and try to remove all of the host's mispronunciations and, uh, stutters, things like that. Like, yeah, this is really significant. Or I think the best use case that you said there is being able to take your content and present it in other languages, but not just, you know, a robot voice that's reading Spanish, for example, but an actual Spanish, like, dialect.
John Pettit: Yeah. And these things really [00:20:00] stack on top of each other, right? So you can be generating the scripts and audio plans from Gemini, but then you can be translating them through Chirp into any languages, right? So it's like, again, the MLB app that we built was an interesting hackathon, because that's what we were doing: generating an announcer script in any language you want and making it sound native-friendly to whoever was listening.
Right? So as you stack Chirp or the image generation or the generative AI together, you start getting some really powerful creative options, or some really immersive and rich content that you're putting out to your users. And that can be in your apps, or that can be in, you know, content that you're putting out for marketing. It can be in just about anything.
Brandon Carter: Yeah. You've heard John reference a couple of times this MLB, uh, tool that they built in which you can use AI to recreate the play-by-play.
If you guys wanted to check that out, reach out to us. I think John would be happy to give you a tour. It's a great, uh, example of a use case of AI and tools like Chirp, uh, to be able to [00:21:00] take content that doesn't exist and create it from scratch and in context for people, which is, you know, significant.
John Pettit: Uh, so Lyria is something that, that DeepMind was working on before, which is generating music from text prompts. And I think this is interesting in the sense that it opens up new creative ways of creating music, maybe speeding up some of your music production, but generating sounds that maybe you didn't imagine or iterating on 'em.
So these tools can be used for all kinds of things: prototyping, you know, production-quality type output. And one of the interesting things is that with Google, all of the things they're building have this copyright indemnity. So anything that you create with the images, or anything you create with the music, you have the copyright to, and you're protected from a legal standpoint by Google.
Um, it's interesting that you get this high fidelity audio, and that it's integrated with SynthID so you can watermark different parts of the audio. Uh, I played around with this a little bit, just creating different tunes, and it's a fun way to say, hey, I want to create, you know, a beat and a [00:22:00] style, maybe with these instruments, and have it start to generate the music for you. And then you can bring that into an audio editor, and then you can edit it and further refine it.
So, um, pretty neat that we're able to see, when we talk about multimodal, where Google's taking it. It's not just visual stuff, it's audio. It's everything across the board that generative AI is touching, allowing people to be more creative.
Brandon Carter: Lots of content, fun content generation tools. Again, marketer. Love it. Easy stuff for me.
John Pettit: Uh, not just the Promevo llama, but Meta's Llama. So Google is very open with Vertex AI and the ability for you to have open source models. Uh, that's one of the things I love best about being able to go in there and have a model garden and serve up something that's not Gemini.
If you want to compare and see how other ones perform against your prompts, or if you want to see if certain ones are better in different tasks, or maybe you don't need a massive model, maybe you just need a fine tuned model of something else. You have the ability to get those deployed and serve them up just through the, the normal Vertex AI API.
So, um, Llama 4 is [00:23:00] out, very powerful models: Maverick, which is like their deep research model, and Scout, which is their equivalent of Flash. But they can do things like image captioning, obviously text generation and assistance, chatbots. Um, and super long context for Scout: 10 million tokens.
So if you have a ton of data that you need to pump in there, it's multimodal as well. Um, you could say, hey, I have some very specific use case, maybe I'm publishing every legal article ever into this model and I want some assessment of it. Maybe you could throw that kind of stuff at it. But, um, it shows Google's participating in the open source community, and they have partnerships with other AI companies too.
They have, uh, Anthropic models up there. Um, they're just basically part of the community. They're not trying to push their models on you. They want to build the best ones. I'm sure they hope you use theirs, but if you have a specific use case, you can use these other ones. Um, and also, you know, maybe you have a specific use case where you're prototyping and building in Google Cloud, but ultimately you're gonna host that model internally in a hybrid scenario.
And you want to use Meta as your open source model. [00:24:00] You know, you have that capability. Very cool to see this stuff coming out. Very cool to see Meta continuing to iterate in the open source LLM arena. I think it's nice to have technology companies, um, promoting and building open source around this, and it's not becoming private or exclusive.
Um, I think that's gonna benefit us all, right? That's a great way for us all to continue to be able to embed AI into any application.
Aaron Gutierrez: Yeah. Consumers always win when there's competition.
Brandon Carter: Mm-hmm. Well, in this case, collaboration alongside competition.
Aaron Gutierrez: Yeah. And I also think it's a good show by Google to have confidence, like to expose these other models out there alongside of their, their flagship models.
You know, it shows that they know their stuff's good, and they don't mind if you play around with other models from other companies and other sources. That's always a good thing to me if they're, if they're not, you know, restricting competition, just being real open about offerings.
Brandon Carter: Yeah, absolutely.
John Pettit: And then I think all of that is really because of Vertex [00:25:00] AI. So they're building, I think, one of the best AI tooling platforms that exists, right? Allowing you to connect to other models, tune your prompts, optimize your prompts automatically, host and fine-tune all of these open models. Like, they're building a lot of tooling around it.
So some of the new things they talked about are, um, better monitoring controls through dashboards: you being able to see what's happening within your AI models, efficiency, responses, and having sort of the, um, MLOps portion of it.
Like, okay, I built a production application, but how do I know this thing's doing what it's supposed to do, right? So Vertex AI is empowering people to do that. The customization and tuning, um, adding additional capabilities for auto-generating fine-tuned models, or even just saying, help me fine-tune the model parameters given my prompt, right?
Like, all of the things that you would need to do to really dial in your application, so you're not just throwing out a prompt and being like, I think I've got it right, or iterating on prompts and not really knowing you're going the right way. They're giving you a lot of power around that to make sure [00:26:00] that you're getting it to the optimal state, and then being able to maintain that optimal state after you've deployed it.
Um, the live API we talked about was pretty cool, the real-time multimodal capabilities. You know, people are doing sort of live, interactive agents with AI backing them, right? So you can stream it inputs; imagine, like, how humans have visual and audio or other sensory inputs, things that we see.
You can basically treat the AI model that way. You can send it all these sensory inputs, give it some tasks for reasoning, get back the analysis, and then use that to drive really dynamic, realtime applications.
And then this last point is maybe something that's overlooked: how do you make sure this thing doesn't break, right?
So you set up a Vertex AI endpoint and you're talking to a server or cluster in Chicago or the East Coast or Midwest, but this AI global endpoint allows you to make sure that, one, it's efficiently routing AI requests to the servers that are closest, or maybe being regionally tied based on data, but also allowing you to have that kind of load balancing, so you don't end [00:27:00] up worrying about your inference system going down.
So, um, pretty powerful stuff they're doing on the backend in terms of infrastructure as well. So I think the bottom line summarizes a lot of it: they're building a lot of great models, but they're also serving up a lot of power through Vertex AI as a platform.
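In practice, the global endpoint John closes on is just a different location value when you construct the client. A minimal sketch, assuming your chosen model supports the global endpoint:

```python
from google import genai

# "global" instead of a region like "us-central1" lets Vertex AI route the
# request to available capacity rather than pinning it to one data center.
client = genai.Client(
    vertexai=True,
    project="your-gcp-project",  # placeholder
    location="global",
)
response = client.models.generate_content(model="gemini-2.0-flash", contents="ping")
print(response.text)
```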
Brandon Carter: Which by the way, uh, for those of you who maybe you're not on the developer side, but you're curious like, what's the difference between Vertex versus Gemini versus all these other models? Go to Promevo.com. We have a ton of articles and ton of content on our blog where we explain pretty much all of this stuff.
Uh, and happy to walk you through it. Even better, just reach out to us at Promevo.com and like you'll get someone like Aaron, uh, or even John on the phone with you. Just, you know, take you through it. Just a little quick spiel there.
John Pettit: So I saved this part for last. Um, so everybody's been talking about agentic workflows this year.
Like, okay, I want to have this, but, you know, what does that even mean? There's lots of questions about what an agentic workflow means.
But basically you have something where you have agents potentially interacting [00:28:00] with other agents. So you've built an AI that, given some instructions, responds to some input and gives an output. And you have other AIs that can interact with it as tools. But when that happens, you can end up with some very interesting interactions between the systems.
You also have the ability now, with so many different frameworks, to have different agents, uh, communicate in different ways. And so what Google's done is they've invested in filling the gap with their Agent Development Kit.
One, they've made it easier to define an agent, which is basically just a prompt, some tools it can use, and a model that you want it to use, which doesn't have to be Gemini; it can be any model. Uh, two, they've made it work with Model Context Protocol, which has been emerging as, like, the leading way for sending data back and forth between tools and models. Think about MCP servers as an interface to your APIs: you want a standard way for an agent to talk to those APIs and get data back.
So it simplifies all of the development of building these. It allows you to quickly spin them up and build tools and they have a way for you to [00:29:00] deploy and test and debug them, which again gives you a full software development kit around agents because otherwise sometimes the debugging and the deployment can be, uh, up to you and it can be difficult or inconsistent about how you go about it.
Um, but they also complete that whole life cycle. They showed a great video here of, like, oh, I deployed my agent, now I go to logs and I can inspect and investigate an error that came up, and then it suggests code changes somewhere in a tool that the agent is using that would fix the problem. So they have this complete lifecycle that they're showing of how they intend you to be able to rapidly build and iterate on these. And, you know, all the nice things we're used to having for every other language, you're starting to see these frameworks emerge, uh, for developers when it comes to agent software development.
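To make the "an agent is basically just a prompt, some tools, and a model" point concrete, here is a minimal sketch using the ADK's Python package (google-adk). The order-lookup tool is a made-up stub, not anything from the talk:

```python
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order in a fulfillment system."""
    return {"order_id": order_id, "status": "shipped"}

# ADK wraps plain Python functions as callable tools for the agent.
root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any supported model, not just Gemini
    instruction="Answer order questions using only the tools provided.",
    tools=[get_order_status],
)
# `adk run` or `adk web` then serves this agent locally for the test-and-debug
# loop John describes.
```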
Aaron Gutierrez: Speaking, um, back to the big picture point, I feel like for developers, this was where a lot of the talk was focusing on just the, the whole concept of the entire multi-agent workflow, building a mature development kit, like all of that stuff seems to [00:30:00] be where all roads are leading to, at the end of the day for developers.
John Pettit: Yeah. And you're starting to see some strong statements publicly from people like the CEO of Shopify, where they're saying, no new investments in people unless you can prove that you can't do this with AI. And so if you gave an AI agent the capabilities, with tools and access to things, could it replace the need for us to add this person?
So you're gonna see more and more of a focus, and more push for developers to be able to build out these systems that can automate more of the internal processes. Um, 'cause we see it everywhere, like everywhere. Every customer you talk to, even at Promevo, we have things that people do that are just, you know, consistent: click the box, move the data, make a request, things like that, that would improve the quality of life of the employees if you could automate them.
So, uh, really cool to see Google invest in the ability to make this, um, easier for developers to go about building and maintaining.
Brandon Carter: Yeah, I think that's a really important point too, like to make about, it's not about [00:31:00] replacing employees, it's about replacing inefficiencies and processes that are painful or kind of a time waste for employees. Like focus them on their best work.
John Pettit: Yep.
Brandon Carter: All right. With, sorry, go ahead.
John Pettit: I think there was one more on agent after the agent development kit that I was...
Brandon Carter: That's right. I'm sorry.
John Pettit: Agent to agent protocol. So this is another investment from Google, where what they're intending is to have an open protocol so you can have any agent talk to any agent, no matter how you built it, right?
So the interoperability of allowing people to use different, um, frameworks to build their agents, but having a protocol that allows them to still work together, um, through your Agent Development Kit or something else, that's gonna be really powerful. It also potentially allows you, if you're supporting agent to agent protocol, to plug those agents into the Google technology stack.
So you think about the tight integration Google's having with their different agents that they're building, um, Gemini and other things, Gems, all the stuff that's coming out of there; allowing third parties to easily build something that'll be [00:32:00] compatible is important, 'cause it'll, again, reinforce this open ecosystem that Google's building. And I think you see some of that too in the agent marketplace they put up on GCP, so you can readily buy and enable agents from third parties.
So really neat to see them continue to invest in not just how do you build an agent, but how do you build an ecosystem that allows people to quickly build and scale long term, and then have a successful, uh, implementation.
Right? I think it took a long time for everybody to get through the first year of just integrating AI into their business and maybe building some AI API kind of workflows. But now we're starting to look at something more complex, where these things are always talking to each other for complex tasks, so having more, uh, frameworks and stability around it is important.
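Under A2A, discovery starts with an "agent card," a small JSON document an agent publishes (by convention at /.well-known/agent.json) describing what it can do. A sketch with fields abridged from the open spec; the endpoint URL is hypothetical:

```python
# Abridged agent card as a Python dict; serialize it with json.dumps and serve
# it at /.well-known/agent.json so other agents can discover this one.
import json

agent_card = {
    "name": "invoice-lookup-agent",
    "description": "Answers questions about customer invoices.",
    "url": "https://agents.example.com/invoice",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "lookup_invoice",
            "name": "Invoice lookup",
            "description": "Fetch an invoice by ID and summarize it.",
        }
    ],
}
print(json.dumps(agent_card, indent=2))
```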
Brandon Carter: I love it. Um, yeah, fleshing out like the ecosystem, not just, like we said at the beginning, not just here is a prompt and have it write an email for me, but now we're getting into like deep business integrations, getting agents to work together. Uh, yeah, I mean, [00:33:00] just a, a ton. And this, again, this is only a fraction of what Google announced around AI last week.
Let's transition now into Agentspace and Google Workspace. Mark, I think Agentspace is something a lot of our viewers here have probably heard about, or heard at least a rumor about; there was a lot of discussion about it at Google Next. Do you wanna talk us through some of it?
Mark Baquirin: Yeah, absolutely. Agentspace. Uh, Google has developed a place, Agentspace, that is basically a single space where, uh, employees of your company can access their agents. They could build these agents, they could be agents that your company has purchased, or agents that your company has built.
Um, what's really cool is these agents can even interact with other agents and so forth. And what it does is it gives your employees a way to access data. I believe there's over 300 different apps that they can connect to. Um, you can enable things like Salesforce, Jira, [00:34:00] Okta, and have access to all of that information.
Um, and that allows, uh, users to really just, uh, you know, conduct better research. They're able to analyze, pull data down from different apps that are all connected. So everything would be integrated through the use of these agents. And then the agents would not only be able to access and analyze that information, synthesize that information, um, and get you, you know, reports and visibility on what you need to see.
But they can take actions on your behalf as well. Um, some of those actions could be, once they have the information, uh, the agent can be directed to maybe draft a message to somebody, or send a notification to a chat space or an email or something like that. So it really helps with a lot of the repetitive tasks.
I think I saw a few demos where, uh, these tasks were for roles such as a call center type of role, where [00:35:00] oftentimes there would be an employee that was receiving a call or receiving a trouble ticket, and they had to access their ticketing system, um, and kind of review their processes, and then the agent would automatically send a response out to the client. So it's very cool. Very powerful.
And then on the next slide, um, Google has released two agents, uh, kind of like Google-built agents that they're providing, or they're going to release these, I think, in the coming months. Um, the first one is this idea generation agent.
And this is great. Uh, this is a way for you to develop interesting, novel ideas. Uh, you can use it as your brainstorming assistant. You can come up with these very interesting ideas, it kind of categorizes them together, and based on, uh, your criteria, you can evaluate these ideas and find out which idea fits best for the solution that you're trying to get to.
But yeah, this agent in [00:36:00] itself is, uh, it looked to be very powerful and I, I can't wait to use it. If you need to, um, you know, just generate a cool idea, this would be like your assistant for that.
And then the next agent that they have, uh, is the, um, uh, the deep research agent. And this one is very cool.
Um, this is where, if you have something you need to research a little bit more deeply than just using the Flash model, you can actually access this agent for very deep research on very complex topics. And as Aaron had pointed out earlier, uh, what's nice about this is it gives you kind of a study plan of what, uh, sources it's going to be researching for you.
It's typically a long list of sources, where it would take a regular person, you know, just a lot of time to go through each of the sources. But what it does is it analyzes all the sources for you. It does take a little bit more time than the Flash model, but it's much quicker, of course, than you having to do it yourself.
And then what it produces is this [00:37:00] comprehensive, easy-to-read report, and there are citations included all throughout the report. So as, as Aaron pointed out, you can hover over or even click on a citation, and it'll take you to exactly what that citation was referencing, and you'll have the opportunity to review it yourself, just to make sure that it's not hallucinating and that the information that you have in your final report is relevant to what you were actually going for initially.
So that's, that's very cool. Um, and then going on... go ahead.
Brandon Carter: Just one, one second. Uh, I think Aaron, did you wanna jump in and add a comment there?
Aaron Gutierrez: Oh, um, I just wanted to say before you wrapped up on Agentspace, Mark.
Mark Baquirin: Yeah.
Aaron Gutierrez: One of the cool things that they really showed was all the connectors they have available to these third party sources.
And it was super impressive, like telling your agent what to do based on the functionality the connectors have right [00:38:00] now. And just thinking about how that can expand in the future. 'cause right now the connectors, they're new, right? All they're gonna do is get better, add more functionality. So like, like you mentioned earlier, like they can help you iterate, they can move you onto the next step.
They can shoot the email off. Well eventually they're gonna be able to do a lot more things, uh, based on, based on the power of the connectors that are built for them for the system for Agentspace.
Mark Baquirin: Sorry. Yeah, that's absolutely right. It seemed like in the demos there was extra space for additional actions.
Uh, I can't say for sure if they're gonna, you know, what they're gonna build in there, but it looks like they left room for growth. Uh, and they even have a custom connector. So if your app isn't on the list, um, there should be a way to, uh, just to configure a custom way to connect to it.
Aaron Gutierrez: That's a good idea, Mark.
Brandon Carter: A hundred percent. This is, this right here is exciting.
Mark Baquirin: Uh, yeah. Moving on to Workspace. So this is, uh, really great. Google has released a number of things for Google [00:39:00] Workspace. I think one of the coolest was, uh, what they call Google Workspace Flows. And what it is, is an automation sequence that you can configure.
Um, so you can configure it as a multi-step sequence. So what it does is as you go through the configuration, you can set the triggers. A trigger could be something like, you received a message or somebody filled out a survey form, and then you receive that. And then once this is triggered, you can configure actions to happen, like in sort of a sequence.
Um, some things that you can do: well, basically AI is going to initially take in the intake, whatever your trigger is, it's gonna analyze that, and it's gonna generate content for you to review. And then from there, if you wanted to, you can also add in those additional actions, such as, you know, sending notifications or maybe writing out an email response.
And then what's cool is you can also utilize Gems with this. So for instance, if you had it configured to send out an email response through a Gem, and the Gem was [00:40:00] pre-configured so that when you send a response, it's going to reference something like an email style guide.
Then your response goes out according to how you want the voicing for that message to be. So it's really cool that you can tie all of these in together, creating a workflow from end to end, really reducing a lot of that overhead from your actual employee having to do a lot of that repetitive work.
So really excited to, uh, see when this comes out.
Brandon Carter: This is really cool. Uh, and in particular, some of these cross-platform ones, like this one. Uh, I guess you can see my mouse, hopefully you can, where it's Gmail, Gemini, and Chat, where you're surfacing information to an employee in the channel that they prefer, but also, like, enriching that information through Gemini.
Uh, yeah, this is like [00:41:00] massive, and, you know, for someone that's not very technically minded like me, I don't have to build this. It's literally, like, a prebuilt formula for me. So, super excited about that.
Mark Baquirin: Among, um, some of the other announcements that they've made: in Google Docs, they now have the ability to, um, include audio.
So what's really cool is you can have an audio version of your documents. So just, you know, get that audio version, kind of make your own, uh, audio, you know, book for yourself, maybe listen to that on the go or something.
And also a podcast-style overview, similar to if you've played around with NotebookLM, where it's two presenters talking back and forth and maybe kind of persuading you one way or the other on a topic. That's gonna be what's also included in Google Docs.
So it's very cool. And I hope they also integrate this with, uh, Chirp's ability to create your own voicing. That would be very interesting. Uh, you know, maybe you can have it [00:42:00] read in your voice, or maybe by somebody famous like James Earl Jones or something like that. That would be very cool.
Um, but, so yeah, can't wait to to play with this when it comes out.
Brandon Carter: Darth Vader voice reading your internal company memo. Love it.
Mark Baquirin: That would be amazing.
Uh, and then another feature in Docs is this Help me refine feature. It's really cool: if you highlight a section of your content and then hit refine writing,
it's gonna do things like offer thoughtful suggestions. It can strengthen your argument, make your point of view more persuasive, or you can use it to clarify key points or summarize. So the idea here isn't just to, like, fix up your document, it's really to help you communicate more effectively.
Um, it can do things like change all of the formatting of what you're saying, and all of the voicing, to be very consistent. So that should be a nice feature for a lot of people, especially if you're working on a document and multiple people are [00:43:00] collaborating on it.
This kind of makes everything really consistent, if you were to run it through the document. All right. And then on the next slide, uh, Google Sheets.
So Google Sheets has had the Gemini side panel in it. Previously, you could not actually put in a chart or anything like that; you could make tables, but not charts.
Uh, recently they did add the ability to put in charts and now this moves that to the next level. So now with, um, help me analyze in Google Sheets, you're able to look at a data set, and then it's gonna give you an expert level analysis with not just charts, but interactive charts. So you can move things around, you can change the settings, and it really gives you a different way to look at the data.
So for those people for whom charting is the way they understand data a little bit better, it's a really great way to understand that data with the use of charts. It'll help you identify trends, and then it also suggests some next steps to take. Uh, so this is gonna be very, very helpful, and I think a game changer for, uh, those of you who, um, [00:44:00] use Sheets in your roles.
John Pettit: I'm really excited about that, uh, and getting like connected sheets and being able to do the analysis sort of on the fly without having to configure any BI. Just have the data and start finding information. Super cool.
Mark Baquirin: Yeah, absolutely.
Brandon Carter: Massive.
Mark Baquirin: Yeah. Finance teams I think will love that.
All right. And then the last thing here is in Google Chat.
Uh, this is a cool feature: as you are chatting in a chat space, if you wanted to access Gemini very quickly, you would simply type in @Gemini and then put in some kind of command. You can see that in there: in this example, it's summarize this discussion as a table. So you can get a summary either as, you know, a paragraph, or, in this case, they're asking for a table. So you can have it in different formats.
Um, you can have it highlight key points, and also suggest next steps, suggest a reply, suggest actions to take. So this is gonna be really nice, and it'll really augment the way you communicate in chat [00:45:00] spaces.
Brandon Carter: Fabulous. Just so much useful stuff. Uh, I mean, again, this is what we talked about at the beginning: Google taking AI from "well, we have AI that will generate an email" to actually putting it into practice and imbuing it into their products like Google Workspace and Google Sheets. You know, not everyone understands the language of spreadsheets, marketing guy included, and being able, without having to be a spreadsheet expert, to generate those important outputs in charts and graphs, something that everyone understands, is just massive and super exciting stuff.
We're coming up on time, so, uh, apologies, Aaron. We're gonna squeeze you in here. Let's talk about data analytics and Looker and BigQuery and that part of Google Next.
Aaron Gutierrez: Yes. Um, as always in recent months, in recent events, everything's heavily focused on AI. But when I attend these types of events, I like to keep my ears perked for things [00:46:00] that relate to analytics, BI tools, just the other things that currently exist in Google's world and exist for, you know, many analysts and users of data. So I just wanted to point out a couple things that were really cool that I saw.
Uh, we're gonna start with conversational analytics first, because to me, this has been a topic that's been kicked around by a lot of different companies for a lot of years. Like I, I remember, you know, years ago, Tableau kicking around their Einstein conversational type of tool.
Didn't quite hit the mark, but it seems like this type of tool is maturing to a point where it's actually real now. It's tangible, it exists. If you go into Looker now, I don't know if it's rolled out to everybody yet, but, um, I have seen this in some of the newer versions.
The conversational analytics piece is there, and they've actually done a really good job [00:47:00] presenting the tool. I don't know how familiar you guys are with Looker, but data is organized into sets called Explores, and the conversational piece kind of centers around that. So it tells you to select which Explore you want to ask questions of.
And then from there, it has some sample questions you can ask of the Explore to get started. But one thing I think Looker does really well, and this is probably why everything works, is the underlying LookML model. It's kind of feeding all that information, which is essentially a diagram of the data, into the language models.
So I think asking a language model to just build SQL on the fly, the code that populates these reports, is difficult. But the LookML code, just by default, is essentially a small data dictionary. It's got everything defined, [00:48:00] it's got a lot of metadata. You can put descriptions on everything, and how all the pieces of your data are related to each other, it's all defined there.
So I think that's really why they're able to bring that to the conversational, uh, part of Looker. They're able to, you know, have you just type natural language questions, generate charts, see insights in the data, quickly build reports.
It's just really cool to see how it's evolved to this point. And there's a lot of integration now with Looker Studio too, which I feel has a much more robust visual library than Looker core, the original Looker. So they're really bringing the best of both worlds from both products into the conversational analytics part of Looker.
And one last thing for Looker before I go: for any of you LookML developers out there, uh, they showed us previews of code completion, [00:49:00] which was not in the LookML developer tools before. I think, to me, I've been waiting for that for years, and now we're actually gonna get it.
Uh, I'm kind of spoiled by code completion in all the other languages, you know; it's built into VS Code, it's in the BigQuery console. But now it's going to be in LookML. So as a BI developer, I'm excited for that part of it.
Another announcement that's coming, uh, I think they mentioned later on in the year, is object table querying within BigQuery. Now, um, to boil this down, this is gonna be a way to easily ask questions of and query your unstructured data. So think things in your Cloud Storage buckets.
So if you've got files in Cloud Storage buckets right now, it's very difficult to combine that with tables in BigQuery. They're different things: one is a very structured set of rows [00:50:00] and columns of data, right? You know, BigQuery tables, you think of the classic spreadsheet-style view with rows and columns.
And then you've got your data in storage buckets, which can look like anything. It can be, you know, text files, it can be audio, images, videos, whatever you wanna throw in a bucket. Those two worlds have been hard to marry for the longest time. But the object tables give you that ability to build queries where you can reference objects that live in your storage buckets within BigQuery.
And that's kind of major, because you're not having to take the data, embed everything, and store it in BigQuery. You can leave everything in your storage buckets and now write queries directly in BigQuery. And this just makes your data enrichment processes much better.
You don't have to, you know, waste space and time moving everything around and having outdated data; you're querying everything directly at the source. [00:51:00] So they showed us a lot of cool tools, uh, a lot of cool new BigQuery functions that will allow you to do these semantic joins.
So say you've got a bunch of reviews in a bucket, right? And you just want the positive reviews. You can build your query to only capture the objects with positive reviews, and then enhance that data set with everything that's in your existing BigQuery tables.
So, I don't know how cool that sounds to you guys. But for me, a data engineer who just loves when you can combine data sets and make them useful to external users, it's music to my ears to see this new functionality coming in. And I hope, uh, I hope it gets here soon.
John Pettit: I'm excited about that for, like, the easy embeddings around image data and things like that, where I have a bunch of image files off somewhere and I want to just be able to say, show me the ones that look like this, and then be able to join that to more data. That's all really cool stuff. [00:52:00]
Aaron Gutierrez: Oh yeah. We're gonna be able to cut steps out of that pipeline. Like right now we have to go scan the bucket. Are there any changes? Uh, take that data, put it in BigQuery. Well, is there already data for that image in BigQuery?
Like, there's a lot of things that are going to be totally eliminated once you can query objects directly in the buckets. So I'm pumped for that.
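For the curious, today's object tables already expose bucket objects as queryable rows. A hedged sketch via the BigQuery Python client; the bucket, connection, and dataset names are placeholders, and the newer semantic query functions Aaron mentions were announced as coming later in the year:

```python
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")

# Create an object table over a bucket of review files (placeholders throughout).
client.query(
    """
    CREATE EXTERNAL TABLE IF NOT EXISTS demo_ds.review_files
    WITH CONNECTION `us.my-gcs-connection`
    OPTIONS (
      object_metadata = 'SIMPLE',
      uris = ['gs://your-review-bucket/*']
    )
    """
).result()

# Each row carries the object's URI plus metadata you can join to ordinary tables.
rows = client.query(
    "SELECT uri, size, updated FROM demo_ds.review_files LIMIT 5"
).result()
for row in rows:
    print(row.uri, row.size, row.updated)
```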
Brandon Carter: Well, this has been super helpful. We have come up to the hour. If you have questions or things that you want to know or you wanna like, try some of this stuff, reach out to us.
Go to Promevo.com and fill out the contact form and we will be in touch. Uh, if you're a client, you know, feel free to contact our support team or your account manager and, you know, we'll make sure that you have access to this stuff as soon as you can.
So for those of you that aren't, uh, I mean John, like what would you say, like, why Promevo? Like what is our role in all of this Google stuff? If you wanna wrap us up here with just like, why should you work with Promevo for this stuff?
John Pettit: Yeah, I mean, at the end of the day, you know, we showed the [00:53:00] llamas like we're here to kind of be your sherpa, your guide to help you with anything Google related.
Specifically this year, everybody's very excited about Agentspace. So are we. Um, we have workshops and pilots that we can enable to get you started and help you figure out how to get that working in your organization. And some people talked about, like, ServiceNow and Jira connectors and things like that.
If you want us to help you figure out how to get connected and how to set it up and be successful, we can help you with that.
If you have other ideas of just us helping you, we can be your productivity bridge for other solutions. Say you want a custom agent or you want something built for a custom solution using any of the AI tools, we're here to help you build those out.
Whether that's getting the data stuff, all the steps Aaron talked about, or, you know, any of the other things that, that are out there in the Google ecosystem. We're here to help you.
So we want to be your partner. Um, we work with a lot of other clients who are pretty happy with us. We'd like you to work with us as well and be happy with us.
Brandon Carter: That's a great summation. It's a great spot to end it. Work with Promevo. We know all of this stuff so that you don't have to. And again, this is a [00:54:00] fraction of the 220-something announcements that Google made last week. Uh, so we have teams that are digging into every aspect of it and are ready to help it imbue itself into your business.
So with that, thank you to everyone out there watching. Uh, and thank you to everyone in the future that is watching this on demand. We appreciate your time. As always.
Mark, John, Aaron, I appreciate you all. Thank you for your investment and your knowledge and for sharing it with everybody.
I think with that, we'll call it a wrap.
Happy Tuesday. We will see you all soon. Take care.