Rhiannon Bell and Robby Stein, Product and Design leads for Google Search, join host Logan Kilpatrick for a deep dive into the integration of Gemini 3 into Search. Their conversation explores the evol...
Today, we're joined by Rhee and Robby. We're talking about AI in search.
Spiritually, what we believe with search is that you can truly ask anything you want.
I think search has kind of, like, had that googliness, the quirkiness from the beginning. So I'm excited to see how that manifests in a search experience for AI.
With Gemini 3, it can do everything from reasoning and very complicated math to even coding you up, like, little simulations. I'm a big believer that a lot of AI in the future is gonna take a much more visual form.
It just sort of helps you understand data completely differently to how you would have if it was just in a table or a chart.
Getting that model in front of as many people as humanly possible is, like, the manifestation of what Google's mission is.
It's so awesome to think about, wow, the capabilities that we just discussed coming to millions of people who use search every day. So cool.
Everyone, welcome back to Release Notes. My name is Logan Kilpatrick. I'm on the Google DeepMind team. Today, we're joined by Rhee and Robby.
We're talking about AI in search. I'm super excited for this conversation. And, actually, specifically, we're talking about Gemini 3 in search. So let's dive in. Let's do it.
Yeah. I mean, I'll just say that, in general, it's a really special moment to be able to launch a frontier model in search to lots of people on day one. I think we've been working up to that moment. With Gemini 3, it can do everything from reasoning and very complicated math to even coding you up little simulations to help solve your problem, like if you need a little calculator or a widget on the fly. Because I think, spiritually, what we believe with search is that you can truly ask anything you want and get effortless information.
But really, that's hard to do, because people ask pretty hard questions. And so this is kind of the purest capability to really allow us to achieve that mission, and I think we've all rallied around it to make that possible.
Yeah, the only thing I'd add is just that the teams now have access to all these capabilities on day one. You get the opportunity to think more creatively, to think about the core use cases those capabilities can help users with. And so it's been great to have that on day one.
Yeah. It's wonderful. It's fun to build a product when you have such a good model to build around. It makes it a little bit easier because you can be more ambitious. The GenUI story is also really interesting.
So I don't know if I fully grok it yet. Like, I intuitively understand what's happening, but maybe we can sort of talk through it. Also, if folks haven't experienced it before, do either one of you wanna give the high level of what folks should expect from GenUI?
So basically, what GenUI is, is you think about the model being able to have more control over not just the response, like the text that it sends back, but also the page it constructs. And so what you can do is tell the model, hey, for certain graphical information, you should consider graphing it. And here's a graphing library, and here's how it can look, and here's the styles. You can use this as a primitive now. It's like, oh, that's cool.
I'm just gonna start throwing graphs in. And so it's just one example, but you can kinda teach the model to think like a designer and kinda work through some of these decisions.
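To make the "primitive" idea concrete, here is a minimal sketch of handing a model a chart component and a rule for when to use it, via the Gemini API's Python SDK. The component tag, prompt wording, and model id are illustrative assumptions, not the production Search setup.

```python
# Hypothetical sketch of the "primitive" idea: tell the model about a chart
# component it may emit, plus a rule for when to use it. The component tag,
# prompt wording, and model id are illustrative assumptions, not the
# production Search setup.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

GENUI_SYSTEM_INSTRUCTION = """
You lay out answer pages, not just text.
Available primitive:
  <BarChart title="..." data="[{label, value}, ...]">  (rendered by our chart library)
Rule: if the answer compares three or more numeric values, emit a <BarChart>
block instead of listing the numbers inline; otherwise answer in plain prose.
"""

response = client.models.generate_content(
    model="gemini-2.5-flash",  # stand-in model id for this sketch
    contents="Compare the heights of the five tallest mountains on Earth.",
    config=types.GenerateContentConfig(system_instruction=GENUI_SYSTEM_INSTRUCTION),
)
print(response.text)  # prose plus a <BarChart> block for the client app to render
```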
And we've done things like this before in search, but it hasn't been automatic by the model. It was bespoke. I think we talked before about real-time information that gets pulled in, all this stuff, like the one box.
I think from a design perspective, what's been so great about it is that, originally, we would create these sort of static experiences. There would be tension between the designers and the model, because you'd be like, why can't you make this bold? The spacing doesn't look quite right. But we would work on the system instructions to get it super dialed. And now with generative UI, it's different.
Rather than having a script, it's like having basically an improv stage. We give the model all of the different components, and then we give those components a set of system instructions as well, based on how a designer might lay something out. What's been so great to see is that the designers are now designing these experiences kind of on the fly with the model, to the point where designers are writing system instructions that say, okay,
here's the set of components that you can have for a response like this. So it's a carousel, or here's some imagery, or this is how we would lay out typography or a list, etcetera. And maybe here's some data visualization, which we should also talk about. And then the team will create a set of design rationale system instructions. So it says, okay,
for a sizing spec, we would say, model, you need to look at: is this a primary piece of information that needs to be displayed, or is it secondary? And so then the model can make decisions on how to actually lay certain pieces out based on core design rationale, and then also on the user's needs and what a user might need in a response.
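One way to picture that "set of components plus design rationale" is as a constrained, structured output: the model can only return a layout built from approved components, each tagged with an importance level that a sizing rule maps to visual weight. The component and field names below are assumptions for illustration, not the real Search design system.

```python
# Illustrative sketch: constrain the generated layout to an approved component
# set, with an importance field that a sizing rule can act on. All names here
# are assumptions for the example, not the real Search design system.
from enum import Enum
from pydantic import BaseModel
from google import genai
from google.genai import types

class Importance(str, Enum):
    PRIMARY = "primary"      # hero-sized, shown first
    SECONDARY = "secondary"  # supporting detail, smaller visual weight

class Component(BaseModel):
    kind: str                # e.g. "carousel", "image", "list", "data_viz"
    importance: Importance
    content: str

class Layout(BaseModel):
    components: list[Component]

client = genai.Client()
resp = client.models.generate_content(
    model="gemini-2.5-flash",  # stand-in model id
    contents="Plan a response page for: best beginner road bikes under $1000",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Layout,
    ),
)
layout = Layout.model_validate_json(resp.text)
for component in layout.components:
    print(component.importance.value, component.kind)
```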
What's crazy is we couldn't do any of this, or very little of this, three months ago even, six months ago definitely. You're talking about the bespoke experience. You'd have to kinda retrain the model. It would be something where you take the weights, it can do certain things, and then you'd probably train it, maybe even post-train it.
So, like, oh, if you see data, the model just learns almost in the training process that it's better to put a graph in there than not. Whereas what's happened with increasing intelligence in Gemini 3 is instruction following and reasoning. And when you can do that, you can just say, hey, here are the rules: graphical information is best presented this way. And by the way, here's a link to a spec that has all these principles.
So the more you can encode things in natural language and create specs like you would for another person who's a designer on your team, the more the model can do that. And that's how you figure out what it means to have a conversational search experience. You need to find your own way there, and I think we've been slowly finding what feels good for search. And the other piece, obviously, is that the key part of search is bringing you close to the web and the richness of what's out there.
And so from the first design, we realized that having AI with links within it, but also that right rail with this kind of rich representation of the web, brings you outside the universe of this very tunnel-vision AI. It also makes the experience feel more balanced, I think, and rich. It kinda worked. And so that was another piece that we thought was really important on the product side.
Yeah. Just reinforcing to a user that they're still using search. And so all the things that Robby just mentioned really helped, I think, with orienting users to AI in search.
What's the breadth right now, back to our meme of use cases and search queries? Like, what's the breadth of what it is? Is it focused on a small set of domains today, and GenUI will sort of expand to potentially every sort of question somebody might ask, where we'll build some bespoke UI for them? Or how is it?
There are kind of two pieces manifesting right now. One is in the layout itself, the perimeter. So whether you have a table, an image, a graph, it's deciding whether to do those, what images to put in, and how it looks. And it's given instructions in our design language so that it looks good. Actually, one of the problems early on is that if you ask the model to make a page, it makes a crazy page, and every page looks different.
So how do you make a consistent and predictable user experience while also giving the model control? A big thing we had to figure out was how to put it on rails and be like, look, this is our design language. You're a new designer joining our team. There's a design language and there's a design system, and here's what it looks like, here's the color palette we use, and here's the typography we use. And you have to do that or else it kinda goes off the rails. I mean, it makes sense, right?
If you just tell a designer to design something for you, they're gonna design whatever they think. So that's one thing. And the other thing it's able to do is code up these little simulations and inject those as their own primitives. Those are really neat. They're these little interactive experiences where you can teach someone.
Like, I was teaching my daughter about lift, and I asked it to create a simulation or a visualization for it. And it made this crazy little window with vectors, like arrows running over a wing. Through these sliders, it would adjust the wing and then show how much lift was occurring, like where the arrows would start going under the wing and pushing the plane up. Super cool. And that's the kind of thing that's hard to describe to a person in text, but through the visual medium, it's super clear.
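For a rough sense of what such a generated "little simulation" boils down to, here is a hand-written stand-in: a slider changes the angle of attack and the plot updates the resulting lift, using the thin-airfoil approximation purely for illustration. In the product the model writes this kind of widget itself; this sketch only shows the shape of the output.

```python
# Hand-written stand-in for a generated "little simulation": a slider changes
# the angle of attack and the plot updates the resulting lift. Uses the
# thin-airfoil approximation (CL ~= 2*pi*alpha) purely for illustration.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

RHO, SPEED, WING_AREA = 1.225, 60.0, 16.0  # air density (kg/m^3), airspeed (m/s), wing area (m^2)

def lift(alpha_deg: float) -> float:
    cl = 2 * np.pi * np.radians(alpha_deg)        # thin-airfoil lift coefficient
    return 0.5 * RHO * SPEED**2 * WING_AREA * cl  # lift force in newtons

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)  # leave room for the slider
alphas = np.linspace(0, 15, 100)
ax.plot(alphas, [lift(a) for a in alphas])
marker, = ax.plot([5], [lift(5)], "ro")  # current operating point
ax.set_xlabel("angle of attack (deg)")
ax.set_ylabel("lift (N)")

slider = Slider(fig.add_axes([0.2, 0.1, 0.6, 0.03]), "angle", 0, 15, valinit=5)
slider.on_changed(lambda a: (marker.set_data([a], [lift(a)]), fig.canvas.draw_idle()))
plt.show()
```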
And so I'm a big believer that a lot of AI in the future is gonna take a much more visual form. And I think we've been innovating a lot in this space. You think about what we've done with shopping and visual search in AI Mode. But this takes it further. These are all versions of the model having control over what it's showing you.
Yeah. I mean, the additional layer of motion is also just a game changer. I had a similar example. I was playing around with it the other day with Ollie and my daughter. She had asked how cars work, and so we started looking at simple engines. And it creates, like, a full piston system.
It shows you how the fuel works, the intake, the exhaust. I mean, it was really remarkable, actually. And I don't think you would have gotten that from just a static graphic either. So there's the ability for these things to become truly interactive too, where there are aspects you can hover over to get more information.
I think that's easily where it's going. I do think one thing, though, just to touch on what Robby was talking about: when you give the model all of these components, there does still need to be a layer of taste and quality and craftsmanship. And this is definitely something where we have almost a visual QA process with the model. We're actively working right now on what it means for us to evaluate these things from a design perspective.
So do we have separate evaluation processes for that? Do we create a system instruction for a VizQA process? We're starting to see some really amazing results with that. So I'm super excited, because those things are gonna come very soon, and I think it's gonna take everything up a notch.
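A toy sketch of what an automated visual QA pass could look like: send a screenshot of a generated page to a model along with a short design rubric and ask for scores. The rubric wording, file name, and model id are assumptions; the team's actual evaluation setup is not described in detail here.

```python
# Toy sketch of an automated visual QA pass: score a screenshot of a generated
# page against a short design rubric. The rubric wording, file name, and model
# id are assumptions; the team's real evaluation setup is not described here.
from google import genai
from google.genai import types

DESIGN_RUBRIC = """
Score the attached page screenshot from 1 to 5 on each of:
1. Hierarchy: is the primary information visually dominant?
2. Consistency: do typography and spacing follow a single system?
3. Restraint: are charts and images used only where they aid understanding?
Return JSON: {"hierarchy": n, "consistency": n, "restraint": n, "notes": "..."}
"""

client = genai.Client()
with open("generated_page.png", "rb") as f:  # hypothetical screenshot of a generated page
    screenshot = types.Part.from_bytes(data=f.read(), mime_type="image/png")

verdict = client.models.generate_content(
    model="gemini-2.5-flash",  # stand-in model id
    contents=[screenshot, DESIGN_RUBRIC],
)
print(verdict.text)
```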
Rhee, your comment about design taste and how that's an essential part of the GenUI experience, I think, hits home. I feel like some of the other constraints are the efficiency and latency piece of it. So I'm curious, from a design perspective or just the product constraints: building a simulation obviously takes time. So what do users see, and what's the...
Yeah. I mean, there's definitely a latency design component that's required. We need to design the latency experience, and we need to make sure that users know that there's something being generated. So we think about that.
I think we're also working very closely with engineering. Are there things that we can do here to create reductions in latency? Are there certain components that actually don't need to be regenerated? So we're talking a lot about that. I've also just seen that our capacity to reduce latency is kind of second to none at Google.
It's one of the things that search has always prided itself on. We just wanna get you the information that you need as efficiently and as quickly as possible. And I have so much confidence in our ability to solve for those things over time. So, yeah, I see nothing but opportunity where that's concerned. I know it's gonna get better.
What do users see right now when, for example, a car engine simulation is being built? Is it just like...
Yeah. It's the same way as thinking steps, which I think are really interesting, because they're an opportunity for us to use that latency to communicate to a user what the model is doing. So when you do have these moments of latency, we can create a representation of, hey, there are things happening in the background here: we're calculating this, we're drawing this, what have you. And so right now, it's a relatively straightforward overlay where the image or the data visualization will appear, representing what the model is doing on the back end so that users know to wait for something.
It's a little longer than maybe you would want it to be right now, but I know it's gonna get better.
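A minimal sketch of the underlying pattern: show a placeholder right away, then stream the response in and render it as it arrives rather than blocking on the full result. The placeholder text and model id are assumptions.

```python
# Minimal sketch of the progressive-loading pattern: show a placeholder right
# away, then render chunks as they stream in instead of blocking on the full
# response. Placeholder text and model id are assumptions.
from google import genai

client = genai.Client()

print("[ drawing your visualization... ]")  # the placeholder the user sees first
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",  # stand-in model id
    contents="Explain how a four-stroke engine works, step by step.",
):
    print(chunk.text, end="", flush=True)  # render each piece as it arrives
print()
```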
Obviously, the Gemini 3 story started with Pro, which was available to AI Mode customers. And obviously, the team's worked super hard on Flash. So I'm curious, for you both and for AI in search, what the Flash story means. And obviously, lots of hard work to make Flash happen from a search perspective.
Yeah. I couldn't be more excited to bring the frontier model, at the speed and availability that people need for everyday use, to search. I think that is one of the most exciting things. And what you'll get is these lineages of model series. So you have the 3 series, which I think will help people ultimately tap into much more sophisticated reasoning and problem-solving skills, and also, on the generative side, actually create things for you, whether that's these widgets over time, or helping you understand your data and generating a graph about it.
I'm excited to bring that to many more people through these models that can be run at a larger scale and in a faster way.
Yeah, it is awesome. We were playing around with a bunch of our evals in AI Studio, and for some of the use cases, Flash is like three times faster. And I feel like the Flash story feels very much like the search story, where the timing matters, the quality matters, and getting that model in front of as many people as humanly possible is the manifestation of what Google's mission is. Robby, I get my AI Mode and Search AI updates from all your tweets, and something you tweeted about recently, or announced, was the experiment of bringing AI Mode to the bottom of AI Overviews.
I don't know if there's a better way of saying it than that, but I'm curious about that experience, how the fusion of the different AI search experiences coming together is playing out, and what the high-level idea is.
Yeah, I mean, I think in general, the high-level desire from the user is that you just put whatever you're thinking into Google Search and ask. You can drop huge amounts of code in. You can ask a really specific question. You can get advice. Just put it into Google, and if we think AI will be helpful, you'll get this kind of generated experience at the top.
And if you expand it, on mobile we're experimenting with that just opening up an AI-first experience all the way into the screen. And now you can have a follow-up, because you have a follow-up box at the bottom, and now you're in a conversation. So what we're trying to do is make it really fluid to get to AI in the first place, and then when you tap in, be in a more conversational mode, which is basically AI Mode, and it'll just naturally take you into AI Mode for your follow-up questions. So users don't need to think, where do I have to put my questions? In an AI Mode thing?
Do I put it into the main search engine? We want to bring these things together so that it's as easy as possible. And of course, for power users who kinda know when they want AI, we're seeing them just go right to AI for those really hard questions. But I think users ultimately will have that choice.
Yeah. Something interesting, actually: I was talking to Josh about this maybe three or four months ago, and he was saying that for 2026, one of the things that's top of mind is this model routing story. And, more loosely defined, Google has lots of different models now. We've trained many iterations of Gemini.
In some cases, they all have a different set of trade-offs. I'm curious, as you both think about the search experience and all of these different models: I'm sure you want better and faster models, but is there anything interesting you wish you had a model that could do? Or are you actually getting what you need right now? I'm curious, because we can pass the feature request on to the model team.
You get your wish? Yeah. Whatever you want.
Yeah. I mean, I think one of the things that's been a work in progress, like everything here, is the personality and persona of the experience. There's just so much opportunity for us to be more personable. I mean, nobody wants search to be super chummy, but I think there's opportunity for us to be there for users in a different way than we've been before. And I think we've made good progress, and we've got experiments that we're running.
I think that ability for a user to build a relationship with us as their knowledge companion, which we talk about all the time, is one of the things we're actively working on and that I feel is a priority for us to get right.
Yeah. That's a great example. Has search thought about this persona idea historically?
Yeah. And we're working closely with teams within Google DeepMind and within Gemini to understand their learnings and how we can bring some of those things to search. But we also wanna have our own flavor, you know? Sometimes I think about how search has always had these moments of delight, like the Easter eggs, the validation of super fans, whatever it is, a Taylor Swift album.
Whatever we design here, and it is a design exercise, because you're designing a persona or a personality or model behavior in some way, I'd love to infuse it with some of the things that I think Google is known for, some of this googliness, the quirkiness. And I think search has kind of had that from the beginning. So I'm excited to see how that manifests in a search experience for AI.
Yeah. What's interesting is that the persona of search has kind of existed a bit, but not through language and not through an AI paradigm. If you think about it, it's obviously an intelligence service, people think about it for information, but it also has this kind of science-y, futuristic vibe. We celebrate scientists, right, with doodles, amongst other things.
And it's got a quirkiness and kind of an unexpected thing, where you could throw a bouquet of flowers at your favorite team, and people thought that was funny. There's a nerdiness and a fun, kind of jovial spirit there. But if you're talking to that thing and you say hi, what does search say back to you? Right?
Someone just says, I'm feeling sad, I'm down today. How would search respond to something like that? Those are the kinds of questions we're thinking about now, because people are asking for advice and they're asking personal questions. That's been a particularly interesting part of this product taste and shaping.
You don't really think of your job as thinking about those kinds of problems when you're working on a technology product, but they're actually as important as anything else we're doing.
Yeah, that's awesome. That's super fascinating to see. And I feel like that's maybe one of the shifts in user behavior in how people are using search.
Do you not have anything on your wish list?
Yeah, there's no wish list. I've found what I need.
There is a wish list. I think for me, it's more about what I said before: the models becoming more about capabilities to do things, versus doing things specifically for me. But one thing is, it would be really cool if the model could just naturally understand how all of Google's systems worked. At least as a developer internally, it'd be pretty neat to be like, okay, the model just naturally knows how to use things: it could crawl your code base, and it could maybe work for any company and just know how every API and every system worked. So you could be like, hey, model, you should now be able to use everything that search uses for Google Finance or something, if you're search.
And then all that information is just perfectly available to the model, because it can go figure it out by itself. I think that'd be super cool. So the more the model almost agentically learns your own systems and builds that capability, the more that allows us to make it even more helpful for you, I think.
On the model story, one of the models I've been super excited about is obviously Nano Banana, available in AI Mode, and Nano Banana Pro for even more deep, factual stuff. I'm curious how both of you have been thinking about what that experience means for search and AI in search.
Yeah. I think what we're starting to see are the opportunities for data visualization with Nano Banana in particular. We see opportunities for users to look at data they might have in completely different ways. So one of the use cases we've seen as exciting is a sports one, where you have two basketball players that you're a super fan of, and you wanna visualize their stats, and we can create an infographic for you. And that never existed before.
So there's this idea that we can just visualize information for you in these new ways. It's amazing to watch the data that it can pull, and then how it manifests that data in this visual way that helps you understand it completely differently to how you would have if it was just in a table or a chart.
Yeah. I think what's really neat is that what Rhee's describing is really the union of the most powerful model with search, honestly. Because if you think about it, each of these things requires the reasoning of the model but also the knowledge of search. To use these tools to look up sports facts, that's real-time information. Or if it's trying to build you a product thing, it's finding shopping data, it's pulling images, it's browsing to see what reviews say. It needs to pull all of that in.
And then you're combining that reasoning with tools, and then this visualization thing, you kinda need all of that working together. It's a few pieces, and that makes these really magical things, where you get a game recap visualized with graphs and stats that are live, just for you. And I think you're starting to see a lot of that magic happen now because of these pieces coming together. And it happens in lots of facets, not just Nano Banana. You can do shopping in AI Mode now, and it'll pull a gallery with images and live prices, and you can ask follow-up questions and say, I like the black instead of the green pants, and it'll switch them all to that color.
So I think you're seeing these combinations more and more in the system.
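The "reasoning plus live Search data" combination described here maps roughly onto grounding with Google Search in the Gemini API. A minimal sketch follows, with the model id and query as stand-ins.

```python
# Minimal sketch of pairing model reasoning with live Search results via the
# Gemini API's Google Search grounding tool. The model id and query are
# stand-ins for illustration.
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",  # stand-in model id
    contents="Recap last night's Warriors game and compare the two starting point guards' stat lines.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())]  # real-time web grounding
    ),
)
print(response.text)
```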
Yeah. That's interesting. I feel like there's an interesting thread to pull, and I feel like the next time we talk, it's gonna be about even more of those combinations coming together and fusing all the different parts of search into an experience. So thank you both for sitting down to talk about this, and thanks everyone for watching. We'll see you in the next episode.
Gemini 3 and Gen UI in Google Search