In a podcast interview, Google VP of Search Liz Reid described two ways LLMs are changing what Google can index and how it ranks results for individual users.
Reid told the Access Podcast that multimodal AI models now allow Google to understand audio and video content at a deeper level than was previously possible. She also pointed to a future where search results adapt based on a user's paid subscriptions.
What’s New
Multimodal Understanding Is Expanding What Google Can Index
Reid said LLMs being multimodal has opened up content formats that Google previously struggled to process.
Reid told the hosts:
"The great thing about LLMs is that they're multimodal. So we can actually understand audio content and video content really at a level we couldn't years ago."
She went further, describing how Google can now go beyond basic transcription when analyzing video.
"Now you can understand audio much better. Now you can understand video much better. Now you can understand not just the video transcription but like what's the video more about or what's the style or other things like that."
Reid connected this to a long-standing gap in how search works for non-English speakers. For users in India who speak Hindi or other languages, the web often lacks the information they need in their language. Previously, translating all web content into every language wasn't scalable. LLMs changed that.
"Now with an LLM, you can take information in one language, understand it, and then output in another language. Like that opens up information."
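Reid didn't describe Google's internal pipeline, but the pattern she outlines, reading content in one language and answering in another while also capturing topic and style, maps to a single LLM call. A minimal sketch of that pattern is below; it uses the OpenAI Python client purely as a stand-in for whatever models Google runs, and the model name, prompt, and helper function are assumptions for illustration.

```python
# Minimal sketch: understand a video transcript, answer in another language.
# The OpenAI client here is only a stand-in for any LLM API; the model name,
# prompt, and helper are assumptions, not Google's actual pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def describe_in_language(transcript: str, target_language: str) -> str:
    """Ask the model what the video is about and what its style is,
    answering in the target language (e.g., Hindi)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": (
                    "Here is a video transcript. Explain what the video is "
                    f"about and its style, in {target_language}:\n\n{transcript}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(describe_in_language("Welcome back! Today we're testing budget phones...", "Hindi"))
```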
Google has been moving in this direction for some time. In October 2025, Reid told the Wall Street Journal that Google had adjusted ranking to surface more short-form video, forums, and user-generated content.
The comments also add context to Google's Audio Overviews experiment launched in Search Labs last June, which generates spoken AI summaries of search results.
That wasn't possible a few years ago. In 2021, Google and KQED tested whether audio content could be made searchable and found that speech-to-text accuracy wasn't high enough, particularly for proper nouns and regional references. Reid's comments suggest that barrier has fallen.
Subscription-Aware Search Could Change How Results Are Personalized
Reid also outlined a direction for personalization that goes beyond Google's existing Preferred Sources feature.
She told the hosts Google wants to surface content from outlets a user pays for, not paywalled results from sources they can't access.
"If you love this source and you do have a relationship with it, then that content should surface more easily for you on Google."
Reid gave a practical example. Say 20 interviews on a topic are paywalled but a user subscribes to one outlet. Google should make it easy to find the one they can read.
"We should surface the one that they're paying for and not the six that they can't get access to more."
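Reid described the goal, not an implementation. As a toy illustration of the idea, a re-ranker could demote paywalled results the user can't read while leaving subscribed outlets at full strength; the data model and the 0.25 penalty factor below are invented for the sketch.

```python
# Toy sketch of subscription-aware re-ranking. The Result fields, penalty
# factor, and scoring rule are invented for illustration; Reid described
# the goal, not how Google would implement it.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    outlet: str
    paywalled: bool
    relevance: float  # base relevance score from ordinary ranking

def rerank(results: list[Result], subscriptions: set[str]) -> list[Result]:
    """Keep paywalled results from subscribed outlets at full strength;
    demote paywalled results the user can't actually read."""
    def score(r: Result) -> float:
        readable = not r.paywalled or r.outlet in subscriptions
        return r.relevance if readable else r.relevance * 0.25
    return sorted(results, key=score, reverse=True)

results = [
    Result("Interview A", "Outlet One", paywalled=True, relevance=0.95),
    Result("Interview B", "Outlet Two", paywalled=True, relevance=0.90),
    Result("Free recap", "Blog", paywalled=False, relevance=0.60),
]

# A user who subscribes to Outlet Two sees its interview first, ahead of
# the higher-scoring interview they can't read.
for r in rerank(results, subscriptions={"Outlet Two"}):
    print(r.title, r.outlet)
```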
She suggested the company has "taken small steps so far but would like to do more" to strengthen how audiences and trusted sources connect through search. She also mentioned the possibility of micropayments for individual articles, though she acknowledged that model hasn't taken off historically.
Google expanded Preferred Sources globally for English-language users in December, and announced a feature that highlights links from users' paid news subscriptions. Google said it would prioritize those links in a dedicated carousel, starting in the Gemini app, with AI Overviews and AI Mode to follow. At the time, Google said users who pick a preferred source click to that site twice as often on average. Reid's comments suggest the company sees subscription-aware search as a broader evolution of that same direction.
Why This Matters
The multimodal capabilities Reid pointed to expand which content formats get discovered through search. Podcasts, video series, and audio-first content have historically been harder for Google to evaluate beyond metadata and transcripts. Google's growing ability to assess relevance and depth from audio and video directly changes who can be found through search, and how.
For brands and creators investing in non-text formats, Google's ability to surface that work is catching up to where the audience already is.
The subscription-aware personalization direction matters for any publisher with a paywall or membership model. Search results that adapt to what individual users pay for would tighten the connection between subscriber retention and search visibility. Paywalled content could perform better for the audience that matters most to the publisher, rather than being deprioritized because most users can't access it.
Looking Ahead
Reid didn't attach timelines to either development. The multimodal indexing capabilities she mentioned appear to be current, while subscription-aware personalization is a stated direction with some existing features already in place.
Google I/O is scheduled for May 19-20. Reid said on the podcast that the company is "actively building" but that the pace of AI development means some features could come together as late as April and still make it to the stage.
Featured Image: Mawaddah F/Shutterstock