- Google is testing audio overviews of search results in Search Labs.
- The feature offers short AI-generated audio summaries for certain queries.
- It uses Gemini models to provide podcast-style explanations with clickable source links.
I've been a fan of the Audio Overviews feature in Google NotebookLM since I first tried it last year. Now it's coming to Google Search, currently only as a test in Labs, and it brings a more bite-sized version of the AI-generated "podcasts" I enjoy in NotebookLM.
Once you've opted in via Labs, you'll start to see a small prompt on certain search results pages that says "Generate Audio Overview." Tap it, wait about 30 to 40 seconds, and it produces a compact audio clip of around five minutes, sometimes less, in which two AI-generated voices discuss whatever you searched for. It's not too deep, but it doesn't feel entirely superficial either. Think of it as middle ground between a Wikipedia rabbit hole and "I only read the headline."
While you listen, the audio player stays anchored on your results page, displaying clickable links to the AI's sources. You can keep browsing, dip into related items, or simply listen and absorb. If you like what you hear, you can give it a thumbs up. If it's obviously wrong, the thumbs down is there too.
Although it's similar to what NotebookLM does with its Audio Overviews, the Search version has one major difference. NotebookLM only uses documents you upload, YouTube videos, and websites you specifically link to. The Google Search version draws on public web content. That can be good or bad, depending on what you're looking for. Something simple and scientific should be fine, but a discussion of the best film of all time could produce a different audio track each time you generate it. Here's an example clip I recorded.
Audio clip: an AI-generated podcast-style search overview
It's hardly perfect, and though the voices are good, they're still AI voices. You might also notice that it lifts sentences directly from someone's Reddit post. But it's listenable and, as Google emphasizes, hands-free: you can speed the speakers up or slow them down, skip around, or follow the source links for more context as you go. It's AI-enhanced search, not a new kind of audiobook.
For now, not every search will offer to create an Audio Overview. You also have to be in the United States and signed up for Labs at the moment. But I expect it will get a general release before long. Then you can ask how lithium-ion batteries work, or why Roman concrete is still standing, and get a nice mini-discussion from digital hosts.
Think of it like the way video summaries and image carousels brought new dimensions to how we take in information online. Audio Overviews are another facet of that, and a win for auditory learners and people with visual impairments. With OpenAI, Perplexity, and a dozen other AI search engines nipping at its heels, Google needs every edge it can get to stand out, and an AI podcast as a response to a search is certainly a distinctive one, at least for now.