- Google DeepMind has updated and expanded access to its Music AI Sandbox
- The sandbox now includes the Lyria 2 model and real-time features to generate, extend, and edit music
- Generated music is watermarked using Google's SynthID technology
Google DeepMind has brought new and improved sounds to its Music AI Sandbox, which, despite sand being notoriously bad for musical instruments, is where Google hosts experimental tools for crafting tracks with AI models. The sandbox now offers the new Lyria 2 AI model and Lyria RealTime music production tools.
Google launched Music AI Sandbox as a way to spark ideas, generate soundscapes, and perhaps help you finish that half-written verse you've been avoiding all year. The sandbox is mainly aimed at musical artists and professional producers, and access has been fairly limited since its debut in 2023. But Google is now opening the platform to many more people in music production, including those looking to create soundtracks for films and games.
The new Lyria 2 AI music model is the rhythm section underpinning the updated sandbox. The model is trained to produce high-fidelity audio output, with detailed, complex compositions in any genre, from shoegaze synthpop to whatever strange banjo-core hybrid you cook up in your bedroom studio.
Lyria RealTime puts AI creation in a virtual studio you can jam with. You can sit at your keyboard, and Lyria RealTime will help you blend ambient house beats with classic funk, performing and refining your sound on the fly.
Virtual music studio
The sandbox offers three main tools for producing tracks. Create, seen above, lets you describe in words the kind of sound you're aiming for. The AI then generates music samples you can use as jumping-off points. If you already have a rough idea but can't figure out what comes after the second chorus, you can upload what you have and let the Extend feature find ways to continue the piece in the same style.
The third feature is called Edit, which, as the name suggests, reworks music into a new style. You can ask for your tune to be reimagined in a different mood or genre, either by playing with the digital control board or via text prompts. For example, you can ask for something as basic as "turn this into a waltz" or something more complex like "make this sad but still danceable", or see how bizarre you can get by asking the AI to "rework this EDM drop as if it were just an oboe section". You can hear an example below, created by Isabella Kensington.
AI singalong
Everything generated by Lyria 2 and Lyria RealTime is watermarked using Google's SynthID technology. This means AI-generated tracks can be identified even if someone tries to pass them off as the next lost Frank Ocean demo. It's a smart decision in an industry already bracing for heated debates over what counts as "real" music and what doesn't.
These philosophical questions will also decide where a lot of money ends up, so there is more at stake here than abstract discussions about how to define creativity. But, as with AI tools for producing text, images, and video, this is not the death knell for traditional songwriting. Nor is it a magic source of the next chart-topping hit. The AI can churn out half-baked mush if used poorly. Fortunately, many musical talents understand what AI can and can't do, as Tommy demonstrates below.