The new Google AI model could one day let you understand and talk to dolphins


  • Google and the Wild Dolphin Project have developed an AI model trained to understand dolphin vocalizations
  • DolphinGemma can run directly on Pixel smartphones
  • It will be open-sourced this summer

For most of human history, our relationship with dolphins has been a one-sided conversation: we speak, they squeak, and we nod our heads as if we understand each other before tossing them a fish. But now Google has a plan to use AI to bridge that gap. In collaboration with Georgia Tech and the Wild Dolphin Project (WDP), Google has created DolphinGemma, a new AI model trained to understand and even generate dolphin chatter.

The WDP has been collecting data on a specific group of wild Atlantic spotted dolphins since 1985. The Bahamas-based pod has provided enormous amounts of audio, video, and behavioral notes as researchers have observed them, documenting every squawk and buzz and trying to work out what it means. This trove of audio is now being fed into DolphinGemma, which is built on Google's family of open Gemma models. DolphinGemma takes dolphin sounds as input, processes them using audio tokenizers like SoundStream, and predicts what vocalization might come next. Think of autocomplete, but for dolphins.
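The tokenize-then-predict pipeline described above can be sketched in miniature. The snippet below is purely illustrative: it uses a toy amplitude quantizer in place of SoundStream and a simple bigram counter in place of a Gemma model, but the shape of the task is the same, discretize audio into tokens, then predict the most likely next token.

```python
# Hypothetical sketch of "autocomplete for dolphins": quantize audio
# samples into discrete tokens, then predict the next token from
# observed sequences. All names and data here are illustrative stand-ins
# for the real SoundStream tokenizer and DolphinGemma model.

from collections import Counter, defaultdict

def tokenize(samples, num_bins=8):
    """Toy audio tokenizer: map each sample amplitude (assumed in
    [-1.0, 1.0]) to one of num_bins integer tokens."""
    tokens = []
    for s in samples:
        s = max(-1.0, min(1.0, s))  # clamp to the expected range
        tokens.append(int((s + 1.0) / 2.0 * (num_bins - 1) + 0.5))
    return tokens

class BigramPredictor:
    """Counts which token follows which, then predicts the most
    frequent successor -- the simplest form of next-token prediction."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, token):
        if token not in self.counts:
            return None
        return self.counts[token].most_common(1)[0][0]

# Usage: "train" on one recorded vocalization, then autocomplete it.
recording = [0.1, 0.5, 0.9, 0.5, 0.1, 0.5, 0.9]
tokens = tokenize(recording)
model = BigramPredictor()
model.train(tokens)
next_token = model.predict_next(tokens[-1])
```

A real system replaces both stands-ins with learned neural components, but the interface is the same: audio in, token sequence out, next token predicted.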
