Google launches AI to decipher dolphin language

DolphinGemma is an AI made to understand dolphin sounds — Photo: Ranae Smith/Unsplash

Google’s innovative project aims to decipher dolphin communication using AI to understand and replicate their vocalizations in the marine environment

Google unveiled DolphinGemma, an artificial intelligence designed to understand dolphin vocalizations and generate new sound sequences similar to them. The project, revealed on Monday in celebration of National Dolphin Day in the United States, is the result of a collaboration between the tech giant, researchers from Georgia Tech, and the Wild Dolphin Project (WDP). The goal is to use this AI to explore how dolphins communicate in the marine environment. Learn more about DolphinGemma and its mission.

DolphinGemma is a tool created to analyze dolphin communication, producing sound sequences based on their natural vocalizations. The AI was trained on data from the Wild Dolphin Project, which has been studying these mammals since 1985, including detailed video and audio recordings that capture the identities, interactions, and behaviors of dolphins. The goal is to identify patterns and meanings in the sound sequences, bringing researchers closer to decoding the language of these animals.

Based on Google’s Gemma collection of AI models, DolphinGemma’s technology uses the same foundation as Gemini, allowing it to process natural dolphin sounds, identify patterns, and predict the next vocalizations. The model has 400 million parameters and is capable of operating on Google Pixel smartphones, using the SoundStream audio technology developed to handle complex sound sequences.
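DolphinGemma's internals have not been published, but the core idea described above, converting audio into discrete tokens and then predicting the next token in a sequence, can be illustrated with a deliberately simple sketch. The toy bigram model below stands in for the real 400-million-parameter network; the token IDs and sequences are made up for illustration.

```python
# Illustrative sketch only: DolphinGemma's internals are not public.
# A SoundStream-style tokenizer turns audio into discrete token IDs;
# the model then learns which tokens tend to follow which. Here we
# mimic that with simple bigram counts instead of a neural network.
from collections import Counter, defaultdict

def train_bigram_model(sequences):
    """Count how often each token follows another across all sequences."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(model, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

# Toy "vocalization" sequences of audio-token IDs (invented for this example).
training_sequences = [[3, 7, 7, 2], [3, 7, 2, 2], [5, 3, 7, 7]]
model = train_bigram_model(training_sequences)
print(predict_next(model, 3))  # 7, since 7 follows 3 in every sequence
```

The real model predicts over a far larger vocabulary of learned audio tokens, but the principle is the same: given what a dolphin has just vocalized, estimate what sound is likely to come next.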

The collaboration between the Wild Dolphin Project and the AI also involves the CHAT system (Cetacean Hearing Augmentation Telemetry), an underwater technology that allows dolphins to associate synthetic whistles with specific objects. Researchers hope to teach dolphins to mimic these sounds in order to request the items they want.

One of the innovations of the project is the integration of Google Pixel with the CHAT system, eliminating the need for custom hardware devices and improving system maintenance. This also reduces costs, energy consumption, and the size of the devices, making research in the marine environment easier. The Google Pixel 6 has already been used in real-time studies, and the next generation, the Google Pixel 9, is expected to further enhance microphone and speaker capabilities, utilizing the advanced processing power of smartphones to run deep learning models.

Source and images: TechTudo / Ranae Smith/Unsplash. This content was created with the help of AI and reviewed by the editorial team.
