Google launches Gemma 4 open models with 140 languages, 400 million downloads


Google DeepMind released Gemma 4 on Wednesday April 1st.

It is Google’s smartest open model ever for advanced reasoning and agent workflows under a permissive Apache 2.0 license.

Google introduced four sizes: an effective-2B (E2B) model, an effective-4B (E4B) model, a 26B mixture-of-experts (MoE) model, and a dense 31B model.

The 31B model currently ranks as the third-best open model in the world on the Arena AI text leaderboard.

Additionally, Google reports that the 26B model takes sixth place, outperforming models 20 times its size.

In the official blog post, the VP of Research at Google DeepMind wrote: “Gemma 4 delivers an unprecedented level of per-parameter intelligence.”

Since the release of the first Gemma model, the models have been downloaded over 400 million times, creating a ‘Gemmaverse’ of over 100,000 variations.

The new models support native function calls, structured JSON output, and system commands, enabling the creation of autonomous agents that can interact with tools and APIs.
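As a rough illustration of that agent loop, the sketch below parses a structured JSON tool call and dispatches it to a local function. The call schema (`name`/`arguments`) and the `get_weather` tool are assumptions for the example, not Gemma's documented format.

```python
import json

# Hypothetical tool an agent might expose to the model (assumed name/signature).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool.

    Assumes the model emits e.g. {"name": "...", "arguments": {...}};
    real function-calling formats vary by framework.
    """
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example: the model's structured output, handed straight to the dispatcher.
result = dispatch_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(result)  # Sunny in Paris
```

In a real agent, the returned string would be fed back to the model as a tool response before the next generation step.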

All models support native video, image, and text processing; the E2B and E4B models additionally accept native audio input for speech recognition.

The models support over 140 languages, and the larger models provide context windows of up to 256,000 tokens, allowing developers to process entire code repositories or long documents in a single prompt.
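Fitting a whole repository into a single prompt still means staying under the token budget. The sketch below packs Python source files into one prompt string using a rough 4-characters-per-token heuristic; the constant and the `.py`-only filter are assumptions for illustration, since real token counts depend on the model's tokenizer.

```python
import os

CONTEXT_TOKENS = 256_000   # advertised context window
CHARS_PER_TOKEN = 4        # rough heuristic; actual tokenizers vary

def pack_repo(root: str, budget_tokens: int = CONTEXT_TOKENS) -> str:
    """Concatenate source files into one prompt, stopping at the token budget."""
    parts, used = [], 0
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(".py"):   # illustration: Python files only
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f"# file: {path}\n{f.read()}\n"
            cost = len(text) // CHARS_PER_TOKEN
            if used + cost > budget_tokens:
                return "".join(parts)      # budget exhausted; stop packing
            parts.append(text)
            used += cost
    return "".join(parts)
```

A production version would use the model's actual tokenizer for counts and a smarter file-selection strategy than alphabetical order.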

The edge-focused E2B and E4B models are optimized for mobile and IoT devices and work fully offline on phones, Raspberry Pi, and NVIDIA Jetson Orin Nano with near-zero latency. Google worked with Qualcomm and MediaTek on mobile optimizations in collaboration with the Pixel team.

Users can access models on Hugging Face, Kaggle, Ollama and Google AI Studio.
