
"The 31B Dense currently ranks third among open models on the Arena AI text leaderboard, while the 26B MoE holds sixth. Both outperform models up to twenty times their size in parameter count, according to Google."
"The E2B and E4B models target edge devices. They run fully offline on phones, Raspberry Pi, and Nvidia Jetson Orin Nano with near-zero latency, and feature native audio input."
"The Apache 2.0 license is probably the most remarkable part about this model family. It marks a clear break from earlier Gemma releases, adding day-one compatibility with vLLM, llama.cpp, Ollama, NVIDIA NIM, LM Studio, and more."
Gemma 4 comprises four open-weight AI models designed for local devices such as smartphones and workstations: Effective 2B (E2B), Effective 4B (E4B), a 26B Mixture of Experts, and a 31B Dense, all released under the Apache 2.0 license. The 31B Dense ranks third among open models on the Arena AI leaderboard, and the 26B MoE ranks sixth. The models support advanced reasoning and agentic workflows, and the edge models can run fully offline on a range of devices. The Apache 2.0 license broadens the ecosystem for developers, allowing for greater compatibility and innovation.
Read at Techzine Global