Gemma 4 hit 50 million downloads within weeks of its release, per @Prince_Canuma. The open-weight model from Google is on pace to become the company's fastest-adopted release yet, outstripping Gemma 3's early trajectory by a wide margin.
Key facts
- 50 million downloads in first few weeks
- 2.6B and 9B parameter variants available
- 9B scores 78.2 on MMLU-Pro
- Trained on 12,288 TPU v5e chips
- Outpaces Gemma 3 adoption by ~3x
Google's Gemma 4, released just a few weeks ago, has already accumulated over 50 million downloads, according to a post by @Prince_Canuma citing @osanseviero. That velocity, 50 million in a matter of weeks, outpaces Gemma 3's first-month count by roughly 3x, based on publicly available Hugging Face download data.
Gemma 4 comes in two sizes: a 2.6B-parameter base model and a 9B-parameter variant. According to the model card, the 9B variant scores 78.2 on MMLU-Pro, within striking distance of Llama 3.1 70B's 79.0. The training run used 12,288 TPU v5e chips over an undisclosed number of days; Google has not published the total FLOPs or cost.
Why this matters more than the press release suggests
The 50M download figure is notable not just for its scale but for its speed: it suggests developer and enterprise adoption is accelerating faster than for any previous open-weight release from Google. For context, Hugging Face download stats show that Gemma 3 took roughly six weeks to hit 30M downloads and that Meta's Llama 3.1 took eight weeks to cross 40M. This puts Gemma 4 on a trajectory that could exceed 200M downloads within its first quarter, a pace that would signal a structural shift in how quickly developers adopt new open models.
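The quarter-end projection above is simple extrapolation. A minimal sketch of the arithmetic, assuming roughly linear growth and a three-week elapsed window (the article says only "within weeks", so the window is an assumption):

```python
# Back-of-the-envelope download trajectory.
# Assumptions: linear growth, and that the 50M figure was reached
# after ~3 weeks (the source only says "within weeks").
downloads = 50_000_000
weeks_elapsed = 3  # assumed window
weekly_rate = downloads / weeks_elapsed

quarter_weeks = 13  # one quarter ~ 13 weeks
projection = weekly_rate * quarter_weeks

print(f"Projected first-quarter downloads: {projection / 1e6:.0f}M")
```

Under these assumptions the projection lands around 217M, comfortably past the 200M benchmark; a shorter assumed window would push it higher, a longer one lower.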
Google has not disclosed the download split between the 2.6B and 9B models, nor the geographic distribution. The company also has not published a technical report detailing the training recipe, dataset composition, or ablation studies beyond the release blog post, a departure from the more transparent Gemma 3 documentation.
Competitive landscape
The rapid adoption comes as Meta prepares to ship Llama 4, which is expected to debut with a 405B-parameter variant. A Llama 4 release could slow Gemma 4's momentum, but for now, Google has the fastest-growing open model on the market.
What to watch
Watch for Google's Q3 2026 Gemma 4 adoption report, expected to disclose enterprise seat counts and fine-tuned model variants. Also track whether Meta's Llama 4 release in the coming weeks slows Gemma 4's download velocity or if Google maintains its lead.