Gemini 3.1
Gemini 3.1 is a large multimodal artificial intelligence model developed by Google DeepMind, representing a significant iteration of the Gemini series. Announced on February 20, 2026, it is distinguished by its Mixture of Experts (MoE) architecture, which enables more efficient scaling and task-specific processing. A key technical specification is the 10 million token context window of its 'Pro' variant.
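As a hedged illustration of the routing idea behind a Mixture of Experts layer, here is a minimal top-k gating sketch in NumPy. It is not Gemini's actual implementation (the router design, expert count, and dimensions are not public); the expert functions, shapes, and k=2 routing below are illustrative assumptions.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Minimal sketch of top-k Mixture-of-Experts routing.

    x: (d,) token hidden state
    gate_w: (d, n_experts) router weights (hypothetical)
    experts: list of callables, each mapping (d,) -> (d,)

    Only the top-k experts run per token, which is how MoE
    decouples total parameter count from per-token compute.
    """
    logits = x @ gate_w                      # one router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected k only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n = 8, 4
gate = rng.normal(size=(d, n))
# Toy experts: each is just a fixed linear map for illustration.
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d))) for _ in range(n)]
y = moe_layer(rng.normal(size=d), gate, experts, k=2)
print(y.shape)  # (8,)
```

The design point this sketch shows: adding more experts grows capacity, while per-token cost stays at k expert evaluations plus the router.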
Ingredient list
YaRN · architecture · medium confidence
Invented by Nous Research · 2023-08 · Velocity 2y
"The 10M token context window suggests use of advanced RoPE extension techniques like YaRN."

Chain-of-Thought Prompting · reasoning · medium confidence
Invented by Google · 2022-01 · Velocity 4y
"Gemini models are known to be prompted with chain-of-thought reasoning for complex tasks."

Instruction Tuning · training · medium confidence
Invented by Google · 2021-09 · Velocity 4y
"Gemini models are instruction-tuned, building on Google's FLAN tradition."

Rotary Position Embeddings · architecture · medium confidence
Invented by Zhuiyi Technology · 2021-04 · Velocity 5y
"Gemini models use Rotary Position Embeddings (RoPE) for position encoding."

Transformer · architecture · high confidence
Invented by Google · 2017-06 · Velocity 9y
"Gemini is a Transformer-based model, using self-attention as its core architecture."

Mixture of Experts · architecture · high confidence
Invented by Google · 2017-01 · Velocity 9y
"Gemini 3.1 is distinguished by its 'Mixture of Experts' (MoE) architecture."
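Since the ingredient list cites Rotary Position Embeddings, a minimal NumPy sketch of standard RoPE may help: consecutive channel pairs are rotated by an angle proportional to the token position, so relative offsets show up as phase differences in attention dot products. This assumes the conventional base of 10000 and is illustrative only, not Gemini's implementation (YaRN-style long-context extensions would additionally rescale these frequencies).

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply Rotary Position Embeddings.

    x: (seq, d) activations with even d
    positions: (seq,) integer token positions
    Each channel pair (2i, 2i+1) is rotated by angle
    position * base^(-2i/d).
    """
    seq, d = x.shape
    inv_freq = base ** (-np.arange(0, d, 2) / d)   # (d/2,) per-pair frequencies
    angles = np.outer(positions, inv_freq)         # (seq, d/2) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin             # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

x = np.random.default_rng(1).normal(size=(4, 8))
# Sanity checks: position 0 is the identity, and rotations preserve norms.
assert np.allclose(rope(x, np.zeros(4, dtype=int)), x)
out = rope(x, np.arange(4))
assert np.allclose(np.linalg.norm(out, axis=1), np.linalg.norm(x, axis=1))
```

Because only the angle, not the vector length, depends on position, the dot product between a rotated query and key depends on the positions only through their difference, which is the relative-position property RoPE is used for.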
This recipe is part of the gentic.news Deployment Atlas. Every ingredient has an origin paper + evidence. Methodology is public. Dataset is CC BY 4.0.