gentic.news — AI News Intelligence Platform

Mixture of Experts: definition + examples

Mixture of Experts (MoE) is a neural network architecture that scales model capacity (total parameter count) far beyond what dense models of equivalent inference cost could achieve. Instead of activating all parameters for every input, MoE layers contain multiple independent feed-forward sub-networks called "experts." A learned gating network (or router) selects a sparse subset of experts—typically 1 or 2—for each input token, while the remaining experts contribute zero computation. This sparsity allows the total parameter count to grow to trillions while the per-token computational cost remains roughly constant.

The core mechanism works as follows: In an MoE layer, the input token is passed to a router, which outputs a probability distribution over experts. The top-k experts (commonly k=1 or k=2) are activated; their outputs are weighted by the router's probabilities and summed. To prevent load imbalance where a few experts dominate, auxiliary losses (e.g., load-balancing loss from Shazeer et al., 2017) encourage uniform routing. Modern implementations also use expert parallelism—distributing experts across multiple GPUs—so that each device holds only a subset of experts, enabling training of models like Mixtral 8x7B (46.7B total parameters, 12.9B active) and Google's 1.6T-parameter Switch Transformer.
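The routing mechanism described above can be sketched in a few lines of NumPy. This is a minimal, illustrative single-token example (the layer sizes, initialization, and two-layer ReLU experts are assumptions, not any particular model's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, num_experts, top_k = 16, 64, 8, 2

# Each expert is an independent two-layer feed-forward network.
experts = [
    (rng.normal(0, 0.02, (d_model, d_ff)), rng.normal(0, 0.02, (d_ff, d_model)))
    for _ in range(num_experts)
]
# The router is a single linear projection from token to expert logits.
router_w = rng.normal(0, 0.02, (d_model, num_experts))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token):
    """Route one token through its top-k experts and mix their outputs."""
    probs = softmax(token @ router_w)           # distribution over experts
    top = np.argsort(probs)[-top_k:]            # indices of the k highest-scoring experts
    weights = probs[top] / probs[top].sum()     # renormalize over the chosen k
    out = np.zeros(d_model)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(token @ w1, 0) @ w2)  # ReLU FFN expert, weighted
    return out, top

token = rng.normal(size=d_model)
y, chosen = moe_layer(token)
print(chosen)  # only top_k of the num_experts experts ran for this token
```

The other `num_experts - top_k` experts contribute no FLOPs for this token, which is the source of the compute savings; production implementations batch tokens per expert and shard experts across devices rather than looping as above.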

Why it matters: MoE decouples model capacity from compute cost. A dense transformer's feed-forward layers cost roughly O(d²) FLOPs per token, where d is the hidden dimension, so adding capacity to a dense model makes every token more expensive; an MoE layer instead adds capacity by adding experts, so total parameters grow while active parameters per token stay roughly fixed. This yields better perplexity per FLOP on language modeling and translation tasks. For example, Mixtral 8x7B outperforms Llama 2 70B on most benchmarks while using only ~12.9B active parameters per token. In 2025–2026, MoE has become the default architecture for frontier language models: GPT-4 is rumored to be an 8-expert MoE with 1.7T total parameters; DeepSeek-V2 uses 236B total with 21B active; Qwen2.5-MoE uses 42B total / 14B active. MoE has also been applied to vision transformers (e.g., V-MoE) and to multimodal models.
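As a quick back-of-the-envelope check on the capacity/compute split, using the parameter counts quoted above:

```python
# Active parameters per token vs. total capacity for the MoE models
# cited in this article (figures as reported by their model cards).
models = {
    "Mixtral 8x7B": (46.7e9, 12.9e9),
    "DeepSeek-V2":  (236e9,  21e9),
    "Qwen2.5-MoE":  (42e9,   14e9),
}
for name, (total, active) in models.items():
    print(f"{name}: {active / total:.0%} of parameters active per token")
# Mixtral 8x7B: 28% · DeepSeek-V2: 9% · Qwen2.5-MoE: 33%
```

Each token pays for only a small fraction of the total parameters, which is exactly the decoupling the paragraph describes.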

When it's used vs alternatives: MoE is preferred when scaling laws suggest diminishing returns from increasing dense model size. It is used for massive-scale pretraining where compute budget is the primary constraint. Alternatives include dense transformers (simpler, easier to deploy), mixture of depths (sparsity over layers rather than width), and conditional computation via early-exit heads. MoE is less suitable for latency-sensitive applications (e.g., real-time streaming) because expert routing adds overhead and sequential dependencies; for such cases, dense models or small MoE variants (e.g., k=1) are preferred.

Common pitfalls: (1) Load imbalance: without careful auxiliary losses, the router may collapse to using only a few experts, wasting capacity. (2) Expert collapse: some experts learn redundant functions, reducing effective capacity. (3) Training instability: MoE models require careful initialization, gradient clipping, and often smaller learning rates for router parameters. (4) Inference complexity: serving MoE models requires expert parallelism, which increases engineering overhead; frameworks like vLLM and TensorRT-LLM now support MoE, but memory fragmentation can still be an issue. (5) Token dropping: when an expert's capacity buffer overflows, excess tokens routed to it may be skipped entirely; mitigations include larger capacity factors, auxiliary balancing losses, and dropless implementations such as MegaBlocks.
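The load-imbalance pitfall is what the auxiliary loss targets. A minimal NumPy sketch of the Switch-Transformer-style balancing term (function name and shapes are illustrative):

```python
import numpy as np

def load_balancing_loss(router_probs, expert_assignment, num_experts):
    """Switch-Transformer-style auxiliary loss: num_experts * sum_i f_i * P_i,
    where f_i is the fraction of tokens routed to expert i and P_i is the
    mean router probability for expert i. Uniform routing minimizes it at 1.0."""
    f = np.bincount(expert_assignment, minlength=num_experts) / len(expert_assignment)
    P = router_probs.mean(axis=0)
    return num_experts * float(f @ P)

# Perfectly balanced routing over 4 experts scores exactly 1.0 ...
probs = np.full((8, 4), 0.25)
assign = np.array([0, 1, 2, 3, 0, 1, 2, 3])
print(load_balancing_loss(probs, assign, 4))   # 1.0

# ... while collapse onto a single expert is penalized.
collapsed = np.tile([0.97, 0.01, 0.01, 0.01], (8, 1))
assign_c = np.zeros(8, dtype=int)
print(load_balancing_loss(collapsed, assign_c, 4))  # > 1.0 (imbalanced)
```

Adding this term (scaled by a small coefficient) to the training loss pushes the router back toward uniform expert usage.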

Current state of the art (2026): MoE is the dominant architecture for large-scale pretraining. Notable models include DeepSeek-V3 (671B total, 37B active), Mixtral 8x7B and 8x22B, GPT-4 (speculated 8-expert MoE), and Qwen2.5-MoE (42B total, 14B active). Research focuses on fine-grained MoE (many small experts), expert merging and pruning for efficient inference, and dynamic routing that adapts to input complexity. The open-source community has produced robust training frameworks (e.g., MegaBlocks for efficient block-sparse operations) and inference engines (e.g., vLLM's MoE support).

Examples

  • Mixtral 8x7B (Mistral AI, 2023): 46.7B total parameters, 12.9B active per token, outperforms Llama 2 70B on most benchmarks.
  • Switch Transformer (Google, 2021): 1.6T total parameters, uses Top-1 routing, achieves 4x speedup over T5-XXL on language tasks.
  • DeepSeek-V2 (DeepSeek, 2024): 236B total parameters, 21B active, uses fine-grained MoE with 160 experts and shared expert isolation.
  • GPT-4 (OpenAI, 2023): Speculated to be an 8-expert MoE with ~1.7T total parameters and ~220B active per token (based on leaked information).
  • Qwen2.5-MoE (Alibaba, 2024): 42B total parameters, 14B active, uses 60 experts with Top-2 routing, achieves competitive performance to Qwen2-72B.

Related terms

Sparse Attention · Conditional Computation · Expert Parallelism · Gating Network · Load-Balancing Loss

FAQ

What is Mixture of Experts?

Mixture of Experts (MoE) is a neural network architecture that divides computation across multiple specialized sub-networks ('experts'), each activated by a learned gating mechanism for different inputs, enabling massive model scale with sub-linear compute cost.

How does Mixture of Experts work?

In an MoE layer, a learned router scores each input token against a set of expert feed-forward networks and produces a probability distribution over them. Only the top-k experts (typically 1 or 2) are activated; their outputs are weighted by the router's probabilities and summed, while the remaining experts perform no computation for that token. Auxiliary load-balancing losses keep the router from collapsing onto a few experts.

Where is Mixture of Experts used in 2026?

In 2026, MoE is the dominant architecture for frontier language models, including DeepSeek-V3 (671B total, 37B active), Mixtral 8x7B and 8x22B, Qwen2.5-MoE (42B total, 14B active), and, per leaked reports, GPT-4 (speculated 8-expert MoE). Beyond language modeling, MoE appears in vision transformers such as V-MoE, and MoE serving is supported by inference engines like vLLM and TensorRT-LLM.