Mixture-of-Experts
Mixture of experts (MoE) is a machine learning technique in which multiple expert networks (learners) divide a problem space into homogeneous regions, with a gating network selecting which experts handle each input. MoE is a form of ensemble learning; such models were historically also called committee machines.
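Below is a minimal, runnable sketch of the idea, assuming a softmax gating network that routes each input to its top-k small feed-forward experts. All names and sizes here (num_experts, top_k, moe_forward, the expert shapes) are illustrative assumptions, not any particular framework's API.

```python
# Illustrative sparse Mixture-of-Experts layer (a sketch, not a specific
# library's implementation). A softmax gate scores the experts; only the
# top-k experts run per input, and their outputs are combined by the
# (renormalized) gate weights.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 8, 16      # illustrative layer sizes
num_experts, top_k = 4, 2      # hypothetical hyperparameters

# Each "expert" is a small two-layer feed-forward network.
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.1,
     rng.standard_normal((d_hidden, d_model)) * 0.1)
    for _ in range(num_experts)
]
gate_w = rng.standard_normal((d_model, num_experts)) * 0.1  # gating network

def moe_forward(x):
    """Route one token vector x of shape (d_model,) through its top-k experts."""
    logits = x @ gate_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax gate scores
    chosen = np.argsort(probs)[-top_k:]            # indices of top-k experts
    weights = probs[chosen] / probs[chosen].sum()  # renormalize over chosen
    out = np.zeros(d_model)
    for w, i in zip(weights, chosen):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU feed-forward expert
    return out

print(moe_forward(rng.standard_normal(d_model)).shape)  # (8,)
```

Because only top_k of num_experts experts run per input, the total parameter count can grow while per-input compute stays roughly constant, which is the usual motivation for sparse MoE.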
Timeline
- Research Milestone (Mar 11, 2026): New research reveals a structural inference disadvantage via the 'qs inequality', showing MoE models can be 4.5x slower than dense models.
Relationships
- Uses (8)
Recent Articles
- The Hidden Cost of Mixture-of-Experts: New Research Reveals Why MoE Models Struggle at Inference (relevance 75)
  A groundbreaking paper introduces the 'qs inequality,' revealing how Mixture-of-Experts architectures suffer a 'double penalty' during inference that…
- Alibaba's Qwen3.5: The Efficiency Breakthrough That Could Democratize Multimodal AI (relevance 85)
  Alibaba has open-sourced Qwen3.5, a multimodal AI model that combines linear attention with a sparse Mixture of Experts architecture to deliver high per…
- Qwen's 9B Base Model Breaks Language Barriers with 1M Context Window (relevance 95)
  Alibaba's Qwen team has released Qwen3.5-9B-Base, a multimodal foundation model supporting 201 languages with a massive 1 million token context window…
- Beyond Homogenization: How Expert Divergence Learning Unlocks MoE's True Potential (relevance 75)
  Researchers have developed Expert Divergence Learning, a novel pre-training strategy that combats expert homogenization in Mixture-of-Experts language…
Predictions
No predictions linked to this entity.
AI Discoveries
- Observation (active, 3d ago): Lifecycle: Mixture-of-Experts
  Mixture-of-Experts is in the 'emerging' phase (1 mention in the last 3 days, 4 in the last 14 days, 5 total). Confidence: 90%.
Sentiment History
| Week | Avg Sentiment | Mentions |
|---|---|---|
| 2026-W10 | 0.10 | 2 |
| 2026-W11 | 0.15 | 2 |