Qwen 3.5 Medium
Alibaba's efficiency-focused Qwen model. Outperforms Qwen 2.5 235B while using 7x fewer active parameters. Open-weight. Competes with Nemotron-Cascade and Mistral.
Signal Radar
[Chart: five-axis snapshot of this entity's footprint]
Mentions × Lab Attention
[Chart: weekly mentions (solid) and average article relevance (dotted)]
Timeline (3)
- Research Milestone — Feb 25, 2026
  Outperformed its 235B-parameter predecessor while using 7x fewer active parameters per token.
  - Parameter count: 35B
  - Efficiency gain: 7x fewer active parameters
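A back-of-envelope check of the efficiency claim above. The 3B-active / 35B-total figures appear in the article feed on this page; the predecessor's per-token active-parameter count is *inferred* from the "7x" ratio, not stated anywhere in the source.

```python
# Sketch: sanity-check the "7x fewer active parameters" claim.
# NEW_ACTIVE and NEW_TOTAL come from this page's article feed (Qwen3.6-35B-A3B);
# the predecessor's active count is an inference, not a sourced figure.

NEW_ACTIVE = 3e9        # ~3B parameters active per token
NEW_TOTAL = 35e9        # ~35B parameters stored in total
EFFICIENCY_GAIN = 7     # "7x fewer active parameters"

# Implied predecessor activation: 3B * 7 = ~21B active per token (inferred).
implied_predecessor_active = NEW_ACTIVE * EFFICIENCY_GAIN

# A sparse MoE stores all experts but routes each token through only a few,
# so per-token compute tracks active params, not total params.
active_fraction = NEW_ACTIVE / NEW_TOTAL

print(f"implied predecessor active params: {implied_predecessor_active / 1e9:.0f}B")
print(f"active fraction per token: {active_fraction:.2%}")
```

This is only arithmetic over the page's own numbers; it does not reflect the predecessor's actual architecture.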
- Research Milestone — Feb 24, 2026
  Demonstrated efficiency gains through architectural improvements.
- Product Launch — Feb 1, 2026
  Recently released model, used for performance comparison.
Relationships (5)
- Competes With
- Uses
Recent Articles (3)
- Qwen3.5-27B Gets Sparse Autoencoders: 81k Features Exposed — relevance 85
  [+] Qwen released Qwen-Scope, adding sparse autoencoders to Qwen3.5-27B, exposing 81k features across 64 layers for steerable inference.
- Alibaba Qwen3.6-35B-A3B: 3B-Active Sparse MoE Hits 73.4% on SWE-Bench — relevance 97
  [~] Alibaba released Qwen3.6-35B-A3B, a sparse mixture-of-experts model with 35B total but only 3B active parameters. It shows significant gains over its…
- SauerkrautLM-Doom-MultiVec: 1.3M-Param Model Outperforms LLMs 92,000x Its Size — relevance 82
  [-] Researchers built a 1.3M-parameter model that plays DOOM in real time, scoring 178 frags in 10 episodes. It outperforms LLMs like Nemotron-120B and GP…
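The "35B total / 3B active" split above is characteristic of sparse mixture-of-experts routing: every expert is stored, but each token runs through only a top-k subset. A minimal illustrative sketch of that mechanism (generic top-k routing, not Qwen's actual implementation; all sizes are made up):

```python
import numpy as np

# Minimal top-k MoE routing sketch (illustrative only; not Qwen's architecture).
# A router scores each token against every expert, but only the top-k experts
# execute — which is why "active" params sit far below "total" params.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2   # toy dimensions, chosen for illustration

router_w = rng.normal(size=(d_model, n_experts))                 # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                      # one score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k chosen experts
    w = np.exp(logits[top])
    w /= w.sum()                               # softmax over the chosen experts
    # Only the selected experts run; the other n_experts - top_k stay idle.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

x = rng.normal(size=d_model)
y = moe_layer(x)
print(y.shape, f"experts used per token: {top_k}/{n_experts}")
```

With 2 of 8 experts active per token, only a quarter of the expert weights contribute to any single forward pass, mirroring (in miniature) the 3B-of-35B split reported for the A3B model.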
Predictions
No predictions linked to this entity.
AI Discoveries
No AI agent discoveries for this entity.
Sentiment History
| Week | Avg Sentiment | Mentions |
|---|---|---|
| 2026-W15 | -0.40 | 1 |
| 2026-W16 | 0.10 | 1 |
| 2026-W18 | 0.50 | 1 |