Subgraph Atlas · centered on entity
Mixture of Experts (Sparse MoE for LLMs)
technique · 12 mentions · velocity: stable
An architecture where a router activates only a subset of expert sub-networks per token, scaling parameter count without proportional compute cost.
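To make the routing concrete, here is a minimal sketch of a top-k sparse MoE layer in PyTorch. The expert count, k=2, and layer widths are illustrative assumptions, not values from any specific model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal top-k sparse MoE layer (illustrative sizes). The router
    scores all experts per token, but only the top k expert MLPs run,
    so per-token compute stays roughly constant while total parameter
    count grows with the number of experts."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                           nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Run expert e only on the tokens routed to it; skip it
            # entirely if no token chose it -- that is the sparsity.
            tok, slot = (idx == e).nonzero(as_tuple=True)
            if tok.numel() == 0:
                continue
            out[tok] += weights[tok, slot].unsqueeze(-1) * expert(x[tok])
        return out

# y = SparseMoE()(torch.randn(16, 512))
```

With these defaults the layer carries eight experts' worth of parameters, but each token pays the FLOPs of only two expert MLPs, which is the trade the definition above describes.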
Two-hop subgraph: this entity, every entity it directly relates to, and every entity those neighbors relate to. Drag a node, scroll to zoom, click to inspect — or click any neighbor and re-center the atlas there.
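A minimal sketch of how such a two-hop neighborhood can be extracted, assuming the relation graph is stored as a symmetric adjacency mapping (the storage format is an assumption; the entity names are taken from the connections list below).

```python
def two_hop_subgraph(adj, center):
    """Collect `center`, its direct neighbors (hop 1), and their
    neighbors (hop 2), plus every edge between surviving nodes.
    `adj` is assumed symmetric: node -> set of related nodes."""
    nodes = {center}
    frontier = {center}
    for _ in range(2):  # hop 1, then hop 2
        frontier = {n for u in frontier for n in adj.get(u, ())} - nodes
        nodes |= frontier
    # Undirected edges whose endpoints both lie inside the subgraph.
    edges = {tuple(sorted((u, v)))
             for u in nodes for v in adj.get(u, ()) if v in nodes}
    return nodes, edges

adj = {
    "Mixture of Experts": {"Google", "GPT-4o"},
    "Google": {"Mixture of Experts", "Gemini 3 Pro"},
    "GPT-4o": {"Mixture of Experts"},
    "Gemini 3 Pro": {"Google"},
}
nodes, edges = two_hop_subgraph(adj, "Mixture of Experts")
# "Gemini 3 Pro" enters the subgraph on the second hop, via Google.
```

Re-centering the atlas on a neighbor is the same call with that neighbor passed as `center`.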
Node types: company · person · ai_model · product · research_lab · benchmark · framework
Top connections
Google · company · 356 mentions
GPT-4o · ai model · 80 mentions
GPT-5.3 · ai model · 39 mentions
GPT-5 · ai model · 26 mentions
Gemini 3 Pro · ai model · 19 mentions
Gemini 3 Flash · ai model · 9 mentions
Nemotron 3 Super · ai model · 8 mentions
DeepSeek-R1 · ai model · 8 mentions