Anthropic has access to 220,000 NVIDIA GPUs and 310MW of power, according to a claim by @kimmonismus. If confirmed, the cluster would be one of the largest AI training deployments ever disclosed.
Key claims
- 220,000 NVIDIA GPUs reported by @kimmonismus.
- 310MW power draw, roughly triple the power publicly estimated for OpenAI's GPT-4 cluster.
- Implied cost: >$5B total deployment.
- Compute potential: ~10x GPT-4 FLOPs.
- Source: @kimmonismus, not confirmed by Anthropic.
Anthropic reportedly has access to 220,000 NVIDIA GPUs and 310MW of power for AI training, according to a post by @kimmonismus. The scale implies a multi-billion-dollar compute cluster, likely exceeding $5B in total cost, and suggests Anthropic is preparing to train frontier models at unprecedented scale.
The 310MW figure is roughly triple the ~100MW that public estimates attribute to OpenAI's GPT-4 training run, a figure OpenAI itself has never confirmed. If true, this would place Anthropic's infrastructure at the frontier of AI compute, rivaling or exceeding known deployments by OpenAI and Google DeepMind. Anthropic has not publicly confirmed the numbers, nor have its cloud partners Amazon Web Services and Google Cloud commented.
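As a rough consistency check, the two headline numbers are at least compatible with each other. Assuming H100-class accelerators at ~700W TDP and a roughly 2x all-in multiplier for host CPUs, networking, and cooling (both assumptions, not figures from the source), 220,000 GPUs imply a facility draw close to the claimed 310MW:

```python
# Back-of-envelope check: is 220,000 GPUs consistent with 310MW?
# Assumptions (not from the source): H100-class GPUs at ~700W TDP,
# with a ~2x all-in multiplier for host systems, networking, cooling.
NUM_GPUS = 220_000
GPU_TDP_W = 700          # H100 SXM TDP, per NVIDIA spec
ALL_IN_FACTOR = 2.0      # assumed facility overhead multiplier

total_w = NUM_GPUS * GPU_TDP_W * ALL_IN_FACTOR
print(f"Estimated facility draw: {total_w / 1e6:.0f} MW")  # ~308 MW
```

That lands within a few megawatts of the claimed 310MW, which makes the two numbers internally consistent even if neither is verified.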
The Compute Arms Race
Anthropic's reported cluster is part of a broader trend: AI labs are racing to secure massive compute resources as a strategic moat. OpenAI has been reported to be building toward a 5GW data center cluster [Reuters], while Google DeepMind is expanding its Tensor Processing Unit fleets. Anthropic's 220K GPU count, if real, would require roughly $3-4B in GPU hardware alone, plus data center construction and power infrastructure, for a total deployment cost likely above $5B.
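A minimal sketch of where the $3-4B hardware figure comes from, assuming H100-class unit prices of roughly $15,000-$20,000 (an assumption; the source gives no pricing):

```python
# Hedged hardware-cost estimate. Unit prices are assumed, not sourced.
NUM_GPUS = 220_000
for unit_price in (15_000, 20_000):
    cost_b = NUM_GPUS * unit_price / 1e9
    print(f"${unit_price:,} per GPU -> ${cost_b:.1f}B in GPU hardware")
# ~$3.3B-$4.4B for GPUs alone; data center construction and power
# infrastructure would plausibly push the total deployment above $5B.
```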
Implications for Training Scale
With 220K GPUs and 310MW, Anthropic could sustain training runs of 10^26+ FLOPs, roughly 10x the compute estimated for GPT-4 [per public estimates]. This would enable training of models with trillions of parameters or extremely long context windows. The power draw suggests sustained training at high utilization, likely over months.
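A hedged sketch of how that compute figure could be reached, assuming H100-class peak throughput of ~1e15 FLOP/s (dense BF16), 40% model FLOPs utilization, and a 90-day run. All three inputs are assumptions, and lower utilization or a shorter run would land closer to the ~10x figure cited above:

```python
# Hedged training-compute estimate. Assumptions (not from the source):
# H100-class peak of ~1e15 FLOP/s (dense BF16, ~989 TFLOPS),
# 40% model FLOPs utilization (MFU), and a 90-day sustained run.
NUM_GPUS = 220_000
PEAK_FLOPS = 1e15        # per-GPU peak, dense BF16 (assumed H100-class)
MFU = 0.40               # assumed utilization
SECONDS = 90 * 86_400    # 90-day run

total_flops = NUM_GPUS * PEAK_FLOPS * MFU * SECONDS
gpt4_estimate = 2e25     # widely cited public estimate for GPT-4
print(f"Cluster: {total_flops:.1e} FLOPs "
      f"(~{total_flops / gpt4_estimate:.0f}x the GPT-4 estimate)")
# ~6.8e26 FLOPs at these optimistic inputs; even at half the
# utilization or run length, the total stays comfortably above 10^26.
```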
No comparable cluster has been publicly confirmed by Anthropic or its cloud partners. The claim comes from an unofficial source and should be treated with caution until verified. However, the specificity — 220,000 GPUs, 310MW — suggests access to internal data or close observation of Anthropic's infrastructure contracts.
What to watch
Look for Anthropic to disclose cluster details in a blog post or infrastructure announcement, or for hints in the quarterly earnings of its cloud partners Amazon and Google. If the claim holds, watch for training runs in the 10^26+ FLOPs range, and for whether competitors like OpenAI or Google respond with larger disclosed clusters.