Anthropic has dramatically expanded its infrastructure partnership with Amazon Web Services (AWS), securing access to up to 5 gigawatts (GW) of compute capacity—a scale comparable to the entire global data center footprint of Microsoft as of 2024. The deal, which commits Anthropic to spend over $100 billion on AWS over the next decade, represents one of the largest single infrastructure investments in the history of artificial intelligence.
Amazon is concurrently investing an additional $5 billion in Anthropic today, with the potential for up to $20 billion more to follow. This capital injection solidifies a strategic partnership that began with Amazon's initial $4 billion investment in 2023 and a prior $2.75 billion commitment in early 2024.
Key Takeaways
- Anthropic has expanded its deal with Amazon to secure up to 5 gigawatts of compute capacity—equivalent to Microsoft's 2024 global data center footprint—and committed over $100 billion to AWS over the next decade.
- This infrastructure surge supports Anthropic's run-rate revenue, which has tripled to over $30B, and responds to consumer demand that has been straining Claude's systems.
The Scale of the Deal

The 5GW figure provides the most tangible measure of the deal's ambition. For context:
- 5GW is roughly the continuous power output of five standard nuclear power plants.
- It matches Microsoft's estimated total global data center power consumption (5-6GW) in 2024, as reported by Bloomberg.
- Much of this capacity will run on Amazon's custom AI silicon; Anthropic already has over 1 million Trainium2 chips in operation training its Claude models.
The financial commitment is equally staggering. The $100B+ in committed AWS spend over ten years suggests an annual cloud bill approaching or exceeding $10 billion, underscoring the immense computational cost of training and serving frontier AI models at scale.
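The implied annual figure is a simple back-of-envelope division; a minimal sketch, assuming (as the announcement does not specify) that the committed spend is spread evenly across the decade:

```python
# Rough check of the implied annual AWS spend from the announced figures.
# Assumption: the >$100B commitment is spread evenly over 10 years.
total_commitment_usd = 100e9  # committed AWS spend (lower bound)
years = 10

annual_spend_usd = total_commitment_usd / years
print(f"Implied annual AWS spend: ${annual_spend_usd / 1e9:.0f}B per year")  # ≈ $10B
```

An uneven ramp (smaller early years, larger later ones) would push the later-year bills well past $10 billion.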
Business Context: Run-Rate Revenue Triples
The infrastructure expansion is fueled by explosive growth. According to the announcement, Anthropic's annual run-rate revenue has tripled from approximately $9 billion at the end of 2025 to over $30 billion currently. This trajectory places Anthropic firmly in the top tier of AI companies by revenue, alongside OpenAI and Google DeepMind.
The company openly acknowledged that consumer demand for Claude has been straining its infrastructure, leading to periodic capacity constraints and service limitations. This massive compute deal is its direct response to those scaling challenges.
Technical and Product Integration
A key product integration was also announced: the Claude Platform is coming directly to AWS. This means enterprise customers will be able to access and manage Claude services through their existing AWS account with consolidated billing, reducing friction for the vast ecosystem of AWS-dependent businesses.
This deep integration mirrors the approach taken by other major model providers, like Microsoft's Azure OpenAI Service, and positions AWS as the primary cloud home for Claude's enterprise deployment.
At a Glance
- Compute capacity: up to 5 gigawatts (GW)
- AWS commitment: >$100 billion over 10 years
- Amazon investment: $5B today, plus up to $20B more
- Anthropic run-rate revenue: >$30B (tripled from ~$9B at end of 2025)
- Trainium2 chips in use: >1 million
- Key integration: Claude Platform on AWS (unified billing)
What This Means for the AI Industry

This deal has immediate implications for the competitive landscape:
- Cloud Lock-In at Scale: A $100B commitment effectively anchors Anthropic to AWS for the foreseeable future, creating a formidable alliance against the Microsoft-OpenAI and Google-DeepMind partnerships.
- The Compute Arms Race Intensifies: Securing 5GW of capacity is a preemptive move to lock down the scarce resource of data center power and cutting-edge chips. It raises the barrier to entry for any new player hoping to compete at the frontier model level.
- Revenue Validation: A $30B+ run-rate signals that the market for large language model APIs and enterprise platforms is vast and growing rapidly, capable of supporting multiple multi-billion dollar companies.
agentic.news Analysis
This announcement is the logical culmination of a partnership that has been building for three years. Following Amazon's initial $4 billion investment in 2023—a direct counter to Microsoft's OpenAI partnership—and a further $2.75 billion infusion in early 2024, this deal represents the full-scale operational merger of Anthropic's research with AWS's infrastructure empire. It transforms their relationship from a strategic investment into a foundational dependency.
The scale of the commitment reveals a critical shift in the AI landscape: the battle is no longer just about model architecture or research talent, but about guaranteed access to unprecedented, vertically integrated compute. Amazon gains a flagship AI tenant that justifies its massive investments in custom silicon (Trainium/Inferentia) and data center build-out. Anthropic gains the certainty of capacity to scale Claude against GPT and Gemini without being throttled by hardware availability.
This move also contextualizes the flurry of other massive compute deals we've covered, such as xAI's securing of 100,000 H100s from Oracle and Microsoft's rumored $100B "Stargate" AI supercomputer project with OpenAI. It confirms a trend we identified in late 2025: the frontier AI race is creating a new class of infrastructure mega-projects that dwarf the cloud spending of the previous decade. The risk for Anthropic is the same as for any company making a decade-long bet: architectural flexibility. Being deeply optimized for AWS's Trainium stack could make it harder to adapt if a radically different hardware paradigm (e.g., optical computing, neuromorphic chips) emerges elsewhere.
Frequently Asked Questions
How much is 5 gigawatts of compute power?
Five gigawatts is a measure of continuous power consumption, not raw FLOPs. It is roughly equivalent to the constant output of five large nuclear reactor units, or enough electricity to power approximately 3.75 million average U.S. homes. For the tech industry, it matches Microsoft's entire global data center electricity usage as of 2024, highlighting the staggering energy demands of scaling frontier AI.
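The homes comparison follows from straight division; a sketch assuming an average continuous U.S. household draw of roughly 1.33 kW (a common rule of thumb; actual figures vary by region and data source):

```python
# Convert 5 GW of continuous power draw into an equivalent number of homes.
# Assumption: an average U.S. home draws ~1,333 W continuously
# (about 11,700 kWh per year); this is an illustrative figure, not official data.
capacity_w = 5e9           # 5 GW expressed in watts
avg_home_draw_w = 1333     # assumed average continuous household draw

homes = capacity_w / avg_home_draw_w
print(f"Equivalent homes: {homes / 1e6:.2f} million")  # ≈ 3.75 million
```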
What does a $100B AWS commitment mean?
It means Anthropic has contractually agreed to spend over one hundred billion dollars on Amazon Web Services over the next ten years. This is not an equity investment but a committed cloud spend, guaranteeing AWS a massive, long-term revenue stream. It likely comes with significant discounts and guaranteed access to the latest hardware (Trainium2/3), but it also deeply locks Anthropic into the AWS ecosystem.
How does Anthropic's $30B run-rate revenue compare to OpenAI?
While neither company discloses full audited financials, estimates and reports suggest both are in the same multi-ten-billion-dollar annual revenue league. OpenAI was reported to have achieved a $3.4B run-rate in late 2024, and its growth since then has likely been exponential. Anthropic's tripling from ~$9B to >$30B in a short period indicates the enterprise and developer market for advanced AI models is large enough to support multiple giants simultaneously.
What is the Claude Platform on AWS?
This is an integrated service that will allow businesses to access, manage, and use Anthropic's Claude models directly through their AWS account. It promises unified billing, AWS IAM (Identity and Access Management) integration, and likely tighter coupling with other AWS services (S3, Bedrock, etc.). It simplifies procurement and governance for enterprises that standardize on AWS, similar to how Azure OpenAI Service works on Microsoft's cloud.
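The announcement does not detail the Claude Platform's API surface. Today, the established route to Claude on AWS is Amazon Bedrock's Converse API via boto3, so an integration might look like the following sketch; the model ID, region, and helper names here are illustrative assumptions, not confirmed details of the new offering:

```python
# Hypothetical sketch of calling Claude on AWS via Amazon Bedrock's Converse
# API. Requires AWS credentials with Bedrock model access configured locally.

def build_messages(prompt: str) -> list:
    """Build a Converse-API-style message list from a plain text prompt."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_claude(prompt: str,
               model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0") -> str:
    # boto3 is imported lazily so the payload helper above works without
    # the AWS SDK installed.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId=model_id,               # illustrative model ID
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512},
    )
    # The assistant's reply is the first text block of the output message.
    return response["output"]["message"]["content"][0]["text"]
```

Whatever the final shape of the Claude Platform, the promised IAM and billing integration suggests the same credential and governance model shown here.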