Anthropic is moving to secure its own data center capacity in Europe and Australia, according to new job listings for transaction principals. The roles, which have not been previously reported, indicate a significant shift in the AI company's infrastructure strategy as it scales its Claude models and competes for compute resources.
What's Happening

Anthropic is hiring for "Transaction Principal, Data Center Leasing" positions based in London and Sydney. The job descriptions, posted on Anthropic's careers page, explicitly state the goal is to "sign data center capacity deals in Europe and Australia." The company is seeking candidates with experience in large-scale data center transactions, real estate, and contract negotiation to lead these efforts.
This hiring push comes as Anthropic builds out its internal data center team, a move that suggests a transition from relying primarily on cloud providers like Amazon Web Services (AWS)—a major investor in Anthropic—towards owning or controlling more of its own compute infrastructure.
The Infrastructure Context
The AI industry is facing a severe compute crunch. Training and running state-of-the-art large language models (LLMs) like Claude 3 require massive, specialized GPU clusters. Most AI labs, including Anthropic, have historically leased capacity from hyperscalers like AWS, Google Cloud, and Microsoft Azure.
However, as models grow larger and inference demand scales, securing guaranteed, cost-effective, and low-latency compute has become a critical competitive advantage. Building or leasing dedicated data centers provides more control over hardware selection, power procurement, and capacity planning.
Why Europe and Australia?
The geographic focus is strategic. Europe represents a massive market for enterprise AI adoption, with strict data protection rules (like GDPR) that restrict cross-border transfers and often push customers to keep processing within the EU. Establishing local compute infrastructure helps Anthropic serve European customers while complying with these rules.
Australia is a growing tech hub with increasing AI adoption across finance, mining, and government sectors. It also serves as a gateway to the broader Asia-Pacific region. Local data centers reduce latency for Australian users and provide a hedge against potential geopolitical tensions affecting access to compute in other regions.
Competitive Landscape

Anthropic's move follows a broader industry trend. OpenAI and Microsoft have reportedly been planning the "Stargate" supercomputer project. Microsoft is building massive data centers for AI. Google and Amazon are expanding their AI-optimized cloud regions globally.
By securing its own capacity, Anthropic gains negotiating leverage with cloud providers, ensures capacity for future model training runs, and can potentially offer more customized infrastructure for its largest enterprise clients. This is particularly important as inference costs become a major barrier to widespread LLM deployment.
What This Means for Anthropic's Roadmap
This infrastructure expansion is a clear signal of Anthropic's growth ambitions. The company is preparing for:
- Larger Future Models: Securing compute for training Claude 4 and beyond, which will require more GPUs than current-generation models.
- Global Inference Scaling: Supporting the rollout of Claude to millions of users worldwide with low-latency performance.
- Enterprise Sovereignty: Offering dedicated, compliant infrastructure stacks to regulated industries in Europe and Australia.
The hiring of specialized leasing principals—rather than general infrastructure engineers—suggests Anthropic is looking at large-scale, long-term capital commitments, potentially involving build-to-suit data centers or major colocation leases.
gentic.news Analysis
This infrastructure push is a logical, necessary step for Anthropic as it matures from an AI research lab into a global product company. Our previous coverage of Anthropic's Series C funding round highlighted its deepening ties with AWS. This move towards owned infrastructure doesn't contradict that partnership but complements it—even AWS-dependent companies often use Direct Connect for dedicated, high-bandwidth links to their own cages or colocation facilities.
The European focus directly addresses a weakness in Anthropic's position relative to OpenAI, which has benefited from Microsoft's extensive European Azure presence. For Australian expansion, Anthropic may be looking to capitalize on a market where neither OpenAI nor Google has an overwhelming incumbent advantage in cloud AI.
Critically, this isn't just about raw compute—it's about predictable economics. Cloud list prices for AI inference are notoriously high, and discounts are negotiated privately. By controlling its own racks, Anthropic can fix its marginal compute cost, which is essential for offering predictable pricing to its API customers and for planning long-term research budgets. If Anthropic can secure power purchase agreements (PPAs) for renewable energy at these sites—a likely goal given its Constitutional AI principles—it could also address the growing environmental critique of AI.
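The predictable-economics argument can be illustrated with a rough back-of-envelope model. Every figure below is a hypothetical placeholder chosen for illustration, not a number from Anthropic, AWS, or the job listings:

```python
# Hypothetical back-of-envelope: amortized cost of an owned GPU vs. renting
# the same GPU-hour from a cloud provider. All inputs are illustrative
# placeholders, not actual figures from Anthropic or any provider.

def owned_cost_per_gpu_hour(capex_per_gpu: float,
                            amortization_years: float,
                            utilization: float,
                            gpu_power_kw: float,
                            price_per_kwh: float,
                            pue: float) -> float:
    """Amortized capex plus electricity, per utilized GPU-hour.

    Simplifying assumptions: power is drawn only during utilized hours,
    and PUE (power usage effectiveness) captures all cooling and
    facility overhead.
    """
    utilized_hours = amortization_years * 365 * 24 * utilization
    capex_component = capex_per_gpu / utilized_hours
    power_component = gpu_power_kw * pue * price_per_kwh
    return capex_component + power_component

# Placeholder inputs: a $30k accelerator amortized over 4 years at 70%
# utilization, 0.7 kW draw, $0.08/kWh power (a cheap PPA), PUE of 1.2.
owned = owned_cost_per_gpu_hour(30_000, 4, 0.70, 0.7, 0.08, 1.2)
cloud_on_demand = 4.00  # hypothetical on-demand list price per GPU-hour

print(f"owned: ${owned:.2f}/GPU-hour vs cloud: ${cloud_on_demand:.2f}/GPU-hour")
```

The point of the sketch is structural, not the specific numbers: the owned-infrastructure cost is dominated by fixed, amortized capex and contracted power, so it barely moves once the deal is signed, whereas cloud spend tracks usage at privately negotiated rates.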
The timeline is telling. These are not postings for construction managers but for deal-makers. Anthropic likely aims to have capacity secured within 12-18 months, aligning with projected timelines for next-generation model training. This infrastructure build-out represents the single largest capital risk Anthropic has taken to date, marking its transition into the major leagues of AI infrastructure alongside OpenAI and the hyperscalers.
Frequently Asked Questions
Why is Anthropic building its own data centers?
Anthropic is likely seeking to secure guaranteed, cost-effective compute capacity for training future, larger versions of Claude and to provide low-latency inference for global customers. Owning or controlling infrastructure provides more predictability than relying solely on cloud spot markets and helps comply with data sovereignty laws in regions like Europe.
Does this mean Anthropic is leaving AWS?
No. Amazon remains a major investor and partner. This move is about diversification and control, not replacement. Anthropic will likely use a hybrid approach, running some workloads on its own infrastructure while maintaining a strong relationship with AWS for other needs and for redundancy.
How does this affect the AI compute shortage?
In the short term, it increases competition for already-scarce data center space, power, and GPUs, potentially driving up costs for smaller players. In the long term, if successful, it adds significant new AI-optimized capacity to the global market, which could ease constraints.
What does this mean for Claude's availability and pricing?
If Anthropic successfully secures cheaper, dedicated compute, it could potentially lower its operating costs. This might translate into more stable or reduced API pricing over time, or allow Anthropic to offer more generous usage tiers. More importantly, it ensures capacity to scale Claude to more users without performance degradation.