What Happened
According to a report shared by AI researcher Rohan Pandey, Meta has entered into a five-year agreement worth approximately $27 billion with a Dutch cloud provider to secure "massive amounts of AI computing power." The deal, structured as a capital expenditure (CapEx) commitment, is intended to lock in the GPU capacity necessary for Meta's long-term AI research and product development, including its Llama model series and generative AI features across its platforms.
The specific Dutch provider was not named in the initial report. The scale of the commitment (averaging $5.4 billion annually over the five-year term) places it among the largest single compute procurement deals in the AI sector, comparable to commitments made by other tech giants such as Microsoft, Google, and Amazon.
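The annualized figure follows directly from the reported terms. A minimal back-of-the-envelope check, using only the two numbers stated in the report (the $27 billion total and the five-year term; the provider's actual payment schedule is not public and need not be uniform):

```python
# Back-of-the-envelope annualization of the reported deal terms.
# Assumes an even spread across the term purely for illustration;
# the real payment schedule has not been disclosed.

TOTAL_COMMITMENT_USD_BN = 27.0  # reported total deal value, in billions
TERM_YEARS = 5                  # reported contract length

annual_avg_bn = TOTAL_COMMITMENT_USD_BN / TERM_YEARS
print(f"Average annual commitment: ${annual_avg_bn:.1f}B")  # → $5.4B
```

This is simply total value divided by term length, which is how the $5.4 billion-per-year figure cited above is derived.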
Context
This massive expenditure is consistent with Meta's publicly stated ambitions and recent financial disclosures. In its Q1 2024 earnings call, Meta increased its full-year 2024 capital expenditure forecast to a range of $35-40 billion, up from its prior estimate of $30-37 billion, citing investments in AI infrastructure as the primary driver. CEO Mark Zuckerberg stated that building "leading AI capacity" was a major priority and that Meta would "continue investing aggressively."
The deal highlights the intense global competition for advanced AI compute, primarily in the form of NVIDIA GPUs (such as the H100 and the upcoming Blackwell B200) and similar accelerators. With cloud providers and large tech companies securing supply years in advance, long-term, multi-billion-dollar agreements like this one have become a strategic necessity for guaranteeing the compute required to train next-generation frontier models.
For Meta, which is developing the open-weight Llama model family and integrating AI across Facebook, Instagram, WhatsApp, and its Reality Labs division, securing a predictable and scalable compute supply chain is critical to its product roadmap and competitive positioning against rivals like OpenAI, Google, and Anthropic.