PJM reported 220GW in grid interconnection requests as of April 2026. Hyperscalers including Google, Microsoft, and Amazon face 3-7 year delays for new transmission capacity.
Key facts
- PJM interconnection queue: 220GW as of April 2026.
- 70% of planned U.S. data center capacity faces grid delays.
- Hyperscaler Q1 2026 capex exceeded $90B.
- Google's Texas data center targets 500MW by 2026.
- Transformer manufacturers booked through 2029.
The AI infrastructure buildout has hit a wall that more GPUs cannot solve: the power grid. [According to POWER Magazine] and corroborated by PJM Interconnection filings, the queue of interconnection requests for new data center load now exceeds 220GW, well above PJM's own all-time record peak demand.
This is not a compute shortage. It is a physics and permitting bottleneck. Transformers, switchgear, and high-voltage transmission lines carry lead times of 3 to 7 years depending on the region. [Per the source] 70% of planned U.S. data center capacity is currently stuck in interconnection queues, waiting for upgrades to a grid that was never designed for the load density AI training clusters require.
The 220GW Queue: What It Means
The PJM queue figure, reported in April 2026, represents requests from hyperscalers, colocation providers, and developers. To put that in context: total U.S. peak electricity demand is roughly 740GW. A single 1GW AI data center campus, several times what a 100,000-H100 cluster draws at full tilt, requires the output of a large power plant and the transmission infrastructure to deliver it.
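For scale, here is a minimal back-of-the-envelope sketch in Python using the figures above; the 1GW campus draw is an assumed round number, and the other values are the article's.

```python
# Back-of-the-envelope scale check using the article's figures.
# The 1GW campus draw is an assumed round number for illustration.

PJM_QUEUE_GW = 220        # interconnection requests, April 2026 (per article)
US_PEAK_DEMAND_GW = 740   # approximate U.S. peak electricity demand (per article)
CAMPUS_DRAW_GW = 1.0      # assumed draw of a single large AI data center campus

queue_share = PJM_QUEUE_GW / US_PEAK_DEMAND_GW
equivalent_campuses = PJM_QUEUE_GW / CAMPUS_DRAW_GW

print(f"Queue as share of U.S. peak demand: {queue_share:.0%}")               # ~30%
print(f"Equivalent 1GW campuses waiting in queue: {equivalent_campuses:.0f}")  # 220
```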
Google, which signed a 5GW compute capacity deal with Anthropic in May 2026 and committed $5B+ to a Texas data center to support it, is among the most exposed. [The company's blog post says] the Texas facility targets 500MW by 2026, but regional grid constraints mean that even fully funded projects face years-long interconnection studies.
Hyperscaler Capex vs. Grid Reality
Major hyperscalers collectively spent over $90B on capex in Q1 2026 alone, much of it on AI infrastructure. [As previously reported] that spending outpaces the rate at which new generation and transmission can be brought online. The result: data centers are being built faster than the grid can connect them, leading to stranded capacity and rising costs.
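To see why the two curves diverge, here is a hedged sketch of what that quarterly capex could imply in new load. The $10M-$12M per MW all-in build cost is an illustrative assumption, not a figure from this article, and much of the capex goes to chips and servers rather than facilities.

```python
# Hedged conversion of quarterly capex into implied new data center load.
# ASSUMPTION: $10M-$12M per MW all-in build cost (illustrative, not from the article).
# Much of hyperscaler capex is chips and servers, so treat this as an upper bound.

QUARTERLY_CAPEX_USD = 90e9       # Q1 2026 hyperscaler capex (per article)
COST_PER_MW_USD = (10e6, 12e6)   # assumed all-in cost range per MW

implied_gw = [QUARTERLY_CAPEX_USD / cost / 1000 for cost in COST_PER_MW_USD]
print(f"Implied new load per quarter: {implied_gw[1]:.1f}-{implied_gw[0]:.1f} GW")
# Even a few GW per quarter of new load outruns transmission that takes 3-7 years to build.
```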
Microsoft has begun co-locating small modular reactors (SMRs) at data center sites, and Amazon has signed power purchase agreements for 2.5GW of new solar and wind. But these projects take 4-6 years to reach commercial operation. The gap between capital deployment and grid readiness is now the single largest risk factor for AI scaling timelines.
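A minimal sketch of that timing gap, assuming a hypothetical 1.5-2.5 year data center build; the interconnection and generation ranges are the article's.

```python
# Sketch of the gap between when a facility is built and when it can be powered.
# ASSUMPTION: 1.5-2.5 year data center build; the other ranges are the article's.

DATA_CENTER_BUILD_YEARS = (1.5, 2.5)   # assumed shell and fit-out timeline
INTERCONNECTION_YEARS = (3, 7)         # transmission/interconnection delays (per article)
NEW_GENERATION_YEARS = (4, 6)          # SMR and PPA projects to commercial operation (per article)

def wait_after_buildout(build_years, power_years):
    """Best- and worst-case years a finished facility waits for grid power."""
    best = max(0, power_years[0] - build_years[1])   # fastest power vs. slowest build
    worst = max(0, power_years[1] - build_years[0])  # slowest power vs. fastest build
    return best, worst

print("Wait for interconnection (years):", wait_after_buildout(DATA_CENTER_BUILD_YEARS, INTERCONNECTION_YEARS))
print("Wait for new generation (years):", wait_after_buildout(DATA_CENTER_BUILD_YEARS, NEW_GENERATION_YEARS))
```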
The Unique Take: Silicon Is Solved, Copper Is the Constraint
The AP wire coverage frames this as an environmental or regulatory story. The structural take is different: the AI industry has solved chip supply (Nvidia, AMD, Google TPU), networking (InfiniBand, Ethernet), and cooling (liquid, immersion). The remaining bottleneck is the 100-year-old technology of copper wires and steel towers. No amount of model optimization or inference efficiency gains can reduce the physical demand for electrons at training time.
[POWER Magazine notes] that transformer manufacturers are booked through 2029, and the skilled labor shortage for high-voltage line construction is acute. The AI buildout is now a civil engineering problem.
What to watch
Watch for PJM's Q3 2026 interconnection queue update and whether FERC intervenes on queue reform. Also track Google's Texas data center permitting milestones; if interconnection clears before 2027, it would signal that grid constraints are easing.