gentic.news — AI News Intelligence Platform
Figure: Bar chart showing estimated capacity costs in PJM utility zones, projected through 2033.

AI Data Centers Face 4-Year Post-Approval Delays, PJM Data Shows

PJM data shows AI data centers face 4-year post-approval delays, longer than the queue, threatening $700B CapEx plans.

Source: news.google.com via gn_ai_data_center, dck_news · Single Source

TL;DR

Post-approval delays now exceed queue wait times. · PJM data reveals 4-year lag for AI data centers. · Infrastructure bottleneck threatens $700B AI CapEx plans.

PJM data reveals AI infrastructure projects now spend more time waiting after interconnection approval than in the queue itself. Post-approval delays have stretched to 4 years for some projects, per PJM's latest interconnection queue report.

Key facts

  • Post-approval delays average 3-4 years for AI data centers.
  • PJM covers 65 million people across 13 states.
  • Google's Texas facility for Anthropic faces extended timelines.
  • Transformer lead times now 18-24 months.
  • $700B AI CapEx pipeline threatened by grid bottlenecks.

The Post-Approval Bottleneck

New data from PJM Interconnection — the grid operator covering 65 million people across 13 states — shows AI data center projects face an average of 3-4 years of delays after receiving interconnection approval. This post-approval wait now exceeds the time spent in the interconnection queue itself [According to Data Center Knowledge].

The bottleneck threatens the $700B AI CapEx pipeline announced by Google, Meta, and Microsoft. Google's $5B Texas facility for Anthropic is among projects facing extended timelines [per the source]. The data suggests capacity constraints, not just regulatory hurdles, are the primary driver.

Why This Matters for AI Buildout

The conventional narrative blames interconnection queue backlogs for slowing AI infrastructure. PJM's data flips that story: the queue is no longer the binding constraint. The real bottleneck comes after approval, in transformer availability, construction labor, and lead times for grid interconnection equipment.

Meta's $60B+ AI spend in 2025 and Google's 7 new data center projects identified in May 2026 all face this post-approval wall. The CNAS report from May 11, 2026 warned that chip supply trails CapEx — this PJM data shows the grid side is equally constrained [as previously reported].

What the Data Doesn't Say

PJM did not disclose specific project names or the exact percentage of AI-dedicated projects in the queue. The 4-year figure is an average; some projects may clear faster, others stall indefinitely. The source notes that grid interconnection equipment, transformers and switchgear in particular, now carries 18-24 month lead times on its own.

The Structural Risk

For AI labs planning 2027-2028 model training runs, this delay means power commitments made today won't materialize until 2030. Google, Meta, and Microsoft are already competing for the same transformer supply and construction crews. The PJM data suggests the AI infrastructure buildout is supply-constrained on the grid side, not demand-constrained.
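The timeline arithmetic above can be sketched directly. This is a minimal illustration, not a model: the function name and the assumption that queue wait and post-approval delay add serially are ours, and the 4-year post-approval figure is the PJM average cited in the article.

```python
def earliest_power_on(commit_year: int,
                      queue_years: int,
                      post_approval_years: int) -> int:
    """Earliest year a project can energize, assuming the queue wait
    and the post-approval delay are strictly sequential (an assumption
    for illustration; real projects may overlap some work)."""
    return commit_year + queue_years + post_approval_years

# A power commitment made in 2026, even with no residual queue wait,
# lands at 2030 under the 4-year post-approval delay PJM reports.
print(earliest_power_on(2026, 0, 4))  # -> 2030
```

Under these assumptions, a lab targeting a 2027 or 2028 training run cannot rely on capacity committed today, which is the structural risk the section describes.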

What to watch

Watch for PJM's Q3 2026 interconnection queue report and whether Google, Meta, or Microsoft announce alternative power strategies — on-site gas generation or nuclear co-location — to bypass grid delays. Also track transformer manufacturer lead times for signs of easing.


Sources cited in this article

  1. PJM's latest interconnection queue report
  2. Data Center Knowledge
  3. The CNAS report of May 11, 2026
Source: gentic.news

AI-assisted reporting. Generated by gentic.news from 3 verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

The PJM data reframes the AI infrastructure bottleneck. The narrative has focused on interconnection queue backlogs, a regulatory problem solvable by policy reform. But post-approval delays are structural: transformer manufacturing capacity hasn't scaled with demand, and construction labor is finite. This means even if FERC or PJM streamline the queue, projects still face 3-4 year build times.

Compare this to the CNAS report from May 11, 2026, which warned that chip supply trails CapEx. The grid side is the mirror problem: power delivery capacity trails power demand. For AI labs, this creates a multi-year lag between announcing a data center and actually getting power to it. That lag directly impacts model training timelines: a 2027 training run requiring 500MW needs power commitments made today.

The PJM data also exposes a coordination failure. Google, Meta, and Microsoft are competing for the same transformers and crews, driving up costs and lead times. No single company can solve this; it requires grid-level capacity planning, which PJM and other RTOs were not designed for in an era of explosive, concentrated load growth from a small number of hyperscale customers.
