How to Build One
Site selection, permitting, the 18–36 month construction reality, vendor selection, and the realistic capex of a 100 MW AI campus. This is what an actual greenfield project looks like, end to end.
1 · Step zero — what are you actually building?
Before any site is even considered, three questions need clear answers, because they determine every downstream decision:
- Total power target. 20 MW? 200 MW? 2 GW? This sets the entire project's character — small builds use existing utility connections, large builds require new substations and 5+ year planning horizons.
- Workload profile. Pure training (single-tenant, batch, fault-tolerant) or also inference (multi-tenant, latency-sensitive, always-on)? The two demand different network topologies and uptime tiers.
- Time-to-power vs cost-of-power. Getting online faster means paying more per MW; cheaper power means waiting longer or accepting worse latency. Most operators are buying time today (the sketch after this list makes the trade concrete).
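To see why operators pay up for speed, here is a minimal back-of-envelope sketch in Python. Every number in it (the horizon, power prices, and the $/MW-hour value of capacity online) is an illustrative assumption, not a figure from this lesson:

```python
# Back-of-envelope: time-to-power vs cost-of-power.
# All inputs are illustrative assumptions, not figures from this lesson.

def site_value(months_to_power: int,
               power_cost_mwh: float,
               horizon_months: int = 60,
               it_load_mw: float = 100.0,
               value_per_mw_hour: float = 400.0) -> float:
    """Net value of a site over a fixed planning horizon.

    Revenue proxy: each MW of IT load online earns a notional $/hour
    (think sold GPU-hours). Cost: energy at $/MWh. Capex ignored.
    """
    hours_online = (horizon_months - months_to_power) * 730  # ~730 h/month
    revenue = hours_online * it_load_mw * value_per_mw_hour
    energy_cost = hours_online * it_load_mw * power_cost_mwh
    return revenue - energy_cost

fast = site_value(months_to_power=18, power_cost_mwh=80.0)   # quick, pricey power
cheap = site_value(months_to_power=48, power_cost_mwh=45.0)  # cheap, slow power
print(f"fast site:  ${fast/1e6:,.0f}M over the horizon")   # ~$981M
print(f"cheap site: ${cheap/1e6:,.0f}M over the horizon")  # ~$311M
```

Under these assumptions, the extra 30 months online swamps the $35/MWh price advantage, which is exactly why most operators are buying time.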
2 · Site selection
Power availability
For 100 MW+ builds, "available" usually means: the utility can deliver new capacity within 3–5 years, local generation and transmission can support it, and you've signed a long-term power purchase (offtake) agreement. Northern Virginia, the Dublin metro, Singapore, and Frankfurt all have multi-year queues; emerging markets (Mumbai, São Paulo, Phoenix, Dallas, Columbus, Atlanta) are racing to absorb demand.
Climate
Cold or dry climates allow free cooling for 4,000+ hours per year, slashing PUE. That's why so many hyperscale builds end up in Iowa, the Nordic countries, the Pacific Northwest, or high desert (Reno); even hot-but-dry markets like Phoenix benefit, since low humidity makes evaporative cooling effective.
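A quick sketch of what those hours do to the power bill. Only the 4,000 free-cooling hours come from the text; the per-mode PUE values and power price are illustrative assumptions:

```python
# Blended PUE from free-cooling hours -- illustrative assumptions only.
HOURS_PER_YEAR = 8760
free_hours = 4000                 # economizer hours (from the text)
pue_free, pue_mech = 1.10, 1.40   # assumed PUE in each cooling mode
it_load_mw = 100.0
price_per_mwh = 60.0              # assumed wholesale power price

blended_pue = (free_hours * pue_free +
               (HOURS_PER_YEAR - free_hours) * pue_mech) / HOURS_PER_YEAR
# Annual savings vs running mechanical cooling all year:
savings_mwh = it_load_mw * HOURS_PER_YEAR * (pue_mech - blended_pue)
print(f"blended PUE: {blended_pue:.3f}")                          # ~1.263
print(f"annual savings: ${savings_mwh * price_per_mwh / 1e6:.1f}M")  # ~$7.2M
```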
Latency
For training, latency to peering doesn't matter much — moving data in/out of the cluster is bursty. For inference, you want to be 5–20 ms from major user populations: Northern Virginia for US East, Hillsboro/Quincy for US West, Dublin/Frankfurt for Europe.
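One way to sanity-check those targets: light in fiber travels at roughly 200,000 km/s, so round-trip time is about 1 ms per 100 km of fiber path before any router hops. A small sketch (the 2 ms hop overhead is an assumption):

```python
# RTT vs fiber distance. Rule of thumb: light in fiber ~200,000 km/s.
def rtt_ms(fiber_km: float, hop_overhead_ms: float = 2.0) -> float:
    """Round-trip time over a fiber path, plus an assumed flat
    router/queueing overhead per path."""
    return 2 * fiber_km / 200_000 * 1000 + hop_overhead_ms

for km in (200, 500, 1000, 2000):
    print(f"{km:>5} km fiber path -> ~{rtt_ms(km):.0f} ms RTT")
# A 5-20 ms budget therefore puts the site within roughly
# 300-1,800 km of fiber distance from its users.
```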
Regulatory and political
Tax abatements (Texas, Virginia, Ohio), data localization rules (EU), and local opposition to noise and water use (Loudoun County moratoria) can each kill a project. Assume permitting, not construction, sets the schedule.
3 · Permitting
The most common categories needed:
- Land use and zoning: rezoning, special-use permits, site plan approval
- Building and construction permits
- Environmental: air permits for diesel backup generators, stormwater discharge, wetlands review
- Water and wastewater service agreements
- Utility interconnection: the substation and transmission work has its own approval track
4 · The realistic timeline
For a conventional greenfield build, expect roughly 24–36 months from site control to first racks energized. Hyperscalers compress this aggressively: Meta has demonstrated ~12-month builds with prefab modular designs (the new "DC 4.0" design unveiled in 2024), and Microsoft uses a "Heartbeat" reference design that it has now repeated dozens of times.
5 · Who actually builds this
General contractors
A handful of firms dominate hyperscale work in the US: DPR Construction, Holder Construction, Skanska, Mortenson, Whiting-Turner, and Turner.
MEP engineering
JBA Consulting Engineers, Bala Consulting Engineers, kW Mission Critical, Syska Hennessy, Stantec, Burns & McDonnell.
Power equipment
Schneider Electric, Vertiv, Eaton, ABB, Siemens, Cummins / Caterpillar (generators), Mitsubishi, Hitachi.
Cooling equipment
Vertiv, Stulz, Schneider (Liebert), Trane, Carrier; for direct-to-chip liquid cooling: CoolIT, Asetek, JetCool; for immersion: Submer, GRC, LiquidStack.
IT racks / structured cabling
Vertiv (Knurr/Geist), Schneider (APC), Eaton, Panduit, Legrand, Corning, CommScope.
6 · Capex — what 100 MW costs
The IT (especially GPUs) dominates the bill. A 100 MW campus might fit ~30,000 H100s at ~$30k each — that's $900M just on chips, before networking, storage, or power.
Source: SemiAnalysis cost-of-buildout reports; CoreWeave S-1 filings; Synergy Research Group hyperscale capex tracking; industry estimates from JLL, CBRE.
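To tie the lesson's numbers together, a minimal roll-up. Every input is a figure quoted in this lesson (the shell and all-in ranges come from the TL;DR below); the closing comment on the gap is an inference, not a sourced breakdown:

```python
# Capex roll-up for a 100 MW campus, using only this lesson's figures.
mw = 100
shell_per_mw_low, shell_per_mw_high = 8e6, 12e6   # $8-12M/MW shell-only
gpus, gpu_price = 30_000, 30_000                  # ~30k H100s at ~$30k each
allin_low, allin_high = mw * 30e6, mw * 50e6      # $30-50M/MW all-in

shell = (mw * shell_per_mw_low, mw * shell_per_mw_high)
silicon = gpus * gpu_price                        # $900M on chips alone

print(f"shell:  ${shell[0]/1e9:.1f}-{shell[1]/1e9:.1f}B")
print(f"GPUs:   ${silicon/1e9:.1f}B")
print(f"all-in: ${allin_low/1e9:.0f}-{allin_high/1e9:.0f}B")
# The gap between shell + GPUs (~$1.7-2.1B) and the $3-5B all-in
# figure is networking, storage, CPUs, electrical fit-out, land,
# and contingency -- silicon still dominates the bill.
```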
Lesson 08 — TL;DR
- Decide: power target, workload profile, time-vs-cost trade. Then site selection.
- Power availability is the #1 constraint. Permitting is the long tail.
- Realistic greenfield timeline: 24–36 months. Hyperscalers compress to ~12 with modular designs.
- Vendor stack: general contractor + MEP engineer + power gear + cooling gear + IT racks.
- Capex: $8–12M/MW shell-only, $30–50M/MW with modern GPUs. Silicon dominates.