Lesson 08/12 · Advanced · 20 min read · 4 diagrams

How to Build One

Site selection, permitting, the 18-36 month construction reality, vendor selection, and the realistic capex of a 100MW AI campus. This is what an actual greenfield project looks like, end to end.

1 · Step zero — what are you actually building?

Before any site is even considered, three questions need clear answers, because they determine every downstream decision:

  1. Total power target. 20 MW? 200 MW? 2 GW? This sets the entire project's character — small builds use existing utility connections, large builds require new substations and 5+ year planning horizons.
  2. Workload profile. Pure training (single-tenant, batch, fault-tolerant) or also inference (multi-tenant, latency-sensitive, always-on)? The two demand different network topologies and uptime tiers.
  3. Time-to-power vs cost-of-power. Faster online = pay more per MW. Cheaper power = wait longer or accept worse latency. Most operators are buying time today.
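A minimal sketch of trade-off 3, with illustrative numbers of my own (the $20/MWh premium and ~$400k/MW-month GPU rental revenue are assumptions, not figures from this lesson):

```python
# Illustrative sketch of the time-to-power vs cost-of-power trade.
# All figures below are assumptions for the example.
mw = 100                        # campus size
ppa_years = 10                  # term of the power contract
premium_per_mwh = 20            # extra $/MWh paid for a faster interconnect
delay_months = 18               # extra wait for the cheaper power option
revenue_per_mw_month = 400_000  # assumed GPU rental revenue per MW-month

# Cost of buying time: pay the premium over the whole contract term.
premium = mw * ppa_years * 8760 * premium_per_mwh

# Cost of waiting: revenue forgone while the site has no power.
lost = mw * delay_months * revenue_per_mw_month

print(f"premium paid for speed:  ${premium / 1e6:.0f}M")
print(f"revenue lost by waiting: ${lost / 1e6:.0f}M")
```

Under these assumptions the forgone revenue ($720M) dwarfs the power premium ($175M), which is why most operators are buying time today.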

2 · Site selection

Site Selection — Approximate Decision Weights

  • Power available: 30%
  • Latency to peering: 18%
  • Tax incentives: 12%
  • Climate (free cooling): 12%
  • Water access: 10%
  • Fiber routes: 8%
  • Land cost: 6%
  • Local opposition risk: 4%

Approximate weights — they vary widely by operator. Hyperscalers value power above all; latency-sensitive deployments (financial trading, inference) rebalance toward peering.
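The weights above lend themselves to a simple weighted-sum comparison. A sketch with two entirely hypothetical candidate sites (the 0–10 ratings are invented for illustration):

```python
# Decision weights from the chart above (sum to 1.0).
WEIGHTS = {
    "power_available": 0.30, "latency_to_peering": 0.18,
    "tax_incentives": 0.12, "climate_free_cooling": 0.12,
    "water_access": 0.10, "fiber_routes": 0.08,
    "land_cost": 0.06, "opposition_risk": 0.04,  # higher = lower risk
}

def site_score(ratings):
    """Weighted sum of 0-10 ratings; missing criteria score 0."""
    return sum(w * ratings.get(k, 0) for k, w in WEIGHTS.items())

# Hypothetical candidates: power-rich rural site vs peering-rich metro site.
rural = {"power_available": 9, "climate_free_cooling": 8, "water_access": 7,
         "latency_to_peering": 3, "tax_incentives": 8, "fiber_routes": 5,
         "land_cost": 9, "opposition_risk": 8}
metro = {"power_available": 4, "climate_free_cooling": 4, "water_access": 5,
         "latency_to_peering": 9, "tax_incentives": 5, "fiber_routes": 9,
         "land_cost": 3, "opposition_risk": 4}

print(f"rural: {site_score(rural):.2f}, metro: {site_score(metro):.2f}")
```

With a 30% weight on power, the power-rich site wins even with mediocre latency — consistent with how hyperscalers actually choose.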

Power availability

For 100 MW+ builds, "available" usually means three things: the utility can deliver new capacity within 3–5 years, local generation and transmission can support it, and you've signed a long-term offtake agreement. Northern Virginia, the Dublin metro, Singapore, and Frankfurt all have multi-year interconnect queues; emerging data-center markets (Mumbai, São Paulo, Phoenix, Dallas, Columbus, Atlanta) are racing to absorb the demand.

Climate

Cold or dry climates allow free cooling for 4,000+ hours per year, slashing PUE. That's why so many hyperscale builds end up in Iowa, the Nordic countries, the Pacific Northwest, or the dry desert West (Reno, Phoenix).

Latency

For training, latency to peering doesn't matter much — moving data in/out of the cluster is bursty. For inference, you want to be 5–20 ms from major user populations: Northern Virginia for US East, Hillsboro/Quincy for US West, Dublin/Frankfurt for Europe.
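That 5–20 ms budget maps almost directly onto fiber route distance: light in fiber travels at roughly two-thirds of c, about 200,000 km/s, so round-trip time is close to 1 ms per 100 km of route before any switching or queuing. A small sketch:

```python
C_FIBER_KM_PER_S = 200_000  # approx. speed of light in fiber (~2/3 c)

def min_fiber_rtt_ms(route_km):
    """Best-case round-trip time; real routes add hops and detours."""
    return 2 * route_km / C_FIBER_KM_PER_S * 1000

for km in (100, 500, 1000, 2000):
    print(f"{km:>5} km route: >= {min_fiber_rtt_ms(km):.1f} ms RTT")
```

So a 5–20 ms inference target effectively means siting within a few hundred to ~2,000 route-km of the target user population.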

Regulatory and political

Tax abatements (Texas, Virginia, Ohio), data localization rules (EU), local opposition to noise and water use (the Loudoun County moratoria) — any one of these can kill a project. Assume permitting, not construction, drives the schedule.

3 · Permitting

The most common categories needed:

  • Land use / zoning: 3–12 mo (local planning commission)
  • Building permit: 3–9 mo (structural, fire, accessibility)
  • Environmental review: 6–24 mo (EPA or equivalent; longer in the EU)
  • Utility interconnect: 1–5 yr (often the long pole)
  • Air permit (generators): 3–12 mo (EPA Title V or state equivalent)
  • Water rights: varies (critical in arid regions)

4 · The realistic timeline

Realistic Greenfield Build Timeline (phases overlap, months 0–36):

  • Site selection: 6 mo
  • Permitting: 18 mo
  • Power interconnect: 24 mo
  • Construction: 24 mo
  • Equipment install: 6 mo
  • Commissioning: 4 mo

Power interconnect is the long pole. In tier-1 markets (Northern Virginia, Dublin, Singapore) the wait can stretch to 5+ years.
Greenfield Tier-III equivalent, 100 MW first phase. Phases overlap — you start construction before all permits land.
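The overlap can be sketched as (start month, duration) pairs. Only the durations come from the timeline above; the start offsets are my own illustrative assumptions:

```python
# (start_month, duration_months); starts assumed, durations from the timeline.
PHASES = {
    "site_selection":     (0, 6),
    "permitting":         (4, 18),   # starts before site work fully closes
    "power_interconnect": (4, 24),
    "construction":       (10, 24),  # starts before all permits land
    "equipment_install":  (30, 6),
    "commissioning":      (33, 4),
}

finish = {name: start + dur for name, (start, dur) in PHASES.items()}
print("project online at month", max(finish.values()))
```

With these offsets the project comes online around month 37 — the 24–36 month reality. Run the phases strictly in sequence instead and the sum of durations is 82 months, which is why nobody builds serially.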

Hyperscalers compress this aggressively. Meta has demonstrated ~12-month builds with prefab modular designs (the new "DC 4.0" design unveiled in 2024). Microsoft uses a "Heartbeat" reference design that they've now repeated dozens of times.

5 · Who actually builds this

General contractors

DPR Construction, Holder Construction, Skanska, Mortenson, Whiting-Turner, Turner. A handful of firms dominate hyperscale work in the US.

MEP engineering

JBA Consulting Engineers, Bala Consulting Engineers, kW Mission Critical, Syska Hennessy, Stantec, Burns & McDonnell.

Power equipment

Schneider Electric, Vertiv, Eaton, ABB, Siemens, Cummins / Caterpillar (generators), Mitsubishi, Hitachi.

Cooling equipment

Vertiv, Stulz, Schneider (Liebert), Trane, Carrier; for DLC: CoolIT, Asetek, JetCool, Submer (immersion), GRC, LiquidStack.

IT racks / structured cabling

Vertiv (Knurr/Geist), Schneider (APC), Eaton, Panduit, Legrand, Corning, CommScope.

6 · Capex — what 100 MW costs

  • Greenfield (no IT): $8–12M/MW (building, mechanical/electrical only)
  • With H100 IT load: $25–40M/MW (including GPUs)
  • With B200 IT load: $30–50M/MW (newer, denser, costlier)
  • 100 MW total project: $2.5–5B (turnkey, including silicon)

The IT (especially GPUs) dominates the bill. A 100 MW campus might fit ~30,000 H100s at ~$30k each — that's $900M just on chips, before networking, storage, or power.
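A back-of-envelope check on that arithmetic. The ~3.3 kW all-in per GPU (host server, fabric, and cooling overhead amortized per accelerator) is an assumed figure chosen to make 100 MW work out to ~30,000 GPUs:

```python
campus_mw = 100
kw_per_gpu_all_in = 3.3   # assumption: H100 + host, networking, cooling share
gpu_price = 30_000        # ~$30k per H100

gpus = int(campus_mw * 1000 / kw_per_gpu_all_in)
chip_capex = gpus * gpu_price

print(f"~{gpus:,} GPUs, ${chip_capex / 1e9:.2f}B on silicon alone")
```

That ~$0.9B is before networking, storage, power distribution, or the building itself — silicon alone is roughly a quarter of the low-end turnkey figure.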

Source: SemiAnalysis cost-of-buildout reports; CoreWeave S-1 filings; Synergy Research Group hyperscale capex tracking; industry estimates from JLL, CBRE.

Lesson 08 — TL;DR

  • Decide: power target, workload profile, time-vs-cost trade. Then site selection.
  • Power availability is the #1 constraint. Permitting is the long tail.
  • Realistic greenfield timeline: 24–36 months. Hyperscalers compress to ~12 with modular designs.
  • Vendor stack: general contractor + MEP engineer + power gear + cooling gear + IT racks.
  • Capex: $8–12M/MW shell-only, $30–50M/MW with modern GPUs. Silicon dominates.
