Data Center Fundamentals
What a data center actually is — physical building, power feed, cooling plant, networking gear, computers — and the four fundamental shifts that turned them from boring corporate basements into the most strategically important infrastructure of the AI era.
1 · What a data center actually is
A data center is a building purpose-built to keep computers running 24/7. Strip away the mystique and there are four physical things inside, all of roughly equal importance:
- Power. A connection to the electric grid (often a dedicated substation), backed up by uninterruptible power supplies (UPS) and diesel or gas generators that can run the facility for days if the grid fails.
- Cooling. Servers turn ~100% of the electricity they consume into heat. A chiller plant, cooling towers, and either air handlers (CRAC/CRAH) or liquid loops carry that heat outside.
- Networking. Fiber from carriers enters at meet-me rooms, fans out through core/spine/leaf switches, and eventually plugs into each server.
- The IT itself. Racks of servers, storage, and switches. This is what the power and cooling exist to serve.
2 · The Tier classification
The most widely cited reliability framework comes from the Uptime Institute, a consultancy that has graded facility designs since 1995. Tiers I through IV measure what happens when something fails.
For AI workloads, Tier III is the de facto floor — anything less and you lose training runs to unplanned downtime. Most hyperscale AI facilities are designed to Tier III or above, though hyperscalers often skip the formal Uptime certification because they've built their own equivalent specs (Open Compute Project standards).
Source: Uptime Institute, Tier Standard: Topology (current edition). See uptimeinstitute.com/tiers.
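To make the ladder concrete, here is a minimal Python sketch that paraphrases the commonly cited design attributes of each Tier — a summary for orientation, not the official standard text — with a purely illustrative helper for the "AI floor" idea:

```python
# Paraphrased Tier attributes (summary, not the official Uptime Institute text).
TIERS = {
    "I":   {"paths": "single",               "redundancy": "N",
            "concurrently_maintainable": False, "fault_tolerant": False},
    "II":  {"paths": "single",               "redundancy": "N+1 components",
            "concurrently_maintainable": False, "fault_tolerant": False},
    "III": {"paths": "multiple, one active", "redundancy": "N+1",
            "concurrently_maintainable": True,  "fault_tolerant": False},
    "IV":  {"paths": "multiple, all active", "redundancy": "2N(+1)",
            "concurrently_maintainable": True,  "fault_tolerant": True},
}

def meets_ai_floor(tier: str) -> bool:
    """Tier III or above: power/cooling can be maintained without taking IT down."""
    return TIERS[tier]["concurrently_maintainable"]

print(meets_ai_floor("II"))   # False
print(meets_ai_floor("III"))  # True
```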
3 · The hierarchy: campus → U
When you read about "Stargate" or "Hyperion" or "Project Rainier", the term usually refers to a campus — the largest level of the hierarchy. Below the campus sit buildings, then data halls, then pods, then racks, and finally individual rack units (U). Knowing these layers helps you parse any infrastructure announcement, as the sketch below shows.
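A rough way to see how the layers multiply: the sketch below rolls per-rack power up to a campus total. Only the ~120 kW/rack figure appears later in this lesson; every per-level count is a made-up assumption for illustration, not a real campus layout.

```python
# Illustrative bottom-up roll-up of IT power through the hierarchy.
KW_PER_RACK = 120          # GB200 NVL72-class rack (from this lesson)
RACKS_PER_POD = 16         # assumption
PODS_PER_HALL = 8          # assumption
HALLS_PER_BUILDING = 4     # assumption
BUILDINGS_PER_CAMPUS = 12  # assumption

campus_it_mw = (KW_PER_RACK * RACKS_PER_POD * PODS_PER_HALL *
                HALLS_PER_BUILDING * BUILDINGS_PER_CAMPUS) / 1000
print(f"Campus IT load: {campus_it_mw:,.0f} MW")  # ~737 MW, before cooling overhead
```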
4 · AI changed everything
From roughly 2005 to 2020, data center design was a slow-moving discipline. Power densities crept up, cooling became more efficient, but the basic shape was stable.
Then came the GPU boom. A modern AI rack pulls 10× more power than the email-and-database racks it replaced. That single shift cascades through every other decision — cooling, electrical, networking, even site selection.
The four shifts that matter
1 · Power density exploded
A single NVIDIA GB200 NVL72 rack draws ~120 kW — more than 20 traditional racks combined. This forces dedicated busways, larger PDUs, and rethinking the whole electrical room.
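To see what 120 kW means at the electrical level, here is a quick three-phase current estimate; the 415 V line-to-line feed, 0.95 power factor, and the 6 kW "traditional" rack are assumptions for illustration.

```python
import math

# Approximate three-phase current draw: I = P / (sqrt(3) * V_LL * PF).
def rack_amps(kw: float, v_ll: float = 415, pf: float = 0.95) -> float:
    return kw * 1000 / (math.sqrt(3) * v_ll * pf)

print(f"Traditional 6 kW rack: {rack_amps(6):.0f} A")    # ~9 A
print(f"GB200 NVL72 ~120 kW:  {rack_amps(120):.0f} A")   # ~176 A
```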
2 · Air cooling broke down
Above ~30–40 kW per rack, air physically can't move heat away fast enough. Direct liquid cooling (DLC) — pumping fluid through cold plates touching each chip — became mandatory for Blackwell-class hardware.
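The physics behind that limit can be sketched with the basic heat-transport relation Q = ṁ·cp·ΔT; the air properties and the 15 °C inlet-to-outlet temperature rise below are textbook-style assumptions, not measurements from any specific facility.

```python
# Volumetric airflow needed to carry a rack's heat away at a given temperature rise.
RHO_AIR = 1.2    # kg/m^3, air density (assumed)
CP_AIR = 1005    # J/(kg*K), specific heat of air
DELTA_T = 15     # K, assumed server inlet-to-outlet rise

def airflow_m3s(rack_kw: float) -> float:
    return rack_kw * 1000 / (RHO_AIR * CP_AIR * DELTA_T)

for kw in (10, 40, 120):
    flow = airflow_m3s(kw)
    print(f"{kw:>4} kW rack -> {flow:5.2f} m^3/s (~{flow * 2119:,.0f} CFM)")
# A 120 kW rack needs ~6.6 m^3/s (~14,000 CFM) through a single cabinet -- hence cold plates.
```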
3 · The network became a bottleneck
Training a frontier model means thousands of GPUs synchronizing gradients many times per second. Standard 100 GbE doesn't cut it — you need InfiniBand at 400 or 800 Gbps, or NVIDIA's NVLink fabric, with rail-optimized topology.
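A back-of-envelope sketch of why: the 70B-parameter model size, bf16 gradients, and the ring all-reduce traffic approximation below are all illustrative assumptions, not a description of any particular training setup.

```python
# Rough per-GPU traffic for one full-gradient all-reduce, and how long it takes per fabric.
params = 70e9                # hypothetical 70B-parameter model
grad_bytes = params * 2      # bf16 gradients: ~140 GB

# Ring all-reduce: each GPU sends and receives roughly 2*(n-1)/n of the
# gradient volume, which approaches 2x for large rings.
traffic_per_gpu = 2 * grad_bytes

for name, gbps in (("100 GbE", 100), ("400 Gbps IB", 400), ("800 Gbps IB", 800)):
    seconds = traffic_per_gpu / (gbps / 8 * 1e9)
    print(f"{name:>12}: {seconds:5.1f} s just to sync gradients once")
```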
4 · Scale moved from MW to GW
Project Rainier (Anthropic on AWS) has been announced at 2.2 GW. Stargate Phase 1 is targeting ~1.2 GW. Meta's Hyperion in Louisiana is planned for 2 GW. These numbers were unimaginable five years ago.
Source: Capacity figures: Amazon investor announcement (Nov 2025) for Project Rainier; Meta investor day and Reuters reporting for Hyperion Louisiana; Reuters / WSJ for Stargate Phase 1 (Abilene, TX).
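To translate those headline figures into hardware, here is an illustrative estimate; the PUE value and the assumption that the whole campus is filled with NVL72-class racks are mine, not reported numbers.

```python
# From a GW-scale campus figure down to rack and GPU counts (illustrative).
campus_gw = 2.0
pue = 1.25            # assumed facility overhead (cooling, distribution losses)
rack_kw = 120         # NVL72-class rack (from this lesson)
gpus_per_rack = 72

it_load_kw = campus_gw * 1e6 / pue
racks = it_load_kw / rack_kw
print(f"~{racks:,.0f} racks, ~{racks * gpus_per_rack:,.0f} GPUs")
# Roughly 13,000 racks and on the order of a million GPUs behind a 2 GW headline.
```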
5 · Vocabulary you must know after this lesson
- UPS: uninterruptible power supply; batteries that carry the load until the backup generators start.
- CRAC/CRAH: computer-room air conditioner / air handler; the units that move heat out of the data hall.
- Meet-me room: where carrier fiber enters the building and interconnects.
- Tier I–IV: Uptime Institute reliability grades; Tier III means concurrently maintainable.
- PDU / busway: rack- and row-level power distribution.
- DLC: direct liquid cooling; fluid pumped through cold plates on each chip.
- InfiniBand / NVLink: the high-bandwidth fabrics that replace standard Ethernet for training.
- U: one rack unit, the smallest slot in the hierarchy.
Lesson 01 — TL;DR
- A data center has 4 physical components: power, cooling, networking, IT.
- Uptime Institute Tiers I–IV measure fault tolerance. Tier III ≈ AI floor.
- Hierarchy: Campus → Building → Hall → Pod → Rack → U.
- AI made racks ~10× more power-dense, forcing liquid cooling and faster networks.
- Stargate / Hyperion / Project Rainier are campuses in the 1–2 GW range.