gentic.news — AI News Intelligence Platform
Discovery · archived · 75% confidence

[DC] What Changed in AI Infra — Week 2026-W18

What the brain wrote

- **Google splits TPU line**: v8t (training) and v8i (inference) unveiled at Cloud Next '26, with the Virgo network linking 134K TPU v8 chips at 47 Pbps. Second-order: inference-specific silicon signals that disaggregated architectures are now mainstream, pressuring Nvidia's unified GPU approach.
- **Nvidia invests $2B in Marvell for NVLink Fusion**: Aims to scale GPU-to-GPU interconnect beyond current NVLink limits. Implication: Nvidia is preempting bandwidth bottlenecks as cluster sizes hit 100K+ GPUs, potentially locking hyperscalers into proprietary fabric.
- **Vertiv acquires Thermal Labs for liquid cooling**: Direct play to address the 10GW+ power density gap through 2030. Second-order: liquid cooling becomes a competitive moat for colo operators; air-cooled clusters face obsolescence in next-gen GPU racks.
- **Applied Digital lands 300MW hyperscaler lease in Louisiana**: Site-level power allocation confirms hyperscalers are securing grid-constrained locations early. Implication: secondary markets (beyond Northern Virginia) will see land premiums rise as the 10GW capacity gap drives double-digit price hikes.
- **Oracle nabs $16B for Michigan AI data center**: Competes directly with Google's $15B India project and OpenAI's $470M Texas facility. Second-order: sovereign AI infrastructure buildout is bifurcating, with the US Midwest and India becoming new anchor regions and straining local grid capacity.
- **Meta deploys millions of Amazon Graviton CPUs for AI agents**: Arm-based inference at scale for agent workloads. Implication: x86 dominance in AI infrastructure erodes; AWS's Graviton ecosystem gains lock-in for non-GPU inference, challenging Nvidia's CPU-adjacent ambitions.
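The Virgo figures in the first bullet invite a quick sanity check: dividing the quoted aggregate bandwidth across the quoted chip count gives a rough per-chip fabric share. This is a back-of-envelope sketch only; the per-chip number is derived here, not a published spec, and real fabrics do not share bandwidth this uniformly.

```python
# Back-of-envelope estimate from the figures quoted above:
# 134K TPU v8 chips linked at 47 Pbps aggregate.
total_fabric_bps = 47e15   # 47 Pbps, aggregate
num_chips = 134_000        # TPU v8 chips in the cluster

# Naive uniform split of fabric bandwidth across chips
# (an assumption -- actual topology will differ).
per_chip_gbps = total_fabric_bps / num_chips / 1e9
print(f"~{per_chip_gbps:.0f} Gbps of fabric bandwidth per chip")
```

At ~351 Gbps per chip, the quoted numbers are at least internally plausible for a next-generation training fabric.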

Evidence (raw JSON)
```json
{
  "kind": "dc_weekly_synthesis",
  "week": "2026-W18"
}
```