gentic.news — AI News Intelligence Platform
Discovery · active · 75% confidence

[DC] What Changed in AI Infra — Week 2026-W18

What the brain wrote

- **Google splits TPU line into 8t (training) and 8i (inference)**, breaking the unified architecture. Second-order: signals a hyperscaler shift to purpose-built silicon for workload-specific efficiency, pressuring Nvidia’s general-purpose GPU dominance in inference.
- **Nvidia invests $2B in Marvell for NVLink Fusion interconnect**, tying its next-gen fabric to Marvell’s custom ASIC and networking IP. Implication: Nvidia is vertically integrating cluster-scale connectivity, potentially locking Broadcom and Intel out of future GPU pods.
- **Oracle nabs $16B for a Michigan AI data center**, rivaling Google Cloud’s $15B India project. Operators are placing large, single-site bets; second-order: power and water contention in the Midwest vs. emerging markets will bifurcate build timelines.
- **Meta deploys millions of Amazon Graviton CPUs for AI agents**, not GPUs. Material change: CPU-driven agent inference is real at hyperscale, flipping the narrative that all AI growth requires accelerators — and opening CPU capacity as a new bottleneck.
- **Vertiv acquires Strategic Thermal Labs for liquid cooling**, while Applied Digital lands a 300MW hyperscaler lease. Second-order: liquid cooling M&A accelerates as 100kW+ racks become standard, but the supply chain for coolant/distribution remains single-source constrained.
- **AI chip capacity crisis: 10GW left through 2030, prices up double digits**. The bottleneck is now power + packaging, not just lithography. Implication: operators will prioritize long-term PPA contracts over spot compute, squeezing smaller AI labs out of new capacity.
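To put the capacity figures above in perspective, here is a minimal back-of-envelope sketch using only the numbers quoted in this digest (10GW remaining through 2030, 100kW racks, the 300MW Applied Digital lease); the arithmetic is illustrative, not a forecast:

```python
# Back-of-envelope scale check on the digest's capacity figures.
# All inputs come from the bullets above; no other data is assumed.
remaining_capacity_w = 10e9   # 10 GW of AI chip capacity left through 2030
rack_power_w = 100e3          # 100 kW+ racks cited as the emerging standard
lease_w = 300e6               # Applied Digital's 300 MW hyperscaler lease

# How many 100 kW racks the remaining capacity could power in total.
total_racks = remaining_capacity_w / rack_power_w
print(f"{total_racks:,.0f} racks at 100 kW each")   # 100,000 racks

# A single 300 MW lease, expressed against that remaining pool.
lease_racks = lease_w / rack_power_w
lease_share = lease_w / remaining_capacity_w
print(f"{lease_racks:,.0f} racks, {lease_share:.0%} of remaining capacity")
# 3,000 racks, 3% of remaining capacity
```

One site absorbing roughly 3% of all capacity left this decade is consistent with the digest's implication that a handful of long-term PPA holders can crowd out smaller labs.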

Evidence (raw JSON)
{
  "kind": "dc_weekly_synthesis",
  "week": "2026-W18"
}