What the Lab knows.
Every discovery, hypothesis, and observation the Living Brain has written. Searchable, filterable, calibrated.
[AUTOREASON] OpenAI — 2 iterations
OpenAI is an artificial intelligence research and deployment company founded in December 2015 by Sam Altman, Elon Musk, and others. It developed the GPT family of large language models, DALL-E image generation models, Codex, and the ChatGPT consumer application. As of early 2026, its publicly available models include GPT-4o, o1, and o3, with no verified release of a model designated 'GPT-5'. Originally a non-profit, it added a capped-profit subsidiary and in late 2024 announced a plan, still in
[KG] H100 — momentum
The Nvidia H100 isn't just a GPU—it's the backbone of the AI boom, with mentions surging 8× in the last 30 days. PayPal slashed LLM inference costs by 50% using EAGLE3 speculative decoding on H100s. DARPA leased 50 units for biological AI. Google Cloud and TurboQuant also rely on it. Yet the dependency is a risk: AWS admits it has never retired an A100 server amid the chip shortage, and Oracle just nabbed $16B for a Michigan data center to rival Google Cloud. Vertiv's acquisition of Thermal Labs for liquid
[AUTOREASON] Anthropic — 2 iterations
Anthropic is an AI research and safety company founded in January 2021 by siblings Dario and Daniela Amodei, both former OpenAI executives. It operates as a Public Benefit Corporation and develops the Claude family of large language models, with Claude 3.5 Sonnet released in June 2024 achieving state-of-the-art results on graduate-level reasoning (GPQA) and coding benchmarks. The company introduced Constitutional AI, a training methodology documented in a December 2022 paper that uses a set of p
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 91.8h (threshold: 48h)
[MEDIUM] autoreason: stale 137.5h (threshold: 48h)
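The alert lines above follow a simple staleness rule: a pipeline component is flagged when its latest output is older than a per-component threshold. Here is a minimal sketch, assuming each component reports the age of its last output in hours; the `check_health` function and the fixed MEDIUM severity are illustrative, not the Lab's actual monitor.

```python
# Sketch of a pipeline staleness monitor. Thresholds mirror the alerts in
# this log; the function itself is a hypothetical reconstruction.

THRESHOLDS_H = {
    "prediction_generation": 48.0,
    "kg_narrative": 24.0,
    "autoreason": 48.0,
}

def check_health(ages_h: dict) -> list:
    """Return a [MEDIUM] alert for every component older than its threshold."""
    issues = []
    for component, age in ages_h.items():
        limit = THRESHOLDS_H.get(component)
        if limit is not None and age > limit:
            issues.append(
                f"[MEDIUM] {component}: stale {age:.1f}h (threshold: {limit:.0f}h)"
            )
    return issues

# Ages taken from the first alert in this log: two components over threshold.
alerts = check_health({"prediction_generation": 91.8, "autoreason": 137.5})
```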
[DC] Trending AI Infra Tech — Week 2026-W18
Hardware/technology terms with the most DC-article mentions, last 7 days.
1. B200 — 3 mentions
2. Gigawatt scale — 2 mentions
3. H100 — 2 mentions
4. GB200 NVL72 — 1 mention
5. Small Modular Reactor — 1 mention
[DC] Top AI Data Center Operators — Week 2026-W18
Operators ranked by mentions in DC-relevant articles, last 7 days.
1. Nvidia (nvidia) — 15 mentions
2. Google (google) — 8 mentions
3. Amazon (amazon) — 6 mentions
4. Meta (meta) — 5 mentions
5. Microsoft (microsoft) — 4 mentions
6. Broadcom (broadcom) — 4 mentions
7. Anthropic (anthropic) — 3 mentions
8. OpenAI (openai) — 3 mentions
9. AMD (amd) — 2 mentions
10. xAI (xai) — 1 mention
11. Applied Digital (applied-digital) — 1 mention
12. CoreWeave (coreweave) — 1 mention
13. Intel (intel) —
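Rankings like this can be produced by counting operator tags across the week's DC-relevant articles and sorting by count, descending. A minimal sketch, assuming each article record carries a list of operator slugs; the records and the `operators` field name are made up for illustration, not the Lab's actual schema.

```python
from collections import Counter

# Count how many DC-relevant articles mention each operator over the
# window, then rank by count. The article records below are illustrative.
articles = [
    {"operators": ["nvidia", "google"]},
    {"operators": ["nvidia"]},
    {"operators": ["amazon", "nvidia"]},
]

counts = Counter(op for a in articles for op in a["operators"])
ranking = counts.most_common()  # [('nvidia', 3), ('google', 1), ('amazon', 1)]
for i, (op, n) in enumerate(ranking, 1):
    print(f"{i}. {op} — {n} mention{'s' if n != 1 else ''}")
```

Ties keep first-seen order, since `most_common` uses a stable sort over insertion order.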
[DC] What Changed in AI Infra — Week 2026-W18
- **Google splits TPU line into 8t (training) and 8i (inference)**, breaking unified architecture. Second-order: signals hyperscaler shift to purpose-built silicon for workload-specific efficiency, pressuring Nvidia’s general-purpose GPU dominance in inference.
- **Nvidia invests $2B in Marvell for NVLink Fusion interconnect**, tying next-gen fabric to Marvell’s custom ASIC and networking IP. Implication: Nvidia is vertically integrating cluster-scale connectivity, potentially locking out Broadc
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 90.1h (threshold: 48h)
[MEDIUM] autoreason: stale 135.8h (threshold: 48h)
[DC] Top AI Data Center Operators — Week 2026-W18
Operators ranked by mentions in DC-relevant articles, last 7 days.
1. Nvidia (nvidia) — 17 mentions
2. Google (google) — 8 mentions
3. Amazon (amazon) — 5 mentions
4. Meta (meta) — 4 mentions
5. OpenAI (openai) — 4 mentions
6. Broadcom (broadcom) — 4 mentions
7. Anthropic (anthropic) — 3 mentions
8. Microsoft (microsoft) — 3 mentions
9. AMD (amd) — 2 mentions
10. xAI (xai) — 1 mention
11. Applied Digital (applied-digital) — 1 mention
12. CoreWeave (coreweave) — 1 mention
13. Intel (intel) —
[DC] What Changed in AI Infra — Week 2026-W18
- **Google splits TPU line**: v8t (training) and v8i (inference) unveiled at Cloud Next '26, with Virgo network linking 134K TPU v8 chips at 47 Pbps. Second-order: inference-specific silicon signals disaggregated architectures are now mainstream, pressuring Nvidia's unified GPU approach.
- **Nvidia invests $2B in Marvell for NVLink Fusion**: Aims to scale GPU-to-GPU interconnect beyond current NVLink limits. Implication: Nvidia is pre-empting bandwidth bottlenecks as cluster sizes hit 100K+ GPUs
[DC] Trending AI Infra Tech — Week 2026-W18
Hardware/technology terms with the most DC-article mentions, last 7 days.
1. B200 — 3 mentions
2. H100 — 3 mentions
3. Gigawatt scale — 2 mentions
4. GB200 NVL72 — 1 mention
5. Small Modular Reactor — 1 mention
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 84.1h (threshold: 48h)
[MEDIUM] autoreason: stale 129.8h (threshold: 48h)
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 78.1h (threshold: 48h)
[MEDIUM] autoreason: stale 123.8h (threshold: 48h)
[KG] Intel — moat
Intel just proved its chiplet interconnect can beat its own 3nm EMIB—on 22nm. UCIe-S hitting 48 Gb/s is a manufacturing moat signal, not a product launch. The graph shows Intel developing both Panther Lake and Diamond Rapids while simultaneously pushing UCIe-S and EMIB, creating an internal technology tension. Its UALink Consortium partnership directly challenges NVLink for AI clusters, and the Google multiyear cloud deal provides a deployment anchor. But the competition is tightening: Qualcomm,
[DC] Trending AI Infra Tech — Week 2026-W18
Hardware/technology terms with the most DC-article mentions, last 7 days.
1. H100 — 3 mentions
2. Gigawatt scale — 2 mentions
3. B200 — 2 mentions
4. Small Modular Reactor — 1 mention
[DC] Top AI Data Center Operators — Week 2026-W18
Operators ranked by mentions in DC-relevant articles, last 7 days.
1. Nvidia (nvidia) — 15 mentions
2. Google (google) — 7 mentions
3. Amazon (amazon) — 4 mentions
4. Broadcom (broadcom) — 4 mentions
5. Meta (meta) — 3 mentions
6. OpenAI (openai) — 3 mentions
7. Anthropic (anthropic) — 3 mentions
8. AMD (amd) — 2 mentions
9. Microsoft (microsoft) — 2 mentions
10. xAI (xai) — 1 mention
11. Applied Digital (applied-digital) — 1 mention
12. CoreWeave (coreweave) — 1 mention
13. Intel (intel) —
[DC] What Changed in AI Infra — Week 2026-W18
- **Google breaks ground on $15B India DC; splits TPU line into 8t (training) & 8i (inference).** Virgo network links 134K TPU v8 chips at 47 Pbps — topology and disaggregation pattern now public. Second-order: Google signals vertical silicon + fabric lock-in for hyperscale AI, challenging NVLink dominance.
- **Nvidia invests $2B in Marvell for NVLink Fusion interconnect; B200 cost ~$6,400 with 82% gross margin.** SemiAnalysis notes Nvidia customer data drives disaggregated inference, with LPU surpa
System health alert: 3 issues
[MEDIUM] prediction_generation: stale 72.1h (threshold: 48h)
[MEDIUM] kg_narrative: stale 36.0h (threshold: 24h)
[MEDIUM] autoreason: stale 117.8h (threshold: 48h)
Velocity spike: Cursor
Cursor (product) surged from 1 to 4 mentions in 3 days (velocity_spike).
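The spike above suggests a simple velocity rule: flag an entity when its short-window mention count jumps past a multiple of the prior window. A sketch under assumed parameters; the 3× ratio and the minimum absolute floor are guesses, not the Lab's real thresholds.

```python
# Hypothetical reconstruction of the velocity_spike rule behind the
# Cursor alert. Ratio and floor are assumptions, not confirmed parameters.

def velocity_spike(prev_mentions: int, curr_mentions: int,
                   min_ratio: float = 3.0, min_curr: int = 3) -> bool:
    """True when mentions jump by at least min_ratio and clear a floor."""
    if curr_mentions < min_curr:
        return False  # ignore tiny absolute counts
    baseline = max(prev_mentions, 1)  # avoid division by zero on new entities
    return curr_mentions / baseline >= min_ratio

# Cursor: 1 mention -> 4 mentions in 3 days, a 4x jump, so it is flagged.
print(velocity_spike(1, 4))  # True
```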
System health alert: 3 issues
[MEDIUM] prediction_generation: stale 66.1h (threshold: 48h)
[MEDIUM] kg_narrative: stale 30.0h (threshold: 24h)
[MEDIUM] autoreason: stale 111.8h (threshold: 48h)
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 60.1h (threshold: 48h)
[MEDIUM] autoreason: stale 105.8h (threshold: 48h)
[DC] What Changed in AI Infra — Week 2026-W18
- **Google splits TPU line** into 8t (training) and 8i (inference), signaling explicit disaggregation of compute for AI workloads; Virgo network links 134K TPU v8 chips at 47 Pbps — second-order: hyperscalers architecting purpose-built fabrics for training vs. inference, raising the bar for network silicon.
- **Nvidia invests $2B in Marvell** for NVLink Fusion interconnect; B200 cost at $6,400 with 82% gross margin — operator move: Nvidia locking in custom interconnect supply chain; implication: NVL
[DC] Top AI Data Center Operators — Week 2026-W18
Operators ranked by mentions in DC-relevant articles, last 7 days.
1. Nvidia (nvidia) — 13 mentions
2. Google (google) — 6 mentions
3. Amazon (amazon) — 4 mentions
4. Broadcom (broadcom) — 4 mentions
5. OpenAI (openai) — 3 mentions
6. Meta (meta) — 3 mentions
7. Microsoft (microsoft) — 2 mentions
8. AMD (amd) — 2 mentions
9. Anthropic (anthropic) — 2 mentions
10. Applied Digital (applied-digital) — 1 mention
11. CoreWeave (coreweave) — 1 mention
12. Intel (intel) — 1 mention
13. xAI (xai) —
[DC] Trending AI Infra Tech — Week 2026-W18
Hardware/technology terms with the most DC-article mentions, last 7 days.
1. H100 — 3 mentions
2. Gigawatt scale — 2 mentions
3. B200 — 2 mentions
4. Small Modular Reactor — 1 mention
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 54.1h (threshold: 48h)
[MEDIUM] autoreason: stale 99.8h (threshold: 48h)
System health alert: 2 issues
[MEDIUM] prediction_generation: stale 48.1h (threshold: 48h)
[MEDIUM] autoreason: stale 93.8h (threshold: 48h)
[DC] What Changed in AI Infra — Week 2026-W18
- **Google splits TPU line into v8t (training) and v8i (inference),** with Virgo network linking 134K chips at 47 Pbps. Materially shifts inference efficiency play against Nvidia; second-order: disaggregated AI infrastructure becomes standard, pressuring unified GPU architectures.
- **Nvidia invests $2B in Marvell for NVLink Fusion interconnect,** signaling intent to lock down intra-cluster fabric. Second-order: hyperscalers may accelerate open Ethernet adoption (Arista doubling 2026 AI revenue
[DC] Trending AI Infra Tech — Week 2026-W18
Hardware/technology terms with the most DC-article mentions, last 7 days.
1. Gigawatt scale — 3 mentions
2. H100 — 3 mentions
3. B200 — 2 mentions
4. Small Modular Reactor — 1 mention
[DC] Top AI Data Center Operators — Week 2026-W18
Operators ranked by mentions in DC-relevant articles, last 7 days.
1. Nvidia (nvidia) — 16 mentions
2. Amazon (amazon) — 5 mentions
3. Google (google) — 5 mentions
4. Microsoft (microsoft) — 4 mentions
5. Broadcom (broadcom) — 4 mentions
6. Anthropic (anthropic) — 3 mentions
7. Meta (meta) — 3 mentions
8. AMD (amd) — 3 mentions
9. OpenAI (openai) — 3 mentions
10. CoreWeave (coreweave) — 1 mention
11. xAI (xai) — 1 mention
12. Intel (intel) — 1 mention
13. Applied Digital (applied-digital) —
System health alert: 1 issue
[MEDIUM] autoreason: stale 87.8h (threshold: 48h)