Timeline
NVIDIA agreed to pay half the capital expenditure to expand supplier fabs for a critical thin-film material.
Announced Nemotron 3 Nano Omni, an open multimodal model processing video, audio, images, and text.
Nvidia invested $2 billion in Marvell Technology for NVLink Fusion interconnect development.
Nvidia trained a billion-parameter LLM without backpropagation or full-precision weights, using zero gradients.
Open-sourced Kimono, a motion diffusion model for humanoid robots
Expanded partnership with Google Cloud to advance agentic and physical AI infrastructure
Developed specialized Language Processing Units (LPUs) that can outperform GPUs for specific inference tasks
Expanded partnership with Samsung, increasing chip orders from 9,000 to 30,000 wafers
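The "zero gradients" entry above does not name a method, but training without backpropagation typically means zeroth-order optimization: estimating a descent direction from forward passes alone via random perturbations (as in SPSA or MeZO-style approaches). A minimal sketch on a toy linear model, assuming that interpretation; this is illustrative, not Nvidia's actual recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # Mean squared error of a linear model; stands in for an LLM loss.
    return np.mean((X @ w - y) ** 2)

# Toy data: recover a hidden linear mapping (hypothetical example).
X = rng.normal(size=(256, 16))
w_true = rng.normal(size=16)
y = X @ w_true

w = np.zeros(16)
eps, lr = 1e-3, 1e-2
for step in range(2000):
    # Sample a random perturbation direction.
    z = rng.normal(size=16)
    # Two forward passes estimate the directional derivative -- no backprop,
    # no stored activations, no gradient tensors at all.
    g_hat = (loss(w + eps * z, X, y) - loss(w - eps * z, X, y)) / (2 * eps)
    # Step along the perturbation, scaled by the estimated slope.
    w -= lr * g_hat * z

print(loss(w, X, y))  # converges to a small value
```

The appeal at billion-parameter scale is memory: only forward passes are needed, so no activation caching or optimizer state for gradients, which also pairs naturally with low-precision weights.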
Ecosystem
Groq
Nvidia
Evidence (5 articles)
Nvidia's Strategic Shift: Merging Groq Hardware in New AI Chip Targeting OpenAI (Mar 10, 2026)
Nvidia's Groq Ramps Up AI Chip Production with Samsung in Major Partnership Expansion (Mar 11, 2026)
SemiAnalysis: NVIDIA's Customer Data Drives Disaggregated Inference, LPU Surpasses GPU (Apr 22, 2026)
Groq's LPU Inference Engine Demonstrates 500+ Token/s Performance on Llama 3.1 70B (Mar 16, 2026)
Jensen Huang Announces $20B Groq Integration, OpenClaw OS, and $50T+ Physical AI Market Vision on All-In Podcast (Mar 19, 2026)