Timeline
Nvidia agrees to pay half the capital expenditure to expand supplier fabs for a critical thin-film material.
Announced Nemotron 3 Nano Omni, an open multimodal model that processes video, audio, images, and text.
Nvidia invests $2 billion in Marvell Technology for NVLink Fusion interconnect development.
Nvidia trained a billion-parameter LLM with zero gradients, using neither backpropagation nor full-precision weights.
Released V4-Pro and V4-Flash open-weight models with up to 1.6 trillion parameters.
Released DeepSeek-V4 with 1M-token context at 10% KV cache cost.
Open-sourced Kimono, a motion diffusion model for humanoid robots.
Expanded partnership with Google Cloud to advance agentic and physical AI infrastructure.
Initiates first external fundraising round, targeting at least $300M at a $10B+ valuation to set a benchmark for employee stock options.
DeepSeek-V4 release delayed by Huawei Ascend chip compatibility work.
Ecosystem
DeepSeek
Nvidia
Evidence (10 articles)
OpenAI Unleashes Real-Time Coding Revolution with GPT-5.3-Codex-Spark (Feb 12, 2026)
OpenAI's $100 Billion Horizon: How ChatGPT's Explosive Growth Is Reshaping the AI Industry (Feb 9, 2026)
DeepSeek's Blackwell Gambit: How a Chinese AI Firm Reportedly Circumvented U.S. Chip Export Controls (Feb 24, 2026)
DeepSeek's Blackwell Training Exposes Critical Gaps in US Chip Export Controls (Feb 27, 2026)
AI Power Shift: How DeepSeek's Alleged Blackwell Chip Access Could Reshape Global AI Race (Feb 24, 2026)
DeepSeek V4 Launch Signals China's Strategic Shift in AI Chip Independence (Feb 28, 2026)
China's Open-Source AI Surge: How Local Models Are Redefining Global Competition (Feb 12, 2026)
Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains (Apr 2, 2026)
+ 2 more articles