Timeline
Nvidia agreed to pay half the capital expenditure to expand supplier fabs for a critical thin-film material.
Announced Nemotron 3 Nano Omni, an open multimodal model processing video, audio, images, and text.
Nvidia invested $2 billion in Marvell Technology for NVLink Fusion interconnect development.
Nvidia trained a billion-parameter LLM without backpropagation or full-precision weights, using zero gradients.
Open-sourced Kimono, a motion diffusion model for humanoid robots.
Expanded partnership with Google Cloud to advance agentic and physical AI infrastructure.
ByteDance introduced OmniShow, a unified multimodal framework for video generation.
ByteDance introduced Helios, a 14B-parameter video generation model running at 19.5 FPS on a single H100 GPU.
Collaborated with Tsinghua University and Peking University to develop the HACPO research framework.
Introduced Mixture-of-Depths Attention (MoDA) for deep LLMs.
Ecosystem
ByteDance
Nvidia
Evidence (7 articles)
Beyond Nvidia: How OpenAI's Cerebras-Powered Model Redefines AI Hardware Competition (Feb 13, 2026)
Disney's Legal Blitz Against ByteDance Signals New Era in AI Copyright Wars (Feb 14, 2026)
ByteDance's Helios: A 14B Parameter Video Generation Model Running at 19.5 FPS on a Single H100 GPU (Mar 23, 2026)
China's Open-Source AI Surge: How Local Models Are Redefining Global Competition (Feb 12, 2026)
OpenAI Unleashes Real-Time Coding Revolution with GPT-5.3-Codex-Spark (Feb 12, 2026)
OpenAI's $100 Billion Horizon: How ChatGPT's Explosive Growth Is Reshaping the AI Industry (Feb 9, 2026)
ByteDance's CUDA Agent: The AI System Outperforming Human Experts in GPU Code Generation (Mar 2, 2026)