Blackwell
30 articles about Blackwell in AI news
GPT-5.5 Is a Blackwell-Native Model, Says OpenAI Engineer
OpenAI engineer Matt Weinbach revealed that GPT-5.5 is a Blackwell-native model, trained on Nvidia GB200/GB300 NVL72 systems; the Blackwell-native design is also credited with a 20% boost in inference speed.
Cursor AI Claims 1.84x Faster MoE Inference on NVIDIA Blackwell GPUs
Cursor AI announced a rebuilt inference engine for Mixture-of-Experts models on NVIDIA's new Blackwell GPUs, resulting in a claimed 1.84x speedup and improved output accuracy.
Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains
MLCommons released MLPerf Inference v6.0 results, introducing multimodal and video model tests. Nvidia set records using 288-GPU Blackwell Ultra systems and achieved a 2.7x performance jump on DeepSeek-R1 via software optimizations alone.
DeepSeek's Blackwell Training Exposes Critical Gaps in US Chip Export Controls
Chinese AI startup DeepSeek reportedly trained its latest model on Nvidia's restricted Blackwell chips, challenging US export controls. The development reveals significant loopholes in semiconductor restrictions amid escalating AI competition.
DeepSeek's Blackwell Gambit: How a Chinese AI Firm Reportedly Circumvented U.S. Chip Export Controls
Chinese AI company DeepSeek reportedly trained its upcoming model using Nvidia's restricted Blackwell chips, potentially clustered in an Inner Mongolia data center. This development highlights the escalating tech rivalry and challenges of enforcing export controls in the AI arms race.
AI Power Shift: How DeepSeek's Alleged Blackwell Chip Access Could Reshape Global AI Race
Chinese AI startup DeepSeek reportedly trained its next major model on Nvidia's banned Blackwell chips, potentially triggering a seismic shift in the AI landscape. US giants Google, OpenAI, and Anthropic are preparing for what could be a market-disrupting release next week.
NVIDIA's Blackwell Ultra Shatters Efficiency Records: 50x Performance Per Watt Leap Redefines AI Economics
NVIDIA's new Blackwell Ultra GB300 NVL72 systems promise a staggering 50x improvement in performance per megawatt and 35x lower cost per token compared to previous Hopper architecture, addressing the critical energy bottleneck in AI scaling.
We Hosted a 35B LLM on an NVIDIA DGX Spark — A Technical Post-Mortem
A detailed, practical guide to deploying the Qwen3.5-35B model on NVIDIA's GB10 Blackwell hardware. The article serves as a case study on the real-world challenges and solutions of on-premise LLM inference.
Lilly's AI Factory: How a 9,000+ GPU SuperPOD is Rewriting Pharmaceutical Discovery
Eli Lilly has launched 'LillyPod,' the world's most powerful privately-owned AI factory for drug discovery. Powered by NVIDIA's new DGX B300 systems with over 1,000 Blackwell Ultra GPUs, it promises to accelerate medical breakthroughs at unprecedented scale.
Meta's $135 Billion AI Bet: How Confidential Computing Will Transform WhatsApp
Meta commits to buying millions of NVIDIA Blackwell and Rubin GPUs in a landmark partnership, deploying confidential computing technology to bring AI to WhatsApp while protecting user privacy. This represents a major shift in how AI will be integrated into secure messaging platforms.
Meta's Multi-Million GPU Gamble: How a Chip Deal Redefines AI's Future
Meta has signed a massive, multi-year pact with Nvidia to deploy millions of next-generation Blackwell and Rubin GPUs across its data centers. This unprecedented hardware commitment signals a new phase in the AI arms race, where computational scale becomes the primary competitive moat.
Yotta Data Services Seeks $4B Valuation in Pre-IPO Round, Expands India's Largest Nvidia GPU Cluster
Indian data center operator Yotta is raising $500-600M at a ~$4B valuation ahead of an IPO. The firm is scaling its Nvidia H100 and Blackwell (B200/B300) GPU fleet to position itself as a domestic AI infrastructure alternative.
Sam Altman: AI inference costs dropped 1000x from o1 to GPT-5.4
Sam Altman stated AI inference costs for solving a fixed hard problem dropped ~1000x from o1 to GPT-5.4 in ~16 months, crediting cross-layer engineering optimizations, not a single breakthrough.
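As a back-of-envelope check (my arithmetic, not a figure from the article), a ~1000x cost drop over ~16 months implies costs fell by roughly 1.54x per month, halving about every 1.6 months:

```python
import math

# Illustrative sketch: the 1000x and 16-month figures come from the article;
# the derived monthly rate and halving time are simple implications of them.
total_drop = 1000.0
months = 16.0

# Constant monthly factor f such that f**months == total_drop
monthly_factor = total_drop ** (1 / months)

# Time for cost to halve at that constant rate
halving_months = months * math.log(2) / math.log(total_drop)

print(f"costs fell ~{monthly_factor:.2f}x per month")
print(f"costs halved roughly every {halving_months:.1f} months")
```

This assumes a constant exponential decline, which real cost curves rarely follow exactly, but it conveys the pace the claim implies.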
AI Chip Capacity Crisis: 10GW Left Through 2030, Prices Up Double Digits
The AI accelerator market has only 10 gigawatts of capacity still available to contract through 2030, with 100GW already under contract. Prices are rising by double digits as one competitor has stopped taking orders entirely.
Google's Virgo Network Links 134,000 TPU v8 Chips with 47 Pbps Fabric
Google unveiled its Virgo networking stack for TPU v8, capable of linking 134,000 chips in a single fabric with 47 petabits/sec of bisection bandwidth. This represents a massive scale-up in interconnect technology for large-scale AI model training.
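For a sense of scale (illustrative arithmetic only; the chip count and fabric total are the article's figures, and dividing the bisection bandwidth evenly per chip is a simplifying assumption), the quoted fabric works out to roughly 350 Gbps per chip:

```python
# Article figures: 134,000 TPU v8 chips, 47 Pbps of bisection bandwidth.
chips = 134_000
bisection_bps = 47e15  # 47 petabits/sec

# Simplifying assumption: an even per-chip share of the bisection bandwidth.
per_chip_gbps = bisection_bps / chips / 1e9

print(f"~{per_chip_gbps:.0f} Gbps of bisection bandwidth per chip")
```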
DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program
DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.
Microsoft's Year-2000 Nvidia Veto Rights Resurface Amid AI Chip Wars
An investment deal struck in 2000 granted Microsoft veto rights over any acquisition of Nvidia. This historical clause gains new relevance as Nvidia's AI dominance makes it a potential acquisition target amid ongoing semiconductor consolidation.
Microsoft's Fairwater AI Data Center Launches Early, Boosts Azure Capacity
Microsoft has launched its Fairwater AI data center ahead of schedule. The facility adds significant high-performance computing capacity to Azure's AI infrastructure, crucial for training and running large models.
AI Data Center Startup Phononic in Sale Talks at Multi-Billion Valuation
Phononic, a startup building liquid cooling systems for AI data centers, is in talks for a sale that could value it in the multi-billions. This reflects intense market pressure to solve the power and thermal challenges of scaling AI compute.
Foxconn to Mass-Produce 10,000+ CPO Optical Switches for AI in Q3 2026
Foxconn's manufacturing arm will begin volume production of advanced co-packaged optics (CPO) switches in Q3 2026, targeting over 10,000 units. This move directly addresses the critical bandwidth and power bottlenecks in next-generation AI data center infrastructure.
CoreWeave & Google Raise $6.7B in Junk Bonds for AI Infrastructure
Google and GPU cloud provider CoreWeave have jointly raised $6.7 billion through a junk bond offering, with Google taking $5.7 billion. The capital is earmarked for a significant build-out of AI data center infrastructure.
Adobe, NVIDIA, WPP Launch Enterprise AI Agents for Marketing with OpenShell
NVIDIA expands collaborations with Adobe and WPP to build agentic AI systems for enterprise marketing workflows. The stack uses NVIDIA's OpenShell runtime to enforce security and policy compliance in multi-step creative and customer experience tasks.
AI Datacenter Spend Hits 5-7 Manhattan Projects Yearly at $250-300B
Inflation-adjusted global datacenter CapEx reaches $250-300B annually, equivalent to 5-7 Manhattan Projects per year. This quantifies the unprecedented infrastructure investment driving the AI boom.
IOWN Forum Pushes All-Photonic WAN for AI Neocloud Interconnects
The IOWN Global Forum is focusing its optical networking tech on datacenter interconnects, aiming to let GPU 'neoclouds' and financial firms use cheaper, remote facilities without latency penalties for AI workloads.
Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership
Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.
Manycore Tech Pivots from Real Estate to AI Robotics, Hits $1B Valuation
Manycore Tech Inc., a Chinese software company previously focused on real estate, has raised $150 million to pivot into AI and robotics, achieving a $1 billion valuation. The move is led by an Nvidia alumnus and capitalizes on China's strategic push into automation.
Aehr Test Systems Lands $41M AI Chip Order; H2 Bookings Top $92M
Aehr Test Systems received a record $41 million production order from a key hyperscale AI customer. Total bookings for the second half of its fiscal year exceeded $92 million, highlighting surging demand for semiconductor test and burn-in equipment.
TSMC's $56B 2026 CapEx Fuels AI Chip Race with 22 New Fabs
TSMC is constructing up to 22 advanced semiconductor fabs simultaneously, backed by a $52–56 billion capital expenditure plan for 2026. This unprecedented manufacturing scale is critical for producing the 2nm-and-below chips required by next-generation AI models.
Google, CoreWeave Sell Record $5.7B in Junk Bonds for AI Data Centers
Google and its partner CoreWeave sold a record $5.7 billion in high-yield bonds to fund AI data center expansion. The deal was oversubscribed, showing strong investor appetite for AI infrastructure debt.
Superintelligence Podcast Launches with NVIDIA Nemotron 3 Deep Dive
The Superintelligence podcast has launched, promising in-depth interviews with AI industry leaders. Its first episode is an exclusive interview with NVIDIA's Kari Briski on the Nemotron 3 Super model.