cerebras
17 articles about Cerebras in AI news
Cerebras Understates On-Chip SRAM by 8x, SemiAnalysis Notes
According to SemiAnalysis, Cerebras understates its on-chip SRAM capacity by 8x, a rare case of under-specification in chip marketing.
Cerebras' Strategic Partnership Yields Breakthrough AI Training Results
Cerebras Systems' partnership with Abu Dhabi's G42 has produced AI training benchmark results reportedly 100x faster than traditional GPU clusters, demonstrating the viability of wafer-scale computing for large-language-model development.
Beyond Nvidia: How OpenAI's Cerebras-Powered Model Redefines AI Hardware Competition
OpenAI's GPT-5.3-Codex-Spark demonstrates real-time coding capabilities on Cerebras hardware, challenging Nvidia's dominance and signaling a new era of specialized AI infrastructure.
Inference shift opens door for AI chip startups to challenge Nvidia
The industry's shift from training to inference serving creates opportunities for AI chip startups, while Nvidia's $20B Groq acquihire validates disaggregated compute strategies.
Google Opens TPU Sales to Select Customers, Raises Capex Forecast
Google is selling TPUs to select customers and has raised its capex forecast for Q1 FY2026, monetizing its in-house chips beyond Google Cloud.
Nvidia B200 Costs $6,400 to Produce, Gross Margin Hits 82%
Epoch AI estimates Nvidia's B200 GPU costs $5,700–$7,300 to produce, with HBM memory and advanced packaging accounting for two-thirds of the cost. At a $30k–$40k sale price, chip-level gross margins reach ~82%, though rack-scale margins may be lower.
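The margin figure follows directly from the cost and price ranges above. A minimal sketch of that arithmetic, using illustrative midpoints of the quoted ranges (the midpoint values themselves are assumptions, not figures from the article):

```python
# Chip-level gross margin implied by the Epoch AI estimate:
# production cost $5,700-$7,300, sale price $30k-$40k per B200.

def gross_margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the sale price."""
    return (price - cost) / price

cost_mid = (5_700 + 7_300) / 2      # $6,500 midpoint of the cost estimate
price_mid = (30_000 + 40_000) / 2   # $35,000 midpoint of the sale price

print(f"{gross_margin(price_mid, cost_mid):.0%}")  # prints "81%"
```

At the midpoints this gives roughly 81%, in line with the ~82% chip-level figure cited; rack-scale margins are lower because system integration adds cost without a proportional price increase.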
AI Chip Capacity Crisis: 10GW Left Through 2030, Prices Up Double Digits
The AI accelerator market has only 10 gigawatts of capacity left to contract through 2030, with 100 GW already under contract. Prices are rising by double digits, and one competitor has stopped taking orders entirely.
Google's Virgo Network Links 134,000 TPU v8 Chips with 47 Pbps Fabric
Google unveiled its Virgo networking stack for TPU v8, capable of linking 134,000 chips in a single fabric with 47 petabits/sec of bisection bandwidth. This represents a massive scale-up in interconnect technology for large-scale AI model training.
DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program
DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.
Nvidia's Silicon Photonics Roadmap Targets AI Data Center Bottlenecks
Nvidia is developing its own silicon photonics-based interconnects to address the growing data transfer bottleneck within AI data centers and supercomputers. This move is critical as AI model size and cluster scale continue to grow exponentially.
Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'
Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) match an Nvidia A100's performance, calling cloud GPU training a 'scam' and advocating distributed training on consumer hardware.
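The claim rests on raw FLOPs arithmetic, which is worth making explicit. A back-of-the-envelope check, taking the 2.9 TFLOPS-per-M4 figure from the post and Nvidia's published A100 FP32 peak (~19.5 TFLOPS) as the comparison point:

```python
# Raw-throughput comparison behind the "7 M4 MacBooks = 1 A100" claim.
# Figures: 2.9 TFLOPS per M4 (from the post), ~19.5 TFLOPS FP32 peak
# for the A100 (Nvidia spec). This counts FLOPs only.

m4_tflops = 2.9
a100_fp32_tflops = 19.5

combined = 7 * m4_tflops          # 20.3 TFLOPS across seven laptops
print(combined >= a100_fp32_tflops)  # prints "True"
```

The arithmetic checks out on paper, but it ignores memory bandwidth and the interconnect cost of sharding a training job across seven laptops, which in practice dominates distributed training throughput.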
Jensen Huang: Nvidia is a 'Computing Company,' Not a Car
Nvidia CEO Jensen Huang, in a new interview, argued that Nvidia is a 'computing company,' not a 'car,' a product that can be easily swapped out. The distinction underscores Nvidia's strategy to be the indispensable platform for AI infrastructure.
Hugging Face Launches 'Kernels' Hub for GPU Code, Like GitHub for AI Hardware
Hugging Face has launched 'Kernels,' a new section on its Hub for sharing and discovering optimized GPU kernels. This treats performance-critical code as a first-class artifact, similar to AI models.
Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains
MLCommons released MLPerf Inference v6.0 results, introducing multimodal and video model tests. Nvidia set records using 288-GPU Blackwell Ultra systems and achieved a 2.7x performance jump on DeepSeek-R1 via software optimizations alone.
Groq's LPU Inference Engine Demonstrates 500+ Token/s Performance on Llama 3.1 70B
Groq's Language Processing Unit (LPU) inference engine achieves over 500 tokens/second on Meta's Llama 3.1 70B model, demonstrating significant performance gains for large language model inference.
MatX Secures $500M War Chest to Challenge Nvidia's AI Chip Dominance
AI chip startup MatX, founded by ex-Google semiconductor engineers, has raised over $500 million to develop hardware that directly competes with Nvidia. This massive funding round signals growing investor confidence in alternatives to the current AI chip market leader.
Thrive Capital's $10 Billion AI War Chest Signals New Era for Venture Investing
Josh Kushner's Thrive Capital has raised over $10 billion in its largest fund ever, positioning the OpenAI backer to aggressively expand investments in artificial intelligence applications and infrastructure. This massive capital infusion arrives as the AI landscape undergoes significant shifts in technology, business models, and competitive dynamics.