chips & hardware
30 articles about chips & hardware in AI news
China's Memory Chip Price War: How CXMT's Aggressive Pricing Strategy Is Reshaping Global AI Hardware Economics
Chinese semiconductor manufacturer CXMT is selling DDR4 memory chips at nearly half the global market rate, creating significant price disruption even as worldwide DRAM prices surge 23.7% month-over-month. This aggressive pricing strategy could dramatically lower costs for AI infrastructure and computing hardware.
Google's Virgo Network Links 134,000 TPU v8 Chips with 47 Pbps Fabric
Google unveiled its Virgo networking stack for TPU v8, capable of linking 134,000 chips in a single fabric with 47 petabits/sec of bisection bandwidth. This represents a massive scale-up in interconnect technology for large-scale AI model training.
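To put the headline figures in perspective, a back-of-envelope calculation (our own arithmetic, not from the announcement) gives the per-chip share of that bisection bandwidth:

```python
# Both input figures are from the announcement; the per-chip share is
# a derived estimate, assuming bandwidth divides evenly across chips.
chips = 134_000
bisection_pbps = 47                      # petabits per second
bisection_gbps = bisection_pbps * 1e6    # 1 Pbps = 1,000,000 Gbps
per_chip_gbps = bisection_gbps / chips
print(f"~{per_chip_gbps:.0f} Gbps of bisection bandwidth per chip")
```

That works out to roughly 350 Gbps per chip across the fabric's bisection, a useful yardstick when comparing interconnect generations.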
Google, Marvell in Talks to Co-Develop New AI Chips, Including TPU-Optimized MPU
Google is reportedly in talks with Marvell Technology to co-develop two new AI chips: a memory processing unit (MPU) to pair with TPUs and a new, optimized TPU. This move is a direct effort to bolster Google's custom silicon stack and compete with Nvidia's dominance.
Canada's AI Compute Gap: Google Cloud Montreal Offers 2017-Era Chips
A developer's attempt to rent modern AI compute in Canada revealed a stark infrastructure gap: major cloud providers there offer chips dating back to 2017, undermining national AI ambitions.
AI-Powered Circuit Simulator Offers Free Hardware Prototyping
A new website provides a free, AI-assisted environment for designing and testing electronic circuits, featuring pre-built projects for learning. This lowers the barrier to entry for hardware prototyping and education.
OpenAI Forecasts $121B in AI Hardware Costs for 2028
OpenAI is forecasting its own AI research hardware costs will reach $121 billion in 2028, according to a WSJ report. This figure highlights the extreme capital intensity required to compete at the frontier of AI.
Anthropic Considers Custom AI Chips, Following Google & OpenAI
Anthropic is reportedly considering developing custom AI chips, a strategic move to gain control over its compute infrastructure and reduce costs. This follows similar initiatives by Google, Amazon, and OpenAI.
Broadcom to Manufacture Google TPU Chips in Foundry Partnership
Google has licensed its Tensor Processing Unit (TPU) intellectual property to Broadcom for chip fabrication. This allows Google to earn from its IP while Broadcom manages the complex hardware build and networking integration.
DeepSeek V4 to Run on Huawei Ascend 950PR Chips, Sparking 20% Price Surge
DeepSeek's anticipated V4 model will be powered by Huawei's Ascend 950PR chips, with Alibaba, ByteDance, and Tencent stockpiling hundreds of thousands of units ahead of launch. This has driven chip prices up approximately 20% in recent weeks.
Kimi 2.5's 1T Parameter MoE Model Runs on 96GB Mac Hardware via SSD Streaming
Developers have demonstrated that Kimi 2.5's 1 trillion parameter Mixture-of-Experts model can run on Mac hardware with just 96GB RAM by streaming expert weights from SSD, with only 32B parameters active per token.
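The trick described above works because a Mixture-of-Experts model only touches a small fraction of its weights per token. A minimal sketch of the idea, using NumPy's memory-mapped loading as a stand-in for SSD streaming (toy dimensions and a random router, not Kimi 2.5's actual implementation):

```python
import os
import tempfile
import numpy as np

# Each expert's weight matrix lives in its own file on disk and is
# memory-mapped only when the router selects it, so resident memory
# scales with the *active* experts rather than the full model.
D, N_EXPERTS, TOP_K = 8, 4, 2
tmp = tempfile.mkdtemp()
paths = []
for e in range(N_EXPERTS):
    p = os.path.join(tmp, f"expert_{e}.npy")
    np.save(p, np.random.randn(D, D).astype(np.float32))
    paths.append(p)

def moe_forward(x):
    # Toy router: score experts with a random projection, keep top-k.
    logits = x @ np.random.randn(D, N_EXPERTS).astype(np.float32)
    active = np.argsort(logits)[-TOP_K:]
    out = np.zeros_like(x)
    for e in active:
        # mmap_mode="r" pages weights in from disk on demand instead
        # of loading every expert's matrix into RAM up front.
        w = np.load(paths[e], mmap_mode="r")
        out += x @ w
    return out / TOP_K

y = moe_forward(np.random.randn(D).astype(np.float32))
print(y.shape)  # (8,)
```

With only 32B of 1T parameters active per token, the same pattern keeps the working set within 96GB while the inactive experts stay on SSD.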
AWS Commits 2 Gigawatts of Trainium Capacity to OpenAI, Reveals 1.4 Million Chips Deployed
Amazon's $50B OpenAI deal includes a 2-gigawatt commitment of Trainium computing capacity. AWS disclosed 1.4 million Trainium chips are deployed, with over 1 million Trainium2 chips running Anthropic's Claude.
Marc Andreessen's Warning: AI's Value Could Shift Entirely to Hardware and Energy
Venture capitalist Marc Andreessen predicts a dramatic shift in which AI model software becomes commoditized open source, leaving hardware and energy providers to capture the bulk of the industry's profits.
Nvidia's Strategic Shift: Merging Groq Hardware in New AI Chip Targeting OpenAI
Nvidia is reportedly developing a new AI chip that combines its GPU technology with hardware from Groq, with OpenAI potentially becoming a major customer. This move signals Nvidia's recognition of specialized AI hardware beyond traditional GPUs.
Google's $1.9 Trillion Vertical Integration Strategy: Building an AI Empire from Chips to Power Grid
Google is investing $1.9 trillion over the next decade to control every layer of the AI stack, from custom TPU chips to power infrastructure. This vertical integration strategy creates a competitive moat that could reshape the entire AI industry landscape.
The Great GPU Scramble: How Hardware Shortages Are Defining the AI Arms Race
Oracle founder Larry Ellison identifies GPU acquisition as the primary bottleneck in AI development, with companies racing to secure limited hardware for breakthroughs in medicine, video generation, and autonomous systems.
AI Gold Rush Strains Apple Hardware: High-Memory Macs Sell Out as Local AI Agents Go Mainstream
A surge in demand for local AI development has created severe inventory shortages for high-memory Apple hardware. Mac Studio orders with 128GB or 512GB RAM face 6+ week delays as consumers buy up every available unit to run powerful AI agents like OpenClaw.
Apple's Neural Engine Jailbroken: Researchers Unlock Full Training Capabilities on M-Series Chips
Security researchers have reverse-engineered Apple's Neural Engine, bypassing private APIs to enable full neural network training directly on ANE hardware. This breakthrough unlocks 15.8 TFLOPS of compute previously restricted to inference-only operations across all M-series devices.
SEval-NAS: The Flexible Framework That Could Revolutionize Hardware-Aware AI Design
Researchers propose SEval-NAS, a search-agnostic evaluation method that decouples metric calculation from the Neural Architecture Search process. This allows AI developers to easily introduce new performance criteria, especially for hardware-constrained devices, without redesigning the entire search algorithm.
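The decoupling idea can be sketched as a registry of pluggable metrics that any search loop can call without knowing how they are computed. This is our own illustrative interface (the names `register`, `evaluate`, and both metrics are hypothetical, not SEval-NAS's actual API):

```python
from typing import Callable, Dict

# Metrics are plain callables keyed by name, so a random, evolutionary,
# or RL-based search can score candidates through one uniform interface.
Metric = Callable[[dict], float]
METRICS: Dict[str, Metric] = {}

def register(name: str):
    def deco(fn: Metric) -> Metric:
        METRICS[name] = fn
        return fn
    return deco

@register("params_m")
def params_millions(arch: dict) -> float:
    # Crude parameter count for a chain of dense layers.
    dims = arch["widths"]
    return sum(a * b for a, b in zip(dims, dims[1:])) / 1e6

@register("latency_ms")
def latency_proxy(arch: dict) -> float:
    # Hardware-aware proxy: wider/deeper nets cost more on a small device.
    return 0.05 * sum(arch["widths"])

def evaluate(arch: dict, wanted: list[str]) -> dict:
    # The search loop asks for metrics by name; adding a new criterion
    # means registering one function, not rewriting the search.
    return {name: METRICS[name](arch) for name in wanted}

scores = evaluate({"widths": [128, 256, 256, 10]}, ["params_m", "latency_ms"])
print(scores)
```

Swapping in an on-device latency measurement would only require registering a new callable, which is the flexibility the paper claims.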
AI Hardware Race Accelerates as NVIDIA Ships Record Volumes Amid Global Demand Surge
NVIDIA continues shipping AI processors at unprecedented rates as global demand for AI infrastructure reaches fever pitch. The relentless pace highlights the intensifying hardware race powering the AI revolution.
NVIDIA GTC 2025 Preview: Leaked Highlights Signal Major AI Hardware and Software Breakthroughs
Early leaks from NVIDIA's upcoming GTC 2025 conference reveal significant advancements in AI hardware, software frameworks, and robotics. The preview suggests major performance leaps and new capabilities that could reshape AI development across industries.
Mac Studio AI Hardware Shortage Signals Shift to Cloud Rentals
Developers report a global shortage of high-memory Apple Silicon Macs, with 128GB Mac Studios unavailable worldwide. This pushes practitioners toward renting cloud H100 GPUs at ~$3/hr, marking a shift from the recent local AI trend.
DeepSeek Teases 'Much Larger' Base Model Release Amid Industry Silence and Hardware Challenges
DeepSeek staff confirmed a new, larger base model is coming soon, following months of quiet after reports of failed Huawei chip training. This comes as the Chinese AI lab faces heightened expectations after its breakthrough o1-level model in January 2025.
Nvidia's $30 Billion OpenAI Bet: The AI Hardware Giant Doubles Down on Software Dominance
Nvidia is reportedly negotiating a monumental $30 billion investment in OpenAI, potentially valuing the AI pioneer at over $800 billion. This strategic move would deepen the symbiotic relationship between the world's leading AI chipmaker and its most prominent customer, reshaping the competitive landscape of artificial intelligence.
Google Cloud Next '26: 8th-gen TPUs, agent platform, $750M fund
At Cloud Next 2026, Google unveiled two 8th-gen TPU chips, a Gemini-based enterprise AI agent platform, and a $750 million partner fund to drive secure, large-scale automation and heavy AI workloads.
DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program
DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.
Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'
Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) match an NVIDIA A100's performance, calling cloud GPU training a 'scam' and advocating for distributed, consumer-hardware approaches.
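The claim's arithmetic checks out only on the narrowest comparison. Using the post's own per-MacBook figure and the A100's published FP32 (non-tensor-core) spec:

```python
# Caveat: A100 tensor cores reach ~312 TFLOPS at FP16, and distributed
# training across several laptops pays communication overhead, so raw
# FP32 TFLOPS is a generous basis for "matching" an A100.
macbooks = 7
tflops_per_m4 = 2.9            # figure quoted in the post
a100_fp32_tflops = 19.5        # NVIDIA's published FP32 spec for A100
combined = round(macbooks * tflops_per_m4, 1)
print(combined)                      # 20.3
print(combined >= a100_fp32_tflops)  # True
```

So 7 × 2.9 ≈ 20.3 TFLOPS does edge out the A100's 19.5 FP32 TFLOPS on paper, but the comparison ignores the tensor-core throughput and interconnect bandwidth that make datacenter GPUs attractive for training.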
MLX-Benchmark Suite Launches as First Comprehensive LLM Eval for Apple Silicon
The MLX-Benchmark Suite has been released as the first comprehensive evaluation framework for Large Language Models running on Apple's MLX framework. It provides standardized metrics for models optimized for Apple Silicon hardware.
AI Developer Tools Shift to Mac-First, Excluding Windows/Linux Users
AI developers report a growing trend of cutting-edge AI tools being released exclusively or primarily for macOS, making it difficult for Windows and Linux users to access the latest innovations. This platform shift creates a hardware-based barrier to entry in the AI development ecosystem.
GPT-5.4 Spends 3 Hours Optimizing Embedding Model for Qualcomm NPU
An X user observed GPT-5.4 working for three hours to optimize an embedding model specifically for the Qualcomm NPU. This suggests a practical application of advanced AI for hardware-specific model tuning.
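The post doesn't say what the optimization involved; a common concrete step when targeting an NPU is int8 quantization of the embedding table. A minimal sketch with symmetric per-tensor scaling (our illustration of the general technique, not what GPT-5.4 actually did):

```python
import numpy as np

# Toy embedding table; real NPU toolchains apply the same idea at scale.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64)).astype(np.float32)

# Symmetric per-tensor quantization: map [-max, max] onto [-127, 127].
scale = np.abs(emb).max() / 127.0
q = np.clip(np.round(emb / scale), -127, 127).astype(np.int8)
deq = q.astype(np.float32) * scale     # dequantize to measure error

err = np.abs(emb - deq).max()
print(q.nbytes / emb.nbytes)           # 0.25 -> 4x smaller table
print(err <= scale / 2 + 1e-6)         # True -> rounding error <= scale/2
```

Shrinking the table 4x while bounding per-weight error is exactly the kind of hardware-specific tuning that integer-only NPUs reward.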
TSMC's $56B 2026 CapEx Fuels AI Chip Race with 22 New Fabs
TSMC is constructing up to 22 advanced semiconductor fabs simultaneously, backed by a $52–56 billion capital expenditure plan for 2026. This unprecedented manufacturing scale is critical for producing the 2nm-and-below chips required by next-generation AI models.