Memory Chips
30 articles about memory chips in AI news
China's Memory Chip Price War: How CXMT's Aggressive Pricing Strategy Is Reshaping Global AI Hardware Economics
Chinese semiconductor manufacturer CXMT is selling DDR4 memory chips at nearly half the global market rate, creating a significant price disruption even as worldwide DRAM prices surge 23.7% monthly. This aggressive pricing strategy could dramatically lower costs for AI infrastructure and computing hardware.
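To put the article's 23.7% monthly DRAM price surge in perspective, a quick compounding calculation shows how fast such a rate snowballs; the 3-, 6-, and 12-month horizons below are an illustrative extrapolation, not a claim from the article.

```python
# Compounding the article's 23.7% monthly DRAM price rise.
# The multi-month extrapolation is illustrative only.
monthly_rate = 0.237

for months in (3, 6, 12):
    multiplier = (1 + monthly_rate) ** months
    print(f"{months:2d} months -> {multiplier:.1f}x")
```

Sustained for a year, that monthly rate would multiply prices roughly 12.8x, which is why even a few months of such increases reshape hardware budgets.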
Google, Marvell in Talks to Co-Develop New AI Chips, Including TPU-Optimized MPU
Google is reportedly in talks with Marvell Technology to co-develop two new AI chips: a memory processing unit (MPU) to pair with TPUs and a new, optimized TPU. This move is a direct effort to bolster Google's custom silicon stack and compete with Nvidia's dominance.
Google's Virgo Network Links 134,000 TPU v8 Chips with 47 Pbps Fabric
Google unveiled its Virgo networking stack for TPU v8, capable of linking 134,000 chips in a single fabric with 47 petabits/sec of bisection bandwidth. This represents a massive scale-up in interconnect technology for large-scale AI model training.
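The headline figures imply a per-chip bandwidth share worth working out; the arithmetic below uses only the article's two numbers, though the convention applied (bisection bandwidth divided across the chips in one half of the fabric) is an assumption.

```python
# Per-chip implication of the article's fabric numbers (convention assumed).
chips = 134_000
bisection_bps = 47e15  # 47 Pbps

# Each half of the fabric holds chips/2 endpoints, so the per-chip
# bandwidth share across the bisection is:
per_chip_gbps = bisection_bps / (chips / 2) / 1e9
print(f"~{per_chip_gbps:.0f} Gbps per chip across the bisection")
```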
Nvidia to Ship 1.19 Exabytes of HBM in 2026, Apple iPhone Memory 2x Larger
An analysis projects Nvidia will ship ~1.19 exabytes of HBM memory in 2026 for AI infrastructure, while Apple will ship ~2.4 exabytes of LPDDR5 for iPhones, putting AI's massive hardware scale in consumer market perspective.
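The "2x larger" claim checks out directly from the article's two projections; the per-iPhone figure below additionally assumes roughly 230 million annual iPhone shipments, a number not in the article.

```python
# Back-of-envelope check of the headline ratio (shipment volumes from the
# article; the 230M annual iPhone unit count is an outside assumption).
EB = 10**18  # bytes per exabyte (decimal)

nvidia_hbm = 1.19 * EB    # projected 2026 HBM shipments
apple_lpddr5 = 2.4 * EB   # projected 2026 iPhone LPDDR5 shipments

ratio = apple_lpddr5 / nvidia_hbm
print(f"Apple/Nvidia memory ratio: {ratio:.2f}x")

iphone_units = 230e6  # assumed annual shipments
per_phone_gb = apple_lpddr5 / iphone_units / 10**9
print(f"Implied memory per iPhone: {per_phone_gb:.1f} GB")
```

The implied ~10 GB per handset is a plausibility check: it lands near real-world iPhone RAM plus a margin, suggesting the 2.4 EB projection is internally consistent.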
Canada's AI Compute Gap: Google Cloud Montreal Offers 2017-Era Chips
A technical developer's attempt to rent modern AI compute in Canada revealed a stark infrastructure gap, with major providers offering chips as old as 2017, undermining national AI ambitions.
TurboQuant Ported to Apple MLX, Claims 75% Memory Reduction with Minimal Performance Loss
Developer Prince Canuma has successfully ported the TurboQuant quantization method to Apple's MLX framework, reporting a 75% reduction in memory usage with nearly no performance degradation for on-device AI models.
Google's TurboQuant AI Research Report Sparks Sell-Off in Micron, Samsung, and SK Hynix Memory Stocks
The publication of Google's TurboQuant research blog post triggered an immediate market reaction, with shares of major memory manufacturers dropping 2-4% as investors anticipate AI-driven efficiency gains reducing future memory demand.
SK Group Chairman Forecasts Memory Chip Shortage Until 2030, Warns of Sustained Price Increases
SK Group Chairman Chey Tae-won predicts the global memory chip supply crunch could persist until around 2030, with wafer supply lagging demand by over 20% and prices continuing to rise.
Memory Market Squeeze Threatens iPhone Price Hikes as AI Demands Strain Supply
A global RAM shortage and price increases could force Apple to raise iPhone prices by up to $250, according to industry analysis. The tech giant is reportedly unwilling to absorb the cost, passing it directly to consumers amid surging memory demands from AI applications.
AI's Insatiable Appetite: Nvidia's Rubin Chip Demands 288GB Memory, Sparking Global Shortage Crisis
Nvidia's upcoming Rubin AI chip requires 288GB of memory, 800% more than a top-end desktop PC, creating unprecedented demand. Massive purchases by OpenAI and Alphabet have depleted supply, driving DDR4 prices up 2,352% and causing a global memory chip shortage.
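The percentage claims unpack cleanly: "800% more" means 9x a baseline, and a 2352% increase means roughly a 24.5x price multiplier. Both figures come from the article; the implied desktop baseline is derived, not stated.

```python
# Unpacking the article's percentage claims.
rubin_mem_gb = 288

# "800% more" means 9x the baseline, so the implied desktop figure is:
desktop_mem_gb = rubin_mem_gb / (1 + 800 / 100)
print(f"implied desktop baseline: {desktop_mem_gb:.0f} GB")

# A 2352% increase means prices multiplied by:
ddr4_multiplier = 1 + 2352 / 100
print(f"DDR4 price multiplier: {ddr4_multiplier:.2f}x")
```

The 32 GB baseline matches a typical high-end desktop RAM configuration, which supports reading "800% more" as 9x rather than 8x.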
AI Gold Rush Strains Apple Hardware: High-Memory Macs Sell Out as Local AI Agents Go Mainstream
A surge in demand for local AI development has created severe inventory shortages for high-memory Apple hardware. Mac Studio orders with 128GB or 512GB RAM face 6+ week delays as consumers buy up every available unit to run powerful AI agents like OpenClaw.
Apple's Neural Engine Jailbroken: Researchers Unlock Full Training Capabilities on M-Series Chips
Security researchers have reverse-engineered Apple's Neural Engine, bypassing private APIs to enable full neural network training directly on ANE hardware. This breakthrough unlocks 15.8 TFLOPS of compute previously restricted to inference-only operations across all M-series devices.
Google Splits TPU Line: 8t for Training, 8i for Inference
At Cloud Next 2026, Google introduced two new AI chips — TPU 8t for training and TPU 8i for inference — splitting its custom silicon for the first time. OpenAI, Anthropic, and Meta are buying multi-gigawatt TPU capacity, signaling a crack in Nvidia's 81% market share.
Nvidia B200 Costs $6,400 to Produce, Gross Margin Hits 82%
Epoch AI estimates Nvidia's B200 GPU costs $5,700–$7,300 to produce, with HBM memory and advanced packaging accounting for two-thirds of the cost. At a $30k–$40k sale price, chip-level gross margins reach ~82%, though rack-scale margins may be lower.
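The ~82% margin figure follows directly from the cost and price ranges the article cites; the midpoint scenario below reproduces it, and the range bounds show the sensitivity.

```python
# Reproducing the gross-margin arithmetic from the cited estimate
# (cost and price ranges from the article).
cost_low, cost_high = 5_700, 7_300
price_low, price_high = 30_000, 40_000

def gross_margin(price, cost):
    """Chip-level gross margin as a fraction of sale price."""
    return (price - cost) / price

# Midpoint scenario:
mid = gross_margin((price_low + price_high) / 2, (cost_low + cost_high) / 2)
print(f"midpoint margin: {mid:.1%}")

# Worst and best cases within the stated ranges:
worst = gross_margin(price_low, cost_high)
best = gross_margin(price_high, cost_low)
print(f"range: {worst:.1%} to {best:.1%}")
```

Even the worst case inside the stated ranges stays above 75%, so the ~82% headline is robust to the uncertainty in both estimates.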
Google Cloud Next '26: 8th-gen TPUs, agent platform, $750M fund
At Cloud Next 2026, Google unveiled two 8th-gen TPU chips, a Gemini-based enterprise AI agent platform, and a $750 million partner fund to drive secure, large-scale automation and heavy AI workloads.
Mac Studio AI Hardware Shortage Signals Shift to Cloud Rentals
Developers report a global shortage of high-memory Apple Silicon Macs, with 128GB Mac Studios unavailable worldwide. This pushes practitioners toward renting cloud H100 GPUs at ~$3/hr, marking a shift from the recent local AI trend.
Samsung Projects Record $14.6B Q1 Profit on 300% DRAM Price Surge
Samsung Electronics expects a record Q1 operating profit of 20 trillion won (~$14.6B), nearly triple YoY, fueled by soaring AI-driven demand and a 300% price increase for DRAM chips.
Terafab's 1GW AI Compute Goal Requires Massive Fab Capacity
Analysis of Terafab's stated goals shows that achieving 1GW of AI compute would require approximately 190,000 wafer starts per month across logic and memory. This underscores the unprecedented scale of semiconductor manufacturing needed for future AI infrastructure.
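A hedged sketch of how such a wafer-count estimate is assembled follows. None of the per-unit parameters below appear in the article; all are illustrative assumptions chosen to land in its ballpark, and the article's per-month framing presumably also folds in yield, ramp, and replacement, which the sketch omits.

```python
# Back-of-envelope: wafers implied by a 1 GW accelerator fleet.
# Every parameter here is an illustrative assumption, not an article figure.
ACCEL_WATTS = 1_000            # assumed power per accelerator package
LOGIC_DIES_PER_WAFER = 28      # assumed good logic dies per 300mm wafer
HBM_DIES_PER_ACCEL = 96        # assumed: 8 HBM stacks x 12 DRAM dies
DRAM_DIES_PER_WAFER = 600      # assumed good DRAM dies per wafer

accelerators = 1e9 / ACCEL_WATTS                    # 1 GW fleet
logic_wafers = accelerators / LOGIC_DIES_PER_WAFER
dram_wafers = accelerators * HBM_DIES_PER_ACCEL / DRAM_DIES_PER_WAFER
total = logic_wafers + dram_wafers
print(f"~{total:,.0f} wafers, dominated by memory ({dram_wafers:,.0f})")
```

The takeaway matches the article's emphasis: under any reasonable assumptions, the memory wafers dwarf the logic wafers, so HBM supply is the binding constraint.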
Apple's Private Cloud Compute: Leak Suggests 4x M2 Ultra Cluster for On-Device AI Offload
A leak suggests Apple's Private Cloud Compute for AI may be built on clusters of four M2 Ultra chips, potentially offering high-performance, private server-side processing for iPhone AI tasks. This would mark Apple's strategic move into dedicated, privacy-focused AI infrastructure.
AI Data Center HBM Shortage Intensifies as Samsung, SK Hynix, and Micron Struggle with Supply
AI data centers are aggressively stockpiling high-bandwidth memory (HBM), creating a supply crunch. Only three manufacturers—Samsung, SK Hynix, and Micron—can produce this critical component for AI servers.
Apple's M5 Pro and Max: Fusion Architecture Redefines AI Computing on Silicon
Apple unveils M5 Pro and M5 Max chips with groundbreaking Fusion Architecture, merging two 3nm dies into a single SoC. The chips deliver up to 30% faster CPU performance and over 4x peak GPU compute for AI workloads compared to previous generations.
DeepSeek's Blackwell Training Exposes Critical Gaps in US Chip Export Controls
Chinese AI startup DeepSeek reportedly trained its latest model on Nvidia's restricted Blackwell chips, challenging US export controls. The development reveals significant loopholes in semiconductor restrictions amid escalating AI competition.
Nvidia's Record Earnings Mask China Dilemma: H200 Sales Frozen Amid AI Boom
Nvidia reported record quarterly revenue of $68.1 billion, up 73% year-over-year, driven by surging demand for data center processors. However, the company has generated zero revenue from its H200 chips in China and faces ongoing uncertainty about future sales in the critical market.
Meta's $100B AMD Gamble: The AI Chip War Enters Its Most Strategic Phase
Meta has secured a landmark deal to purchase up to $100 billion worth of AMD AI chips, receiving a massive stock warrant in return. This unprecedented agreement signals Meta's aggressive push to diversify its AI infrastructure beyond Nvidia while pursuing ambitious 'personal superintelligence' goals.
Apple's Neural Engine Jailbroken: Researchers Unlock On-Device AI Training Capabilities
A researcher has reverse-engineered Apple's private Neural Engine APIs to enable direct transformer training on M-series chips, bypassing CoreML restrictions. This breakthrough could enable battery-efficient local model training and fine-tuning without cloud dependency.
CPU Demand Flipping the AI Narrative as Datacenter Growth Shifts
A new analysis from SemiAnalysis indicates CPU demand is rising in AI datacenters, reversing a narrative of GPU-only dominance. This shift signals changing workload patterns and infrastructure priorities.
Microsoft’s VibeVoice: Open-Source Speech-to-Text with Diarization
Microsoft released VibeVoice, an MIT-licensed speech-to-text model with built-in speaker diarization. Simon Willison tested a 4-bit MLX conversion on an M5 MacBook, transcribing 1 hour of audio in ~9 minutes using ~60GB RAM.
Paper Details Full-Stack MFM Acceleration: Quant, Spec Decode, HW Co-Design
A research paper details a full-stack approach for accelerating multimodal foundation models, combining hierarchy-aware mixed-precision quantization, structural pruning, speculative decoding, model cascading, and a specialized hardware accelerator. Demonstrated on medical and code generation tasks.
Nvidia Invests $2B in Marvell for NVLink Fusion Interconnect
Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a new interconnect architecture for scaling AI clusters beyond current limits.
DeepSeek-V4 Ported to MLX for Apple Silicon Inference
A developer has ported DeepSeek-V4 to Apple's MLX framework, allowing the large language model to run on Apple Silicon Macs. Early results show functional inference with room for optimization.