gpu
30 articles about GPUs in AI news
Jensen Huang's 30-Year TSMC Battle: From 3D Graphics to AI GPUs
A 30-year-old comic shows Jensen Huang convincing TSMC to supply wafers for 3D graphics chips. Today, he's still fighting for wafer supply, but now for AI GPUs, alongside Broadcom, AMD, MediaTek, and Amazon.
SemiAnalysis: NVIDIA's Customer Data Drives Disaggregated Inference, LPU Surpasses GPU
SemiAnalysis states NVIDIA's direct customer feedback is leading the industry toward disaggregated inference architectures. In this model, specialized LPUs can outperform GPUs for specific pipeline tasks.
DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program
DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.
Cisco Reveals Scale-Across GPU Networking Needs 14x DCI Bandwidth
Cisco's chief architect detailed the massive bandwidth requirements for connecting AI clusters via 'scale-across' GPU networking, which needs 14x the capacity of traditional data center interconnects. This shift is creating a multi-billion dollar market for 800G coherent pluggables and deep-buffered switches.
LeWorldModel Solves JEPA Collapse with 15M Params, Trains on Single GPU
Researchers published LeWorldModel, solving the representation collapse problem in Yann LeCun's JEPA architecture. The 15M-parameter model trains on a single GPU and demonstrates intrinsic physics understanding.
Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'
Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) can together match an NVIDIA A100, calling cloud GPU training a 'scam' and advocating distributed training on consumer hardware instead.
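As a back-of-envelope check on that claim (the A100 figures below are NVIDIA's published peak specs, not numbers from the post, and the comparison ignores interconnect and memory-bandwidth differences):

```python
# Rough arithmetic behind the "7 MacBooks = 1 A100" claim; all figures are peak TFLOPS.
m4_gpu_tflops = 2.9              # per-MacBook figure cited in the post
a100_fp32_tflops = 19.5          # A100 peak FP32, non-tensor-core path
a100_bf16_tensor_tflops = 312.0  # A100 peak BF16/FP16 tensor-core throughput (dense)

combined = 7 * m4_gpu_tflops
print(combined)                            # ~20.3 TFLOPS, roughly the A100's FP32 figure
print(a100_bf16_tensor_tflops / combined)  # ~15x gap once tensor cores are in play
```

The parity only holds against the A100's non-tensor FP32 path; tensor-core mixed precision, the path typically used for training, is roughly 15x higher.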
Claude MCP GPU Debugging: AI Agent Identifies PyTorch Bottleneck in Kernel
A developer used an AI agent powered by Claude Code and the Model Context Protocol (MCP) to diagnose a severe GPU performance bottleneck. The agent analyzed system kernel traces, pinpointing excessive CPU context switches as the culprit, demonstrating a practical application of agentic AI for complex technical debugging.
Hugging Face Launches 'Kernels' Hub for GPU Code, Like GitHub for AI Hardware
Hugging Face has launched 'Kernels,' a new section on its Hub for sharing and discovering optimized GPU kernels. This treats performance-critical code as a first-class artifact, similar to AI models.
AI Compute Crisis: GPU Prices Up 48%, Anthropic API at 98.95% Uptime
The AI industry faces a severe compute capacity crisis, with GPU prices up 48%, Anthropic API uptime falling to 98.95%, and OpenAI shutting down Sora to reallocate resources. Demand for agentic AI is outstripping supply, forcing rationing and product cancellations.
A Practical Guide to Fine-Tuning an LLM on RunPod H100 GPUs with QLoRA
A technical tutorial walks through parameter-efficient fine-tuning of an LLM with QLoRA on RunPod's cloud H100 GPUs, focusing on the practical setup and execution steps for engineers.
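A minimal sketch of the kind of QLoRA setup such a tutorial describes, using the standard transformers/peft/bitsandbytes stack; the base model and hyperparameters below are illustrative placeholders, not the tutorial's exact configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the core of the QLoRA recipe).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Placeholder base model; swap in whatever checkpoint the tutorial targets.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapters on top of the quantized weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```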
Intel, SambaNova Blueprint Pairs GPUs for AI Prefill, RDUs for Decoding
Intel and SambaNova Systems have outlined a new inference architecture for agentic AI workloads. It splits inference between GPUs for the compute-heavy 'prefill' phase and SambaNova's Reconfigurable Dataflow Units (RDUs) for high-throughput token generation during decode.
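The blueprint's actual interfaces aren't described in the summary; a toy sketch of the disaggregation pattern it refers to, with purely illustrative names, might look like:

```python
from dataclasses import dataclass

@dataclass
class KVHandle:
    """Stand-in for the KV cache produced by prefill and handed to the decode pool."""
    request_id: str
    num_prompt_tokens: int

def prefill_worker(request_id: str, prompt: str) -> KVHandle:
    # GPU pool: run the full prompt through the model once and populate the KV cache.
    return KVHandle(request_id=request_id, num_prompt_tokens=len(prompt.split()))

def decode_worker(handle: KVHandle, max_new_tokens: int) -> list[str]:
    # Throughput-optimized pool (RDUs in the blueprint): emit one token per step,
    # reusing the transferred KV cache instead of recomputing the prompt.
    return [f"<tok{i}>" for i in range(max_new_tokens)]

handle = prefill_worker("req-1", "Summarize the quarterly report for the board.")
tokens = decode_worker(handle, max_new_tokens=8)
```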
Google's 5M H100-Equivalent GPU Fleet Powers Anthropic's AI Expansion
An analyst estimates Google's compute capacity at ~5 million Nvidia H100-equivalent GPUs, providing the infrastructure backbone for Anthropic's model deployment and growth. This highlights the strategic shift where foundational AI labs rely on hyperscaler scale for distribution.
Cursor AI Claims 1.84x Faster MoE Inference on NVIDIA Blackwell GPUs
Cursor AI announced a rebuilt inference engine for Mixture-of-Experts models on NVIDIA's new Blackwell GPUs, resulting in a claimed 1.84x speedup and improved output accuracy.
X Post Reveals Audible Quality Differences in GPU vs. NPU AI Inference
A developer demonstrated audible quality differences in AI text-to-speech output when run on GPU, CPU, and NPU hardware, highlighting a key efficiency vs. fidelity trade-off for on-device AI.
Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains
MLCommons released MLPerf Inference v6.0 results, introducing multimodal and video model tests. Nvidia set records using 288-GPU Blackwell Ultra systems and achieved a 2.7x performance jump on DeepSeek-R1 via software optimizations alone.
Fine-Tuning an LLM on a 4GB GPU: A Practical Guide for Resource-Constrained Engineers
A Medium article provides a practical, constraint-driven guide for fine-tuning LLMs on a 4GB GPU, covering model selection, quantization, and parameter-efficient methods. This makes bespoke AI model development more accessible without high-end cloud infrastructure.
Mistral Secures $830M Debt to Build Paris Data Center with 14,000 Nvidia GB300 GPUs
French AI startup Mistral has raised $830 million in debt financing to build and operate a sovereign AI data center near Paris, set to host nearly 14,000 Nvidia GB300 GPUs. The move signals a strategic European push for bespoke AI infrastructure, distinct from the gigawatt-scale builds of US hyperscalers.
Open-Source AI System Running on a $500 GPU Reportedly Outperforms Claude Sonnet
An open-source AI system running on consumer-grade $500 GPU hardware claims to outperform Anthropic's Claude Sonnet model while costing only $0.004 per task, eliminating cloud dependencies and API costs.
Google's TurboQuant Compresses LLM KV Cache 6x with Zero Accuracy Loss, Cutting GPU Memory by 80%
Google researchers introduced TurboQuant, a method that compresses LLM KV cache from 32-bit to 3-bit precision without accuracy degradation. This reduces GPU memory consumption by over 80% and speeds up inference 8x on H100 GPUs.
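TurboQuant's actual algorithm isn't reproduced here, but the memory arithmetic of low-bit KV-cache quantization can be illustrated with a generic per-channel scheme; everything below is a toy sketch, not Google's method:

```python
import torch

def quantize_per_channel(x: torch.Tensor, bits: int = 3):
    """Generic asymmetric per-channel quantization (illustration only)."""
    qmax = 2 ** bits - 1
    lo = x.amin(dim=-1, keepdim=True)
    hi = x.amax(dim=-1, keepdim=True)
    scale = (hi - lo).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round((x - lo) / scale), 0, qmax).to(torch.uint8)
    return q, scale, lo  # a real kernel would bit-pack q; uint8 is used here for clarity

def dequantize(q, scale, lo):
    return q.float() * scale + lo

kv = torch.randn(8, 1024, 64)  # toy (heads, seq_len, head_dim) slice of a KV cache
q, scale, lo = quantize_per_channel(kv)
err = (dequantize(q, scale, lo) - kv).abs().mean()
# Raw payload shrinks to 3/32 of the FP32 size; per-channel scale/offset metadata
# and alignment overhead are what pull reported end-to-end savings back to ~80%.
print(f"mean abs reconstruction error: {err:.4f}")
```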
Sparton: A New GPU Kernel Dramatically Speeds Up Learned Sparse Retrieval
Researchers propose Sparton, a fused Triton GPU kernel for Learned Sparse Retrieval models like SPLADE. It avoids materializing a massive vocabulary-sized matrix, achieving up to 4.8x speedups and 26x larger batch sizes. This is a core infrastructure breakthrough for efficient AI-powered search.
ByteDance's Helios: A 14B Parameter Video Generation Model Running at 19.5 FPS on a Single H100 GPU
ByteDance has introduced Helios, a 14-billion parameter video generation model that reportedly runs at 19.5 frames per second on a single NVIDIA H100 GPU. This represents a significant step in making high-quality, real-time video synthesis more computationally accessible.
Flash-KMeans: An IO-Aware GPU Implementation That Rethinks K-Means Memory Access
Flash-KMeans is a new, exact k-means clustering implementation designed for GPUs. It focuses on optimizing memory access patterns to overcome I/O bottlenecks that limit performance.
Flash-KMeans Achieves 200x Speedup Over FAISS by Targeting GPU Memory Bottlenecks
Flash-KMeans is an IO-aware GPU implementation of exact k-means that runs 30x faster than cuML and 200x faster than FAISS. At million-scale datasets, it completes iterations in milliseconds, enabling dynamic re-indexing and real-time quantization.
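Flash-KMeans' fused FlashAssign kernel isn't shown in the article summary, but the IO-aware idea it relies on, keeping the working set small and never materializing the full n-by-k distance matrix, can be sketched in plain PyTorch:

```python
import torch

def assign_chunked(points: torch.Tensor, centroids: torch.Tensor, chunk: int = 65536):
    """Chunked nearest-centroid assignment; avoids the full (n, k) distance matrix."""
    # argmin_c ||p - c||^2 = argmin_c (||c||^2 - 2 p.c), since ||p||^2 is constant per point.
    c_norm = (centroids ** 2).sum(dim=1)                      # (k,)
    labels = torch.empty(points.shape[0], dtype=torch.long, device=points.device)
    for start in range(0, points.shape[0], chunk):
        block = points[start:start + chunk]                   # small working set per step
        scores = c_norm - 2.0 * (block @ centroids.T)         # (chunk, k)
        labels[start:start + chunk] = scores.argmin(dim=1)
    return labels

device = "cuda" if torch.cuda.is_available() else "cpu"
pts = torch.randn(1_000_000, 64, device=device)
cen = pts[torch.randperm(pts.shape[0], device=device)[:256]]  # random initial centroids
labels = assign_chunked(pts, cen)
```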
Yotta Data Services Seeks $4B Valuation in Pre-IPO Round, Expands India's Largest Nvidia GPU Cluster
Indian data center operator Yotta is raising $500-600M at a ~$4B valuation ahead of an IPO. The firm is scaling its Nvidia H100 and Blackwell (B200/B300) GPU fleet to position itself as a domestic AI infrastructure alternative.
How a GPU Memory Leak Nearly Cost an AI Team a Major Client During a Live Demo
A detailed post-mortem of a critical AI inference failure during a client demo reveals how silent GPU memory leaks, inadequate health checks, and missing circuit breakers can bring down a production pipeline. The author shares the architectural fixes implemented to prevent recurrence.
98× Faster LLM Routing Without a Dedicated GPU: Technical Breakthrough for vLLM Semantic Router
New research presents a three-stage optimization pipeline for the vLLM Semantic Router, achieving 98× speedup and enabling long-context classification on shared GPUs. This solves critical memory and latency bottlenecks for system-level LLM routing.
Flash-KMeans Revolutionizes GPU Clustering with 200x Speedup Over FAISS
New Flash-KMeans algorithm achieves dramatic speed improvements in GPU-based clustering through innovative IO-aware FlashAssign kernels that eliminate memory bottlenecks and atomic contention, potentially transforming large-scale data analysis.
Karpathy's 'Autoresearch' Tool Democratizes AI Research: One GPU, One Night, 100 Experiments
Andrej Karpathy has open-sourced 'autoresearch,' a tool that enables AI to autonomously improve its own training code. By writing simple prompts in Markdown, researchers can have AI agents run hundreds of experiments overnight on a single GPU, dramatically accelerating the research process.
The Great GPU Scramble: How Hardware Shortages Are Defining the AI Arms Race
Oracle founder Larry Ellison identifies GPU acquisition as the primary bottleneck in AI development, with companies racing to secure limited hardware for breakthroughs in medicine, video generation, and autonomous systems.
Lilly's AI Factory: How a 1,000+ GPU SuperPOD Is Rewriting Pharmaceutical Discovery
Eli Lilly has launched 'LillyPod,' the world's most powerful privately-owned AI factory for drug discovery. Powered by NVIDIA's new DGX B300 systems with over 1,000 Blackwell Ultra GPUs, it promises to accelerate medical breakthroughs at unprecedented scale.