GPU Architecture
30 articles about GPU architecture in AI news
NVIDIA Feynman GPU Power Semi Content Hits $191K, 17× Blackwell
NVIDIA Feynman GPUs require $191K in power semiconductors per system, 17× that of Blackwell, driven by the shift to an 800V DC power architecture.
SemiAnalysis: NVIDIA's Customer Data Drives Disaggregated Inference, LPU Surpasses GPU
SemiAnalysis states that NVIDIA's direct customer feedback is leading the industry toward disaggregated inference architectures, a model in which specialized LPUs can outperform GPUs for specific pipeline tasks.
LeWorldModel Solves JEPA Collapse with 15M Params, Trains on Single GPU
Researchers published LeWorldModel, solving the representation collapse problem in Yann LeCun's JEPA architecture. The 15M-parameter model trains on a single GPU and demonstrates intrinsic physics understanding.
Intel, SambaNova Blueprint Pairs GPUs for AI Prefill, RDUs for Decoding
Intel and SambaNova Systems have outlined a new inference architecture for agentic AI workloads. It assigns the compute-bound prefill phase to GPUs and high-throughput token generation (decode) to SambaNova's Reconfigurable Dataflow Units (RDUs).
Apple's M5 Pro and Max: Fusion Architecture Redefines AI Computing on Silicon
Apple unveils M5 Pro and M5 Max chips built on a Fusion Architecture that merges two 3nm dies into a single SoC. The chips deliver up to 30% faster CPU performance and over 4x the peak GPU compute for AI workloads compared to the previous generation.
AMD Launches PCIe GPU for AI Workloads, Targets Existing Server Install Base
AMD launched a PCIe-based GPU for AI workloads, aimed at the existing server install base. The card provides an immediate performance boost without new data-center buildouts.
NVIDIA, DOE Build 100K-GPU Supercomputer for Science
DOE and NVIDIA announced Solstice, a 100K-GPU Vera Rubin supercomputer delivering 5,000 exaflops, and Equinox with 10K Blackwell GPUs.
Anthropic's 220K GPU Cluster: $5B Compute Bet Revealed
Anthropic reportedly operates 220K NVIDIA GPUs drawing 310MW of power, implying a compute cluster worth more than $5B, roughly 3x the size of OpenAI's largest.
OpenAI's MRC Protocol Sprays Packets Across 100+ Paths to Fix GPU Stragglers
OpenAI open-sourced MRC, a networking protocol that sprays packets across hundreds of paths to reduce GPU idle time caused by congestion and failures, and contributed it to the Open Compute Project (OCP).
Nscale to Deploy 66K+ Rubin GPUs for Microsoft in Portugal
Nscale will deploy 66,000+ NVIDIA Rubin GPUs for Microsoft at Portugal's Start Campus. The deal is a first for Rubin and signals Microsoft's geographic diversification.
Moore Threads Q1 Revenue Up, Building 100K-GPU AI Cluster
Moore Threads reports Q1 2026 revenue growth and confirms progress building a 100,000-GPU cluster for AI training, signaling growing domestic AI infrastructure in China despite US export controls.
Jensen Huang's 30-Year TSMC Battle: From 3D Graphics to AI GPUs
A 30-year-old comic shows Jensen Huang convincing TSMC to supply wafers for 3D graphics chips. Today, he's still fighting for wafer supply, but now for AI GPUs, alongside Broadcom, AMD, MediaTek, and Amazon.
DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program
DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.
Cisco Reveals Scale-Across GPU Networking Needs 14x DCI Bandwidth
Cisco's chief architect detailed the massive bandwidth requirements for connecting AI clusters via 'scale-across' GPU networking, which needs 14x the capacity of traditional data center interconnects. This shift is creating a multi-billion dollar market for 800G coherent pluggables and deep-buffered switches.
Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'
Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each, about 20 TFLOPS combined, roughly the A100's 19.5 FP32 TFLOPS) match an NVIDIA A100's performance, calling cloud GPU training a 'scam' and advocating for distributed, consumer-hardware approaches.
Hugging Face Launches 'Kernels' Hub for GPU Code, Like GitHub for AI Hardware
Hugging Face has launched 'Kernels,' a new section on its Hub for sharing and discovering optimized GPU kernels. This treats performance-critical code as a first-class artifact, similar to AI models.
A Practical Guide to Fine-Tuning an LLM on RunPod H100 GPUs with QLoRA
The source is a technical tutorial on using QLoRA for parameter-efficient fine-tuning of an LLM, leveraging RunPod's cloud H100 GPUs. It focuses on the practical setup and execution steps for engineers.
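The summary does not reproduce the tutorial's actual commands, but the core QLoRA setup it walks through typically looks like the following sketch using transformers, peft, and bitsandbytes (the base model, rank, and target modules below are illustrative assumptions, not the tutorial's exact configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative base model; the tutorial's actual choice may differ.
model_id = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Small low-rank adapters are the only trainable parameters.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, training proceeds with transformers.Trainer or trl's SFTTrainer.
```

Keeping the base weights in 4-bit NF4 while computing in bfloat16 is what makes fine-tuning a 7B-class model fit comfortably within a single H100's memory.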
Cursor AI Claims 1.84x Faster MoE Inference on NVIDIA Blackwell GPUs
Cursor AI announced a rebuilt inference engine for Mixture-of-Experts models on NVIDIA's new Blackwell GPUs, resulting in a claimed 1.84x speedup and improved output accuracy.
ASI-Evolve: This AI Designs Better AI Than Humans Can — 105 New Architectures, Zero Human Guidance
Researchers built an AI that runs the entire research cycle on its own: reading papers, designing experiments, running them, and learning from the results. It discovered 105 architectures that beat human-designed models and invented new learning algorithms. The system has been open-sourced.
X Post Reveals Audible Quality Differences in GPU vs. NPU AI Inference
A developer demonstrated audible quality differences in AI text-to-speech output when run on GPU, CPU, and NPU hardware, highlighting a key efficiency vs. fidelity trade-off for on-device AI.
Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains
MLCommons released MLPerf Inference v6.0 results, introducing multimodal and video model tests. Nvidia set records using 288-GPU Blackwell Ultra systems and achieved a 2.7x performance jump on DeepSeek-R1 via software optimizations alone.
Open-source AI system running on $500 GPU reportedly outperforms Claude Sonnet
An open-source AI system running on a consumer-grade $500 GPU reportedly outperforms Anthropic's Claude Sonnet while costing only $0.004 per task, eliminating cloud dependencies and API costs.
Google's TurboQuant Compresses LLM KV Cache 6x with Zero Accuracy Loss, Cutting GPU Memory by 80%
Google researchers introduced TurboQuant, a method that compresses the LLM KV cache from 32-bit to 3-bit precision without accuracy degradation, reducing GPU memory consumption by over 80% and speeding up inference 8x on H100 GPUs.
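The article does not describe TurboQuant's algorithm in detail; as a rough, generic illustration of what low-bit KV-cache quantization involves (not Google's method, with the per-channel scheme and tensor shapes below as assumptions), a minimal PyTorch sketch might look like:

```python
import torch

def quantize_kv(x: torch.Tensor, bits: int = 3):
    """Uniform per-channel quantization of a KV-cache tensor.

    Illustrative sketch only, not TurboQuant. x has shape
    (batch, heads, seq_len, head_dim); stats are taken over the sequence axis.
    """
    qmax = 2 ** bits - 1
    x_min = x.amin(dim=-2, keepdim=True)
    x_max = x.amax(dim=-2, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    # Real implementations pack 3-bit codes; uint8 storage here keeps it simple.
    codes = torch.round((x - x_min) / scale).clamp(0, qmax).to(torch.uint8)
    return codes, scale, x_min

def dequantize_kv(codes, scale, zero):
    return codes.to(scale.dtype) * scale + zero

kv = torch.randn(1, 8, 1024, 64)
codes, scale, zero = quantize_kv(kv, bits=3)
err = (dequantize_kv(codes, scale, zero) - kv).abs().mean()
print("mean abs reconstruction error:", err.item())
```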
Sparton: A New GPU Kernel Dramatically Speeds Up Learned Sparse Retrieval
Researchers propose Sparton, a fused Triton GPU kernel for Learned Sparse Retrieval models like Splade. It avoids materializing a massive vocabulary-sized matrix, achieving up to 4.8x speedups and 26x larger batch sizes. This is a core infrastructure breakthrough for efficient AI-powered search.
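As context for why avoiding the vocabulary-sized matrix matters (a conceptual PyTorch illustration, not Sparton's fused Triton kernel): SPLADE-style query and document vectors have only a few hundred nonzero terms out of a roughly 30K-entry vocabulary, so scoring can touch just those entries instead of allocating a dense batch × vocabulary matrix.

```python
import torch

def sparse_dot(q_idx, q_val, d_idx, d_val):
    """Dot product of two sparse vectors given as (indices, values) tensors."""
    q_map = {int(i): float(v) for i, v in zip(q_idx, q_val)}
    # Only the document's nonzero terms are visited: O(nnz), not O(vocab_size).
    return sum(float(v) * q_map.get(int(i), 0.0) for i, v in zip(d_idx, d_val))

q_idx, q_val = torch.tensor([12, 857, 20031]), torch.tensor([1.3, 0.4, 2.1])
d_idx, d_val = torch.tensor([857, 4411, 20031]), torch.tensor([0.9, 0.2, 1.5])
print(sparse_dot(q_idx, q_val, d_idx, d_val))  # 0.4*0.9 + 2.1*1.5 = 3.51
```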
ByteDance's Helios: A 14B Parameter Video Generation Model Running at 19.5 FPS on a Single H100 GPU
ByteDance has introduced Helios, a 14-billion parameter video generation model that reportedly runs at 19.5 frames per second on a single NVIDIA H100 GPU. This represents a significant step in making high-quality, real-time video synthesis more computationally accessible.
Flash-KMeans Achieves 200x Speedup Over FAISS by Targeting GPU Memory Bottlenecks
Flash-KMeans is an IO-aware GPU implementation of exact k-means that runs 30x faster than cuML and 200x faster than FAISS. On million-scale datasets, it completes iterations in milliseconds, enabling dynamic re-indexing and real-time quantization.
A Deep Dive into LoRA: The Mathematics, Architecture, and Deployment of Low-Rank Adaptation
A technical guide explores the mathematical foundations, memory architecture, and structural consequences of Low-Rank Adaptation (LoRA) for fine-tuning LLMs. It provides critical insights for practitioners implementing efficient model customization.
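For reference, the core formulation such a guide covers (standard LoRA, per the original paper) freezes the pretrained weight matrix $W_0 \in \mathbb{R}^{d \times k}$ and learns only a low-rank update:

$$
h = W_0 x + \Delta W\, x = W_0 x + \frac{\alpha}{r}\, B A\, x,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),
$$

so the number of trainable parameters drops from $dk$ to $r(d + k)$. $B$ is initialized to zero so training starts from the unmodified base model, and the adapters can be merged back into $W_0$ at deployment time.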
How a GPU Memory Leak Nearly Cost an AI Team a Major Client During a Live Demo
A detailed post-mortem of a critical AI inference failure during a client demo reveals how silent GPU memory leaks, inadequate health checks, and missing circuit breakers can bring down a production pipeline. The author shares the architectural fixes implemented to prevent recurrence.
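The post-mortem's code is not included here; as a minimal sketch of the kind of GPU-memory health check it argues for (the 90% threshold and function names are assumptions, not the author's actual fix), something like this can back a readiness probe or circuit breaker:

```python
import torch

# Fraction of total GPU memory above which this worker reports itself unhealthy.
# The 0.9 threshold is an illustrative assumption, not from the original post.
MEMORY_ALARM_FRACTION = 0.9

def gpu_memory_healthy(device: int = 0) -> bool:
    """Return False when allocated GPU memory crosses the alarm threshold,
    so a load balancer or circuit breaker stops routing requests to this worker."""
    if not torch.cuda.is_available():
        return True  # nothing to check on CPU-only hosts
    total = torch.cuda.get_device_properties(device).total_memory
    allocated = torch.cuda.memory_allocated(device)
    return allocated / total < MEMORY_ALARM_FRACTION

if __name__ == "__main__":
    print("healthy" if gpu_memory_healthy() else "unhealthy")
```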
98× Faster LLM Routing Without a Dedicated GPU: Technical Breakthrough for vLLM Semantic Router
New research presents a three-stage optimization pipeline for the vLLM Semantic Router, achieving a 98× speedup and enabling long-context classification on shared GPUs. This solves critical memory and latency bottlenecks for system-level LLM routing.
RF-DETR: A Real-Time Transformer Architecture That Surpasses 60 mAP on COCO
RF-DETR is a new lightweight detection transformer using neural architecture search and internet-scale pre-training. It's the first real-time detector to exceed 60 mAP on COCO, addressing generalization issues in current models.