gentic.news — AI News Intelligence Platform

GPUs

30 articles about GPUs in AI news

Jensen Huang's 30-Year TSMC Battle: From 3D Graphics to AI GPUs

A 30-year-old comic shows Jensen Huang convincing TSMC to supply wafers for 3D graphics chips. Today, he's still fighting for wafer supply, but now for AI GPUs, alongside Broadcom, AMD, MediaTek, and Amazon.

75% relevant

A Practical Guide to Fine-Tuning an LLM on RunPod H100 GPUs with QLoRA

This technical tutorial covers QLoRA for parameter-efficient fine-tuning of an LLM on RunPod's cloud H100 GPUs, focusing on the practical setup and execution steps for engineers.

76% relevant
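
The tutorial walks through a standard QLoRA setup. As a minimal sketch of what that typically looks like with the Hugging Face transformers/peft/bitsandbytes stack (the base model name and hyperparameters below are illustrative assumptions, not taken from the article):

```python
# Minimal QLoRA fine-tuning sketch; model choice and hyperparameters are
# illustrative, not from the tutorial. Requires transformers, peft,
# bitsandbytes, and accelerate.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base model

# Load the base model in 4-bit NF4, the quantization scheme QLoRA builds on.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; the 4-bit base weights stay frozen.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```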

Intel, SambaNova Blueprint Pairs GPUs for AI Prefill, RDUs for Decoding

Intel and SambaNova Systems have outlined a new inference architecture for agentic AI workloads. It splits tasks between GPUs for 'prefill' and SambaNova's Reconfigurable Dataflow Units (RDUs) for high-throughput token generation.

85% relevant
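
For readers unfamiliar with the pattern, here is a toy sketch of disaggregated inference: a compute-bound prefill stage builds the KV cache, which is then handed off to a bandwidth-bound decode stage. This is illustrative code for the general architecture only, not Intel's or SambaNova's software.

```python
# Toy sketch of the disaggregated prefill/decode pattern. The 'model' is a dummy.

def prefill(prompt_tokens):
    """Whole-prompt forward pass: compute-bound, the stage mapped to GPUs.
    Returns a KV cache (a list standing in for per-layer attention tensors)."""
    return [("kv", t) for t in prompt_tokens]

def decode_step(kv_cache):
    """One generation step: memory-bandwidth-bound, the stage mapped to RDUs."""
    next_token = len(kv_cache)           # dummy model: next token = cache length
    kv_cache.append(("kv", next_token))  # decode grows the cache each step
    return next_token, kv_cache

def generate(prompt_tokens, max_new_tokens=4):
    kv_cache = prefill(prompt_tokens)    # stage 1: prefill
    # In a real deployment the KV cache is shipped across the interconnect here.
    out = []
    for _ in range(max_new_tokens):      # stage 2: decode
        tok, kv_cache = decode_step(kv_cache)
        out.append(tok)
    return out

print(generate([101, 102, 103]))         # -> [3, 4, 5, 6]
```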

Cursor AI Claims 1.84x Faster MoE Inference on NVIDIA Blackwell GPUs

Cursor AI announced a rebuilt inference engine for Mixture-of-Experts models on NVIDIA's new Blackwell GPUs, resulting in a claimed 1.84x speedup and improved output accuracy.

85% relevant

Mistral Secures $830M Debt to Build Paris Data Center with 14,000 Nvidia GB300 GPUs

French AI startup Mistral has raised $830 million in debt financing to build and operate a sovereign AI data center near Paris, set to host nearly 14,000 Nvidia GB300 GPUs. The move signals a strategic European push for bespoke AI infrastructure, distinct from the gigawatt-scale builds of US hyperscalers.

90% relevant

DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program

DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.

86% relevant

JPMorgan: Agentic AI Could Flip Server Ratio to CPU-Heavy

JPMorgan reports that agentic AI workloads could increase CPU demand, potentially flipping the GPU-to-CPU ratio from 7-8 GPUs per CPU to CPU-heavy deployments, with a $100B TAM for AI CPU infrastructure.

92% relevant

Meta Deploys Millions of Amazon Graviton CPUs for AI Agents

Meta will deploy tens of millions of AWS Graviton5 CPU cores for AI agent workloads, signaling that agentic inference favors CPUs over GPUs. The deal deepens Meta's $200B+ infrastructure push amid layoffs and cloud rivalry.

84% relevant

SemiAnalysis: NVIDIA's Customer Data Drives Disaggregated Inference, LPU Surpasses GPU

SemiAnalysis reports that NVIDIA's direct customer feedback is steering the industry toward disaggregated inference architectures, in which specialized LPUs can outperform GPUs for specific pipeline tasks.

85% relevant

Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership

Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.

84% relevant

Mac Studio AI Hardware Shortage Signals Shift to Cloud Rentals

Developers report a global shortage of high-memory Apple Silicon Macs, with 128GB Mac Studios unavailable worldwide. This pushes practitioners toward renting cloud H100 GPUs at ~$3/hr, marking a shift from the recent local AI trend.

85% relevant

NVIDIA's cuQuantum-DGX OS Aims to Manage Hybrid Quantum-Classical Workflows

NVIDIA announced its AI software stack is evolving into an operating system for quantum computing, aiming to manage the complex workflow between quantum processors and classical GPUs. This targets a major integration bottleneck as quantum hardware scales.

85% relevant

Hugging Face OCRs 27,000 arXiv Papers to Markdown with Open 5B Model

Hugging Face CEO Clement Delangue announced the OCR conversion of 27,000 arXiv papers to Markdown using an open 5B-parameter model and 16 parallel jobs on L40S GPUs. This demonstrates a scalable, open-source pipeline for large-scale academic document processing.

85% relevant

Microsoft's BitNet Enables 100B-Parameter LLMs on CPU, Cuts Energy 82%

Microsoft Research's BitNet project demonstrates 1-bit LLMs with 100B parameters that run efficiently on CPUs, using 82% less energy while maintaining performance, challenging the need for GPUs in local deployment.

95% relevant
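
The core trick behind 1-bit (more precisely, 1.58-bit) LLMs is ternary weights: each weight is one of {-1, 0, +1}, so the weight side of a matrix multiply reduces to additions and subtractions. A rough absmean-style sketch of the idea, not Microsoft's actual BitNet kernels:

```python
# Sketch of the idea behind 1-bit/1.58-bit LLMs: quantize weights to
# {-1, 0, +1} so the weight side of a matmul needs only adds and subtracts.
import numpy as np

def ternarize(w):
    """Round weights to {-1, 0, +1} after scaling by the mean absolute value."""
    scale = np.mean(np.abs(w)) + 1e-8
    return np.clip(np.round(w / scale), -1, 1), scale

def bitlinear(x, w):
    """Linear layer with ternary weights; multiplication-free on the weight side."""
    wq, scale = ternarize(w)
    return (x @ wq) * scale  # adds/subs plus one rescale: the source of the energy win

rng = np.random.default_rng(0)
x, w = rng.normal(size=(2, 8)), rng.normal(size=(8, 4))
print(np.abs(bitlinear(x, w) - x @ w).mean())  # mean error of the ternary approximation
```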

Google's 5M H100-Equivalent GPU Fleet Powers Anthropic's AI Expansion

An analyst estimates Google's compute capacity at ~5 million Nvidia H100-equivalent GPUs, providing the infrastructure backbone for Anthropic's model deployment and growth. This highlights the strategic shift where foundational AI labs rely on hyperscaler scale for distribution.

85% relevant

Nvidia DLSS 4.5 Launches with Enhanced AI Frame Generation and Ray Reconstruction

Nvidia has released DLSS 4.5, a major update to its AI-powered upscaling technology featuring new frame generation modes and improved ray reconstruction. The update is available now for GeForce RTX 40 and 50 Series GPUs.

85% relevant

Google's TurboQuant Compresses LLM KV Cache 6x with Zero Accuracy Loss, Cutting GPU Memory by 80%

Google researchers introduced TurboQuant, a method that compresses LLM KV cache from 32-bit to 3-bit precision without accuracy degradation. This reduces GPU memory consumption by over 80% and speeds up inference 8x on H100 GPUs.

97% relevant

Google Research's TurboQuant Achieves 6x LLM Compression Without Accuracy Loss, 8x Speedup on H100

Google Research introduced TurboQuant, a novel compression algorithm that shrinks LLM memory footprint by 6x without retraining or accuracy drop. Its 4-bit version delivers 8x faster processing on H100 GPUs while matching full-precision quality.

95% relevant

Google's TurboQuant Cuts LLM KV Cache Memory by 6x, Enables 3-Bit Storage Without Accuracy Loss

Google released TurboQuant, a novel two-stage quantization algorithm that compresses the KV cache in long-context LLMs. It reduces memory by 6x, achieves 3-bit storage with no accuracy drop, and speeds up attention scoring by up to 8x on H100 GPUs.

95% relevant
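
None of the coverage spells out TurboQuant's two-stage algorithm, but the basic mechanics of KV-cache quantization are easy to illustrate. The sketch below is a generic per-channel low-bit round trip, not TurboQuant itself; it shows where the memory savings come from and why metadata overhead eats into the headline compression ratio.

```python
# Generic per-channel low-bit quantize/dequantize round trip for a KV cache.
# NOT the TurboQuant algorithm (its two-stage design isn't detailed in the
# coverage); this only illustrates the mechanics of low-bit cache storage.
import numpy as np

def quantize(kv, bits=3):
    """Quantize per channel to `bits` bits: integer codes plus fp32 scale/offset."""
    lo = kv.min(axis=0, keepdims=True)
    scale = (kv.max(axis=0, keepdims=True) - lo) / (2**bits - 1) + 1e-12
    codes = np.round((kv - lo) / scale).astype(np.uint8)  # stored as uint8 here;
    return codes, scale, lo                               # a real kernel packs 3-bit codes

def dequantize(codes, scale, lo):
    return codes * scale + lo

kv = np.random.default_rng(0).normal(size=(4096, 128)).astype(np.float32)
codes, scale, lo = quantize(kv)
err = np.abs(dequantize(codes, scale, lo) - kv).mean()
# 32-bit floats -> 3-bit codes is ~10.7x smaller in principle; reported figures
# like 6x presumably also account for scale/offset metadata and mixed precision.
print(err)
```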

Flash-KMeans: An IO-Aware GPU Implementation That Rethinks K-Means Memory Access

Flash-KMeans is a new, exact k-means clustering implementation designed for GPUs. It focuses on optimizing memory access patterns to overcome I/O bottlenecks that limit performance.

85% relevant
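
The "IO-aware" framing echoes FlashAttention: restructure the computation into tiles so data stays resident in fast memory instead of materializing the full N x K distance matrix at once. A minimal sketch of a tiled, exact assignment step (illustrative only, not the Flash-KMeans kernels):

```python
# Tiled, exact k-means assignment: process points in blocks so each block stays
# in fast memory while the centroids stream through.
import numpy as np

def assign_tiled(points, centroids, tile=1024):
    """One exact k-means assignment step, computed tile by tile."""
    labels = np.empty(len(points), dtype=np.int64)
    c_sq = (centroids**2).sum(axis=1)            # precompute ||c||^2 once
    for s in range(0, len(points), tile):
        x = points[s:s + tile]
        # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2; ||x||^2 drops out of the argmin
        d = -2.0 * (x @ centroids.T) + c_sq
        labels[s:s + tile] = d.argmin(axis=1)
    return labels

rng = np.random.default_rng(0)
pts, cts = rng.normal(size=(10_000, 64)), rng.normal(size=(16, 64))
print(np.bincount(assign_tiled(pts, cts)))       # cluster sizes after one step
```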

FASTER Method Compresses Multi-Step Denoising to Single Step, Enabling 10x Faster Action Sampling for Real-Time VLAs

The FASTER method compresses multi-step denoising into a single step, achieving 10x faster action sampling for real-time Vision-Language-Action models. This enables immediate reaction in dynamic tasks such as table tennis, on consumer GPUs like the RTX 4060.

85% relevant

98× Faster LLM Routing Without a Dedicated GPU: Technical Breakthrough for vLLM Semantic Router

New research presents a three-stage optimization pipeline for the vLLM Semantic Router, achieving 98× speedup and enabling long-context classification on shared GPUs. This solves critical memory and latency bottlenecks for system-level LLM routing.

80% relevant

Black Forest Labs Unleashes FLUX.2 klein: Sub-Second AI Image Generation Hits Hugging Face

Black Forest Labs has released FLUX.2 klein on Hugging Face, delivering state-of-the-art image generation and editing in under a second. The model runs on consumer GPUs with just 13GB VRAM, making high-speed AI art creation dramatically more accessible.

85% relevant

Nvidia's Strategic Shift: Merging Groq Hardware in New AI Chip Targeting OpenAI

Nvidia is reportedly developing a new AI chip that combines its GPU technology with hardware from Groq, with OpenAI potentially becoming a major customer. This move signals Nvidia's recognition of specialized AI hardware beyond traditional GPUs.

95% relevant

Trillion-Parameter AI Goes Open Source: AntLingAGI's Ring-2.5-1T Democratizes Massive Models

AntLingAGI has open-sourced Ring-2.5-1T, a trillion-parameter AI model that runs on consumer-grade GPUs at half the cost of comparable systems. This breakthrough eliminates traditional barriers like lab access, waitlists, and multi-million dollar compute clusters.

95% relevant

Karpathy's Autoresearch: Democratizing AI Experimentation with Minimalist Agentic Tools

Andrej Karpathy releases 'autoresearch,' a 630-line Python tool enabling AI agents to autonomously conduct machine learning experiments on single GPUs. This minimalist framework transforms how researchers approach iterative ML optimization.

85% relevant

Sarvam AI's Open-Source Models Signal India's Arrival in Global AI Race

Sarvam AI has open-sourced two reasoning models, Sarvam 30B and 105B, positioning India as a competitive player in global AI. The breakthrough lies not just in benchmark scores but in a full-stack approach: in-house data, training, RL, tokenizer design, and optimized inference for both frontier GPUs and consumer devices.

85% relevant

Uber CEO Predicts AI Will Surpass Human Engineers in ROI Within 5 Years

Uber CEO Dara Khosrowshahi predicts AI agents will deliver better ROI than human software engineers within five years. He plans to initially hire more engineers as AI makes them more efficient, but will eventually shift investment to AI systems and GPUs.

85% relevant

Lilly's AI Factory: How a 1,000+ GPU SuperPOD is Rewriting Pharmaceutical Discovery

Eli Lilly has launched 'LillyPod,' the world's most powerful privately-owned AI factory for drug discovery. Powered by NVIDIA's new DGX B300 systems with over 1,000 Blackwell Ultra GPUs, it promises to accelerate medical breakthroughs at unprecedented scale.

80% relevant

Ring All-Reduce: The Hidden Dance Powering Modern AI Training

A new visualization reveals the intricate communication patterns behind distributed AI training. The ring all-reduce algorithm enables efficient gradient synchronization across multiple GPUs, accelerating model development while minimizing bottlenecks.

85% relevant
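
The algorithm itself is compact: split each worker's gradient into P chunks, run P-1 reduce-scatter steps so each worker ends up owning one fully summed chunk, then P-1 all-gather steps to circulate the results. A small simulation of the idea (illustrative, not any particular collective library's implementation):

```python
# Minimal simulation of ring all-reduce across P workers: P-1 reduce-scatter
# steps, then P-1 all-gather steps; each step moves one chunk per worker.
import numpy as np

def ring_allreduce(grads):                    # grads: one array per worker
    P = len(grads)
    chunks = [np.array_split(g.astype(float), P) for g in grads]
    for step in range(P - 1):                 # phase 1: reduce-scatter
        for r in range(P):
            j = (r - step) % P                # chunk this rank forwards
            chunks[(r + 1) % P][j] += chunks[r][j]
    # now rank r holds the fully summed chunk (r + 1) % P
    for step in range(P - 1):                 # phase 2: all-gather
        for r in range(P):
            j = (r + 1 - step) % P            # forward a fully reduced chunk
            chunks[(r + 1) % P][j] = chunks[r][j]
    return [np.concatenate(c) for c in chunks]

grads = [np.full(8, i) for i in range(4)]     # 4 workers, gradient = worker id
print(ring_allreduce(grads)[0])               # every worker ends with the sum: 6.0
```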