gentic.news — AI News Intelligence Platform

GPU computing

30 articles about GPU computing in AI news

Meta's $135 Billion AI Bet: How Confidential Computing Will Transform WhatsApp

Meta commits to buying millions of NVIDIA Blackwell and Rubin GPUs in a landmark partnership, deploying confidential computing technology to bring AI to WhatsApp while protecting user privacy. This represents a major shift in how AI will be integrated into secure messaging platforms.

82% relevant

Moore Threads Q1 Revenue Up, Building 100K-GPU AI Cluster

Moore Threads reports Q1 2026 revenue growth and confirms progress building a 100,000-GPU cluster for AI training, signaling growing domestic AI infrastructure in China despite US export controls.

74% relevant

Jensen Huang's 30-Year TSMC Battle: From 3D Graphics to AI GPUs

A 30-year-old comic shows Jensen Huang convincing TSMC to supply wafers for 3D graphics chips. Today, he's still fighting for wafer supply, but now for AI GPUs, alongside Broadcom, AMD, MediaTek, and Amazon.

75% relevant

DARPA Leases 50 Nvidia H100 GPUs for Biological AI Program

DARPA's Biological Technologies Office is procuring 50 Nvidia HGX H100 GPU systems for its NODES program, with hardware delivery required within one month. This represents a significant government investment in AI infrastructure for biological research applications.

86% relevant

LeWorldModel Solves JEPA Collapse with 15M Params, Trains on Single GPU

Researchers published LeWorldModel, solving the representation collapse problem in Yann LeCun's JEPA architecture. The 15M-parameter model trains on a single GPU and demonstrates intrinsic physics understanding.

95% relevant

Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'

Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) match an NVIDIA A100's performance, calling cloud GPU training a 'scam' and advocating for distributed, consumer-hardware approaches.

77% relevant
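The arithmetic behind the claim can be checked in a few lines. This is a back-of-envelope sketch, assuming the comparison is on peak non-tensor FP32 throughput (the A100 datasheet figure is roughly 19.5 TFLOPS) and ignoring the interconnect and memory bottlenecks that dominate real distributed training:

```python
# Back-of-envelope check of the "7 M4 MacBooks ~= 1 A100" claim.
# Assumes the comparison is on peak FP32 throughput; real distributed
# training is usually limited by interconnect bandwidth, not raw FLOPS.
M4_GPU_TFLOPS_FP32 = 2.9    # per-laptop figure quoted in the post
A100_TFLOPS_FP32 = 19.5     # NVIDIA A100 datasheet, non-tensor FP32

combined = 7 * M4_GPU_TFLOPS_FP32
print(f"7 x M4: {combined:.1f} TFLOPS vs A100: {A100_TFLOPS_FP32} TFLOPS")
# On FP16 tensor cores the A100 peaks at ~312 TFLOPS, so the
# equivalence does not hold for mixed-precision training.
```

On those assumptions the seven laptops do edge out the A100 in FP32, which is likely the basis of the post; the picture inverts entirely once tensor-core mixed precision is used.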

Jensen Huang: Nvidia is a 'Computing Company,' Not a Car

In a new interview, Nvidia CEO Jensen Huang argued that Nvidia is a 'computing company,' not a maker of an easily interchangeable product like a car. The distinction underscores Nvidia's strategy of being the indispensable platform for AI infrastructure.

85% relevant

A Practical Guide to Fine-Tuning an LLM on RunPod H100 GPUs with QLoRA

The source is a technical tutorial on using QLoRA for parameter-efficient fine-tuning of an LLM, leveraging RunPod's cloud H100 GPUs. It focuses on the practical setup and execution steps for engineers.

76% relevant
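The core idea the tutorial relies on, LoRA-style low-rank adapters over a frozen (in QLoRA's case, 4-bit quantized) base model, can be sketched without any ML framework. All names and dimensions below are illustrative, not taken from the tutorial:

```python
# Minimal sketch of the LoRA idea behind QLoRA: instead of updating a
# full weight matrix W (d_out x d_in), train two small matrices
# B (d_out x r) and A (r x d_in) and apply W' = W + (alpha / r) * B @ A.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r, alpha = 4, 6, 2, 4
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weight (toy values)
B = [[1.0] * r for _ in range(d_out)]      # trainable adapter, d_out x r
A = [[0.5] * d_in for _ in range(r)]       # trainable adapter, r x d_in

delta = matmul(B, A)                       # low-rank update
scale = alpha / r
W_adapted = [[w + scale * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]

# Trainable parameters: d_out*r + r*d_in = 8 + 12 = 20, versus
# d_out*d_in = 24 for full fine-tuning; the saving grows with model size.
print(W_adapted[0][0])  # 0 + (4/2) * (1*0.5 + 1*0.5) = 2.0
```

At realistic dimensions (d_out and d_in in the thousands, r around 8–64) the adapter holds a tiny fraction of the base parameters, which is what makes fine-tuning feasible on a single rented H100.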

Cursor AI Claims 1.84x Faster MoE Inference on NVIDIA Blackwell GPUs

Cursor AI announced a rebuilt inference engine for Mixture-of-Experts models on NVIDIA's new Blackwell GPUs, resulting in a claimed 1.84x speedup and improved output accuracy.

85% relevant

Neuromorphic Computing Patents Surge 401% in 2025, Hits 596 by 2026

Patent filings for neuromorphic computing—hardware that mimics the brain's architecture—surged 401% in 2025, reaching 596 by early 2026. This indicates the technology is transitioning from lab prototypes to commercial products.

87% relevant

X Post Reveals Audible Quality Differences in GPU vs. NPU AI Inference

A developer demonstrated audible quality differences in AI text-to-speech output when run on GPU, CPU, and NPU hardware, highlighting a key efficiency vs. fidelity trade-off for on-device AI.

75% relevant

Nvidia Claims MLPerf Inference v6.0 Records with 288-GPU Blackwell Ultra Systems, Highlights 2.7x Software Gains

MLCommons released MLPerf Inference v6.0 results, introducing multimodal and video model tests. Nvidia set records using 288-GPU Blackwell Ultra systems and achieved a 2.7x performance jump on DeepSeek-R1 via software optimizations alone.

95% relevant

Mistral Secures $830M Debt to Build Paris Data Center with 14,000 Nvidia GB300 GPUs

French AI startup Mistral has raised $830 million in debt financing to build and operate a sovereign AI data center near Paris, set to host nearly 14,000 Nvidia GB300 GPUs. The move signals a strategic European push for bespoke AI infrastructure, distinct from the gigawatt-scale builds of US hyperscalers.

90% relevant

Sparton: A New GPU Kernel Dramatically Speeds Up Learned Sparse Retrieval

Researchers propose Sparton, a fused Triton GPU kernel for Learned Sparse Retrieval models like Splade. It avoids materializing a massive vocabulary-sized matrix, achieving up to 4.8x speedups and 26x larger batch sizes. This is a core infrastructure breakthrough for efficient AI-powered search.

72% relevant
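The inefficiency Sparton attacks can be illustrated in plain Python (this is not Sparton's Triton kernel, just a sketch of the underlying scoring problem): learned sparse retrievers like Splade score a query against documents via a dot product over a vocabulary-sized space, and materializing dense vocabulary-length vectors wastes memory on zeros. Keeping only nonzero `{term_id: weight}` entries gives the same scores:

```python
# Illustrative sparse-retrieval scoring with dicts of {term_id: weight},
# avoiding a dense |batch| x |vocab| matrix of mostly zeros.

def sparse_score(query, doc):
    # Iterate over the smaller vector; zeros never appear at all.
    small, big = (query, doc) if len(query) <= len(doc) else (doc, query)
    return sum(w * big.get(t, 0.0) for t, w in small.items())

query = {101: 1.2, 2047: 0.8}        # e.g. a Splade-style term expansion
docs = [
    {101: 0.5, 999: 0.3},            # shares one term with the query
    {2047: 1.0, 101: 0.2},           # shares both terms
]
scores = [sparse_score(query, d) for d in docs]
print(scores)
```

A fused GPU kernel applies the same never-materialize principle at batch scale, which is where the reported 4.8x speedups and 26x larger batch sizes come from.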

Neurons Playing Doom: How Living Brain Cells Could Revolutionize Computing

Australian startup Cortical Labs is pioneering biological computing with a system that uses living human brain cells to perform computational tasks. Their CL1 computer consumes just 30 watts while learning to play Doom, potentially offering massive energy savings over traditional AI hardware.

85% relevant

Biological Computing Breakthrough: Human Neurons Play DOOM in Petri Dish

Cortical Labs has successfully trained 200,000 human brain cells to play the classic video game DOOM, marking a significant leap toward Synthetic Biological Intelligence. This biological computing approach could solve AI's massive energy consumption problem while enabling new forms of adaptive learning.

95% relevant

The Great GPU Scramble: How Hardware Shortages Are Defining the AI Arms Race

Oracle founder Larry Ellison identifies GPU acquisition as the primary bottleneck in AI development, with companies racing to secure limited hardware for breakthroughs in medicine, video generation, and autonomous systems.

85% relevant

Apple's M5 Pro and Max: Fusion Architecture Redefines AI Computing on Silicon

Apple unveils M5 Pro and M5 Max chips with groundbreaking Fusion Architecture, merging two 3nm dies into a single SoC. The chips deliver up to 30% faster CPU performance and over 4x peak GPU compute for AI workloads compared to previous generations.

95% relevant

LM Link Bridges the AI Hardware Divide: Secure Remote GPU Access Goes Mainstream

Tailscale and LM Studio have launched 'LM Link,' a zero-configuration service that creates encrypted, point-to-point tunnels to private GPU hardware. This allows developers to securely access powerful local workstations from anywhere, eliminating the productivity gap between location-bound 'Big Rigs' and portable laptops.

70% relevant

Meta's Multi-Million GPU Gamble: How a Chip Deal Redefines AI's Future

Meta has signed a massive, multi-year pact with Nvidia to deploy millions of next-generation Blackwell and Rubin GPUs across its data centers. This unprecedented hardware commitment signals a new phase in the AI arms race, where computational scale becomes the primary competitive moat.

85% relevant

ByteDance's CUDA Agent: The AI System Outperforming Human Experts in GPU Code Generation

ByteDance has unveiled CUDA Agent, a large-scale reinforcement learning system that generates high-performance CUDA kernels. The system achieves state-of-the-art results, outperforming torch.compile by up to 100% and beating leading AI models like Claude Opus 4.5 and Gemini 3 Pro by approximately 40% on the most challenging tasks.

95% relevant

NVIDIA's cuQuantum-DGX OS Aims to Manage Hybrid Quantum-Classical Workflows

NVIDIA announced its AI software stack is evolving into an operating system for quantum computing, aiming to manage the complex workflow between quantum processors and classical GPUs. This targets a major integration bottleneck as quantum hardware scales.

85% relevant

Cerebras' Strategic Partnership Yields Breakthrough AI Training Results

Cerebras Systems' partnership with Abu Dhabi's G42 has produced remarkable AI training benchmarks, achieving results 100x faster than traditional GPU clusters. The collaboration demonstrates the viability of wafer-scale computing for large language model development.

85% relevant

Microsoft's Fairwater AI Data Center Launches Early, Boosts Azure Capacity

Microsoft has launched its Fairwater AI data center ahead of schedule. The facility adds significant high-performance computing capacity to Azure's AI infrastructure, crucial for training and running large models.

92% relevant

DOE Seeks Input on AI Infrastructure for Federal Lands

The U.S. Department of Energy has published a Request for Information (RFI) to solicit input on developing AI and high-performance computing infrastructure on DOE-owned lands. This marks a significant step in the federal government's strategy to directly address the national AI compute shortage.

70% relevant

Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership

Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.

84% relevant

Meta Deploys Unified AI Agents to Manage Hyperscale Infrastructure

Meta's engineering team has built and deployed a system of unified AI agents to autonomously manage capacity and performance across its hyperscale infrastructure. This represents a significant shift from rule-based automation to AI-driven orchestration for one of the world's largest computing fleets.

70% relevant

Hugging Face OCRs 27,000 arXiv Papers to Markdown with Open 5B Model

Hugging Face CEO Clement Delangue announced the OCR conversion of 27,000 arXiv papers to Markdown using an open 5B-parameter model and 16 parallel jobs on L40S GPUs. This demonstrates a scalable, open-source pipeline for large-scale academic document processing.

85% relevant

InCoder-32B-Thinking Hits 81.3% on LiveCodeBench, Trained on Chip & Kernel Traces

InCoder-32B-Thinking, a 32B parameter model trained on execution traces from chip design, GPU kernels, and embedded systems, scores 81.3% on LiveCodeBench V5 and an 84% compile pass rate on CAD-Coder.

92% relevant

Google's TurboQuant Cuts LLM KV Cache Memory by 6x, Enables 3-Bit Storage Without Accuracy Loss

Google released TurboQuant, a novel two-stage quantization algorithm that compresses the KV cache in long-context LLMs. It reduces memory by 6x, achieves 3-bit storage with no accuracy drop, and speeds up attention scoring by up to 8x on H100 GPUs.

95% relevant
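The memory arithmetic behind KV-cache quantization can be sketched with a simple uniform quantizer. This is not TurboQuant's actual two-stage algorithm, just an illustration of storing key/value activations in a few bits per value:

```python
# Illustrative KV-cache quantization sketch: uniform per-tensor
# quantization to `bits` bits, then reconstruction. Storing 3-bit codes
# instead of fp16 is a ~5.3x raw reduction before scale/offset metadata.

def quantize(xs, bits):
    lo, hi = min(xs), max(xs)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in xs]   # 3-bit codes: 0..7
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return [lo + c * scale for c in codes]

kv = [-1.0, -0.4, 0.1, 0.3, 0.7, 1.0]               # toy KV activations
codes, scale, lo = quantize(kv, bits=3)
recovered = dequantize(codes, scale, lo)

max_err = max(abs(a - b) for a, b in zip(kv, recovered))
print("codes:", codes, "max abs error:", round(max_err, 3))
```

A naive scheme like this loses accuracy at 3 bits; the article's point is that TurboQuant's two-stage design reaches that bit width without the accuracy drop, while also accelerating attention scoring.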