ROCm
7 articles about ROCm in AI news
AMD ROCm Performance Jumps 75x in 14 Days Post-DeepSeek v4
AMD's ROCm stack improved 75x in the 14 days following DeepSeek v4's release, driven by fused operations. It still needs roughly another 5x gain to match Nvidia B200 performance.
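Fused operations of the kind credited here combine several elementwise steps into a single pass over the data, avoiding intermediate buffers. A minimal pure-Python sketch of the idea (illustrative only, not ROCm code):

```python
# Illustrative sketch of operator fusion: combining elementwise steps
# into one pass avoids materializing intermediate results. This is the
# general principle behind fused GPU kernels, not actual ROCm code.

def unfused(a, b, c):
    # Two passes; the intermediate list `tmp` is materialized in memory.
    tmp = [x * y for x, y in zip(a, b)]       # pass 1: multiply
    return [t + z for t, z in zip(tmp, c)]    # pass 2: add

def fused(a, b, c):
    # One pass, no intermediate buffer -- the "fused" version.
    return [x * y + z for x, y, z in zip(a, b, c)]
```

On a GPU the win is larger than this sketch suggests, because each unfused pass also costs a kernel launch and a round trip through memory.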
AMD MI350P PCIe Card Claims 39% FP8 Lead Over Nvidia H200 NVL
AMD launched the MI350P, a PCIe AI card with 144GB of HBM3E, claiming a 39% FP8 performance lead over Nvidia's H200 NVL. It targets drop-in upgrades for air-cooled servers.
AMD Backs UALink Open Interconnect to Challenge NVIDIA NVLink in AI
AMD is supporting the newly formed UALink Consortium, which aims to create an open standard for connecting AI accelerators. This move challenges NVIDIA's control over the critical NVLink technology that underpins its AI data center systems.
Hugging Face Launches 'Kernels' Hub for GPU Code, Like GitHub for AI Hardware
Hugging Face has launched 'Kernels,' a new section on its Hub for sharing and discovering optimized GPU kernels. This treats performance-critical code as a first-class artifact, similar to AI models.
Developer Ranks NPU Model Compilation Ease: Apple 1st, AMD Last
Developer @mweinbach ranked the ease of using AI coding agents to compile ML models for NPUs. Apple's ecosystem was rated easiest, while AMD's tooling was ranked most difficult.
Ollama Now Supports Apple MLX Backend for Local LLM Inference on macOS
Ollama, the popular framework for running large language models locally, has added support for Apple's MLX framework as a backend. This enables more efficient execution of models like Llama 3.2 and Mistral on Apple Silicon Macs.
98× Faster LLM Routing Without a Dedicated GPU: Technical Breakthrough for vLLM Semantic Router
New research presents a three-stage optimization pipeline for the vLLM Semantic Router, achieving 98× speedup and enabling long-context classification on shared GPUs. This solves critical memory and latency bottlenecks for system-level LLM routing.
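Semantic routing of this kind runs a cheap classification stage to pick a backend model before any LLM is invoked. A hypothetical sketch of the control flow (the real vLLM Semantic Router uses trained classifiers; the keyword heuristic and model names here are assumptions for illustration):

```python
# Hypothetical sketch of semantic routing: a fast classifier decides
# which backend model should serve a request before any LLM runs.
# The vLLM Semantic Router uses trained classifiers; this keyword
# heuristic and these backend names only illustrate the pattern.

ROUTES = {
    "code": "large-coder-model",   # assumed backend names
    "math": "reasoning-model",
    "chat": "small-chat-model",
}

def classify(prompt: str) -> str:
    # Stand-in for the fast, GPU-light classification stage.
    text = prompt.lower()
    if any(k in text for k in ("def ", "function", "compile")):
        return "code"
    if any(k in text for k in ("integral", "prove", "solve")):
        return "math"
    return "chat"

def route(prompt: str) -> str:
    # Return the backend model a request should be dispatched to.
    return ROUTES[classify(prompt)]
```

The speedup claim in the article concerns making this classification stage cheap enough in memory and latency to share a GPU with the serving workload, rather than requiring a dedicated one.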