fusion
30 articles about fusion in AI news
Nvidia Invests $2B in Marvell to Deepen NVLink Fusion Tie-Up
Nvidia invested $2B in Marvell to deepen their NVLink Fusion partnership, integrating Marvell's custom silicon into Nvidia's AI interconnect fabric.
DeepMind’s New VAE Matches Stable Diffusion at 10x Resolution
DeepMind’s new VAE produces 1024x1024 images with quality comparable to Stable Diffusion’s 256x256 output, potentially replacing the standard VAE in generative pipelines. The higher compression cuts the token count by 10x, enabling faster generation and lower memory usage.
Nvidia Invests $2B in Marvell for NVLink Fusion Interconnect
Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a new interconnect architecture for scaling AI clusters beyond current limits.
NVIDIA Open-Sources Motion Diffusion Model for Humanoid Robots
NVIDIA open-sourced Kimono, a motion diffusion model for humanoid robots, trained on 700 hours of motion capture data. It generates 3D human and robot motions from text prompts, supports keyframe and end-effector control, and runs on the Unitree G1.
Alibaba's DCW Fixes SNR-t Bias in Diffusion Models, Boosts FLUX & EDM
Alibaba researchers developed DCW, a wavelet-based method to correct SNR-t misalignment in diffusion models. The fix improves performance for models like FLUX and EDM with minimal computational cost.
Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership
Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.
Uni-ViGU Unifies Video Generation & Understanding in Single Diffusion Model
A new paper introduces Uni-ViGU, a unified model that performs video generation and understanding within a single diffusion process via flow matching, departing from the standard approach of training separate models for each task.
LPM 1.0: 17B-Parameter Diffusion Model Generates 60K-Second AI Avatar Videos
Researchers introduced LPM 1.0, a 17B-parameter real-time diffusion model that generates conversational videos of effectively unbounded length with stable identity, demonstrating over 60,000 seconds of consistent character performance.
JBM-Diff: A New Graph Diffusion Model for Denoising Multimodal Recommendations
A new arXiv paper introduces JBM-Diff, a conditional graph diffusion model designed to clean 'noise' from multimodal item features (like images/text) and user behavior data (like accidental clicks) in recommendation systems. It aims to improve ranking accuracy by ensuring only preference-relevant signals are used.
GeoSR Achieves SOTA on VSI-Bench with Geometry Token Fusion
GeoSR improves spatial reasoning by masking 2D vision tokens to prevent shortcuts and using gated fusion to amplify geometry information, achieving state-of-the-art results on key benchmarks.
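The gating idea is simple enough to sketch. Below is a minimal PyTorch-style gated fusion module, assuming vision and geometry token streams of matching shape; the names and dimensions are illustrative, not GeoSR's actual implementation.

```python
import torch
import torch.nn as nn

class GatedGeometryFusion(nn.Module):
    """Amplify geometry tokens into the vision stream via a learned gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)      # project geometry tokens
        self.gate = nn.Linear(2 * dim, dim)  # per-channel gate from both streams

    def forward(self, vision: torch.Tensor, geom: torch.Tensor) -> torch.Tensor:
        # vision, geom: (batch, seq, dim)
        g = torch.sigmoid(self.gate(torch.cat([vision, geom], dim=-1)))
        return vision + g * self.proj(geom)  # geometry contributes where the gate opens

fused = GatedGeometryFusion(256)(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```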
mmAnomaly: New Multi-Modal Framework Uses Conditional Latent Diffusion to Achieve 94% F1 Score for mmWave Anomaly Detection
Researchers introduced mmAnomaly, a multi-modal anomaly detection system that uses a conditional latent diffusion model to synthesize expected mmWave spectra from visual context, achieving up to a 94% F1 score for detecting concealed weapons and through-wall anomalies.
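The recipe — synthesize what the sensor should see from visual context, then score the gap against what it actually saw — fits in a few lines. Function names below are placeholders, not the mmAnomaly API.

```python
import numpy as np

def anomaly_score(observed, visual_context, generate_expected, n=8):
    # Sample several plausible spectra for this scene from the conditional model,
    # then score how far the observed spectrum deviates from their consensus.
    expected = np.stack([generate_expected(visual_context) for _ in range(n)])
    return np.abs(observed - expected).mean()

# Toy usage with a dummy generator; a real system thresholds on validation data.
score = anomaly_score(np.ones(64), visual_context=None,
                      generate_expected=lambda ctx: np.zeros(64))
flagged = score > 0.5
```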
Diffusion Recommender Models Fail Reproducibility Test: Study Finds 'Illusion of Progress' in Top-N Recommendation Research
A reproducibility study of nine recent diffusion-based recommender models finds only 25% of reported results are reproducible. Well-tuned simpler baselines outperform the complex models, revealing a conceptual mismatch and widespread methodological flaws in the field.
Geometric Latent Diffusion (GLD) Achieves SOTA Novel View Synthesis, Trains 4.4× Faster Than VAE
GLD repurposes features from geometric foundation models like Depth Anything 3 as a latent space for multi-view diffusion. It trains significantly faster than VAE-based approaches and achieves state-of-the-art novel view synthesis without text-to-image pretraining.
MinerU-Diffusion: A 2.5B Parameter Diffusion Model for OCR Achieves 3.2x Speedup Over Autoregressive Methods
Researchers introduced MinerU-Diffusion, a 2.5B parameter diffusion model for OCR that replaces autoregressive decoding with parallel block-wise diffusion. It achieves up to 3.2x faster inference while improving robustness on complex documents with tables and formulas.
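Where the speedup comes from is easiest to see by counting forward passes; the numbers below are illustrative, not MinerU-Diffusion's actual configuration.

```python
# A page of 2048 text tokens, decoded in blocks of 256 with 8 denoising steps each.
TOKENS, BLOCK, STEPS = 2048, 256, 8

autoregressive_passes = TOKENS                 # one forward pass per token
blockwise_passes = (TOKENS // BLOCK) * STEPS   # one pass per denoise step,
                                               # all tokens in a block in parallel
print(autoregressive_passes, blockwise_passes)   # 2048 vs 64
print(autoregressive_passes / blockwise_passes)  # 32x fewer passes (an upper bound)
# Per-pass cost and overhead eat into this bound, which is how a large reduction
# in passes can land at the reported 3.2x inference speedup.
```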
Sam Altman Steps Down as Helion Board Chair Amid Fusion Startup's DOE Milestone Push
OpenAI CEO Sam Altman has resigned as board chair of fusion energy startup Helion Energy, which he backs. The move comes as Helion works toward a critical 2024 milestone with the U.S. Department of Energy.
LLMs Show 'Privileged Access' to Own Policies in Introspect-Bench, Explaining Self-Knowledge via Attention Diffusion
Researchers formalize LLM introspection as computation over model parameters, showing frontier models outperform peers at predicting their own behavior. The study provides causal evidence for how introspection emerges via attention diffusion without explicit training.
WiT: Waypoint Diffusion Transformers Achieve FID 2.09 on ImageNet 256×256 in 265 Epochs, Matching JiT-L/16 Efficiency
Researchers introduced WiT, a diffusion transformer that uses semantic waypoints from pretrained vision models to resolve trajectory conflicts in pixel-space flow matching. It matches the performance of JiT-L/16 at 600 epochs in just 265 epochs, achieving an FID of 2.09 on ImageNet 256×256.
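For reference, the base objective WiT builds on — flow matching along a linear noise-to-data path — is compact; the sketch below omits the paper's waypoint conditioning.

```python
import torch

def flow_matching_loss(v_model, x1):
    x0 = torch.randn_like(x1)              # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1, 1, 1)   # per-sample time in [0, 1)
    xt = (1 - t) * x0 + t * x1             # linear interpolation between endpoints
    target = x1 - x0                       # constant velocity along that path
    return ((v_model(xt, t.flatten()) - target) ** 2).mean()

# Toy usage with a dummy velocity model on 32x32 RGB batches.
loss = flow_matching_loss(lambda x, t: x, torch.randn(4, 3, 32, 32))
```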
Video Reasoning Models Use Chain-of-Steps in Diffusion Denoising, Not Cross-Frame Analysis
New research reveals video reasoning models don't analyze frames sequentially but instead use a Chain-of-Steps mechanism within diffusion denoising, developing emergent working memory and self-correction.
NVIDIA Releases Brain MRI Generation Model on Hugging Face: 3D Latent Diffusion for T1, FLAIR, T2, and SWI Scans
NVIDIA has open-sourced a 3D latent diffusion model for generating high-resolution brain MRI scans across four modalities. The model claims state-of-the-art FID scores and 33× faster inference than prior methods.
Diffusion Recommender Model (DiffRec): A Technical Deep Dive into Generative AI for Recommendation Systems
A detailed analysis of DiffRec, a novel recommendation system architecture that applies diffusion models to collaborative filtering. This represents a significant technical shift from traditional matrix factorization to generative approaches.
New AI Framework Uses Diffusion Models to Authenticate Anti-Counterfeit Codes
Researchers propose a novel diffusion-based AI system to authenticate Copy Detection Patterns (CDPs), a key anti-counterfeiting technology. It outperforms existing methods at classifying printer signatures and shows resilience against unseen counterfeits.
Evo LLM Unifies Autoregressive and Diffusion AI, Achieving New Balance in Language Generation
Researchers introduce Evo, a novel large language model architecture that bridges autoregressive and diffusion-based text generation. By treating generation as a continuous evolutionary flow, Evo adaptively balances confident refinement with exploratory planning, achieving state-of-the-art results across 15 benchmarks while maintaining fast inference speeds.
NVIDIA's DiffiT: A New Vision Transformer Architecture Sets Diffusion Model Benchmark
NVIDIA has released DiffiT, a Diffusion Vision Transformer achieving state-of-the-art image generation with an FID score of 1.73 on ImageNet-256 while using fewer parameters than previous models.
Apple's M5 Pro and Max: Fusion Architecture Redefines AI Computing on Silicon
Apple unveils M5 Pro and M5 Max chips with groundbreaking Fusion Architecture, merging two 3nm dies into a single SoC. The chips deliver up to 30% faster CPU performance and over 4x peak GPU compute for AI workloads compared to previous generations.
dLLM Framework Unifies Diffusion Language Models, Opening New Frontiers in AI Text Generation
Researchers have introduced dLLM, a unified framework that standardizes training, inference, and evaluation for diffusion language models. This breakthrough enables conversion of existing models like BERT into diffusion architectures and facilitates reproduction of cutting-edge models like LLaDA and Dream.
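The BERT conversion rests on a simple observation: masked-language-model training with a variable mask ratio is a discrete diffusion objective. A minimal sketch of one training step, with placeholder names rather than dLLM's real API:

```python
import torch
import torch.nn.functional as F

def masked_diffusion_step(model, tokens, mask_id):
    t = 0.1 + 0.9 * torch.rand(1).item()   # diffusion "time" = mask ratio (kept > 0)
    mask = torch.rand_like(tokens, dtype=torch.float) < t
    corrupted = torch.where(mask, torch.full_like(tokens, mask_id), tokens)
    logits = model(corrupted)              # bidirectional encoder, (batch, seq, vocab)
    return F.cross_entropy(logits[mask], tokens[mask])  # score masked slots only

# Toy usage with a dummy encoder over a 100-token vocabulary.
loss = masked_diffusion_step(lambda x: torch.randn(*x.shape, 100),
                             torch.randint(0, 99, (2, 16)), mask_id=99)
```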
DeepMind's Diffusion Breakthrough: Training Better Latents for Superior AI Generation
Google DeepMind researchers have developed new techniques for training latent representations in diffusion models, potentially leading to more efficient, higher-quality AI-generated content across images, audio, and video domains.
Diffusion Models Accelerated: New AI Framework Makes Autonomous Driving Predictions 100x Faster
Researchers have developed cVMDx, a diffusion-based AI model that predicts highway trajectories 100x faster than previous approaches. By using DDIM sampling and Gaussian Mixture Models, it provides multimodal, uncertainty-aware predictions crucial for autonomous vehicle safety. The breakthrough addresses key efficiency and robustness challenges in real-world driving scenarios.
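DDIM's role in the speedup is that sampling becomes deterministic and can take a few large steps instead of hundreds of small stochastic ones. A schematic loop, with a placeholder noise predictor rather than cVMDx's actual model:

```python
import numpy as np

def ddim_sample(eps_model, alphas, shape, steps):
    x = np.random.randn(*shape)                      # start from pure noise
    ts = np.linspace(len(alphas) - 1, 0, steps).astype(int)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        a_t, a_prev = alphas[t], alphas[t_prev]
        eps = eps_model(x, t)                        # predicted noise
        x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)      # implied clean sample
        x = np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps  # deterministic jump
    return x

# 16 candidate 50-step (x, y) trajectories in 20 denoising steps; fitting a
# Gaussian Mixture Model over such samples gives the multimodal, uncertainty-aware output.
trajs = ddim_sample(lambda x, t: np.zeros_like(x),
                    np.linspace(0.9999, 0.01, 1000), (16, 50, 2), steps=20)
```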
Google DeepMind Reveals Fundamental Flaw in Diffusion Model Training
Google DeepMind researchers have identified a critical weakness in how diffusion models are trained, challenging the standard approach of borrowing KL penalties from VAEs. Their new paper reveals this method lacks principled control over latent information, potentially limiting model performance.
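The penalty in question is the latent regularizer carried over from VAE training: a reconstruction term plus a weighted KL divergence pulling the encoder's posterior toward a standard normal prior. In the usual β-weighted form (standard notation, not the paper's):

$$\mathcal{L}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\lVert x - \hat{x}(z) \rVert^2\big] + \beta \, D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\Vert\, \mathcal{N}(0, I)\big)$$

The critique summarized above is that β only loosely trades reconstruction quality against latent capacity, giving no principled handle on how much information the latents retain.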
Diffusion Architecture Breaks Speed Barrier: Inception's Mercury 2 Hits 1,000 Tokens/Second
Inception's Mercury 2 achieves unprecedented text generation speeds of 1,000 tokens per second using diffusion architecture borrowed from image AI. This represents a 10x speed advantage over leading models like Claude 4.5 Haiku and GPT-5 Mini without requiring custom hardware.
CGCMA Model Achieves +0.449 Sharpe Ratio in Asynchronous Crypto News Fusion
Researchers propose CGCMA, a model for fusing sporadic news with continuous market data. It achieved a +0.449 Sharpe ratio on a new crypto trading benchmark, showing gains not explained by simple heuristics.
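For scale, the figure can be read against a standard Sharpe computation; the benchmark's return frequency and annualization convention are assumptions here.

```python
import numpy as np

# One year of synthetic daily strategy returns (mean 5 bps, stdev 1%).
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, 252)

# Annualized Sharpe: mean excess return over volatility, scaled by sqrt(252).
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)
print(round(sharpe, 3))
```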