gentic.news — AI News Intelligence Platform

stable diffusion

30 articles about stable diffusion in AI news

DeepMind’s New VAE Matches Stable Diffusion at 10x Resolution

DeepMind’s new VAE produces 1024x1024 images with quality comparable to Stable Diffusion’s 256x256 output, potentially replacing the standard VAE in generative pipelines. This cuts the token count by 10x, enabling faster generation and lower memory usage.

85% relevant
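The token arithmetic behind the 10x claim can be sketched in a few lines; the downsampling factors below are illustrative assumptions, since the article does not specify the new VAE's architecture:

```python
def latent_tokens(image_size: int, downsample: int) -> int:
    """Number of spatial latent tokens for a square image."""
    side = image_size // downsample
    return side * side

# A Stable-Diffusion-style VAE downsamples 8x, so a 1024x1024 image
# becomes a 128x128 latent grid:
baseline = latent_tokens(1024, 8)      # 16384 tokens

# A hypothetical VAE with ~25x spatial downsampling would keep roughly
# one tenth as many tokens at the same 1024x1024 resolution:
compressed = latent_tokens(1024, 25)   # 40 * 40 = 1600 tokens

print(baseline / compressed)           # about 10x fewer tokens
```

Since diffusion transformer cost scales with token count, a 10x reduction translates directly into the faster generation and lower memory usage the article describes.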

LPM 1.0: 17B-Parameter Diffusion Model Generates 60K-Second AI Avatar Videos

Researchers introduced LPM 1.0, a 17B-parameter real-time diffusion model that generates conversational videos of unbounded length, maintaining a stable character identity across more than 60,000 seconds of continuous performance.

95% relevant

Uni-ViGU Unifies Video Generation & Understanding in Single Diffusion Model

A new paper introduces Uni-ViGU, a unified model that performs video generation and understanding within a single diffusion process via flow matching. This departs from the standard approach of training separate models for each task.

85% relevant
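The flow-matching objective Uni-ViGU reportedly builds on can be sketched in a few lines; the toy tensors and straight-line probability path below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 8))   # noise samples
x1 = rng.standard_normal((4, 8))   # stand-ins for data (e.g. video latents)
t = rng.uniform(size=(4, 1))       # per-sample interpolation times

# Straight-line path between noise and data ...
xt = (1 - t) * x0 + t * x1

# ... whose constant velocity x1 - x0 is the regression target a
# flow-matching model v_theta(xt, t) is trained to predict.
target_v = x1 - x0

# Training loss for a (hypothetical, here all-zero) model prediction:
pred = np.zeros_like(target_v)
loss = np.mean((pred - target_v) ** 2)
```

At sampling time, integrating the learned velocity field from t=0 to t=1 carries noise to data, which is what lets one diffusion process serve both generation and understanding heads.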

mmAnomaly: New Multi-Modal Framework Uses Conditional Latent Diffusion to Achieve 94% F1 Score for mmWave Anomaly Detection

Researchers introduced mmAnomaly, a multi-modal anomaly detection system that uses a conditional latent diffusion model to synthesize expected mmWave spectra from visual context, achieving up to a 94% F1 score for detecting concealed weapons and through-wall anomalies.

72% relevant

Geometric Latent Diffusion (GLD) Achieves SOTA Novel View Synthesis, Trains 4.4× Faster Than VAE

GLD repurposes features from geometric foundation models like Depth Anything 3 as a latent space for multi-view diffusion. It trains significantly faster than VAE-based approaches and achieves state-of-the-art novel view synthesis without text-to-image pretraining.

95% relevant

LLMs Show 'Privileged Access' to Own Policies in Introspect-Bench, Explaining Self-Knowledge via Attention Diffusion

Researchers formalize LLM introspection as computation over model parameters, showing frontier models outperform peers at predicting their own behavior. The study provides causal evidence for how introspection emerges via attention diffusion without explicit training.

86% relevant

WiT: Waypoint Diffusion Transformers Achieve FID 2.09 on ImageNet 256×256 in 265 Epochs, Matching JiT-L/16 Efficiency

Researchers introduced WiT, a diffusion transformer that uses semantic waypoints from pretrained vision models to resolve trajectory conflicts in pixel-space flow matching. It matches the performance of JiT-L/16 at 600 epochs in just 265 epochs, achieving an FID of 2.09 on ImageNet 256×256.

85% relevant

Diffusion Recommender Model (DiffRec): A Technical Deep Dive into Generative AI for Recommendation Systems

A detailed analysis of DiffRec, a novel recommendation system architecture that applies diffusion models to collaborative filtering. This represents a significant technical shift from traditional matrix factorization to generative approaches.

95% relevant
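DiffRec's core idea, applying a diffusion forward/reverse process to a user's interaction vector instead of an image, can be sketched as follows; the noise schedule and toy vector are illustrative assumptions, and the learned denoiser is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([1., 0., 1., 1., 0.])   # implicit feedback: items the user touched
betas = np.linspace(1e-4, 0.02, 50)   # linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative signal retention

def q_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) in closed form, as in standard diffusion."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# A denoiser network would be trained to recover x0 from x_t; at inference
# the reconstructed interaction probabilities rank unseen items.
xt = q_sample(x0, t=25)
```

The contrast with matrix factorization is that the model learns to reverse corruption of the whole interaction vector rather than factor a rating matrix, which is what makes the approach generative.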

Evo LLM Unifies Autoregressive and Diffusion AI, Achieving New Balance in Language Generation

Researchers introduce Evo, a novel large language model architecture that bridges autoregressive and diffusion-based text generation. By treating language creation as a continuous evolutionary flow, Evo adaptively balances confident refinement with exploratory planning, achieving state-of-the-art results across 15 benchmarks while maintaining fast inference speeds.

75% relevant

NVIDIA's DiffiT: A New Vision Transformer Architecture Sets Diffusion Model Benchmark

NVIDIA has released DiffiT, a Diffusion Vision Transformer achieving state-of-the-art image generation with an FID score of 1.73 on ImageNet-256 while using fewer parameters than previous models.

95% relevant

dLLM Framework Unifies Diffusion Language Models, Opening New Frontiers in AI Text Generation

Researchers have introduced dLLM, a unified framework that standardizes training, inference, and evaluation for diffusion language models. This breakthrough enables conversion of existing models like BERT into diffusion architectures and facilitates reproduction of cutting-edge models like LLaDA and Dream.

85% relevant

DeepMind's Diffusion Breakthrough: Training Better Latents for Superior AI Generation

Google DeepMind researchers have developed new techniques for training latent representations in diffusion models, potentially leading to more efficient, higher-quality AI-generated content across images, audio, and video domains.

85% relevant

Google DeepMind Reveals Fundamental Flaw in Diffusion Model Training

Google DeepMind researchers have identified a critical weakness in how diffusion models are trained, challenging the standard approach of borrowing KL penalties from VAEs. Their new paper reveals this method lacks principled control over latent information, potentially limiting model performance.

85% relevant
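The penalty in question is the standard VAE regularizer KL(q(z|x) || N(0, I)). Its closed form for a diagonal Gaussian, with illustrative numbers, shows the single scalar weight through which latent information is, per the critique, only loosely controlled:

```python
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, logvar: np.ndarray) -> float:
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return 0.5 * float(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar))

# Illustrative posterior parameters for a 2-d latent:
kl = kl_to_standard_normal(np.array([0.5, -0.3]), np.array([0.0, -1.0]))

# In practice this term is scaled by a small fixed beta; the critique argues
# that weight does not yield a principled bound on how many bits of
# information the latents carry about the input.
```

A zero-mean, unit-variance posterior gives KL of exactly zero, and the penalty grows as the posterior departs from the prior, but only in aggregate, not per unit of usable information.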

Diffusion Architecture Breaks Speed Barrier: Inception's Mercury 2 Hits 1,000 Tokens/Second

Inception's Mercury 2 achieves unprecedented text generation speeds of 1,000 tokens per second using diffusion architecture borrowed from image AI. This represents a 10x speed advantage over leading models like Claude 4.5 Haiku and GPT-5 Mini without requiring custom hardware.

95% relevant

VHS: Latent Verifier Cuts Diffusion Model Verification Cost by 63.3%, Boosts GenEval by 2.7%

Researchers propose Verifier on Hidden States (VHS), a verifier operating directly on DiT generator features, eliminating costly pixel-space decoding. It reduces joint generation-and-verification time by 63.3% and improves GenEval performance by 2.7% versus MLLM verifiers.

95% relevant

New Research Establishes State-of-the-Art for Virtual Try-Off with Dual-UNet Diffusion

A new arXiv paper introduces a systematic framework for Virtual Try-Off (VTOFF)—reconstructing a garment's canonical form from a worn image. The Dual-UNet Diffusion model achieves state-of-the-art results on standard datasets, providing foundational insights for this emerging computer vision task.

72% relevant

Stanford's EgoNav Trains Robot Navigation on 5 Hours of Human Video, Enables Zero-Shot Control of Unitree G1

Stanford's EgoNav system uses five hours of egocentric video recorded while walking a campus to train a diffusion model that enables zero-shot navigation for a Unitree G1 humanoid robot, eliminating the need for robot-specific training data.

99% relevant

Luma Labs Launches Uni-1: An Autoregressive Transformer for Image Generation with a Pre-Generation Reasoning Phase

Luma Labs has released Uni-1, a foundational image model that uses an autoregressive transformer to reason about user intent before generating pixels. It aims to address the 'intent gap' common in diffusion models by adding a structured reasoning step.

88% relevant

The Hidden Bias in AI Image Generators: Why 'Perfect' Training Can Leak Private Data

New research reveals diffusion models continue to memorize training data even after achieving optimal test performance, creating privacy risks. This 'biased generalization' phase occurs when models learn fine details that overfit to specific samples rather than general patterns.

75% relevant

New AI Framework Prevents Image Generators from Copying Training Data Without Sacrificing Quality

Researchers have developed RADS, a novel inference-time framework that prevents text-to-image diffusion models from memorizing and regurgitating training data. Using reachability analysis and constrained reinforcement learning, RADS steers generation away from memorized content while maintaining image quality and prompt alignment.

75% relevant

BetterScene Bridges the Gap: How Aligning AI Representations Unlocks Photorealistic 3D Synthesis

Researchers introduce BetterScene, a novel AI method that dramatically improves 3D scene generation from just a handful of photos. By aligning the internal representations of a powerful video diffusion model, it produces consistent, artifact-free novel views, pushing the boundary of what's possible in computational photography and virtual world creation.

78% relevant

OpenSCAD Web: Open-Source Text-to-CAD Tool Runs Fully In-Browser via WebAssembly

A developer has released an open-source text-to-CAD tool that runs entirely in a web browser using WebAssembly. Users describe a 3D object in plain English, optionally upload a reference image, and receive a parametric model with adjustable dimensions that exports directly to 3D printer formats.

85% relevant

GPT-5.4 LLM Choice Drastically Impacts GPT-ImageGen-2 Output Quality

The quality of images generated by GPT-ImageGen-2 is heavily dependent on the underlying LLM used for reasoning. GPT-5.4 'Thinking' and 'Pro' models produce superior outputs, especially for complex concepts, a counterintuitive dependency not documented by OpenAI.

85% relevant

Microsoft's 2000 Nvidia Veto Rights Resurface Amid AI Chip Wars

A 2000 investment deal granted Microsoft veto rights over any acquisition of Nvidia. This historical clause gains new relevance as Nvidia's AI dominance makes it a potential target in the ongoing semiconductor consolidation.

87% relevant

GPT-Image-2 Adds Self-Review Loop for Iterative Image Correction

A new capability in GPT-Image-2 allows the model to review and iteratively correct its own image generations, aiming for higher accuracy before final output.

85% relevant

OpenAI Teases 'Not a Screenshot' AI Video Model

OpenAI posted a cryptic tweet stating 'This is not a screenshot' with a video link, strongly hinting at a new AI video generation model. This marks a direct move into a space currently led by rivals like Runway and Pika.

85% relevant

Stirling-PDF Hits 77K GitHub Stars as Local AI Document Processing Surges

Stirling-PDF, a fully local, open-source PDF toolkit, has surpassed 77,100 GitHub stars and 25M+ downloads. Its growth highlights a major shift toward privacy-first, self-hosted document AI, challenging paid cloud services like Adobe Acrobat.

89% relevant

Tencent Releases MegaStyle: 1.4M AI-Generated Image Style Dataset

Tencent has open-sourced MegaStyle, a 1.4 million image dataset for style transfer and text-to-image fine-tuning. It was generated by systematically pairing 170,000 style prompts with 400,000 content prompts using the Qwen-Image model.

85% relevant

Yann LeCun's JEPA Vision Gains Traction as Generative AI Hits Limits

A widely shared critique claims the generative AI paradigm is a dead end, aligning with Meta chief AI scientist Yann LeCun's long-standing advocacy for his Joint Embedding Predictive Architecture (JEPA) approach.

85% relevant

Modly Desktop App Generates 3D Models from Images, Runs Locally

A developer has launched Modly, a desktop application that creates 3D models from images and processes them entirely on a user's local machine, eliminating cloud dependency.

89% relevant