3d
30 articles about 3D in AI news
Modly Desktop App Generates 3D Models from Images, Runs Locally
A developer has launched Modly, a desktop application that creates 3D models from images and processes them entirely on a user's local machine, eliminating cloud dependency.
NVIDIA Lyra 2.0 Launches on Hugging Face for Persistent 3D World Generation
NVIDIA has released Lyra 2.0 on Hugging Face, a framework designed to generate persistent, explorable 3D worlds at scale. It specifically addresses the core technical challenges of spatial forgetting and temporal drifting in long-horizon video generation.
Claude Code Builds Browser-Based 3D Flight Simulator in Weekend
A developer used Anthropic's Claude Code to build a complete 3D flight simulator that runs in a web browser over a weekend, demonstrating rapid AI-assisted game development.
Tencent Open-Sources HY-World 2.0 Multimodal 3D World Model
Tencent's Hunyuan AI lab has open-sourced HY-World 2.0, a multimodal world model capable of generating, reconstructing, and simulating interactive 3D scenes. This release provides a significant, freely available tool for 3D content creation and embodied AI research.
Tencent's HY-World 2.0 Generates Navigable 3D Worlds in Single Forward Pass
Tencent has open-sourced HY-World 2.0 on Hugging Face, a 3D world model that generates navigable 3D environments from text or image inputs in a single forward pass, advancing beyond video generation.
3D-Printed Rocket Uses $5 Sensor for AI-Guided Mid-Flight Correction
A builder created a fully 3D-printed rocket that uses a $5 sensor and AI to recalculate its trajectory mid-air. This showcases accessible, real-time control systems outside traditional aerospace.
Open-Source 3D Building Editor Runs in Browser, Powered by AI
A developer has open-sourced a full 3D building editor that runs entirely in a web browser. This tool uses AI to lower the barrier to architectural design, potentially disrupting professional software workflows.
AllenAI's WildDet3D Enables Promptable 3D Object Detection from Single Images
Allen Institute for AI (AllenAI) has open-sourced WildDet3D, a model for promptable 3D object detection from single RGB images. It predicts 3D bounding boxes using flexible prompts and can integrate optional depth data.
Browser-Based Text-to-CAD Tool Emerges, Enabling Local 3D Model Generation from Prompts
A developer has built a text-to-CAD application that operates entirely within a web browser, enabling local generation and manipulation of 3D models from natural language descriptions. This approach eliminates cloud dependency and could lower barriers for rapid prototyping.
Developer Open-Sources 'Prompt-to-3D' Tool for Instant, Navigable World Generation
A developer has released an open-source tool that creates interactive 3D worlds from text or image inputs. This moves 3D asset generation from static models to instant, explorable environments.
How to Build a 3D Engine with Claude Code: The Demoscene Case Study
A developer used Claude Code to build a complete 3D engine from scratch. Here are the actionable prompting techniques and CLAUDE.md strategies that made it work.
QuatRoPE: New Positional Embedding Enables Linear-Scale 3D Spatial Reasoning in LLMs, Outperforming Quadratic Methods
Researchers propose QuatRoPE, a novel positional embedding method that encodes 3D object relations with linear input scaling. Paired with IGRE, it improves spatial reasoning in LLMs while preserving their original language capabilities.
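The summary does not spell out QuatRoPE's exact formulation, but a minimal sketch can illustrate the general idea of a quaternion-valued rotary embedding: rotate 4-dimensional blocks of a feature vector by a position-dependent unit quaternion, with per-block frequencies as in standard RoPE. The block layout, frequency schedule, and axis-angle encoding of the 3D position below are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions with components (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def position_quaternion(pos, theta):
    """Unit quaternion encoding a 3D position: rotate by |pos|*theta
    around the axis pos/|pos| (identity at the origin)."""
    norm = np.linalg.norm(pos)
    if norm == 0:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = pos / norm
    half = 0.5 * norm * theta
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

def quat_rope(x, pos, base=10000.0):
    """Rotate each 4-dim block of feature vector x by a quaternion
    derived from the token's 3D position; the frequency decays per
    block as in standard RoPE."""
    d = x.shape[-1]
    assert d % 4 == 0
    out = np.empty_like(x)
    for i in range(d // 4):
        theta = base ** (-4.0 * i / d)   # per-block frequency
        q = position_quaternion(pos, theta)
        out[4*i:4*i+4] = quat_mul(q, x[4*i:4*i+4])
    return out
```

Because each block is rotated by a unit quaternion, the embedding is norm-preserving, the same property that makes standard RoPE compatible with attention. The sketch is linear in the number of tokens, consistent with the linear-scaling claim.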
Momentum-Consistency Fine-Tuning (MCFT) Achieves 3.30% Gain in 5-Shot 3D Vision Tasks Without Adapters
Researchers propose MCFT, an adapter-free fine-tuning method for 3D point cloud models that selectively updates encoder parameters with momentum constraints. It outperforms prior methods by 3.30% in 5-shot settings and maintains original inference latency.
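The summary describes selective encoder updates under momentum constraints but not the precise rule, so the following is only an illustrative sketch: fine-tune the parameters with the largest gradient magnitudes while penalizing drift from an exponential-moving-average ("momentum") copy of the weights. The selection fraction, penalty form, and hyperparameters are assumptions, not MCFT's published algorithm.

```python
import numpy as np

def mcft_step(params, ema_params, grads, lr=1e-3, beta=0.99,
              lam=0.1, top_frac=0.2):
    """One illustrative update: adjust only the top fraction of
    parameters by gradient magnitude, pulling them toward an EMA
    copy so they stay consistent with the pretrained model."""
    mags = np.abs(grads)
    cutoff = np.quantile(mags, 1.0 - top_frac)  # select salient params
    mask = mags >= cutoff
    # gradient step plus a consistency penalty toward the EMA weights
    step = lr * (grads + lam * (params - ema_params))
    new_params = params - mask * step
    new_ema = beta * ema_params + (1.0 - beta) * new_params
    return new_params, new_ema
```

Because no adapter modules are inserted, the fine-tuned model keeps the original architecture, which is why inference latency is unchanged.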
NVIDIA Releases NVPanoptix-3D on Hugging Face: Single-Image 3D Indoor Scene Reconstruction
NVIDIA has open-sourced NVPanoptix-3D, a model that reconstructs complete 3D indoor scenes—including panoptic segmentation, depth, and geometry—from a single RGB image in one forward pass.
AI Agents Now Work in Persistent 3D Office Simulators, Raising Questions About Digital Labor
A developer has created a persistent 3D office environment where AI agents autonomously perform tasks across multiple days. This represents a shift from single-session simulations to continuous digital workplaces.
Open-Source 'AI Office' Platform Lets Users Walk Through 3D Space to Monitor Autonomous Agents
An open-source project called AI Office creates a 3D virtual workspace where AI agents are visualized as avatars performing tasks. Users can navigate the space instead of reading logs, offering a novel interface for multi-agent systems.
NVIDIA DLSS 5 Demo Shows 3D Guided Neural Rendering for Next-Gen Upscaling
A leaked demo of NVIDIA's upcoming DLSS 5 technology showcases 3D guided neural rendering, promising a significant leap in image reconstruction quality for real-time graphics.
NVIDIA Releases Brain MRI Generation Model on Hugging Face: 3D Latent Diffusion for T1, FLAIR, T2, and SWI Scans
NVIDIA has open-sourced a 3D latent diffusion model for generating high-resolution brain MRI scans across four modalities. The model claims state-of-the-art FID scores and 33× faster inference than prior methods.
New Research Improves Text-to-3D Motion Retrieval with Interpretable Fine-Grained Alignment
Researchers propose a novel method for retrieving 3D human motion sequences from text descriptions using joint-angle motion images and token-patch interaction. It outperforms state-of-the-art methods on standard benchmarks while offering interpretable correspondences.
New Research Shows Pre-Aligned Multi-Modal Models Advance 3D Shape Retrieval from Images
A new arXiv paper demonstrates that pre-aligned image and 3D shape encoders, combined with hard contrastive learning, achieve state-of-the-art performance for image-based shape retrieval. This enables zero-shot retrieval without database-specific training.
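Zero-shot retrieval with pre-aligned encoders reduces, at inference time, to nearest-neighbor search in the shared embedding space. The function below is a generic sketch of that step (names and shapes are illustrative, not the paper's API): embed the query image once, then rank database shape embeddings by cosine similarity, with no database-specific training involved.

```python
import numpy as np

def retrieve_shapes(image_emb, shape_embs, k=5):
    """Zero-shot retrieval: rank database shape embeddings by cosine
    similarity to a query image embedding from a pre-aligned encoder.

    image_emb:  (d,) query embedding
    shape_embs: (n, d) database of shape embeddings
    Returns the indices and similarities of the top-k matches.
    """
    q = image_emb / np.linalg.norm(image_emb)
    db = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity per shape
    order = np.argsort(-sims)[:k]       # highest similarity first
    return order, sims[order]
```

The hard contrastive learning described in the paper affects how the encoders are trained; the retrieval loop itself stays this simple, which is what makes swapping in a new shape database trivial.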
Meshcraft Democratizes 3D Creation: Multi-Engine AI Platform Bridges Text-to-3D Gap
Meshcraft emerges as a web-based platform offering text-to-3D and image-to-3D generation with selectable AI engines. The tool provides both free and premium options, addressing quality bottlenecks in 3D generation through engine optimization rather than image model refinement.
VAST's $50M Funding Signals 3D AI Revolution: From Foundation Models to World Simulation
AI startup VAST has secured $50 million in Series A funding to advance its 3D foundation models. The company is preparing to launch its first world model, positioning itself at the forefront of spatial AI development.
From Flat Images to 3D Worlds: How Persistent 3D State Models Will Revolutionize Virtual Try-On and Digital Showrooms
PERSIST introduces world models with persistent 3D scene memory, enabling coherent, evolving 3D environments from single images. For luxury retail, this could mean photorealistic virtual try-on with realistic garment physics and immersive digital showrooms that customers can explore and customize.
Utonia AI Breakthrough: A Single Transformer Model Unifies All 3D Point Cloud Data
Researchers have developed Utonia, a single self-supervised transformer that learns unified 3D representations across diverse point cloud data types, including LiDAR, CAD models, indoor scans, and video-lifted data. This unified approach enables cross-domain transfer and emergent behaviors in 3D AI.
VGGT-Det: How AI Is Learning to See in 3D Without Camera Calibration
Researchers have developed VGGT-Det, a framework for multi-view 3D object detection that works without calibrated camera poses. The system mines internal geometric priors through attention mechanisms, outperforming traditional methods in indoor environments.
AI Game Engine Breakthrough: Complete 3D Worlds Generated in Seconds
An AI system can now generate fully functional 3D games in seconds, complete with interactive worlds, moving characters, and working gameplay systems. This browser-based technology marks a notable advance in procedural content creation.
BetterScene Bridges the Gap: How Aligning AI Representations Unlocks Photorealistic 3D Synthesis
Researchers introduce BetterScene, a novel AI method that dramatically improves 3D scene generation from just a handful of photos. By aligning the internal representations of a powerful video diffusion model, it produces consistent, artifact-free novel views, pushing the boundary of what's possible in computational photography and virtual world creation.
Text-to-Game AI Emerges: How a Single Prompt Can Now Generate Complete 3D Worlds
An AI system can transform simple text descriptions into fully playable 3D games complete with NPCs, physics, multiplayer capabilities, and persistent worlds. The development marks a significant step in procedural content generation and lowers the barrier to game development.
From Prompt to Playable: New AI Platform Generates Complete 3D Games Instantly
An AI system can now transform simple text prompts into fully functional 3D games complete with NPCs, physics, multiplayer capabilities, and persistent worlds. Backed by NVIDIA and YouTube's co-founder with $28M in funding, the platform points to a major shift in game development.
The Next Platform Shift: How Persistent 3D World Models Are Becoming the New Programmable Interface
A new collaboration between Baseten and World Labs signals a paradigm shift where persistent 3D world models become programmable platforms, potentially rivaling the transformative impact of large language models through accessible developer APIs.