physics
30 articles about physics in AI news
PilotBench Exposes LLM Physics Gap: 11-14 MAE vs. 7.01 for Forecasters
PilotBench, a new benchmark built from 708 real-world flight trajectories, evaluates LLMs on safety-critical physics prediction. It uncovers a 'Precision-Controllability Dichotomy': LLMs follow instructions well but suffer high mean absolute error (MAE of 11-14), while traditional forecasters are precise (7.01 MAE) but lack semantic reasoning.
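MAE, the metric behind the headline numbers, is simply the average magnitude of prediction errors. A minimal sketch with invented altitude values (not PilotBench data) shows how an instruction-following but imprecise predictor scores worse than a tighter classical forecaster:

```python
import numpy as np

def mean_absolute_error(predicted, actual):
    """Mean absolute error: average magnitude of prediction errors."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Toy altitude series (arbitrary units): the LLM-style predictions drift
# further from the true trajectory than the classical forecaster's.
truth    = [100, 102, 105, 109, 114]
llm_pred = [ 88,  90, 118, 122,  99]
classic  = [ 97, 100, 108, 112, 110]

print(mean_absolute_error(llm_pred, truth))  # larger error
print(mean_absolute_error(classic, truth))   # smaller error
```

The gap PilotBench reports is of this kind: both systems produce well-formed forecasts, but one's average deviation is roughly double the other's.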
CATCHES Launches Generative AI with Physics-Based Sizing Technology for Fashion E-Commerce
CATCHES has launched a generative AI platform for fashion e-commerce featuring physics-based sizing technology. The launch is in partnership with luxury brand AMIRI and is powered by NVIDIA's AI infrastructure. This directly targets a core pain point in online apparel retail: fit uncertainty and high return rates.
AI Cracks Cosmic Code: How Neuro-Symbolic Systems Are Solving Physics' Toughest Puzzles
Researchers have developed an AI system that autonomously solved an open problem in theoretical physics, deriving exact analytical solutions for gravitational radiation from cosmic strings. The neuro-symbolic approach combines Gemini Deep Think with systematic tree search to achieve what previous AI attempts couldn't.
Beyond CGI: How Physics-Consistent 4D AI Will Transform Luxury Product Visualization
Phys4D's physics-consistent 4D modeling pipeline solves the 'uncanny valley' of AI-generated product videos, enabling hyper-realistic, physically plausible digital twins for luxury goods. This enables scalable, high-fidelity content creation for marketing, virtual try-on, and digital archives.
Beyond the Loss Function: New AI Architecture Embeds Physics Directly into Neural Networks for 10x Faster Wave Modeling
Researchers have developed a novel Physics-Embedded PINN that integrates wave physics directly into neural network architecture, achieving 10x faster convergence and dramatically reduced memory usage compared to traditional methods. This breakthrough enables large-scale 3D wave field reconstruction for applications from wireless communications to room acoustics.
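The quantity any physics-informed approach drives toward zero is the PDE residual. As a sketch of that core term (not the paper's embedded architecture), here is the 1D wave-equation residual u_tt - c²·u_xx evaluated by central finite differences on the exact traveling-wave solution u(x, t) = sin(x - c·t), for which it vanishes analytically:

```python
import numpy as np

c = 2.0          # wave speed (arbitrary choice for the demo)
dx = dt = 1e-2
x = np.arange(0.0, 1.0, dx)
t = np.arange(0.0, 1.0, dt)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.sin(X - c * T)  # exact solution of u_tt = c^2 * u_xx

# Second-order central differences in time and space.
u_tt = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / dt**2
u_xx = (u[2:, :] - 2 * u[1:-1, :] + u[:-2, :]) / dx**2
residual = u_tt[1:-1, :] - c**2 * u_xx[:, 1:-1]

print(float(np.max(np.abs(residual))))  # ≈ 0, up to discretization error
```

A conventional PINN penalizes this residual in the loss; the paper's claim is that baking the wave operator into the network itself, rather than the loss, is what buys the 10x speedup.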
Physics-Inspired AI Memory: How Continuous Fields Could Solve AI's Forgetting Problem
Researchers have developed a revolutionary memory system for AI agents that treats information as continuous fields governed by physics-inspired equations rather than discrete database entries. The approach shows dramatic improvements in long-context reasoning, with +116% performance on multi-session tasks and near-perfect collective intelligence in multi-agent scenarios.
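To make the "continuous field" idea concrete, here is a toy memory where each write deposits a Gaussian bump on a 1D field, reads blend nearby bumps, and decay fades amplitudes smoothly instead of deleting rows. This is an illustrative sketch of the concept only; the paper's actual field equations are not reproduced here.

```python
import numpy as np

class FieldMemory:
    """Toy continuous-field memory over a 1-D coordinate space."""

    def __init__(self, width=0.1):
        self.width = width
        self.bumps = []  # each entry: [position, value, amplitude]

    def write(self, position, value):
        self.bumps.append([position, value, 1.0])

    def read(self, query):
        """Kernel-weighted blend of stored values near the query point."""
        weights = np.array([a * np.exp(-((query - p) / self.width) ** 2)
                            for p, _, a in self.bumps])
        values = np.array([v for _, v, _ in self.bumps])
        if weights.sum() == 0:
            return None
        return float(np.dot(weights, values) / weights.sum())

    def decay(self, rate=0.9):
        for bump in self.bumps:
            bump[2] *= rate  # old memories fade continuously, never deleted

mem = FieldMemory()
mem.write(0.2, 10.0)
mem.write(0.8, 50.0)
print(mem.read(0.21))  # ≈ 10: dominated by the nearby bump
```

The contrast with a database is that recall is a smooth function of position: nearby queries retrieve graded mixtures rather than exact-match rows.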
AI Models Detect 'Nothingness' Moving Faster Than Light in Physics Data
A study in Nature reports AI has identified points in the quantum vacuum accelerating past light speed. This is the first direct measurement of such an effect, enabled by machine learning analysis of experimental data.
How to Use Claude Code's 'Grad Student' Research Mode for Complex Problem-Solving
Claude Code's advanced reasoning can now tackle complex research tasks like a grad student. Here's how to prompt it for 'vibe physics' and deep technical analysis.
LLM-Driven Heuristic Synthesis for Industrial Process Control: Lessons from Hot Steel Rolling
Researchers propose a framework where an LLM iteratively writes and refines human-readable Python controllers for industrial processes, using feedback from a physics simulator. The method generates auditable, verifiable code and employs a principled budget strategy, eliminating the need for problem-specific tuning.
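The propose-simulate-keep loop at the heart of such frameworks can be sketched in a few lines. Everything here is a stand-in: `simulate_rolling_mill` is a hypothetical one-parameter plant, and `propose_controller` perturbs a gain where a real system would have the LLM emit and edit Python source.

```python
import random

def simulate_rolling_mill(gain):
    """Stand-in simulator: tracking error of a proportional controller
    on a hypothetical plant whose ideal gain is 0.6."""
    return (gain - 0.6) ** 2

def propose_controller(best_gain, rng):
    """Stand-in for the LLM proposing a revised controller."""
    return best_gain + rng.uniform(-0.2, 0.2)

def synthesize(budget=50, seed=0):
    """Spend a fixed simulation budget; keep only verified improvements."""
    rng = random.Random(seed)
    best_gain = 0.0
    best_err = simulate_rolling_mill(best_gain)
    for _ in range(budget):
        candidate = propose_controller(best_gain, rng)
        err = simulate_rolling_mill(candidate)
        if err < best_err:  # the simulator, not the LLM, is the judge
            best_gain, best_err = candidate, err
    return best_gain, best_err

gain, err = synthesize()
print(round(gain, 2), round(err, 4))
```

The key property the paper emphasizes survives even in this toy: every accepted controller is verified against the simulator, so the output is auditable code with a measured error, not an unchecked suggestion.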
Theoretical Physicist Matthew Schwartz Rates Claude 4.5 Opus as 'Second-Year Grad Student Level', Claims 10x Research Acceleration
Theoretical physicist Matthew Schwartz found that Anthropic's Claude 4.5 Opus performs at roughly a second-year graduate student level on physics research tasks, accelerating his workflow by about 10x, according to a guest-post analysis.
Tsinghua & Peking University Researchers Train Humanoid Robot to Play Tennis Using Scattered, Imperfect Human Motion Clips
A team from Tsinghua, Peking University, and other labs taught a humanoid robot to play tennis using short, imperfect human swing clips instead of perfect match data. The system uses a physics simulator to correct errors, lowering the barrier for teaching robots complex physical tasks.
Digital Fruit Fly Brain Achieves First Full Perception-Action Loop in Simulation
Startup Eon Systems has demonstrated what appears to be the first complete whole-brain emulation controlling a simulated body. Their digital model of a fruit fly brain, with 125,000 neurons and 50 million synapses, successfully drives realistic behaviors in a physics-simulated fly body.
From Flat Images to 3D Worlds: How Persistent 3D State Models Will Revolutionize Virtual Try-On and Digital Showrooms
PERSIST introduces world models with persistent 3D scene memory, enabling coherent, evolving 3D environments from single images. For luxury retail, this means photorealistic virtual try-on with perfect garment physics and immersive digital showrooms that customers can explore and customize.
Text-to-Game AI Emerges: How a Single Prompt Can Now Generate Complete 3D Worlds
A breakthrough AI system can transform simple text descriptions into fully playable 3D games complete with NPCs, physics, multiplayer capabilities, and persistent worlds. This development represents a quantum leap in procedural content generation and democratizes game development.
From Prompt to Playable: New AI Platform Generates Complete 3D Games Instantly
A groundbreaking AI system can now transform simple text prompts into fully functional 3D games complete with NPCs, physics, multiplayer capabilities, and persistent worlds. Backed by NVIDIA and YouTube's co-founder with $28M in funding, this represents a seismic shift in game development.
NVIDIA's DreamDojo: Teaching Robots to 'Dream' in Pixels with 44,000 Hours of Human Experience
NVIDIA has open-sourced DreamDojo, a revolutionary robot world model trained on 44,711 hours of real-world human video. Instead of relying on physics engines, it predicts action outcomes directly in pixel space, potentially accelerating robotics development by orders of magnitude.
AI Crosses the Rubicon: From Scientific Tool to Active Discovery Partner
This week marked a paradigm shift as AI systems transitioned from research tools to active participants in scientific discovery. OpenAI's GPT-5.2 Pro helped conjecture a new formula in particle physics, while Google's Gemini 3 Deep Think achieved unprecedented results on reasoning benchmarks. These developments signal AI's growing capacity for genuine scientific contribution.
Beyond Recognition: New Framework Forces AI to Prove Its Physical Reasoning Through Code
Researchers introduce VisPhyWorld, a novel framework that evaluates AI's physical reasoning by requiring models to generate executable simulator code from visual observations. This approach moves beyond traditional benchmarks to test whether models truly understand physics rather than just recognizing patterns.
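The execution-based evaluation idea can be sketched as a tiny harness. Here `generated_source` stands in for model output (a free-fall simulator), and the checker runs it and scores the trajectory against the closed-form solution; this is an illustrative setup under assumed names, not the VisPhyWorld implementation.

```python
# Simulated "model output": code the model claims captures free fall.
generated_source = """
def simulate_drop(h0, dt, steps):
    g, h, v, traj = 9.81, h0, 0.0, []
    for _ in range(steps):
        v -= g * dt
        h += v * dt
        traj.append(h)
    return traj
"""

def check_physical_reasoning(source, h0=100.0, dt=0.001, steps=1000, tol=0.1):
    """Execute the generated simulator and compare its final height
    against the analytic free-fall result h0 - g t^2 / 2."""
    namespace = {}
    exec(source, namespace)          # run the model-generated code
    traj = namespace["simulate_drop"](h0, dt, steps)
    t_final = dt * steps
    analytic = h0 - 0.5 * 9.81 * t_final ** 2
    return abs(traj[-1] - analytic) < tol

print(check_physical_reasoning(generated_source))
```

The point of the framework is exactly this shift: a model that merely "recognizes" falling objects cannot pass, because the grader executes its code and demands a numerically correct trajectory.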
BrainCo Revo 3 Dexterous Hand Targets Real-World Robot Deployment Gap
BrainCo announced the Revo 3 dexterous robotic hand, engineered to bridge the gap between lab demos and real-world deployment. It features 21 active degrees of freedom, a 5kg per-finger load capacity, and one-click sim-to-real transfer.
Sabi Cap: 100k-Sensor EEG Hat Decodes Internal Speech at 30 WPM
Sabi released the Sabi Cap, a wearable EEG beanie with 70k-100k biosensors and a brain foundation model trained on 100k hours of neural data. It decodes internal speech to text at ~30 WPM and enables cursor control via intention.
Sabicap Develops Brain Wearable to Decode Imagined Speech into Text
Sabicap is developing a brain wearable with tens of thousands of sensors to decode imagined speech into text. The company, backed by Vinod Khosla, aims to create a system that works across users with minimal calibration for broad adoption.
Google Launches PaperBanana AI to Format Raw Methods into Publication Text
Google has launched PaperBanana, an AI tool designed to transform unstructured methodology notes into polished, publication-ready text. This targets a key bottleneck in academic writing, automating the formatting and structuring of methods sections.
Altman: Next-Gen AI Models to Aid 'Career-Defining' Scientific Discovery
OpenAI CEO Sam Altman stated that upcoming AI models will assist researchers in making 'career-defining' discoveries, though he tempered expectations of immediate Nobel-level breakthroughs.
MiniMax M2.7 Used by AtomicBot to Generate Flappy Bird Clone
A developer used the open-source MiniMax M2.7 frontier model to generate a complete, playable desktop game from a text prompt. This demonstrates practical code generation for creative applications.
Kyutai Labs Releases OVIE: Single-Image Novel View Synthesis Model
French AI lab Kyutai Labs released OVIE, a novel view generation model trained only on single images, bypassing the need for costly multi-view datasets. This could democratize 3D content creation from 2D photos.
AI Agent Research Faces Human Evaluation Bottleneck
A prominent AI researcher argues that human-based evaluation is fundamentally flawed for testing autonomous AI agents, as humans cannot perceive or replicate agent logic, creating a major research bottleneck.
ByteDance's OmniShow Unifies Text, Image, Audio, Pose for Video Gen
ByteDance introduced OmniShow, a unified multimodal framework for video generation that accepts text, reference images, audio, and pose inputs simultaneously. It claims state-of-the-art performance across diverse conditioning settings.
AGIBOT Launches GE-Sim 2.0: A Foundation Model for Robot Simulation
AGIBOT has launched GE-Sim 2.0, a foundation model for robot simulation. It allows AI agents to generate and reason within photorealistic simulated environments for planning and training.
Seedance 2.0 Generates Complex 'Mech Battle' Video from Text Prompt
Academic Ethan Mollick highlighted Seedance 2.0's ability to generate a coherent video for the complex prompt 'a mech battle between Neanderthal and Homo Sapiens'. This demonstrates the model's progress in multi-concept scene composition and temporal consistency.
India's Human Motion Farms Train Humanoid Robots with First-Person Hand Data
Labs in India are capturing detailed human motion data—focusing on grip, force, and error recovery—to train AI models for humanoid robots. This addresses the critical bottleneck of acquiring physical intelligence data for robotics.