evolutionary computation
30 articles about evolutionary computation in AI news
AI Architects Itself: How Evolutionary Algorithms Are Creating the Next Generation of AI
Sakana AI's Shinka Evolve system uses evolutionary algorithms to autonomously design new AI architectures. By pairing LLMs with mutation and selection, it discovers high-performing models without human guidance, potentially uncovering paradigm-shifting innovations.
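Shinka Evolve's implementation is not detailed here, but the mutate-and-select loop such systems build on can be sketched with a toy objective; in this sketch a random Gaussian perturbation stands in for the LLM-proposed mutation (the real system prompts a model to rewrite candidate designs).

```python
import random

def fitness(x):
    # Toy objective standing in for "benchmark score of a candidate
    # architecture": maximized at x = 3.
    return -(x - 3.0) ** 2

def mutate(x, scale=0.5):
    # Stand-in for an LLM-proposed edit to a candidate.
    return x + random.gauss(0.0, scale)

def evolve(pop_size=20, generations=100, keep=5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest candidates...
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        # ...then refill the population with mutated copies of survivors.
        population = survivors + [
            mutate(random.choice(survivors)) for _ in range(pop_size - keep)
        ]
    return max(population, key=fitness)

best = evolve()
print(round(best, 1))  # typically close to the optimum at 3.0
```

Swapping the toy `mutate` for an LLM call and `fitness` for a benchmark run is, at a high level, the substitution these systems make.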
AlphaEvolve: Google DeepMind's LLM-Powered Evolutionary Leap in AI Development
Google DeepMind has unveiled AlphaEvolve, a groundbreaking system that uses large language models to automatically write and evolve AI algorithms. This represents a paradigm shift where AI begins creating more advanced AI, potentially accelerating development beyond human capabilities.
Evo LLM Unifies Autoregressive and Diffusion AI, Achieving New Balance in Language Generation
Researchers introduce Evo, a novel large language model architecture that bridges autoregressive and diffusion-based text generation. By treating language creation as a continuous evolutionary flow, Evo adaptively balances confident refinement with exploratory planning, achieving state-of-the-art results across 15 benchmarks while maintaining fast inference speeds.
Beyond General AI: How Liquid Foundation Models Are Revolutionizing Drug Discovery
Researchers have developed MMAI Gym, a specialized training platform that teaches AI the 'language of molecules' to create more efficient drug discovery models. The resulting Liquid Foundation Models outperform larger general-purpose AI while requiring fewer computational resources.
EvoX: The Self-Improving AI That Evolves Its Own Evolution Strategy
Researchers have developed EvoX, a meta-evolution system that dynamically optimizes its own search strategies while solving problems. Unlike traditional evolutionary algorithms with fixed parameters, EvoX continuously adapts how it selects and varies solutions based on real-time progress. The system outperformed existing AI-driven evolutionary methods across nearly 200 real-world optimization tasks.
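EvoX's actual adaptation mechanism isn't spelled out in the summary, but the classic example of a search strategy tuning itself from real-time progress is Rechenberg's 1/5th-success rule, sketched here for a (1+1) evolution strategy:

```python
import random

def one_fifth_rule(f, x0, sigma=1.0, iters=300):
    """(1+1)-ES with the 1/5th-success rule: the mutation step size
    sigma adapts itself based on how often mutations improve f."""
    x, fx = x0, f(x0)
    successes, window = 0, 20
    for t in range(1, iters + 1):
        cand = x + random.gauss(0.0, sigma)
        fc = f(cand)
        if fc < fx:               # minimization: accept improvements
            x, fx = cand, fc
            successes += 1
        if t % window == 0:
            rate = successes / window
            # Many successes -> steps too timid, enlarge sigma;
            # few successes -> steps too wild, shrink sigma.
            sigma *= 1.5 if rate > 0.2 else 0.6
            successes = 0
    return x, fx

x, fx = one_fifth_rule(lambda v: (v - 7.0) ** 2, x0=20.0)
```

The same idea, generalized to whole selection and variation operators rather than a single step size, is what "meta-evolution" refers to.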
Evolver: How AI-Driven Evolution Is Creating GPT-5-Level Performance Without Training
Imbue's newly open-sourced Evolver tool uses LLMs to automatically optimize code and prompts through evolutionary algorithms, achieving 95% on the ARC-AGI-2 benchmark, performance the authors compare to a hypothetical GPT-5.2 model. This approach eliminates the need for gradient descent while dramatically reducing optimization costs.
AI Teaches Itself to See: Adversarial Self-Play Forges Unbreakable Vision Models
Researchers propose AOT, a revolutionary self-play framework where AI models generate their own adversarial training data through competitive image manipulation. This approach overcomes the limitations of finite datasets to create multimodal models with unprecedented perceptual robustness.
Perplexity AI Unveils 'Perplexity Computer': The Next Evolution in AI-Powered Computing
Perplexity AI has launched 'Perplexity Computer,' a groundbreaking AI-native computing platform that integrates search, writing, and computational tools into a unified interface. This development represents a significant shift toward more integrated, conversational AI systems that could redefine how users interact with computers.
Google's 'Deep-Thinking Ratio' Breakthrough: Smarter AI Reasoning at Half the Cost
Google researchers have developed a 'Deep-Thinking Ratio' metric that identifies when AI models are genuinely reasoning versus just generating longer text. This breakthrough improves accuracy while cutting inference costs by approximately 50% through early halting of unpromising computations.
AI Video Processing Breakthrough: MIT & NVIDIA Team Achieves 19x Speed Boost by Skipping Static Pixels
Researchers from MIT, NVIDIA, UC Berkeley, and Clarifai have developed a revolutionary method that accelerates AI video processing by 19 times. Their system acts as a smart filter, skipping static pixels and focusing only on moving elements, enabling efficient 4K video analysis.
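The team's method is not public in detail; the core idea of change-driven processing can be sketched as a per-pixel diff against the previous frame, running the expensive model only where change exceeds a threshold and reusing cached outputs elsewhere (real systems operate on tiles or tokens, not single pixels):

```python
def process_video(frames, expensive_fn, threshold=8):
    """Run expensive_fn only on pixels that changed vs. the previous
    frame; reuse cached results for static pixels."""
    prev, cache, outputs, work = None, {}, [], 0
    for frame in frames:
        out = []
        for i, px in enumerate(frame):
            if prev is not None and abs(px - prev[i]) < threshold:
                out.append(cache[i])          # static pixel: reuse
            else:
                cache[i] = expensive_fn(px)   # changed pixel: recompute
                work += 1
                out.append(cache[i])
        outputs.append(out)
        prev = frame
    return outputs, work

# 3 frames of 8 "pixels"; only one pixel changes after frame 0.
frames = [[10] * 8, [10] * 7 + [200], [10] * 7 + [50]]
outs, work = process_video(frames, expensive_fn=lambda p: p * 2)
print(work)  # 8 calls for frame 0, then 1 per later frame -> 10, not 24
```

The speedup comes from `work` scaling with the amount of motion rather than the number of pixels.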
Jensen Huang's '5-Layer Cake': Nvidia CEO Redefines AI as Industrial Infrastructure
Nvidia CEO Jensen Huang introduces a revolutionary framework positioning AI as essential infrastructure spanning energy, chips, infrastructure, models, and applications. This industrial perspective reshapes how we understand AI's technological and economic foundations.
Microsoft's VibeVoice-ASR Shatters Transcription Limits with 60-Minute Single-Pass Processing
Microsoft has released VibeVoice-ASR on Hugging Face, a revolutionary speech recognition model that transcribes 60-minute audio in one pass with speaker diarization, timestamps, and multilingual support across 50+ languages without configuration.
Typeless AI Redefines Voice-to-Text: From Transcription to Native-Level Rewriting
Typeless AI has introduced a revolutionary voice-to-text tool that doesn't just transcribe speech but rewrites it with native-level fluency, grammar correction, and tone adjustment across multiple languages, potentially eliminating manual typing for many professional tasks.
Physics-Inspired AI Memory: How Continuous Fields Could Solve AI's Forgetting Problem
Researchers have developed a revolutionary memory system for AI agents that treats information as continuous fields governed by physics-inspired equations rather than discrete database entries. The approach shows dramatic improvements in long-context reasoning, with a +116% improvement on multi-session tasks and near-perfect collective intelligence in multi-agent scenarios.
ZeroClaw: The $10 AI Assistant That Could Democratize Personal AI
ZeroClaw is a revolutionary AI assistant that runs on $10 hardware with less than 5MB RAM, making AI accessible on ultra-low-cost devices. Built entirely in Rust, it represents a breakthrough in efficient AI deployment.
NVIDIA's DreamDojo: Teaching Robots to 'Dream' in Pixels with 44,000 Hours of Human Experience
NVIDIA has open-sourced DreamDojo, a revolutionary robot world model trained on 44,711 hours of real-world human video. Instead of relying on physics engines, it predicts action outcomes directly in pixel space, potentially accelerating robotics development by orders of magnitude.
Living Architecture: AI-Designed Cyanobacteria Concrete That Repairs Itself and Captures Carbon
Researchers have developed a revolutionary living building material using cyanobacteria that captures atmospheric CO₂ and self-reinforces over time. This bio-concrete, validated by 400+ days of laboratory data, represents a paradigm shift toward regenerative construction.
LLM-as-a-Judge Framework Fixes Math Evaluation Failures
Researchers propose an LLM-as-a-judge framework for evaluating math reasoning that beats rule-based symbolic comparison, fixing failures in Lighteval and SimpleRL. This enables more accurate benchmarking of LLM math abilities.
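The failure mode being fixed is easy to reproduce: exact string comparison rejects algebraically equivalent answers. One common rule-based mitigation, before falling back to an LLM judge for answers in prose, is numeric equivalence checking at random sample points. This is a hedged toy, not the harness code from Lighteval or SimpleRL:

```python
import math, random

def string_match(a, b):
    return a.strip() == b.strip()

def numerically_equivalent(a, b, trials=20, tol=1e-9):
    """Judge two closed-form answers in x equal if they agree at
    random sample points, catching equivalent forms that exact
    string comparison marks wrong."""
    env = {"__builtins__": {}, "sqrt": math.sqrt}
    for _ in range(trials):
        x = random.uniform(0.1, 10.0)
        va = eval(a, env, {"x": x})
        vb = eval(b, env, {"x": x})
        if abs(va - vb) > tol * max(1.0, abs(va)):
            return False
    return True

gold, pred = "(x + 1)**2", "x**2 + 2*x + 1"
print(string_match(gold, pred))            # False: exact match fails
print(numerically_equivalent(gold, pred))  # True: same polynomial
```

An LLM judge extends this to answers that no parser handles, such as "one half" versus "0.5", which is where rule-based pipelines break down.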
TACO Framework Cuts Agent Token Overhead 10% via Self-Evolving Compression
Researchers introduced TACO, a framework that enables terminal agents to automatically discover and refine context compression rules from their own interaction trajectories. This approach cuts token overhead by approximately 10% on benchmarks like TerminalBench and SWE-Bench Lite while preserving task accuracy.
Altman: Next-Gen AI Models to Aid 'Career-Defining' Scientific Discovery
OpenAI CEO Sam Altman stated that upcoming AI models will assist researchers in making 'career-defining' discoveries, though he tempered expectations of immediate Nobel-level breakthroughs.
AI Firms Target Biotech for High-Impact, High-Margin Applications
A trend analysis notes AI companies are shifting focus to biotech, where accurate prediction models can be monetized through drug discovery and synthetic biology, creating a new competitive frontier.
ASI-Evolve: This AI Designs Better AI Than Humans Can — 105 New Architectures, Zero Human Guidance
Researchers built an AI that runs the entire research cycle on its own: reading papers, designing experiments, running them, and learning from the results. It discovered 105 architectures that beat human-designed models and invented new learning algorithms. The system has been open-sourced.

NVIDIA CEO Jensen Huang Declares All Future Software Will Be Agentic
NVIDIA CEO Jensen Huang stated that all future software will be agentic, meaning every software company must transform into an agentic company. This vision positions AI agents as the fundamental architecture for future computing.
OpenAI Reallocates Compute and Talent Toward 'Automated Researchers' and Agent Systems
OpenAI is reallocating significant compute resources and engineering talent toward developing 'automated researchers' and agent-based systems capable of executing complex tasks end-to-end, signaling a strategic pivot away from some existing projects.
DRKL: Diversity-Aware Reverse KL Divergence Fixes Overconfidence in LLM Distillation
A new paper proposes Diversity-aware Reverse KL (DRKL), a fix for the overconfidence and reduced diversity caused by the popular Reverse KL divergence in LLM distillation. DRKL consistently outperforms existing objectives across multiple benchmarks.
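As background for why reverse KL causes overconfidence: forward KL(p‖q) is mass-covering, while reverse KL(q‖p) is mode-seeking and barely penalizes a student that collapses onto one teacher mode. A small discrete example (DRKL's own objective is in the paper, not reproduced here):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Teacher spreads mass over two modes; student collapses onto one.
teacher = [0.49, 0.49, 0.02]
student = [0.96, 0.02, 0.02]

forward = kl(teacher, student)  # KL(p||q): punishes ignoring a teacher mode
reverse = kl(student, teacher)  # KL(q||p): barely punishes mode collapse
print(forward > reverse)        # True
```

Distillation objectives built on reverse KL therefore tolerate exactly the diversity loss the paper targets.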
An AI Agent Autonomously Tuned a Model and Beat Grid Search
A developer set up an AI agent to autonomously experiment with and tune a model's hyperparameters. The agent, working unattended, modified code and ran short training cycles, ultimately outperforming a traditional grid search.
HyEvo Framework Automates Hybrid LLM-Code Workflows, Cuts Inference Cost 19x vs. SOTA
Researchers propose HyEvo, an automated framework that generates agentic workflows combining LLM nodes for reasoning with deterministic code nodes for execution. It reduces inference cost by up to 19x and latency by 16x while outperforming existing methods on reasoning benchmarks.
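HyEvo's generated workflows are not shown in the summary; the node-typing idea behind them can be sketched as a pipeline where each node is either an LLM call (expensive, for open-ended reasoning) or plain code (cheap, deterministic). The `fake_llm` stub below is hypothetical and stands in for a real model API:

```python
def fake_llm(prompt):
    # Stand-in for a model call; a real workflow would hit an LLM API.
    return "42" if "extract" in prompt else prompt

class LLMNode:
    """Node needing open-ended reasoning: costs an LLM call."""
    def __init__(self, template):
        self.template = template
    def run(self, x):
        return fake_llm(self.template.format(x=x))

class CodeNode:
    """Node with deterministic logic: a plain function, no LLM cost."""
    def __init__(self, fn):
        self.fn = fn
    def run(self, x):
        return self.fn(x)

def run_workflow(nodes, x):
    for node in nodes:
        x = node.run(x)
    return x

# Reason once with the LLM, then do arithmetic in cheap code nodes.
workflow = [
    LLMNode("extract the number from: {x}"),
    CodeNode(int),
    CodeNode(lambda n: n * 2),
]
print(run_workflow(workflow, "the answer is forty-two"))  # 84
```

Routing deterministic steps to code nodes instead of extra LLM calls is where the claimed cost and latency savings come from.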
LLM Fine-Tuning Explained: A Technical Primer on LoRA, QLoRA, and When to Use Them
A technical guide explains the fundamentals of fine-tuning large language models, detailing when it's necessary, how the parameter-efficient LoRA method works, and why the QLoRA innovation made the process dramatically more accessible.
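The LoRA idea from that primer fits in a few lines: the frozen weight W is augmented with a low-rank update, y = Wx + (alpha/r)·B(Ax), where only the small matrices A (r × d_in) and B (d_out × r, initialized to zero) are trained. A minimal pure-Python sketch:

```python
def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=8, r=1):
    """LoRA forward pass: y = W x + (alpha/r) * B (A x)."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    s = alpha / r
    return [b + s * d for b, d in zip(base, delta)]

# d = 4, rank r = 1: 4 + 4 = 8 trainable numbers vs. 16 in full W.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # frozen
A = [[0.1, 0.2, 0.3, 0.4]]                 # r x d_in, trained
B = [[0.0], [0.0], [0.0], [0.0]]           # d_out x r, zero-initialized
x = [1.0, 2.0, 3.0, 4.0]
print(lora_forward(x, W, A, B))  # equals W x at init, since B = 0
```

Zero-initializing B makes the adapter a no-op at the start of training, and at realistic sizes (d ≈ 4096, r ≈ 8) the trainable-parameter savings are dramatic; QLoRA adds 4-bit quantization of the frozen W on top.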
Recommendation System Evolution: From Static Models to LLM-Powered Personalization
This article traces the technological evolution of recommendation systems through multiple transformative stages, culminating in the current LLM-powered era. It provides a conceptual framework for understanding how large language models are reshaping personalization.
Google DeepMind's Intelligent Delegation Framework: The Missing Infrastructure for AI Agents
Google DeepMind has introduced a groundbreaking framework called Intelligent AI Delegation that enables AI agents to safely hand off tasks to other agents and humans. The system addresses critical issues of accountability, transparency, and reliability in multi-agent systems.