gentic.news — AI News Intelligence Platform

agent training

30 articles about agent training in AI news

ART Framework Automates Reward Engineering, Revolutionizing AI Agent Training

The new ART framework combines GRPO with RULER to automatically generate reward functions, eliminating the need for manual reward engineering in AI agent training. This open-source solution could dramatically accelerate development of capable AI agents across domains.

85% relevant
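
The summary names the mechanism: a judge LLM scores a group of rollouts relative to one another, and those scores feed GRPO's group-normalized advantages directly, with no hand-written reward function. Below is a minimal sketch of that idea; `call_judge_llm`, the prompt wording, and the 0-10 scale are illustrative stand-ins, not ART's actual API.

```python
from statistics import mean, pstdev

def call_judge_llm(prompt: str) -> float:
    """Stand-in for an LLM call that returns a numeric 0-10 score."""
    raise NotImplementedError("wire up your LLM client here")

def ruler_style_rewards(task: str, trajectories: list[str]) -> list[float]:
    """Score a group of rollouts relative to each other; no hand-written reward."""
    scores = []
    for traj in trajectories:
        prompt = (
            f"Task: {task}\n"
            f"Agent trajectory:\n{traj}\n"
            "Rate how well the trajectory accomplishes the task, 0-10. "
            "Reply with only the number."
        )
        scores.append(call_judge_llm(prompt))
    # GRPO consumes group-normalized advantages: (r - mean) / std.
    mu, sigma = mean(scores), pstdev(scores) or 1.0
    return [(s - mu) / sigma for s in scores]
```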

ARLArena Framework Solves Critical Stability Problem in AI Agent Training

Researchers have developed ARLArena, a unified framework that addresses the persistent instability problem in agentic reinforcement learning. The framework provides standardized testing and introduces SAMPO, a stable optimization method that prevents training collapse in complex AI agent systems.

70% relevant

SDAR: Self-Distilled RL Stabilizes Multi-Turn LLM Agents, +9.4% on ALFWorld

SDAR gates self-distillation within GRPO to stabilize multi-turn LLM agent training, yielding +9.4% on ALFWorld and gains on WebShop and Search-QA across Qwen2.5 and Qwen3 models.

85% relevant
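
A rough sketch of what "gating self-distillation within GRPO" could look like: the usual clipped policy-gradient surrogate, plus a KL term toward a frozen earlier snapshot of the same model that switches on only when the reward signal degenerates. The gate criterion (low group-reward standard deviation) and all coefficients are assumptions, not the paper's.

```python
import torch
import torch.nn.functional as F

def sdar_style_loss(policy_logits, teacher_logits, logprob_ratio, advantages,
                    group_reward_std, clip_eps=0.2, kl_coef=0.1, gate_std=0.05):
    # Standard clipped policy-gradient surrogate (PPO/GRPO style).
    unclipped = logprob_ratio * advantages
    clipped = torch.clamp(logprob_ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    pg_loss = -torch.minimum(unclipped, clipped).mean()

    # Self-distillation toward a frozen earlier snapshot of the same model,
    # switched on only when the group reward signal degenerates (an assumed
    # gate criterion), the regime where multi-turn training tends to collapse.
    gate = 1.0 if group_reward_std < gate_std else 0.0
    kl = F.kl_div(F.log_softmax(policy_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")
    return pg_loss + gate * kl_coef * kl
```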

Subliminal Transfer Study Shows AI Agents Inherit Unsafe Behaviors Despite Keyword Filtering

New research demonstrates unsafe behavioral traits in AI agents can transfer subliminally through model distillation, with students inheriting deletion biases despite rigorous keyword filtering. This exposes a critical security flaw in agent training pipelines.

100% relevant

Beyond Deterministic Benchmarks: How Proxy State Evaluation Could Revolutionize AI Agent Testing

Researchers propose a new LLM-driven simulation framework for evaluating multi-turn AI agents without costly deterministic backends. The proxy state-based approach achieves 90% human-LLM judge agreement while enabling scalable, verifiable reward signals for agent training.

78% relevant
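
The core idea, sketched below under heavy assumptions: instead of standing up a deterministic backend, an LLM advances a lightweight JSON "proxy state" after each agent action, and a second judge call checks the final state against the goal. Both prompts and the `llm` stand-in are illustrative, not the paper's framework.

```python
import json

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any LLM client")

def step_proxy_state(state: dict, agent_action: str) -> dict:
    """LLM simulates the backend's state transition; no real system needed."""
    prompt = (
        f"Current backend state (JSON): {json.dumps(state)}\n"
        f"Agent action: {agent_action}\n"
        "Return the updated backend state as JSON only."
    )
    return json.loads(llm(prompt))

def verify_goal(state: dict, goal: str) -> bool:
    """Judge the final proxy state, yielding a verifiable reward signal."""
    answer = llm(
        f"Final state: {json.dumps(state)}\nGoal: {goal}\n"
        "Does the state satisfy the goal? Answer YES or NO."
    )
    return answer.strip().upper().startswith("YES")
```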

AI Research Breakthroughs: From Video Reasoning to Self-Stopping Models

This week's top AI papers reveal major advances in video understanding, reasoning efficiency, and agent training. Researchers introduced a massive video reasoning dataset, models that know when to stop thinking, and techniques for improving AI agents without full retraining.

95% relevant

NVIDIA's PivotRL Cuts Agent RL Training Costs 5.5x, Matches Full RL Performance on SWE-Bench

NVIDIA researchers introduced PivotRL, a post-training method that achieves agent performance competitive with end-to-end RL while using 5.5x less wall-clock time. The framework identifies high-signal 'pivot' turns in existing trajectories, avoiding costly full rollouts.

99% relevant
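
The summary doesn't say how pivot turns are scored. One plausible reading, sketched here, is ranking turns by how much they swing an external estimate of success probability, then training only on the top-scoring turns instead of rolling out full episodes. `estimate_success_prob` is a hypothetical callable.

```python
def select_pivot_turns(trajectory, estimate_success_prob, top_k=2):
    """trajectory: list of (state, action) turns; returns indices of pivots."""
    signals = []
    for i in range(len(trajectory)):
        before = estimate_success_prob(trajectory[:i])
        after = estimate_success_prob(trajectory[:i + 1])
        signals.append((abs(after - before), i))  # big swings = high signal
    signals.sort(reverse=True)
    return sorted(i for _, i in signals[:top_k])
```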

OpenSWE Releases 45,000+ Executable Environments for Training SWE Agents, Achieves 66% on SWE-bench Verified

OpenSWE introduces a framework with over 45,000 executable environments for training software engineering agents, achieving 66% on SWE-bench Verified through quality filtering of multi-agent synthesized environments. The Docker infrastructure is open-sourced for full reproducibility.

85% relevant

SAPO: A One-Line Code Fix for Training Stable AI Search Agents

Researchers propose SAPO, a simple modification to stabilize reinforcement learning for search agents, preventing catastrophic training collapse. It delivers +10.6% performance gains with minimal code changes.

77% relevant
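
The article doesn't reveal what the one-line change is. A known one-line stabilizer in search-agent RL is masking retrieved-document tokens out of the policy loss, so gradients flow only through tokens the model actually generated; the sketch below shows that variant, which may or may not be SAPO's fix.

```python
import torch

def masked_pg_loss(per_token_loss: torch.Tensor, generated_mask: torch.Tensor):
    # per_token_loss: [batch, seq]; generated_mask is 1 for model-generated
    # tokens and 0 for tokens pasted in from search results.
    return (per_token_loss * generated_mask).sum() / generated_mask.sum().clamp(min=1)
```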

The Persistence Paradox: Why Safety Training Sticks in AI Agents Even When You Try to Make Them More Helpful

New research reveals that safety training in AI agents persists through subsequent helpfulness optimization, creating a linear trade-off frontier rather than achieving 'best of both worlds' outcomes. This challenges assumptions about how to balance safety and capability in multi-step AI systems.

75% relevant

LOGIGEN Framework Solves AI's Training Data Crisis for Autonomous Agents

Researchers have developed LOGIGEN, a logic-driven framework that generates verifiable training data for autonomous AI agents. The system creates 20,000 complex tasks across 8 domains with guaranteed validity, achieving a 79.5% success rate on benchmark tests.

75% relevant
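
"Guaranteed validity" suggests a generate-then-verify pipeline in which only machine-checkable tasks survive. Here is a toy illustration over propositional entailment, with a brute-force truth-table check as the verifier; LOGIGEN's actual task formats across its 8 domains are certainly richer than this.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises satisfies conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Example candidate task: from (A -> B) and A, does B follow? (modus ponens)
premises = [lambda e: (not e["A"]) or e["B"], lambda e: e["A"]]
conclusion = lambda e: e["B"]
assert entails(premises, conclusion, ["A", "B"])  # verified, safe to keep
```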

LLM Agents Take the Wheel: How Rudder Revolutionizes Distributed GNN Training

Researchers have developed Rudder, a novel system that uses Large Language Model agents to dynamically prefetch data in distributed Graph Neural Network training. By adapting to changing computational conditions in real time, it achieves up to 91% performance improvement over traditional methods.

75% relevant

AgentDropoutV2: The 'Firewall' That Makes AI Teams Smarter Without Retraining

Researchers have developed AgentDropoutV2, a test-time 'firewall' for multi-agent AI systems that intercepts and corrects errors before they cascade. The method boosts math benchmark accuracy by 6.3 points without requiring model retraining.

85% relevant
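
A minimal sketch of the "firewall" control flow as described: each inter-agent message passes through a checker before being forwarded, and flagged messages are corrected or dropped so errors can't cascade downstream. The `check` and `correct` callables are stand-ins; the paper's actual checking and correction policies aren't given in the summary.

```python
def firewall_forward(message, check, correct):
    """Intercept a message between agents; pass it, fix it, or drop it."""
    ok, issue = check(message)        # e.g., a lightweight verifier model
    if ok:
        return message                # clean message passes through
    return correct(message, issue)    # rewritten message, or None to drop
```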

Tool-R0: How AI Agents Are Learning to Use Tools Without Human Training Data

Researchers have developed Tool-R0, a framework where AI agents teach themselves to use tools through self-play reinforcement learning, achieving 92.5% improvement over base models without any pre-existing training data.

75% relevant

AI Agents Now Design Their Own Training Data: The Breakthrough in Self-Evolving Logic Systems

Researchers have developed SSLogic, an agentic meta-synthesis framework that enables AI systems to autonomously create and refine their own logic reasoning training data through a continuous generate-validate-repair loop, achieving significant performance improvements across multiple benchmarks.

75% relevant
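
The generate-validate-repair loop itself is easy to state. Below is a control-flow sketch with the generator, validator, and repairer as abstract callables, since the summary doesn't describe their internals; items that can't be repaired within the budget are simply discarded.

```python
def meta_synthesize(generate, validate, repair, n_items=100, max_repairs=3):
    """Continuously generate training items, keeping only validated ones."""
    dataset = []
    while len(dataset) < n_items:
        item = generate()
        for _ in range(max_repairs):
            ok, feedback = validate(item)
            if ok:
                dataset.append(item)
                break
            item = repair(item, feedback)  # feed validator errors back in
        # if still invalid after max_repairs, the item is dropped
    return dataset
```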

AI Agents Now Training Other AI Models, Sparking Autoresearch Trend

AI agents are now being used to train other AI models, creating advanced agentic systems. This development stems from Andrej Karpathy's autoresearch repository and represents early-stage automation of AI research.

75% relevant

Autogenesis Protocol Enables Self-Evolving AI Agents Without Retraining

A new paper introduces Autogenesis, a self-evolving agent protocol. Agents can assess their own shortcomings, propose and test improvements, and update their operational framework in a continuous loop.

89% relevant

OpenClaw-RL Enables Live RL Training for Self-Hosted AI Agents

OpenClaw-RL introduces a system for performing asynchronous reinforcement learning on self-hosted models within the OpenClaw agent framework, allowing continuous policy improvement while the agent remains online.

89% relevant

MemFactory Framework Unifies Agent Memory Training & Inference, Reports 14.8% Gains Over Baselines

Researchers introduced MemFactory, a unified framework treating agent memory as a trainable component. It supports multiple memory paradigms and shows up to 14.8% relative improvement over baseline methods.

97% relevant
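
One way to read "memory as a trainable component", sketched below: a differentiable key-value store whose read operation is attention over stored slots, so the query projection and the slots themselves receive gradients end to end. This illustrates the concept only; it is not MemFactory's architecture.

```python
import torch
import torch.nn as nn

class TrainableMemory(nn.Module):
    def __init__(self, dim: int, slots: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(slots, dim) * 0.02)
        self.query_proj = nn.Linear(dim, dim)

    def read(self, hidden: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(hidden)                    # [batch, dim]
        attn = torch.softmax(q @ self.keys.T, dim=-1)  # [batch, slots]
        return attn @ self.values                      # fully differentiable read
```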

MiniMax M2.7 AI Agent Rewrites Its Own Harness, Achieving 9 Gold Medals on MLE Bench Lite Without Retraining

MiniMax's M2.7 agent autonomously rewrites its own operational harness—skills, memory, and workflow rules—through a self-optimization loop. After 100+ internal rounds, it earned 9 gold medals on OpenAI's MLE Bench Lite without weight updates.

95% relevant

Vibe Training: SLM Replaces LLM-as-a-Judge, 8x Faster, 50% Fewer Errors

Plurai introduces 'vibe training,' using adversarial agent swarms to distill a small language model (SLM) for evaluating and guarding production AI agents. The SLM outperforms standard LLM-as-a-judge setups with ~8x faster inference and ~50% fewer evaluation errors.

86% relevant

NVIDIA, Google Cloud Expand AI Partnership for Agentic & Physical AI

NVIDIA and Google Cloud announced an expanded partnership to advance agentic and physical AI, focusing on new infrastructure and software integrations. This builds on their existing collaboration to provide optimized AI training and inference platforms.

100% relevant

Pioneer Agent: A Closed-Loop System for Automating Small Language Model Adaptation

Researchers present Pioneer Agent, a system that automates the adaptation of small language models to specific tasks. It handles data curation, failure diagnosis, and iterative training, showing significant performance gains in benchmarks and production-style deployments. This addresses a major engineering bottleneck for deploying efficient, specialized AI.

74% relevant

Microsoft's Universal Verifier Cuts Agent Benchmark False Positives to Near Zero

Microsoft introduced the Universal Verifier, a method to accurately assess AI agent performance on web tasks. It slashes false positive rates from over 45% to near zero, fixing corrupted benchmarks and training data.

96% relevant

ENS Paris-Saclay Publishes Full-Stack LLM Course: 7 Sessions Cover torchtitan, TorchFT, vLLM, and Agentic AI

Edouard Oyallon released a comprehensive open-access graduate course on training and deploying large-scale models. It bridges theory and production engineering using Meta's torchtitan and torchft, GitHub-hosted labs, and covers the full stack from distributed training to agentic AI.

65% relevant

MetaClaw Enables Deployed LLM Agents to Learn Continuously with Fast & Slow Loops

MetaClaw introduces a two-loop system that lets production LLM agents learn from failures in real time via a fast skill-writing loop and update their core model later in a slow training loop, for relative accuracy gains of up to 32%.

85% relevant
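
The two-loop split can be sketched in a few lines: a failure immediately produces a retrievable skill note (fast loop, effective on the next call) and is also queued for a later fine-tune (slow loop). Class and field names are illustrative, not MetaClaw's API.

```python
class FastSlowLearner:
    def __init__(self):
        self.skills = []          # injected into the prompt at inference time
        self.finetune_queue = []  # batched for the slow weight-update loop

    def on_failure(self, task, trajectory, lesson: str):
        self.skills.append(lesson)                      # fast: usable immediately
        self.finetune_queue.append((task, trajectory))  # slow: next training run
```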

AI2's MolmoWeb: Open 8B-Parameter Web Agent Navigates Using Screenshots, Challenges Proprietary Systems

The Allen Institute for AI released MolmoWeb, a fully open web agent that operates websites using only screenshots. The 8B-parameter model outperforms other open models and approaches proprietary performance, with all training data and weights publicly released.

100% relevant

An AI Agent Autonomously Tuned a Model and Beat Grid Search

A developer set up an AI agent to autonomously experiment with and tune a model's hyperparameters. The agent, working unattended, modified code and ran short training cycles, ultimately outperforming a traditional grid search.

100% relevant
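
The setup is straightforward to sketch: a loop in which the agent reads past (config, score) pairs, proposes the next configuration, and evaluates it with a short training run, in contrast to a grid search's fixed enumeration. `propose_config` stands in for an LLM-backed agent call.

```python
def agent_tune(propose_config, train_briefly, rounds=20):
    """Agentic hyperparameter search: propose, run a short cycle, keep the best."""
    history = []  # (config, score) pairs the agent conditions on
    best = (None, float("-inf"))
    for _ in range(rounds):
        config = propose_config(history)  # agent reads past results
        score = train_briefly(config)     # short, cheap training run
        history.append((config, score))
        if score > best[1]:
            best = (config, score)
    return best
```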

ByteDance, Tsinghua & Peking U Introduce HACPO: Heterogeneous Agent Collaborative RL Method for Cross-Agent Experience Sharing

Researchers from ByteDance, Tsinghua, and Peking University developed HACPO, a collaborative reinforcement learning method where heterogeneous AI agents share experiences during training. This approach improves individual agent performance by 15-40% on benchmark tasks compared to isolated training.

87% relevant
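
A sketch of the shared-experience idea under assumptions the summary doesn't confirm: agents push trajectories with their behavior log-probabilities into a common buffer, and each agent importance-weights borrowed experience under its own policy, with clipping to tame variance across heterogeneous policies.

```python
import math
import random

class SharedBuffer:
    def __init__(self):
        self.items = []  # (agent_id, trajectory, behavior_logprob)

    def push(self, agent_id, traj, logprob):
        self.items.append((agent_id, traj, logprob))

    def sample_for(self, agent, k=32):
        """Reweight other agents' experience under this agent's own policy."""
        batch = random.sample(self.items, min(k, len(self.items)))
        weighted = []
        for agent_id, traj, behavior_lp in batch:
            ratio = math.exp(agent.logprob(traj) - behavior_lp)
            weighted.append((traj, min(ratio, 5.0)))  # clip to tame variance
        return weighted
```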

New 'Step-by-Step Feedback' Reward Model Trains AI Agents to Fix Reasoning Errors

Researchers introduce a reward model that provides granular, step-by-step feedback to AI agents during training, helping them identify and correct reasoning errors. The approach aims to improve agent performance on complex, multi-step tasks.

85% relevant
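
A minimal sketch of step-level reward assignment: the reward model scores each step in its running context rather than the whole trajectory at once, so credit, or blame, lands on the step that earned it. `score_step` is a stand-in for the paper's reward model.

```python
def per_step_rewards(task: str, steps: list[str], score_step) -> list[float]:
    """Granular feedback: one reward per reasoning step, not per trajectory."""
    rewards = []
    context = task
    for step in steps:
        rewards.append(score_step(context, step))  # feedback for this step only
        context += "\n" + step
    return rewards
```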