Agent Memory
30 articles about agent memory in AI news
MemFactory Framework Unifies Agent Memory Training & Inference, Reports 14.8% Gains Over Baselines
Researchers introduced MemFactory, a unified framework treating agent memory as a trainable component. It supports multiple memory paradigms and shows up to 14.8% relative improvement over baseline methods.
AMA-Bench Released: New Benchmark Focuses on Agent Memory Beyond Dialogue
Researchers have released AMA-Bench, a new evaluation framework designed to test AI agent memory capabilities specifically, moving beyond standard dialogue-based assessments. The benchmark aims to address limitations in existing memory evaluation methods.
AI Agents Get a Memory Upgrade: New Framework Treats Multi-Agent Memory as Computer Architecture
A new paper proposes treating multi-agent memory systems as a computer architecture problem, introducing a three-layer hierarchy and identifying critical protocol gaps. This approach could significantly improve reasoning, skills, and tool usage in collaborative AI systems.
Accenture's Memex(RL) Revolutionizes AI Agent Memory for Complex Tasks
Accenture researchers have developed Memex(RL), a breakthrough system that gives AI agents structured, searchable memory for long-horizon tasks. This solves the critical problem of agents losing track of past experiences during complex operations like deep research and multi-step planning.
Replace Karpathy's Agent Memory Automation with This 30-Line /close-day Hook
Background automation fails on laptops; use a simple /close-day skill and date tags in MEMORY.md instead.
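The pattern described can be sketched in a few lines; the function name, tag format, and default file name below are illustrative assumptions, not the article's actual hook:

```python
# Sketch of a /close-day style hook: append a date-tagged session
# summary to MEMORY.md on demand, instead of relying on background
# automation that may not run on a laptop.
from datetime import date
from pathlib import Path

def close_day(summary: str, memory_file: str = "MEMORY.md") -> str:
    """Append today's summary under a date tag and return the tag."""
    tag = f"## {date.today().isoformat()}"
    path = Path(memory_file)
    existing = path.read_text() if path.exists() else ""
    path.write_text(existing + f"\n{tag}\n{summary}\n")
    return tag

# Example usage (run manually at the end of a working day):
# close_day("Refactored retrieval layer; open question: cache eviction.")
```

Because the hook is invoked explicitly, it never depends on the machine being awake at a scheduled time.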
Structured Distillation for Personalized Agent Memory: 11x Compression with Minimal Recall Loss
New research introduces structured distillation to compress AI agent conversation history by 11x (371→38 tokens/exchange) while preserving 96% retrieval effectiveness. This enables storing thousands of exchanges in a single prompt while maintaining verbatim source access.
Stateless Memory for Enterprise AI Agents: Scaling Without State
The paper replaces stateful agent memory with immutable decision logs using event-sourcing, allowing thousands of concurrent agent instances to scale horizontally without state bottlenecks.
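A minimal sketch of the event-sourcing idea, with illustrative event fields rather than the paper's schema: memory is an append-only log of decision events, and any worker can rebuild an agent's state by replaying the log.

```python
# Event-sourced agent memory: immutable decision events in an
# append-only log; current state is derived by replay, so stateless
# agent instances can scale horizontally.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class DecisionEvent:
    agent_id: str
    action: str
    payload: Any

class DecisionLog:
    """Append-only log; events are never mutated in place."""
    def __init__(self) -> None:
        self._events: list[DecisionEvent] = []

    def append(self, event: DecisionEvent) -> None:
        self._events.append(event)

    def replay(self, agent_id: str) -> list[str]:
        """Rebuild one agent's decision history from the shared log."""
        return [e.action for e in self._events if e.agent_id == agent_id]

log = DecisionLog()
log.append(DecisionEvent("a1", "search", {"q": "pricing"}))
log.append(DecisionEvent("a2", "plan", {}))
log.append(DecisionEvent("a1", "summarize", {}))
print(log.replay("a1"))  # ['search', 'summarize']
```

The key property is that no instance holds exclusive mutable state; any replica with access to the log can answer for any agent.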
Cognee Open-Source Framework Unifies Vector, Graph, and Relational Memory for AI Agents
Developer Akshay Pachaar argues AI agent memory requires three data stores—vector, graph, and relational—to handle semantics, relationships, and provenance. His open-source project Cognee unifies them behind a simple API.
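A hypothetical facade along those lines (not Cognee's actual API) fans each memory item out to the three stores at once:

```python
# Toy unified memory: one add() call writes to a vector store
# (semantics), a graph store (relationships), and a relational store
# (provenance). All three backends here are in-memory stand-ins.
class UnifiedMemory:
    def __init__(self) -> None:
        self.vectors: dict[str, list[float]] = {}    # semantic search
        self.edges: list[tuple[str, str, str]] = []  # (src, relation, dst)
        self.rows: list[dict] = []                   # provenance records

    def add(self, doc_id: str, embedding: list[float],
            relations: list[tuple[str, str]], source: str) -> None:
        self.vectors[doc_id] = embedding
        self.edges += [(doc_id, rel, dst) for rel, dst in relations]
        self.rows.append({"id": doc_id, "source": source})

mem = UnifiedMemory()
mem.add("doc1", [0.1, 0.2], [("cites", "doc0")], source="crawl-2024")
```

The point of the single entry point is that callers never have to keep three stores consistent by hand.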
Building a Memory Layer for a Voice AI Agent: A Developer's Blueprint
A developer shares a technical case study on building a voice-first journal app, focusing on the critical memory layer. The article details using Redis Agent Memory Server for working and long-term memory, plus key latency optimizations like streaming APIs and parallel fetches to meet the strict responsiveness demands of voice interfaces.
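The parallel-fetch optimization can be sketched with asyncio; the two lookups below are stand-ins for the app's real Redis and vector-store calls, so total latency is bounded by the slower fetch rather than the sum of both.

```python
# Fetch working memory and long-term memory concurrently so the
# slower lookup determines latency, not the sum of both lookups.
import asyncio

async def fetch_working_memory(session_id: str) -> list[str]:
    await asyncio.sleep(0.01)  # stand-in for a Redis call
    return [f"recent turn for {session_id}"]

async def fetch_long_term_memory(user_id: str) -> list[str]:
    await asyncio.sleep(0.02)  # stand-in for a vector search
    return [f"stored fact about {user_id}"]

async def build_context(session_id: str, user_id: str) -> list[str]:
    # gather() runs both lookups concurrently
    working, long_term = await asyncio.gather(
        fetch_working_memory(session_id),
        fetch_long_term_memory(user_id),
    )
    return working + long_term

context = asyncio.run(build_context("s1", "u1"))
```

For a voice interface, shaving even one serialized round trip per turn matters, since response-time budgets are typically a few hundred milliseconds.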
AI Agents Get a Memory Upgrade: New Research Tackles Long-Horizon Task Challenges
Researchers have developed new methods to scale AI agent memory for complex, long-horizon tasks. The breakthrough addresses one of the biggest limitations in current agent systems—their inability to retain and utilize information over extended sequences of actions.
Beyond RAG: How AI Memory Systems Are Creating Truly Adaptive Agents
AI development is shifting from static retrieval systems to dynamic memory architectures that enable continual learning. This evolution from RAG to agent memory represents a fundamental change in how AI systems accumulate and utilize knowledge over time.
AI Memory Survey: Three Systems Needed for Human-Like Recall
A new survey paper proposes that modern AI requires three distinct memory systems—parametric, retrieval, and agent memory—to achieve human-like cognition, highlighting control as the key bottleneck.
Distillery 0.4.0 Stabilizes Its MCP API
Distillery 0.4.0 stabilizes its MCP API surface, enabling reliable agent memory and team knowledge bases for Claude Code workflows.
MNEMA: A Witness Lattice for Multi-Agent AI Memory
Today's agentic AI fails three ways: agents miscoordinate, memory gets quietly poisoned, and decisions can't be audited. A new EUMAS 2026 submission argues the fix is to stop treating memory as static records. Make it *living* — every memory unit becomes an autonomous cryptographic witness that interacts with other witnesses (agree, disagree, give birth to new witnesses, split, coalesce, retire), and decisions emerge from a fixed signed protocol rather than from a single orchestrator.
OpenAI Codex Update Adds macOS Agent, Browser, Memory; 3M Weekly Users
OpenAI released a major Codex update featuring background macOS automation, an in-app browser, persistent memory, and 90+ plugins. With 3M weekly users and nearly half of usage now non-coding, Codex is being repositioned as a general work agent.
Mind: Open-Source Persistent Memory for AI Coding Agents
An open-source tool called Mind creates a shared memory layer for AI coding agents, allowing them to remember project context across sessions and different interfaces like Claude Code, Cursor, and Windsurf.
Nous Research's Hermes Agent Features Self-Improving Skills, Persistent Memory
A new evaluation of Nous Research's Hermes Agent highlights its self-improving ability to build reusable tools from experience and a smarter persistent memory system that conserves token usage. The agent reportedly improves with continued use, representing a shift towards more adaptive AI systems.
Memory Systems for AI Agents: Architectures, Frameworks, and Challenges
A technical analysis details the multi-layered memory architectures—short-term, episodic, semantic, procedural—required to transform stateless LLMs into persistent, reliable AI agents. It compares frameworks like MemGPT and LangMem that manage context limits and prevent memory drift.
MemoryCD: New Benchmark Tests LLM Agents on Real-World, Lifelong User Memory for Personalization
Researchers introduce MemoryCD, the first large-scale benchmark for evaluating LLM agents' long-context memory using real Amazon user data across 12 domains. It reveals current methods are far from satisfactory for lifelong personalization.
Did You Check the Right Pocket? A New Framework for Cost-Sensitive Memory Routing in AI Agents
A new arXiv paper frames memory retrieval in AI agents as a 'store-routing' problem. It shows that selectively querying specialized data stores, rather than all stores for every request, significantly improves efficiency and accuracy, formalizing a cost-sensitive trade-off.
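A toy version of the cost-sensitive trade-off, with made-up relevance scores and query costs rather than the paper's formalism: rank stores by relevance per unit cost and query only those that fit a budget.

```python
# Cost-sensitive store routing: instead of fanning a request out to
# every store, pick stores greedily by relevance-per-cost until a
# query budget is exhausted.
def route(query_scores: dict[str, float],
          store_costs: dict[str, float],
          budget: float) -> list[str]:
    """Return the stores to query for this request."""
    ranked = sorted(query_scores,
                    key=lambda s: query_scores[s] / store_costs[s],
                    reverse=True)
    chosen, spent = [], 0.0
    for store in ranked:
        if spent + store_costs[store] <= budget:
            chosen.append(store)
            spent += store_costs[store]
    return chosen

stores = route(
    {"vector": 0.9, "graph": 0.4, "sql": 0.1},  # estimated relevance
    {"vector": 1.0, "graph": 2.0, "sql": 0.5},  # per-query cost
    budget=1.5,
)
print(stores)  # ['vector', 'sql']
```

Here the expensive graph store is skipped because its relevance does not justify its cost for this particular query.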
Hindsight AI: How Biomimetic Memory Systems Are Revolutionizing Agent Intelligence
Hindsight, an open-source AI memory system, achieves state-of-the-art performance on the LongMemEval benchmark by mimicking human memory structures. Unlike traditional RAG approaches, it employs parallel retrieval strategies to enable agents that don't just remember—they learn.
Hybrid Self-evolving Structured Memory: A Breakthrough for GUI Agent Performance
Researchers propose HyMEM, a graph-based memory system for GUI agents that combines symbolic nodes with continuous embeddings. It enables multi-hop retrieval and self-evolution, boosting open-source VLMs to surpass closed-source models like GPT-4o on computer-use tasks.
Google's 'Always-On Memory Agent' Could Revolutionize How AI Remembers and Learns
Google has unveiled an experimental 'Always-On Memory Agent' system that gives AI persistent, evolving memory capabilities. This breakthrough could transform how AI assistants learn from continuous interactions and maintain context across sessions.
Google's Always-On Memory Agent: The AI That Never Forgets
Google has unveiled Always-On Memory Agent, an open-source AI system that maintains continuous memory across sessions. The agent learns from user files and connects ideas autonomously, promising affordable 24/7 operation when paired with Gemini 3.1 Flash-Lite.
AI Gold Rush Strains Apple Hardware: High-Memory Macs Sell Out as Local AI Agents Go Mainstream
A surge in demand for local AI development has created severe inventory shortages for high-memory Apple hardware. Mac Studio orders with 128GB or 512GB RAM face 6+ week delays as consumers buy up every available unit to run powerful AI agents like OpenClaw.
PlugMem: The Universal Memory Module That Could Revolutionize AI Agents
Researchers have developed PlugMem, a task-agnostic memory module that can be attached to any LLM agent without redesign. By structuring memories into a knowledge-centric graph, it enables more efficient reasoning while outperforming both task-specific and task-agnostic alternatives across diverse benchmarks.
Microsoft's EMPO²: A Memory-Augmented RL Framework That Supercharges LLM Agent Exploration
Microsoft has unveiled EMPO², a hybrid reinforcement learning framework that enhances LLM agents with augmented memory for true exploration. The system combines on- and off-policy optimization to discover novel states, achieving 128.6% performance gains over existing methods on ScienceWorld benchmarks.
Hermes Agent: How Nous Research's New AI System Solves the 'Goldfish Memory' Problem
Nous Research has released Hermes Agent, an open-source autonomous system that addresses AI's persistent memory limitations. It features multi-level memory, persistent terminal access, and self-evolving skill documents, enabling AI to function as a true long-term collaborator rather than a forgetful assistant.
Google's Design.md Gives AI Coding Agents a Visual Design Memory
Google introduced Design.md, a file format for storing design tokens and rules that AI coding agents can read to maintain visual consistency, addressing a key failure point in automated UI generation.
Stop Losing Agent Context: Implement Session Memory Files in Your Claude
A simple pattern using structured markdown files to persist session state across context windows, preventing Claude Code agents from redoing work or making inconsistent decisions.
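One possible shape for such a file, with section names assumed for illustration: the agent rewrites the structured markdown at the end of each context window and re-reads it at the start of the next.

```python
# Session memory as structured markdown: write decisions and open
# tasks to a file, then parse a named section back out on resume.
from pathlib import Path

TEMPLATE = """# Session Memory
## Decisions
{decisions}
## Open Tasks
{tasks}
"""

def save_session(path: str, decisions: list[str], tasks: list[str]) -> None:
    Path(path).write_text(TEMPLATE.format(
        decisions="\n".join(f"- {d}" for d in decisions),
        tasks="\n".join(f"- {t}" for t in tasks),
    ))

def load_section(path: str, heading: str) -> list[str]:
    """Return the bullet items under a '## heading' section."""
    items, active = [], False
    for line in Path(path).read_text().splitlines():
        if line.startswith("## "):
            active = line[3:].strip() == heading
        elif active and line.startswith("- "):
            items.append(line[2:])
    return items
```

Keeping the file human-readable means a developer can audit or correct the agent's recorded decisions between sessions.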