gentic.news — AI News Intelligence Platform


[Figure: Diagram of Hermes agent's three-tier memory architecture, with the MEMORY.md and USER.md files as the tier 1 core]

Hermes Agent's Three-Tier Memory Cuts Context Bloat, Keeps 2,200-Char Core

Hermes agent's three-tier memory uses two tiny markdown files (2,200 chars), SQLite FTS5 search (10ms over 10K docs), and 8 pluggable providers. The composition solves the always-on vs. deep recall trade-off.


TL;DR

Hermes uses a three-tier memory system. · Tier 1: two tiny markdown files. · Tier 2: SQLite with FTS5 search. · Tier 3: 8 pluggable external providers.

Hermes agent's three-tier memory system uses two tiny markdown files—MEMORY.md (2,200 chars) and USER.md (1,375 chars)—as its always-present tier 1. The architecture solves the agent memory trade-off between shallow always-on context and deep but passive vector stores.

Key facts

  • MEMORY.md is 2,200 chars; USER.md is 1,375 chars.
  • Tier 2 FTS5 search takes ~10ms over 10,000+ docs.
  • 8 pluggable external providers in tier 3.
  • Periodic nudge fires every ~300 seconds.
  • MEMORY.md consolidates at ~80% capacity.

Current agent memory systems face a binary trade-off: either pack everything into the prompt (always-on but shallow, limited by context window) or rely on vector stores that rarely fire at the right moment. Hermes agent, described by developer @akshay_pachaar, introduces a three-tier composition that splits the difference.

How the Three Tiers Compose

Tier 1 is two tiny markdown files—MEMORY.md (2,200 chars) and USER.md (1,375 chars)—injected into the system prompt at session start as a frozen snapshot [According to @akshay_pachaar]. MEMORY.md holds project conventions, tool quirks, and lessons learned; USER.md stores user profile data such as name, communication style, and skill level. When MEMORY.md hits ~80% capacity, the agent consolidates: merges related entries, drops redundancy, and keeps only the densest facts. This is natural selection pressure applied to memory—the files stay small, but what's inside gets sharper over time.
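The consolidation trigger described above can be sketched in a few lines. The 2,200-char budget and the ~80% threshold come from the article; everything else here is an assumption, and the `consolidate` pass stands in for what would presumably be an LLM rewrite that merges related entries and keeps the densest facts.

```python
MEMORY_BUDGET = 2200   # max chars for MEMORY.md (tier 1 core)
CONSOLIDATE_AT = 0.8   # consolidate when the file nears ~80% capacity

def consolidate(text: str) -> str:
    """Placeholder for the LLM rewrite pass: here we merely drop
    duplicate lines, keeping first occurrences, to shrink the file."""
    seen, kept = set(), []
    for line in text.splitlines():
        if line.strip() and line not in seen:
            seen.add(line)
            kept.append(line)
    return "\n".join(kept)

def maybe_consolidate(text: str) -> str:
    """Return MEMORY.md contents, compacted only if near capacity."""
    if len(text) < MEMORY_BUDGET * CONSOLIDATE_AT:
        return text  # under the ~80% threshold: leave the file as-is
    return consolidate(text)
```

The key design point survives even in this toy version: consolidation is lazy, firing only when the budget is under pressure, so the file stays cheap to inject into every prompt.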

Tier 2 is SQLite with FTS5 indexing, storing every conversation for full-text search. When the agent calls session_search, FTS5 ranks matches in ~10ms over 10,000+ docs, an LLM summarizes the top hits, and a concise result returns to context [According to @akshay_pachaar]. Tier 1 is always present but tiny; tier 2 has unlimited capacity but requires an active search.
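A minimal tier-2 store along these lines can be built with Python's standard `sqlite3` module, assuming an FTS5-enabled SQLite build. The article specifies only SQLite + FTS5 with ~10ms lookups; the table name, schema, and `session_search` signature below are illustrative, and the LLM summarization step is left as a comment.

```python
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Create (or open) an FTS5-indexed table of conversation turns."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS sessions USING fts5(ts, content)"
    )
    return conn

def log_turn(conn: sqlite3.Connection, ts: str, content: str) -> None:
    """Append one conversation turn to the full-text index."""
    conn.execute("INSERT INTO sessions VALUES (?, ?)", (ts, content))

def session_search(conn: sqlite3.Connection, query: str, k: int = 5):
    """Rank matches with FTS5's bm25(); in Hermes an LLM would then
    summarize the top hits into a concise result for the context."""
    rows = conn.execute(
        "SELECT ts, content FROM sessions WHERE sessions MATCH ? "
        "ORDER BY bm25(sessions) LIMIT ?",
        (query, k),
    )
    return rows.fetchall()
```

Because FTS5 ranks with `bm25()` inside the database engine, the expensive part (scanning 10,000+ docs) never touches the model's context; only the summarized top hits do.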

Tier 3 offers 8 pluggable external providers that run alongside tiers 1 and 2, never replacing them. Notable providers include Honcho (dialectic user modeling, 12 identity layers), Holographic (local-first, HRR vectors, no external calls), and Supermemory (context fencing that prevents infinite re-storage of the same fact). When active, Hermes auto-syncs every turn: prefetch before, sync after, extract at session end.
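One plausible shape for that plugin boundary is a small protocol with the three hooks the article names (prefetch before the turn, sync after it, extract at session end). The `MemoryProvider` interface and `run_turn` helper below are assumptions, not Hermes's actual API.

```python
from typing import Callable, List, Protocol

class MemoryProvider(Protocol):
    """Hypothetical tier-3 plugin interface (Honcho, Supermemory, etc.
    would each implement these three hooks)."""
    def prefetch(self, user_msg: str) -> str: ...   # before the turn
    def sync(self, turn_log: str) -> None: ...      # after the turn
    def extract(self, transcript: str) -> None: ... # at session end

def run_turn(providers: List[MemoryProvider], user_msg: str,
             respond: Callable[[str, str], str]) -> str:
    """Prepend whatever each active provider prefetched, respond,
    then sync the finished turn back to every provider."""
    context = "\n".join(p.prefetch(user_msg) for p in providers)
    reply = respond(context, user_msg)
    for p in providers:
        p.sync(f"user: {user_msg}\nagent: {reply}")
    return reply
```

The point of the protocol is the "alongside, never replacing" property: tiers 1 and 2 work unchanged whether the provider list is empty or full.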

The Five-Step Turn Cycle

The tiers compose on every turn through a five-step cycle:

  1. Turn opens. Tier 1 is already in prompt, tier 3 prefetches and prepends.
  2. Agent responds using all three tiers as context.
  3. A periodic nudge fires every ~300s. The agent reflects: "has anything worth persisting happened?" If yes, it writes; if no, it returns silently.
  4. Memory is written to MEMORY.md on disk. The change stays invisible this session because the prefix cache stays warm.
  5. Session closes. Tier 2 logs the transcript, tier 3 extracts semantics. Next session opens with the new state.
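The cycle above can be sketched as an event loop. Only the ~300s nudge interval comes from the article; `respond`, `worth_persisting`, and `write_memory` are hypothetical stand-ins for the agent's reply, its reflection step, and the disk write.

```python
import time

NUDGE_INTERVAL = 300.0  # seconds between "anything worth persisting?" checks

def session_loop(turns, respond, worth_persisting, write_memory,
                 now=time.monotonic, interval=NUDGE_INTERVAL):
    """Drive one session through the five-step turn cycle."""
    last_nudge = now()
    for user_msg in turns:
        reply = respond(user_msg)           # steps 1-2: tiers in context
        if now() - last_nudge >= interval:  # step 3: periodic nudge fires
            last_nudge = now()
            note = worth_persisting(reply)  # agent reflects; None = silent
            if note is not None:
                write_memory(note)          # step 4: persist to MEMORY.md
        yield reply
    # step 5 (session close): tier 2 logs the transcript,
    # tier 3 extracts semantics for the next session.
```

Injecting the clock (`now`) keeps the nudge timing testable; the agent only pays the reflection cost when the interval has actually elapsed.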

Unique Take: Composition Over Single-Store

The structural insight here is that Hermes composes across multiple memory tiers rather than choosing one. Most agent frameworks pick a single memory mechanism (vector store, long-term context, or fine-tuning). Hermes uses tiny always-present files for critical facts, full-text search for deep recall, and external providers for semantic modeling—all orchestrated by a nudge that decides autonomously what's worth saving. The agent doesn't just store memories; it curates them under pressure.

What to watch

Watch for open-source release of Hermes agent's memory orchestration code, which would allow benchmarking against MemGPT and Letta. Also track whether the periodic nudge interval (300s) proves optimal across diverse agent workloads—too short wastes tokens, too long misses ephemeral context.

Source: gentic.news · citation.json

AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

The Hermes memory architecture is notable not for any single tier but for the composition pattern—a deliberate design that mirrors how human memory works: a small working set (tier 1), a searchable episodic store (tier 2), and semantic models (tier 3). The periodic nudge at 300s is a clever token-efficiency hack; most agents either write every turn (wasteful) or rely on user triggers (unreliable). The consolidation at 80% capacity imposes a budget constraint that forces the agent to prioritize facts, effectively performing a form of memory compression. The main risk is that the nudge-based write might miss important but infrequent events—the agent must correctly judge what's worth persisting, which is a non-trivial meta-cognitive task. Compared to MemGPT's hierarchical summarization approach, Hermes is more explicit about tier boundaries and offers more pluggable external providers, but it's unclear how it handles multi-modal memory (images, code diffs) or very long sessions (hours of interaction).