gentic.news — AI News Intelligence Platform

Open Source · Score: 95

Add Persistent Memory to Claude Code in 5 Minutes with memoclaw-mcp

Stop re-explaining your preferences. Install the memoclaw-mcp server to give Claude Code persistent, semantic memory across sessions using the Model Context Protocol.

Mar 25, 2026 · 2 min read · 131 views · AI-Generated
Source: dev.to via devto_mcp, hn_claude_code · Widely Reported

What It Does — Persistent Memory as an MCP Tool

Your Claude Code agent forgets everything between sessions. You've likely hacked around this with MEMORY.md files or by re-stating your preferences in every new chat. The memoclaw-mcp server fixes this by adding semantic memory-as-a-service directly to any MCP-compatible client, including Claude Code. It exposes three native tools—store_memory, recall_memories, and list_memories—that let your agent remember project details, your coding preferences, and architectural decisions across sessions.
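To make the tool surface concrete, here is an illustrative stand-in for the three tools in plain JavaScript. This is a sketch, not the server's real implementation: the parameter defaults and the `id` field are assumptions, the real server persists memories beyond the process, and its recall is semantic (embedding-based) rather than the substring match used here.

```javascript
// Illustrative stand-in for the three MCP tools. A real server persists
// memories across sessions; this sketch keeps them in one array so the
// call shapes are easy to see.
const memories = [];

function store_memory({ content, importance = 0.5, namespace = "default", tags = [] }) {
  const memory = { id: memories.length + 1, content, importance, namespace, tags };
  memories.push(memory);
  return memory.id;
}

function list_memories({ namespace = "default" } = {}) {
  return memories.filter(m => m.namespace === namespace);
}

function recall_memories({ query, namespace = "default", limit = 5 }) {
  // The real server ranks by semantic similarity; substring match is a stand-in.
  const q = query.toLowerCase();
  return list_memories({ namespace })
    .filter(m => m.content.toLowerCase().includes(q))
    .sort((a, b) => b.importance - a.importance)
    .slice(0, limit);
}

// Example call shape, mirroring the article's preference-storing use case:
const id = store_memory({ content: "Prefers TypeScript and pnpm", tags: ["preference"] });
```

The key point the sketch captures is that everything funnels through `store_memory` at write time and `recall_memories` at read time, so the agent never needs the full history in its context window.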

Setup — How to Install and Configure with Claude Code

Installation is a single command. You'll need Node.js 18+ and an Ethereum wallet (like MetaMask) for identity; no API keys or account registration is required.

npm install -g memoclaw-mcp

Next, configure Claude Code to use the server. Find or create your Claude Desktop configuration file (typically claude_desktop_config.json) and add the MemoClaw server block.

{
  "mcpServers": {
    "memoclaw": {
      "command": "memoclaw-mcp",
      "env": {
        "MEMOCLAW_PRIVATE_KEY": "your-wallet-private-key"
      }
    }
  }
}

Restart Claude Code. That's it—your agent now has access to persistent memory tools.
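A common failure at this step is a typo in the JSON, which silently prevents the server from loading. Before restarting, you can sanity-check the file with any JSON parser; the helper below is a hedged sketch (the function name and the check it performs are ours, not part of memoclaw-mcp), shown against the config snippet from this article:

```javascript
// Parse the config text and confirm the memoclaw entry is shaped the way
// an MCP client expects. JSON.parse throws on malformed JSON, which is
// exactly the error a typo would cause.
function validateConfig(text) {
  const config = JSON.parse(text);
  const server = config.mcpServers && config.mcpServers.memoclaw;
  return Boolean(server && server.command === "memoclaw-mcp");
}

// The config block from this article, as a string:
const sample = `{
  "mcpServers": {
    "memoclaw": {
      "command": "memoclaw-mcp",
      "env": { "MEMOCLAW_PRIVATE_KEY": "your-wallet-private-key" }
    }
  }
}`;

console.log(validateConfig(sample)); // true
```

In practice you would read the real file (e.g. with `fs.readFileSync`) instead of an inline string.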

When To Use It — Specific Use Cases Where It Shines

This server eliminates the need to manually manage context. Use it to:

  • Remember Personal Preferences: Tell Claude Code once that you prefer TypeScript, and it will recall this for all future project setups.

    You: "Remember that I prefer TypeScript over JavaScript for all new projects and use pnpm as my package manager."
    Agent calls store_memory with tags ["preference", "language", "tooling"].

  • Maintain Project Context: Store key architectural decisions or client requirements that persist beyond a single coding session.

    // Example of storing a project-specific memory
    store_memory({
      content: "The /api/v2 endpoint uses Bearer token auth, not API keys.",
      importance: 0.9,
      namespace: "project-saas-backend",
      tags: ["api", "authentication", "architecture"]
    })
    
  • Isolate Memories with Namespaces: Work on multiple projects without cross-contamination. Use the namespace parameter (like "project-acme") to keep memories for different codebases completely separate.

When you start a new session and ask, "What are the auth rules for the SaaS backend?", your agent can call recall_memories({ query: "authentication api", namespace: "project-saas-backend" }) and get the exact note you stored last week.
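The namespace isolation described above can be sketched in a few lines. This is illustrative only: the contents mirror the article's examples, but the filtering logic is a keyword stand-in for the server's semantic search, and the second memory is a hypothetical entry we added for contrast.

```javascript
// Two memories in two namespaces; recall in one namespace never sees the other.
const memories = [
  { content: "The /api/v2 endpoint uses Bearer token auth, not API keys.",
    namespace: "project-saas-backend", tags: ["api", "authentication"] },
  { content: "The Acme dashboard uses session cookies for auth.",
    namespace: "project-acme", tags: ["authentication"] },
];

function recall_memories({ query, namespace }) {
  const words = query.toLowerCase().split(/\s+/);
  return memories.filter(m =>
    m.namespace === namespace &&
    words.some(w => m.content.toLowerCase().includes(w)));
}

// The article's recall example: only the SaaS-backend note comes back.
const hits = recall_memories({ query: "authentication api", namespace: "project-saas-backend" });
```

Because the namespace filter is applied before any matching, a query about authentication in one project can never surface another project's auth rules.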


AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

Claude Code users should stop treating each session as stateless. The integration of memory via MCP is a fundamental shift. Here's what to do:

1. **Install `memoclaw-mcp` today.** The 5-minute setup is a trivial investment for a permanent upgrade. Use it to offload the mental tax of re-explaining your stack, linting rules, or project quirks.
2. **Adopt a proactive storing habit.** When you state a non-obvious preference or make a key decision in a chat, prompt your agent to store it. A simple "Please store that in memory with the tag 'architecture'" is enough.
3. **Use namespaces from day one.** Even for solo projects, prefix your namespace (e.g., `github-repo-name`). This creates clean boundaries immediately and scales when you context-switch.

This follows Anthropic's broader push to make Claude Code more agentic through the Model Context Protocol, which we covered in "Claude Code Now Supports MCP: Here Are the First Servers to Install." Memory is a critical component for true agentic behavior, moving beyond single-session task execution.
