
Claude Agent SDK's a2a Tool Lets You Build Persistent, Observable AI Assistants

Use the a2a CLI tool to add persistent memory, skill management, and observability to your Claude Code projects, moving prototypes to production.

Gala Smith & AI Research Desk · 1d ago · 3 min read · AI-Generated

Source: rohinmahesh.medium.com (via medium_claude, devto_claudecode, hn_claude_code)

The Technique — Building Persistent Agents with a2a

The Claude Agent SDK is powerful for prototyping, but moving to production requires handling persistence, state, and observability. The open-source a2a (Agent-to-Agent) tool solves this. It's a CLI and framework that wraps Claude Code, adding three critical layers:

  1. Skill Management: Define reusable functions (skills) your agent can call. Unlike one-off prompts, these are versioned, documented, and callable by name.
  2. Persistence & Context Management: Agents maintain memory across conversations. The tool manages context windows, automatically pruning or summarizing to stay within limits while preserving key details.
  3. Observability: Log all agent interactions, skill calls, and token usage. This is essential for debugging and improving production agents.
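Conceptually, the skill layer amounts to a versioned, name-addressable registry of functions. A minimal sketch of the idea (illustrative only, not a2a's actual internals):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    version: str
    description: str           # surfaced to the LLM as the tool description
    func: Callable[..., str]

class SkillRegistry:
    """Holds skills so an agent can call them by name."""
    def __init__(self) -> None:
        self._skills: dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def call(self, name: str, **kwargs) -> str:
        return self._skills[name].func(**kwargs)

registry = SkillRegistry()
registry.register(
    Skill("greet", "1.0", "Greets a user by name.", lambda who: f"Hello, {who}!")
)
```

Because each skill carries a name, version, and description, the agent can discover and invoke it without the prompt restating what it does.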

Why It Works — Structured State Over Ad-Hoc Prompts

Building complex agents with bare Claude Code sessions and ad-hoc prompts hits limits: you manually juggle context, re-explain goals each session, and have no audit trail. a2a imposes a lightweight structure.

It uses a simple YAML file (`agent.yaml`) to define your agent's skills, initial context, and persistence rules. The CLI then runs the agent, handling the orchestration. This separates the agent's logic (in skills) from its state (managed by the tool). The result is an agent you can stop, start, and query days later, and it remembers its mission.
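Based on that description, an `agent.yaml` might look roughly like the following; the field names here are illustrative assumptions, not a verified schema:

```yaml
# Illustrative sketch of an agent.yaml -- check the tool's docs for the real schema
name: my-assistant
initial_context: |
  You are a project assistant for this repository.
skills:
  - name: search_web
    description: Searches the web for current information.
persistence:
  backend: sqlite
  path: ./agent_state.db
```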

How To Apply It — From Prototype to Production in 3 Steps

1. Install and Initialize

```bash
# Install the CLI
pip install a2a-cli

# Create a new agent in your project directory
a2a init my-assistant
```

This creates an `agent.yaml` and a `skills/` directory.

2. Define Skills

Edit `skills/example_skill.py`. Skills are plain Python functions; the docstring becomes the description shown to the LLM.

```python
def search_web(query: str) -> str:
    """Searches the web for the given query and returns a summary."""
    # Your implementation (e.g., using the Serper API)
    return f"Results for {query}: ..."
```

List the skill in `agent.yaml`:

```yaml
skills:
  - name: search_web
    description: Searches the web for current information.
```
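A skill can be any self-contained Python function. As a further illustration of the docstring-as-description pattern, here is a hypothetical `count_todos` skill (not one of a2a's shipped examples):

```python
from pathlib import Path

def count_todos(path: str) -> str:
    """Counts TODO markers in the given source file and reports the total."""
    text = Path(path).read_text(encoding="utf-8")
    n = sum(line.count("TODO") for line in text.splitlines())
    return f"{path}: {n} TODO marker(s)"
```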

3. Run with Persistence

Run your agent with state saved to a local SQLite file:

```bash
a2a run --persist ./agent_state.db
```

Your agent now runs in a loop, can call search_web, and its conversation context is saved. Stop it with CTRL+C and restart later with the same command—it will load prior context.
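The mechanics are handled for you, but the core idea behind this kind of persistence (append conversation turns to a local store, reload them on startup) can be sketched in a few lines. This is an illustration of the concept, not a2a's actual storage schema:

```python
import sqlite3

class TurnStore:
    """Minimal sketch of conversation persistence backed by SQLite."""
    def __init__(self, path: str) -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS turns "
            "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def append(self, role: str, content: str) -> None:
        self.conn.execute(
            "INSERT INTO turns (role, content) VALUES (?, ?)", (role, content)
        )
        self.conn.commit()

    def history(self) -> list[tuple[str, str]]:
        # Reload everything in order on restart
        return self.conn.execute(
            "SELECT role, content FROM turns ORDER BY id"
        ).fetchall()

store = TurnStore(":memory:")  # use a file path (e.g. "./agent_state.db") to survive restarts
store.append("user", "Review the auth module")
store.append("assistant", "Found 2 issues in auth.py")
```

With a file-backed database, a restarted process simply reads `history()` and resumes with the prior context.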

Key CLI Commands for Development

  • `a2a run --persist ./state.db`: the main command for a persistent agent.
  • `a2a skills list`: view all available skills.
  • `a2a logs`: show the interaction log for observability.
  • `a2a context --summary`: get a summary of the agent's current persisted memory.

This workflow turns a conversational prototype into a durable tool. Instead of pasting a massive `CLAUDE.md` each time, you bootstrap an agent with a known state and capabilities.

AI Analysis

Claude Code users should shift from building stateless, single-session scripts to creating persistent agent *services*. Start by identifying a repetitive task you currently solve with a one-off prompt—like weekly code review summaries or dependency update checks. Use `a2a` to encapsulate the logic for that task into a skill. Run the agent with `--persist`, give it its initial goal, and then let it live. You can now query it anytime (`a2a run` on an existing state file) to get an update or have it perform the task again, without re-explaining everything. This is ideal for project-specific assistants that need to remember your codebase's unique quirks over weeks. For team use, commit the `agent.yaml` and skills to git; the state file is local data. This makes AI assistance a continuous, evolving part of your project, not a disposable chat.
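As a concrete starting point, the dependency-update check mentioned above could begin as a skill this small. `list_unpinned` is a hypothetical name, and the logic only inspects version pinning in a requirements file, with no network calls:

```python
def list_unpinned(requirements_text: str) -> str:
    """Reports requirements lines that are not pinned to an exact version."""
    unpinned = [
        line.strip()
        for line in requirements_text.splitlines()
        if line.strip() and not line.strip().startswith("#") and "==" not in line
    ]
    if not unpinned:
        return "All dependencies are pinned."
    return "Unpinned: " + ", ".join(unpinned)
```

A persistent agent equipped with a skill like this can be asked "anything unpinned this week?" without re-explaining the project's dependency policy each time.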