Debug Multi-Agent Systems Locally with the A2A Simulator

Test and debug AI agents that communicate via Google's A2A protocol using a local simulator that shows both sides of the conversation.

Gala Smith & AI Research Desk · 2h ago · 3 min read · AI-Generated
Source: dev.to via devto_mcp

The Problem: Debugging Agent-to-Agent Communication

When building AI agents that communicate using Google's A2A (Agent-to-Agent) protocol, debugging conversations between agents is notoriously difficult. You're left reading server logs and parsing JSON, unable to see the conversation flow in real time or manually intervene. The existing A2A Inspector tool only lets you send messages to an agent—you can't simulate the other agent responding, especially for the crucial input-required back-and-forth pattern.

The Solution: A2A Simulator

The A2A Simulator is an open-source tool that runs as both an A2A server and client on a single port. It provides a chat interface where you can:

  • See incoming messages from remote agents
  • Manually respond with any A2A state (working, completed, input-required, failed)
  • Attach artifacts (files, structured data) to responses
  • View the raw JSON-RPC for every message exchange
  • Control both sides of a conversation from one interface
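
To make the "View raw" feature concrete, here is a sketch of what a JSON-RPC request for a plain text message can look like. The method name and message fields below follow the published A2A spec; the exact payload the simulator displays, and the IDs shown, are illustrative.

```typescript
// Sketch of a JSON-RPC "message/send" request, similar to what the
// "View raw" panel shows for an outgoing message. Field names follow
// the A2A spec; the id values here are made up for illustration.
const rawRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "message/send",
  params: {
    message: {
      role: "user",
      parts: [{ kind: "text", text: "What's the weather?" }],
      messageId: "msg-001",
    },
  },
};

// Print the request the way a raw-JSON panel would render it.
console.log(JSON.stringify(rawRequest, null, 2));
```

Seeing this envelope next to each chat bubble is what makes protocol-level bugs visible without grepping server logs.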

How To Set It Up

Clone and install the simulator:

git clone https://github.com/agentdmai/a2a-simulator.git
cd a2a-simulator
npm install

Start two instances to simulate a conversation:

# Terminal 1 - Agent Alpha
npm run dev -- --port 3000 --name "Agent Alpha"

# Terminal 2 - Agent Beta  
npm run dev -- --port 3001 --name "Agent Beta"

Open http://localhost:5173 for Agent Alpha's UI. In the connection panel, enter http://localhost:3001 to connect to Agent Beta.
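
Before wiring the two instances together, it can help to sanity-check the remote side's agent card. The `/.well-known/agent.json` path and the field names below follow the A2A spec; the `AgentCard` type and `describeCard` helper are illustrative, not part of the simulator.

```typescript
// Minimal slice of an A2A agent card, like one served at
// http://localhost:3001/.well-known/agent.json (path per the A2A spec;
// the simulator's actual card may carry more fields).
type AgentCard = {
  name: string;
  url: string;
  capabilities?: { streaming?: boolean };
};

// Hypothetical helper: summarize a card before connecting to it.
function describeCard(card: AgentCard): string {
  const mode = card.capabilities?.streaming ? "streaming" : "polling only";
  return `${card.name} at ${card.url} (${mode})`;
}

const beta: AgentCard = {
  name: "Agent Beta",
  url: "http://localhost:3001",
  capabilities: { streaming: true },
};

console.log(describeCard(beta)); // "Agent Beta at http://localhost:3001 (streaming)"
```

If the card advertises streaming, the simulator's real-time view should show incremental status updates rather than only final results.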

Debugging Multi-Turn Conversations

Here's a practical debugging workflow:

  1. From Alpha, send "What's the weather?"
  2. On Beta, select working from the dropdown and reply "Checking forecast data..."
  3. On Beta, select working again and reply "Found 3 matching stations"
  4. On Beta, select input-required and ask for clarification
  5. Back on Alpha, reply to the clarification request
  6. On Beta, select completed with the final answer

This entire exchange happens on a single task, with status moving through working → working → input-required → completed. Click "View raw" on any message to see the exact JSON-RPC that went over the wire.
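
Step 5 is the easy place to go wrong: the follow-up message must carry the original task's identifiers so the exchange stays on one task. A sketch of building that reply, assuming the `taskId`/`contextId` fields from the A2A spec (the ID values and `followUp` helper are illustrative):

```typescript
// Shape of an A2A message per the spec; trimmed to the fields used here.
type Message = {
  role: "user" | "agent";
  parts: { kind: "text"; text: string }[];
  messageId: string;
  taskId?: string;
  contextId?: string;
};

// Hypothetical helper: answer an input-required task without forking it.
function followUp(original: { taskId: string; contextId: string }, text: string): Message {
  return {
    role: "user",
    parts: [{ kind: "text", text }],
    messageId: `msg-${Date.now()}`,
    // Reusing both IDs is what keeps the reply on the same task
    // instead of spawning a new one.
    taskId: original.taskId,
    contextId: original.contextId,
  };
}

const reply = followUp(
  { taskId: "task-42", contextId: "ctx-7" },
  "Station nearest downtown, please"
);
console.log(reply.taskId, reply.contextId); // "task-42" "ctx-7"
```

Dropping either ID is exactly the context-handling bug described below in "What You'll Catch".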

What You'll Catch

Using the simulator, the AgentDM team discovered critical bugs:

  • Context ID handling: When replying to an input-required task, the follow-up message must reference the original task's contextId or the SDK creates a new task instead of continuing the existing one
  • Event duplication: The @a2a-js/sdk streams multiple events for terminal states, causing duplicate messages without proper client-side deduplication
  • State transition errors: Visualizing the conversation flow reveals when agents send invalid state transitions
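
The event-duplication bug suggests a simple client-side guard: drop repeated status events before rendering. The `StatusEvent` shape and dedup key below are a local sketch, not the `@a2a-js/sdk` event type.

```typescript
// Illustrative event shape; the SDK's real streamed events carry more fields.
type StatusEvent = { taskId: string; state: string; text?: string };

// Keep only the first occurrence of each (taskId, state, text) triple,
// so a twice-streamed terminal event renders as one message.
function dedupe(events: StatusEvent[]): StatusEvent[] {
  const seen = new Set<string>();
  const out: StatusEvent[] = [];
  for (const ev of events) {
    const key = `${ev.taskId}:${ev.state}:${ev.text ?? ""}`;
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(ev);
  }
  return out;
}

const stream: StatusEvent[] = [
  { taskId: "t1", state: "working", text: "Checking forecast data..." },
  { taskId: "t1", state: "completed", text: "72°F and sunny" },
  { taskId: "t1", state: "completed", text: "72°F and sunny" }, // duplicate terminal event
];
console.log(dedupe(stream).length); // 2
```

Including the text in the key matters: two distinct `working` updates with different text are both kept, while a verbatim repeat of the terminal event is dropped.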

Advanced Features

  • Artifacts: Attach named artifacts with MIME types to test agents that return structured data or files
  • Authentication: Configure bearer token authentication through the Agent Card editor
  • Real-time streaming: See updates as they happen, not just final results
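
For the artifacts feature, the structure below sketches a named artifact carrying a MIME-typed file part. The nesting follows the A2A spec's artifact shape; the CSV payload and IDs are made up for illustration.

```typescript
// Illustrative artifact with one base64-encoded file part, similar to
// what you might attach from the simulator to test a file-returning agent.
const artifact = {
  artifactId: "forecast-csv",
  name: "forecast.csv",
  parts: [
    {
      kind: "file",
      file: {
        name: "forecast.csv",
        mimeType: "text/csv",
        // Base64 of a tiny two-line CSV (Buffer is a Node.js global).
        bytes: Buffer.from("station,temp\nKSEA,72\n").toString("base64"),
      },
    },
  ],
};

console.log(artifact.parts[0].file.mimeType); // "text/csv"
```

Receiving agents should decode `bytes` and dispatch on `mimeType`, which is exactly the path this feature lets you exercise without a real file-producing agent.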

Integration with Claude Code Workflows

While building A2A agents with Claude Code, use the simulator to:

  1. Test agent logic before deploying to production
  2. Debug prompt engineering by seeing exactly what your agent sends and receives
  3. Validate Claude-generated code that implements A2A protocol handlers
  4. Simulate edge cases without spinning up multiple cloud instances


The simulator runs entirely locally, making it perfect for the rapid iteration cycle Claude Code enables.

When You Need This Tool

Use the A2A Simulator when:

  • Building agents that use Google's A2A protocol
  • Debugging input-required conversations
  • Testing multi-turn agent interactions
  • Validating A2A protocol compliance
  • Developing agent-to-agent communication logic

Skip it if you're only building single agents or using different agent communication protocols.

AI Analysis

Claude Code users building AI agents should add the A2A Simulator to their local development toolkit. When you're using Claude to generate agent communication code, run the simulator in parallel to test the implementation in real time.

**Specific workflow change**: After Claude generates A2A protocol handler code, don't stop at unit tests; fire up the simulator and manually test the conversation flow. The visual feedback will help you refine prompts for Claude when the agent behavior isn't quite right.

**Debugging tip**: When Claude suggests fixes for agent communication bugs, use the simulator's "View raw" feature to confirm the JSON-RPC on the wire matches what Claude's code generates. This catches subtle bugs, such as incorrect `contextId` values, that static analysis misses.

**Integration strategy**: Keep the simulator running while using Claude Code. When you modify agent logic, test it immediately in the simulator rather than waiting for full integration tests. This tight feedback loop aligns with Claude Code's rapid iteration model.
