What It Does — A Sandboxed Coding Agent
lmcli is a Go-based command-line tool with a terminal user interface that implements the "agentic coding loop": the same core workflow as Claude Code, in which an AI model can read, edit, and execute code in a project. Its key differentiators are a built-in sandbox for tool execution and a model-agnostic design that supports both OpenAI-compatible and Anthropic-compatible APIs.
Unlike Claude Code's tight integration with Anthropic's ecosystem, lmcli lets you point it at any API endpoint. This includes local models via llama-server or services like OpenRouter. The tool implements dedicated coding tools (Read, Write, Edit, Grep, Glob, Bash) with a workflow guide that mirrors Claude Code's best practices: read before editing, preserve indentation, make minimal changes.
Setup — Configuration That Mirrors Claude Code
Installation is straightforward:
```sh
go install codeberg.org/mlow/lmcli@latest
```
The configuration file (~/.config/lmcli/config.yaml) is where you define your coding agent. Here's a minimal setup for a Claude-like experience:
```yaml
defaults:
  model: claude-3-5-sonnet-20241022 # Or any Anthropic model
  maxTokens: 64000
  agent: coding

agents:
  - name: coding
    code: true # Critical: enables CLAUDE.md-style behavior
    tools:
      - Bash
      - Glob
      - Grep
      - Read
      - Write
      - Edit
    systemPrompt: |-
      <agent>
      <persona>You are an expert software engineer helping with coding tasks in a local repository.</persona>
      <tools_guide>
      Prefer dedicated tools over Bash whenever possible:
      - To read a file, use Read — not Bash with cat/head/tail
      - To search file contents, use Grep — not Bash with grep/rg
      - To find files by name, use Glob — not Bash with find/ls
      - To edit a file, use Edit — not Bash with sed/awk
      - To create a new file, use Write
      - Reserve Bash for tasks which aren't covered by other tools
      </tools_guide>
      </agent>
```
The code: true setting is crucial — it automatically appends content from an AGENTS.md or CLAUDE.md file in your current directory to the system prompt, exactly like Claude Code's workspace context feature.
When To Use It — Specific Workflows Where It Shines
1. Testing agentic workflows with different models: Since lmcli works with any API, you can compare how Claude Opus 4.6, GPT-4o, or local models handle the same coding task using identical tools and prompts.

2. Development in restricted environments: The sandboxed execution provides an additional security layer when working with untrusted codebases or AI-generated scripts.
3. Extending Claude Code's capabilities: Need to work with a model Claude Code doesn't support? lmcli bridges that gap while maintaining similar tooling patterns.
4. Learning how coding agents work: The open-source nature (versus Claude Code's closed implementation) lets you examine exactly how tool calling, state management, and the edit loop are implemented.
Limitations to note: lmcli is a solo developer project with "rough edges" per the creator. It lacks Claude Code's deep IDE integrations, MCP server ecosystem, and polished UX. But for core coding tasks, it's surprisingly capable.
Try This Now — A Quick Test Drive
- Install lmcli and configure it with your Anthropic API key
- Create a test directory with a simple Python file
- Add a CLAUDE.md file with project context
- Run: lmcli --agent coding
- Ask: "Read the main.py file and suggest improvements"
You'll see the same read-analyze-edit workflow you're familiar with, but running through a different stack.