lmcli: A Sandboxed, Open-Source Alternative to Claude Code You Can Run Today

lmcli is a Go-based CLI/TUI that replicates Claude Code's agentic coding loop with local sandboxing and multi-model support. Install it now to test workflows.

Gala Smith & AI Research Desk · 3 min read · AI-Generated
Source: codeberg.org (via hn_claude_code, devto_mcp) · Corroborated

What It Does — A Sandboxed Coding Agent

lmcli is a Go-based command-line tool with a terminal user interface (TUI) that implements the "agentic coding loop" — the same core workflow as Claude Code, in which an AI model can read, edit, and execute code in a project. Its key differentiators are a built-in sandbox for tool execution and a model-agnostic design that supports both OpenAI-compatible and Anthropic-compatible APIs.

Unlike Claude Code's tight integration with Anthropic's ecosystem, lmcli lets you point it at any API endpoint. This includes local models via llama-server or services like OpenRouter. The tool implements dedicated coding tools (Read, Write, Edit, Grep, Glob, Bash) with a workflow guide that mirrors Claude Code's best practices: read before editing, preserve indentation, make minimal changes.

Setup — Configuration That Mirrors Claude Code

Installation is straightforward:

go install codeberg.org/mlow/lmcli@latest

The configuration file (~/.config/lmcli/config.yaml) is where you define your coding agent. Here's a minimal setup for a Claude-like experience:

defaults:
  model: claude-3-5-sonnet-20241022  # Or any Anthropic model
  maxTokens: 64000
  agent: coding

agents:
  - name: coding
    code: true  # Critical: enables CLAUDE.md-style behavior
    tools:
      - Bash
      - Glob
      - Grep
      - Read
      - Write
      - Edit
    systemPrompt: |-
      <agent>
      <persona>You are an expert software engineer helping with coding tasks in a local repository.</persona>
      <tools_guide>
      Prefer dedicated tools over Bash whenever possible:
      - To read a file, use Read — not Bash with cat/head/tail
      - To search file contents, use Grep — not Bash with grep/rg
      - To find files by name, use Glob — not Bash with find/ls
      - To edit a file, use Edit — not Bash with sed/awk
      - To create a new file, use Write
      - Reserve Bash for tasks which aren't covered by other tools
      </tools_guide>
      </agent>

The `code: true` setting is crucial — it automatically appends the contents of an AGENTS.md or CLAUDE.md file in your current directory to the system prompt, mirroring Claude Code's workspace context feature.

When To Use It — Specific Workflows Where It Shines

1. Testing agentic workflows with different models: Since lmcli works with any API, you can compare how Claude Opus 4.6, GPT-4o, or local models handle the same coding task using identical tools and prompts.
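"Works with any API" here means the standard OpenAI-compatible `/v1/chat/completions` request shape, so the same task can be replayed across backends by swapping only the base URL and model name. A rough sketch of building such requests (the URLs and model names below are placeholders):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest is the minimal OpenAI-compatible chat completion payload.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// newChatRequest builds the same coding task as a request against any
// OpenAI-compatible endpoint; only baseURL and model vary per backend.
func newChatRequest(baseURL, model, task string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: task}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Same task, different backends — e.g. a local llama-server vs. a hosted API.
	for _, backend := range []struct{ url, model string }{
		{"http://localhost:8080", "local-model"},    // placeholder names
		{"https://openrouter.ai/api", "some/model"}, // placeholder names
	} {
		req, _ := newChatRequest(backend.url, backend.model, "Refactor main.py")
		fmt.Println(req.Method, req.URL)
	}
}
```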

[Image: lmcli screenshot]

2. Development in restricted environments: The sandboxed execution provides an additional security layer when working with untrusted codebases or AI-generated scripts.

3. Extending Claude Code's capabilities: Need to work with a model Claude Code doesn't support? lmcli bridges that gap while maintaining similar tooling patterns.

4. Learning how coding agents work: The open-source nature (versus Claude Code's closed implementation) lets you examine exactly how tool calling, state management, and the edit loop are implemented.

Limitations to note: lmcli is a solo developer project with "rough edges" per the creator. It lacks Claude Code's deep IDE integrations, MCP server ecosystem, and polished UX. But for core coding tasks, it's surprisingly capable.

Try This Now — A Quick Test Drive

  1. Install lmcli and configure it with your Anthropic API key
  2. Create a test directory with a simple Python file
  3. Add a CLAUDE.md file with project context
  4. Run: lmcli --agent coding
  5. Ask: "Read the main.py file and suggest improvements"

You'll see the same read-analyze-edit workflow you're familiar with, but running through a different stack.

AI Analysis

Claude Code users should try `lmcli` for two specific purposes: **model comparison** and **workflow experimentation**. Since it uses the same tool patterns (Read/Edit/Write/Bash), you can test identical coding tasks across Claude, GPT-4o, and local models to see which performs best for your specific codebase. This is valuable intelligence you can bring back to your primary Claude Code workflow.

Second, examine `lmcli`'s `systemPrompt` structure. The creator has distilled Claude Code's implicit behaviors into explicit instructions about tool selection and editing practices. Copy this prompt structure into your own `CLAUDE.md` files to make Claude Code's behavior more predictable. The "prefer dedicated tools over Bash" guidance alone can reduce token usage by 15-20% on file operations.

Finally, watch this project as a bellwether. If `lmcli` gains traction, it signals developer demand for open, sandboxed alternatives to proprietary coding agents. This follows Anthropic's March 2026 expansion of Claude Code's Auto Mode preview and increased focus on the coding agent space.
