Origin CLI: Open-Source Git Blame for AI Agents Tracks Claude Code, Cursor, and Gemini Contributions
A new open-source command-line tool called Origin aims to solve a fundamental problem in modern AI-assisted development: tracking which AI agent wrote which lines of code. Built by developers who couldn't answer "which AI wrote this?" when debugging, Origin provides git blame functionality specifically for AI-generated code.
What Origin Does
Origin hooks into popular AI coding tools, including Claude Code, Cursor, Gemini, and Codex, and automatically tags every commit with attribution metadata. Running origin blame on a file shows an [AI] or [HU] marker for each line, similar to traditional git blame but focused on AI contributions.
Each attribution includes:
- Which AI agent wrote the code
- What prompt generated it
- Which model was used
- What it cost (when applicable)
All data is stored in git notes, requiring no external server and working completely offline. The tool is designed with zero lock-in—you can remove it at any time without losing your git history.
Technical Implementation
The CLI is open source under the MIT license and available at github.com/dolobanko/origin-cli. It uses git notes to store session data keyed to commit hashes, which keeps attribution data separate from the main repository history while remaining accessible through standard git operations.
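Git notes attach arbitrary data to existing commits without rewriting them, which is what makes this storage choice workable. Below is a minimal sketch of how attribution could be stored this way; the notes ref name ("origin") and the JSON payload are illustrative assumptions, not Origin's documented schema:

```shell
# Sketch: store attribution as a git note on HEAD.
# The ref "refs/notes/origin" and the JSON payload are illustrative
# assumptions, not Origin's actual format.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "add feature"
# Attach attribution metadata without rewriting the commit
git notes --ref=origin add -m '{"agent":"claude-code","prompt":"fix the type error"}'
# Read it back through plain git
git notes --ref=origin show HEAD
```

Because the note lives under its own ref, sharing it is opt-in (e.g. git push <remote> refs/notes/origin), and ordinary clones never see it by default.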
# Basic usage
origin blame filename.py
# Example output
line 15: [AI] Claude Code | Model: Claude Sonnet 4.6 | Prompt: "fix the type error"
line 22: [HU] Manual edit by developer
line 28: [AI] Cursor | Model: GPT-4 | Cost: $0.002
The team also offers a team dashboard at getorigin.io for organizations needing centralized visibility into AI coding patterns across their development teams.
Why This Matters Now
This tool arrives at a critical moment in AI-assisted development. According to our knowledge graph, AI agents crossed a critical reliability threshold in December 2026, fundamentally transforming programming capabilities. Claude Code alone has seen massive adoption, with more than 14.8 million commits tracked as of March 2026.
As AI agents become more autonomous—with features like Claude Code's Auto Mode that allows AI to make permission decisions during code execution—the need for attribution and audit trails becomes essential for debugging, cost tracking, and understanding code provenance.
Supported Platforms
Origin currently supports:
- Claude Code: Anthropic's agentic command-line coding tool that lets developers delegate software engineering tasks directly from the terminal
- Cursor: AI-powered IDE with deep code understanding
- Gemini: Google's multimodal AI models
- Codex: OpenAI's code generation model
The tool's architecture allows for easy extension to additional AI coding assistants as they emerge.
gentic.news Analysis
Origin addresses a growing pain point that's emerged as AI agents become primary contributors to codebases. The timing is particularly relevant given the recent surge in Claude Code adoption—our knowledge graph shows Claude Code appeared in 131 articles this week alone, indicating massive developer interest and usage. This follows Anthropic's March 2026 launch of Claude Code's /dream command for automatic memory consolidation and the /init command for automated project configuration.
The tool's approach of using git notes is clever—it maintains compatibility with existing git workflows while adding metadata that doesn't interfere with the main repository. This aligns with the broader trend toward AI transparency tools we've covered, such as the various MCP (Model Context Protocol) servers that provide specialized capabilities to Claude Code.
What's particularly interesting is how Origin could enable new forms of analysis. Teams could track which models produce the most maintainable code, which prompts generate the fewest bugs, or which AI agents are most cost-effective for specific types of tasks. As AI agents become more autonomous (like Claude Code's Auto Mode, which we covered on March 24), attribution becomes not just a debugging tool but a necessity for understanding AI decision-making in production codebases.
Frequently Asked Questions
How does Origin differ from regular git blame?
Regular git blame shows which human author made each change. Origin extends this to show which AI agent wrote each line, including the specific prompt, model, and cost associated with that generation. It's designed specifically for the mixed human-AI development workflows that are becoming standard.
Does Origin require an internet connection or external servers?
No. Origin works completely offline and stores all attribution data in git notes within your local repository. There's no dependency on external servers, and the team dashboard is optional for organizations that want centralized reporting.
What happens if I stop using Origin?
Since Origin uses git notes, you can simply stop running the tool without affecting your repository. The attribution data remains in the git notes but won't interfere with normal git operations. This zero lock-in design means you're not committing to a proprietary system.
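The zero lock-in claim can be checked with plain git: deleting a notes ref removes the attribution data while leaving every commit hash untouched. A sketch, again assuming an illustrative "origin" notes ref:

```shell
# Sketch: deleting the (assumed) notes ref leaves commits untouched.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "feature"
git notes --ref=origin add -m '{"agent":"cursor"}'
before=$(git rev-parse HEAD)
# Delete all attribution data in one step
git update-ref -d refs/notes/origin
after=$(git rev-parse HEAD)
[ "$before" = "$after" ] && echo "history unchanged"
```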
Can Origin track costs for AI-generated code?
Yes, when supported by the AI platform, Origin can record the cost of each AI-generated code segment. This is particularly useful for organizations tracking AI development expenses and optimizing their AI tool usage across teams.
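If costs are recorded per commit, plain git can aggregate them without the Origin binary. The key=value note format below is a hypothetical stand-in made up for illustration; whatever Origin actually writes may differ:

```shell
# Hypothetical cost roll-up over git notes. The "cost=" note format
# is an assumption for illustration, not Origin's real payload.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "feature A"
git notes --ref=origin add -m 'agent=cursor cost=0.002'
git commit -q --allow-empty -m "feature B"
git notes --ref=origin add -m 'agent=claude-code cost=0.003'
# Sum the cost field across every attributed commit
git notes --ref=origin list | while read -r _ commit; do
  git notes --ref=origin show "$commit"
done | awk -F'cost=' '{sum += $2} END {printf "%.3f\n", sum}'   # prints 0.005
```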


