gentic.news — AI News Intelligence Platform


[Image: A developer's laptop screen displays a CC-Lens dashboard with real-time analytics showing Claude Code usage…]
Open Source · Score: 95

CC-Lens: The Open-Source Dashboard That Shows You Exactly How You Use Claude Code

Run `npx cc-lens` to get a real-time, local analytics dashboard for your Claude Code usage, revealing token costs, project activity, and tool efficiency.

Mar 26, 2026 · 3 min read · 433 views · AI-Generated
Source: github.com via hn_claude_code, reddit_claude · Widely Reported

The Technique — A Local Analytics Dashboard for Claude Code

CC-Lens is an open-source, real-time monitoring dashboard that reads directly from your ~/.claude/ directory. It requires no installation, cloud services, or telemetry. You run it with a single command:

npx cc-lens

The CLI automatically finds a free port, starts a local server, and opens the dashboard in your browser. Data refreshes every 5 seconds while the dashboard is open.
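The "finds a free port" step is typically done by binding to port 0 and letting the operating system assign one. A minimal sketch of that general technique (not CC-Lens's actual implementation):

```python
import socket

def find_free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        # getsockname() returns (host, port); the port is OS-assigned.
        return s.getsockname()[1]

port = find_free_port()
print(f"serving dashboard on http://localhost:{port}")
```

A server started immediately on the returned port avoids the race of probing ports one by one.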

Why It Works — Unlocking Your Local Data

Claude Code stores a wealth of local data in JSONL files, but it's not easily human-readable. CC-Lens parses this data to provide actionable insights. It primarily reads from:

  • ~/.claude/projects/<slug>/*.jsonl: Session data.
  • ~/.claude/stats-cache.json: Aggregated statistics.
  • ~/.claude/usage-data/session-meta/: Session metadata.
  • ~/.claude/history.jsonl: Your command history.
  • ~/.claude/todos/, ~/.claude/plans/, ~/.claude/memory/: Your project artifacts.
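To give a sense of what "parsing this data" involves, here is a minimal sketch that sums token usage across the turns of one session. The record shapes and field names (`usage`, `input_tokens`, `output_tokens`) are illustrative assumptions; the real JSONL schema under `~/.claude/projects/` may differ:

```python
import json

# Hypothetical sample of a session JSONL file (one JSON object per line).
sample_jsonl = """\
{"type": "assistant", "usage": {"input_tokens": 1200, "output_tokens": 300}}
{"type": "user", "text": "refactor the parser"}
{"type": "assistant", "usage": {"input_tokens": 1500, "output_tokens": 450}}
"""

def total_tokens(jsonl_text: str) -> dict:
    """Sum input/output tokens over all records that report usage."""
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        usage = record.get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

print(total_tokens(sample_jsonl))
# {'input_tokens': 2700, 'output_tokens': 750}
```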

Because it runs locally, your data never leaves your machine. This follows a broader trend in the Claude Code ecosystem toward developer control and privacy, also seen in tools for running Claude Code locally with Ollama and for debugging MCP servers.

How To Apply It — Key Dashboards for Better Workflows

Once running, CC-Lens provides several views that can immediately improve how you use Claude Code.


1. The Cost Dashboard (/costs)
This is the most actionable panel. The Stacked Area Chart by Model shows which model (Opus, Sonnet, Haiku) you're spending the most on. The Cache Efficiency Panel reveals how often Claude Code reuses cached prompt tokens instead of processing them fresh, a direct lever for reducing token costs. Use this data to adjust your Claude Code model settings (for example, the `--model` flag) or project configuration.
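One plausible way to define cache efficiency is the share of input tokens served from the prompt cache. This sketch uses that definition as an assumption; CC-Lens's actual formula is not documented in the article:

```python
def cache_efficiency(cache_read_tokens: int, fresh_input_tokens: int) -> float:
    """Fraction of input tokens that were served from the prompt cache.

    Parameter names are illustrative; the dashboard's own metric
    definition may differ.
    """
    total = cache_read_tokens + fresh_input_tokens
    return cache_read_tokens / total if total else 0.0

# e.g. 90,000 cached input tokens vs 30,000 freshly processed:
print(f"{cache_efficiency(90_000, 30_000):.0%}")  # 75%
```

A low number here suggests your prompts churn too much context between turns for caching to pay off.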

2. The Projects Dashboard (/projects)
This card grid shows session count, cost per session, most-used tools, and even git branches per project. Click into any project for a detail page with a cost trend chart. This helps you identify which codebases are the most expensive to work on with Claude Code, prompting you to invest in better CLAUDE.md files or memory for those projects.
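Ranking projects by cost per session is a simple aggregation. A sketch of the idea, using hypothetical per-session records (the real dashboard derives these from `~/.claude/` data with a schema not shown in the article):

```python
from collections import defaultdict

# Illustrative per-session records: project name and dollar cost.
sessions = [
    {"project": "api-server", "cost_usd": 1.80},
    {"project": "api-server", "cost_usd": 2.40},
    {"project": "docs-site",  "cost_usd": 0.30},
]

totals = defaultdict(lambda: {"sessions": 0, "cost": 0.0})
for s in sessions:
    entry = totals[s["project"]]
    entry["sessions"] += 1
    entry["cost"] += s["cost_usd"]

# Rank by average cost per session to find where better CLAUDE.md
# context would pay off most.
ranked = sorted(totals.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["sessions"],
                reverse=True)
for name, e in ranked:
    print(name, round(e["cost"] / e["sessions"], 2))
# api-server 2.1
# docs-site 0.3
```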

3. The Tools Dashboard (/tools)
This view ranks every tool Claude Code uses by category: file-io, shell, agent, web, planning, MCP, etc. You can see exactly which MCP servers are being invoked and how often. If you see a high usage of the web tool, for instance, you might want to implement the strategies from our article on stopping web fetches from burning tokens. The Feature Adoption Table and CC Version History chart also help you track your own rollout of new Claude Code features.
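Tallying tool invocations by category is essentially a counting pass over the session logs. In this sketch both the tool names and the name-to-category mapping are assumptions for illustration; CC-Lens's actual categorization may differ:

```python
from collections import Counter

# Illustrative mapping from tool name to category.
CATEGORY = {
    "Read": "file-io",
    "Write": "file-io",
    "Bash": "shell",
    "WebFetch": "web",
    "Task": "agent",
}

# Hypothetical stream of tool invocations extracted from session logs.
invocations = ["Read", "Bash", "Read", "WebFetch", "Read", "Bash"]

by_category = Counter(CATEGORY.get(tool, "other") for tool in invocations)
print(by_category.most_common())
# [('file-io', 3), ('shell', 2), ('web', 1)]
```

A category like `web` dominating this ranking is the signal the article suggests acting on.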

4. Session Replay (/sessions)
Search and filter all sessions (compacted, agent, MCP, web, thinking). Click into any session for a full replay with a per-turn token display and a token timeline chart. This is invaluable for debugging why a particular task consumed so many tokens. You can see exactly where a compaction event happened or where a web search spiraled.
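The token timeline is, at its core, a running total of per-turn token counts: spikes stand out against the cumulative curve. A sketch with made-up numbers (turn 4 here plays the role of a runaway web search):

```python
# Illustrative per-turn token counts from a single session.
turn_tokens = [800, 1200, 950, 4100, 700]

# Cumulative totals make a spike (e.g. a web search spiralling or a
# compaction event) easy to spot, much like the timeline chart.
cumulative = []
running = 0
for t in turn_tokens:
    running += t
    cumulative.append(running)

print(cumulative)  # [800, 2000, 2950, 7050, 7750]
```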

5. Memory & Artifacts Browser
Browse and edit Claude Code memory files across all projects, filterable by type (user, feedback, project). View your ~/.claude/todos/ with status filters and priority badges. Read saved plan files from ~/.claude/plans/ with inline markdown rendering. This turns your ~/.claude directory from a black box into a manageable knowledge base.
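A status filter over todo records is a one-line pass once the files are parsed. The record shape below (`content`, `status`, `priority`) is an assumption for illustration; the real format under `~/.claude/todos/` may differ:

```python
# Hypothetical todo entries like those stored under ~/.claude/todos/.
todos = [
    {"content": "add tests",    "status": "pending",   "priority": "high"},
    {"content": "update docs",  "status": "completed", "priority": "low"},
    {"content": "fix flaky CI", "status": "pending",   "priority": "medium"},
]

# Filter to pending items, as the status filter in the browser would.
pending = [t for t in todos if t["status"] == "pending"]
print([t["content"] for t in pending])
# ['add tests', 'fix flaky CI']
```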

Advanced: Running from Source

If you want to contribute or customize, run from source:

git clone https://github.com/Arindam200/cc-lens
cd cc-lens
npm install
npm run dev
# Open http://localhost:3000

The stack is Next.js 15, React 19, TypeScript, Tailwind CSS, Recharts, and SWR.


Source: gentic.news

AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.

AI Analysis

**Immediately run `npx cc-lens` in your terminal.** Let it open in your browser and leave it running in a tab during your next Claude Code session, watching the real-time data flow in.

**Use the Cost Dashboard to audit your model usage.** If you're burning too many Opus tokens on simple tasks, you now have the data to justify switching to `--model sonnet` for those projects. A low cache efficiency metric is a direct prompt to use Claude Code's `/compact` command more deliberately and keep your context cache-friendly.

**Treat the Projects and Tools dashboards as a weekly review.** Which project had the highest cost per session? Invest 15 minutes improving its `CLAUDE.md` context. Which MCP server is never used? Uninstall it. Which tool category (like `web`) is dominating? Implement stricter prompting rules to rein it in. This turns anecdotal feelings about Claude Code's performance into data-driven workflow optimization.
