gentic.news — AI News Intelligence Platform

CCmeter: The Open-Source Dashboard That Reveals Exactly Why Your Claude
Open Source · Score: 78

CCmeter parses Claude Code's local session logs to surface cache-busting patterns, cost leaks, and model-swap simulations. Free, local-first, zero telemetry.

5h ago · 4 min read · AI-Generated
Source: github.com via hn_claude_code · Single Source
TL;DR

Run `npx ccmeter` to see per-session spend, cache hit rates, and personalized fix recommendations — no setup, no API keys.

What Changed

Anthropic quietly shortened Claude Code's default prompt-cache TTL from 1 hour to 5 minutes in early March 2026. The rollout was staggered — different users saw it on different days. Anthropic's official line: this shouldn't increase costs because most cached context is one-shot anyway.

User analysis of actual session JSONLs disagrees. The typical impact reported by heavy users? A 30–60% bill increase with zero change in usage.
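An increase of that size is plausible from pricing arithmetic alone: once cached context starts expiring mid-session, the same tokens get rebilled at the full input rate. A minimal sketch under assumed numbers (cache reads billed at a tenth of the fresh input rate, a common pattern but not Anthropic's actual price sheet, plus an assumed drop in cache hit share):

```python
# Illustrative arithmetic only: these prices and token mixes are assumptions,
# not Anthropic's published rates or any real user's data.
FRESH_PRICE = 3.00 / 1_000_000   # $ per uncached input token (hypothetical)
CACHED_PRICE = 0.30 / 1_000_000  # $ per cache-read token (hypothetical 0.1x)

def monthly_input_cost(tokens: int, cached_share: float) -> float:
    """Cost of a month of input tokens, given the fraction served from cache."""
    cached = tokens * cached_share
    fresh = tokens - cached
    return cached * CACHED_PRICE + fresh * FRESH_PRICE

TOKENS = 200_000_000  # hypothetical heavy user's monthly input tokens
before = monthly_input_cost(TOKENS, cached_share=0.80)  # 1h TTL: most reads hit
after = monthly_input_cost(TOKENS, cached_share=0.65)   # 5m TTL: idle gaps expire cache
print(f"before=${before:.2f}  after=${after:.2f}  increase={after / before - 1:.0%}")
```

Under these made-up numbers the bill rises about 48% with identical usage, squarely inside the range users report.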

Enter CCmeter — a local-first dashboard that reads the session files Claude Code already writes to ~/.claude/projects and tells you exactly what's costing you. No telemetry, no API key, no setup.

What It Means For You

The Anthropic Console gives you one number per day. That's useless for debugging. You need per-session, per-project, per-cache-bust breakdowns to:

  • Verify which side of the TTL-change argument your data falls on.
  • Find which sessions are the new expensive ones.
  • Apply the specific behavioral fix that recovers most of the money.

CCmeter is that breakdown. It runs against data already on your machine, surfaces the patterns eating your spend, and recommends concrete fixes ranked by estimated monthly savings.
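The article doesn't publish CCmeter's internals, but the core idea (aggregate the JSONL session logs already on disk) can be sketched in a few lines. The `session_id` and `cost_usd` fields below are assumed for illustration; Claude Code's actual log schema may differ:

```python
import json
from collections import defaultdict
from pathlib import Path

def per_session_spend(log_dir: Path) -> dict[str, float]:
    """Sum an assumed `cost_usd` field per `session_id` across *.jsonl files."""
    totals: dict[str, float] = defaultdict(float)
    for path in log_dir.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            rec = json.loads(line)
            totals[rec["session_id"]] += rec.get("cost_usd", 0.0)
    return dict(totals)
```

Pointing something like this at `~/.claude/projects` is the whole trick: no API keys, no network, just files you already have.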

Try It Now

Quickest start:

npx ccmeter@latest

Permanent install:

npm i -g ccmeter
ccmeter

Requires Node 20+. Works on macOS, Linux, Windows. Reads ~/.claude/projects by default — set CCMETER_LOG_DIR if your logs live elsewhere.


Key Commands

ccmeter                               Summary: total spend, cache hits, today's biggest leak
ccmeter recommend                     Personalized fixes ranked by $/mo saved
ccmeter compare                       Last 7d vs prior 7d — quantify what changed
ccmeter tools                         Which tool calls cost the most (Bash, Read, …)
ccmeter cache                         Cache hit rate trend + TTL-change callout
ccmeter whatif --swap opus->sonnet    Simulate model swaps on your data
ccmeter dashboard                     Local web UI, no network
ccmeter live                          Full-screen ticker

Real Example Output

ccmeter — last 30 days
────────────────────────────────────────────────────────────────────────────────
Total spend       $284.10 (↑ +43% vs prior period)
Daily average     $9.47 ≈ $284.10/month
Sessions          127
Cache hit rate    47.3%
Cache busts       89 (wasted $24.18)
Daily spend       ▁▂▂▃▅▇█▆▄▅█▇▆▄▃
Suggestions:
● Idle sessions are busting your cache 41×/week (save $43/mo)
● 6 long sessions (>90 min) bled cache value (save $18/mo)
+ 4 more — run `ccmeter recommend`
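Two of the numbers above follow from simple ratios. A sketch of that arithmetic, with field names and prices as assumptions rather than CCmeter's actual formulas:

```python
def cache_hit_rate(cache_read_tokens: int, fresh_input_tokens: int) -> float:
    """Fraction of input tokens served from the prompt cache."""
    total = cache_read_tokens + fresh_input_tokens
    return cache_read_tokens / total if total else 0.0

def bust_waste(busted_tokens: int, fresh_price: float, cached_price: float) -> float:
    """Dollars paid at the fresh rate for tokens that a live cache would have
    served at the cheaper cache-read rate."""
    return busted_tokens * (fresh_price - cached_price)
```

For example, 473k cache-read tokens against 527k fresh input tokens would give a 47.3% hit rate like the one shown above.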

How to Use CCmeter to Cut Your Bill

  1. Run ccmeter to see your baseline.
  2. Run ccmeter recommend for prioritized fixes.
  3. Run ccmeter cache to check if the TTL change is hitting you.
  4. Run ccmeter whatif --swap opus->sonnet to see how much you'd save by switching models for specific projects.
  5. Tag expensive sessions with ccmeter tag $SID "auth-refactor" and group by tag in reports.
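Step 4's what-if simulation is, at its simplest, a re-pricing of recorded token counts. The sketch below uses hypothetical per-million-token prices and an assumed session shape; a real simulator would also weigh quality differences and retries:

```python
# Hypothetical $/1M-token prices for illustration only, not a real price sheet.
PRICES = {
    "opus": {"input": 15.00, "output": 75.00},
    "sonnet": {"input": 3.00, "output": 15.00},
}

def whatif_swap(sessions: list[dict], source: str, target: str) -> float:
    """Estimated savings if sessions billed as `source` had run on `target`.

    Each session is a dict with `input_tokens` and `output_tokens`; this
    shape is an assumption, not CCmeter's actual data model.
    """
    saved = 0.0
    for s in sessions:
        for kind in ("input", "output"):
            tokens = s[f"{kind}_tokens"]
            saved += tokens * (PRICES[source][kind] - PRICES[target][kind]) / 1e6
    return saved
```

The point of running this against your own logs rather than guessing: the input/output mix varies wildly by project, so the savings from a swap do too.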

gentic.news Analysis

This tool arrives at a critical moment. Anthropic's Claude Code ecosystem is exploding — 653 articles have mentioned it on gentic.news, and it appeared in 31 articles just this week. The product is maturing fast, but with that maturity comes cost complexity that the official console doesn't address.

CCmeter fills a gap Anthropic hasn't prioritized: granular, local-first cost observability. It's notable that the tool's creator explicitly calls out the TTL change — a quiet config tweak that, according to user data, can inflate bills 30–60%. This follows a pattern we've seen before: Anthropic makes model-side optimizations (like the recent Claude Opus 4.6 1M context window launch on April 28) that have downstream cost implications for users.

The tool's whatif command is particularly clever — it lets you simulate model swaps (e.g., Opus 4.6 → Sonnet 4.6) against your actual usage data. Given Anthropic's aggressive model release cadence — Claude Opus 4.6, Claude Sonnet 4.6, and the Claude Agent framework all launched recently — this kind of forward-looking cost modeling is essential.

For Claude Code users, the actionable takeaway is clear: run npx ccmeter today. Even if you're not seeing a bill jump, the cache-hit and idle-session data alone will likely reveal patterns you can fix. The tool is local-first and reads only your existing logs — no data leaves your machine.

Source: gentic.news

AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala AYADI.


AI Analysis

**What should Claude Code users do differently?** First, stop relying on the Anthropic Console for cost debugging. It gives you one aggregate number per day — useless for identifying which sessions, projects, or behaviors are driving spend. Install CCmeter and run `ccmeter` immediately to get a per-session breakdown. The `ccmeter recommend` command is your new best friend: it analyzes your actual usage patterns and suggests behavioral fixes ranked by monthly savings.

Second, pay attention to the cache hit rate. If you see numbers below 40–50%, the TTL change is likely hitting you hard. The fix isn't to fight the TTL — it's to stop leaving idle sessions open, avoid long sessions (>90 min), and be deliberate about when you start new sessions vs. continuing existing ones. CCmeter's `ccmeter cache` command will show you the trend and quantify the waste.

Third, use `ccmeter whatif --swap opus->sonnet` before deciding which model to use for a project. Don't guess — simulate. If you're using Claude Opus 4.6 for everything, you may be overpaying by 2–3x for tasks that Sonnet handles fine. The data is already on your machine; let CCmeter surface the insight.
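The idle-session advice is mechanically checkable from timestamps alone: any gap between consecutive requests longer than the 5-minute TTL means the next request pays the uncached rate to rebuild context. A minimal sketch, assuming you can extract per-request timestamps from your logs (the exact log fields are an assumption):

```python
from datetime import datetime, timedelta

TTL = timedelta(minutes=5)  # the new default prompt-cache TTL

def count_cache_busts(timestamps: list[datetime]) -> int:
    """Number of idle gaps long enough to let the prompt cache expire."""
    busts = 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > TTL:
            busts += 1
    return busts
```

A session left open over lunch counts as one bust; a session you poke every few minutes "to keep it warm" counts as zero but still burns tokens, which is why ranking fixes by estimated savings matters.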

