gentic.news — AI News Intelligence Platform


Anthropic Scrambles to Contain Major Source Code Leak for Claude Code

Anthropic is responding to a significant internal leak of approximately 500,000 lines of source code for its AI tool Claude Code, reportedly triggered by human error. The incident has drawn attention to security risks in the AI industry and coincides with reports of shifting investor interest toward Anthropic amid valuation disparities with competitors.

Apr 2, 2026 · 5 min read · AI-Generated
Source: youtube.com via engadget, wired_ai, gn_claude_code, devto_claudecode, hn_claude_code, scmp_tech · Widely Reported

On March 31, 2026, Anthropic accidentally shipped its entire Claude Code source code to the public npm registry. A 59.8 MB JavaScript source map file, intended strictly for internal debugging, was bundled into version 2.1.88 of the @anthropic-ai/claude-code package. Within hours, the roughly 512,000-line TypeScript codebase had been mirrored across GitHub and dissected by thousands of developers worldwide.

How the Leak Happened

The discovery was first broadcast at 4:23 AM ET by Chaofan Shou, an intern at Solayer Labs, who posted the finding on X (formerly Twitter). The source map file — a standard debugging artifact that maps minified JavaScript back to original TypeScript source — should never have been included in the production npm package. Anthropic later confirmed it was "a release packaging issue caused by human error, not a security breach," emphasizing that no customer data or API credentials were exposed.
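Source maps explain why a single .map file could expose an entire codebase: the format is plain JSON, and when a build is configured to inline sources, the map's `sourcesContent` array embeds the complete original files next to the mappings. A minimal sketch of recovering embedded source from a made-up map (the file contents here are illustrative, not from the leaked package):

```python
import json

# A source map is plain JSON. With inlined sources, "sourcesContent"
# carries the full original files, which is why shipping a .map file
# can leak the codebase it was generated from.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/cli.ts"],
  "sourcesContent": ["export function main(): void {\\n  console.log('hi');\\n}\\n"],
  "mappings": "AAAA"
}
""")

def recover_sources(sm: dict) -> dict[str, str]:
    """Pair each original source path with its embedded text."""
    return dict(zip(sm["sources"], sm.get("sourcesContent", [])))

recovered = recover_sources(source_map)
print(recovered["src/cli.ts"])
```

Anyone holding the published package could apply exactly this kind of extraction, which is why mirrors of the reconstructed TypeScript appeared within hours.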

The root cause was mundane: a misconfigured build pipeline that failed to exclude .map files from the published package. It is the kind of mistake that any engineering team can make, but the scale of the consequences was anything but ordinary.
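A guard against this failure mode is straightforward to sketch. The function below (a simplification for illustration, not Anthropic's actual pipeline, which would more plausibly inspect the exact `npm pack --dry-run` file list) refuses to let any source map through to publication:

```python
import pathlib

def find_source_maps(package_dir: str) -> list[str]:
    # List every .map file under the staged package directory.
    # A production check would inspect the tarball contents that
    # `npm publish` will actually ship; walking the tree is a
    # simplification for illustration.
    root = pathlib.Path(package_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.map"))
```

A CI step could run this over the staged package directory and abort the release whenever the result is non-empty.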

What the Source Code Revealed

The leaked codebase gave the developer community an unprecedented look inside one of the most commercially successful AI products of the year. Several revelations stood out:

44 Feature Flags: Buried in the code were 44 distinct feature flags covering capabilities that are fully implemented but not yet shipped to users. These are not placeholders or aspirational TODOs — they represent compiled, functional code sitting behind boolean toggles.

Project KAIROS: The most discussed discovery was a feature flag referenced over 150 times in the source, internally called "KAIROS." Analysis of the surrounding code suggests it represents an autonomous daemon mode — a capability that would allow Claude Code to operate as an always-on background agent, monitoring codebases, running scheduled tasks, and proactively making changes without explicit user prompts.

Internal Model Codenames: The source confirmed long-rumored internal naming conventions. "Capybara" maps to a Claude 4.6 variant, "Fennec" maps to Opus 4.6, and a previously unknown codename "Numbat" appears to reference a model still in testing.

Architecture Insights: Developers noted Claude Code's extensive use of tree-sitter for code parsing, a sophisticated caching layer for conversation context, and a multi-agent orchestration system that allows Claude Code to spawn sub-agents for complex tasks — all implemented in TypeScript running on Node.js.
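The gated-but-implemented pattern behind the 44 feature flags is simple to picture. In this hedged sketch (flag and function names are illustrative, not taken from the leaked source), the feature code ships in full but stays unreachable until a toggle flips:

```python
# Compiled-but-gated pattern: the feature is fully implemented,
# but a boolean flag decides whether users can reach it.
# Names here are illustrative, not from the leaked code.
FLAGS: dict[str, bool] = {
    "daemon_mode": False,  # shipped dark: code exists, toggle is off
    "new_parser": True,
}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def run_daemon() -> str:
    if not is_enabled("daemon_mode"):
        return "daemon_mode is gated off"
    return "daemon running"

print(run_daemon())  # -> daemon_mode is gated off
```

This is why a source leak is so revealing: the gated code paths are right there in the shipped artifact, and reading the source exposes every toggle at once.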

The DMCA Takedown Fiasco

Anthropic's legal response became a story in its own right. The company filed a DMCA takedown notice with GitHub to remove repositories containing the leaked source. According to GitHub's public records, the notice was executed against approximately 8,100 repositories, a staggering overreach that inadvertently took down legitimate forks of Anthropic's own officially open-sourced Claude Code repository.

Boris Cherny, Anthropic's head of Claude Code, publicly acknowledged the mistake and retracted the bulk of the takedown notices, limiting the action to one repository and 96 specific forks that contained the accidentally released source map.

The heavy-handed approach drew sharp criticism from the open-source community. Gergely Orosz, author of The Pragmatic Engineer, noted that a Python port of the leaked code appeared within 24 hours and argued it is effectively DMCA-proof, on the theory that a clean-room rewrite in a different language constitutes a new creative work under copyright law.

The AI Copyright Question

The DMCA strategy raised a deeper legal question. Anthropic's own CEO Dario Amodei has publicly stated that significant portions of Claude Code were written by Claude itself. The D.C. Circuit held in March 2025 that purely AI-generated work is not eligible for copyright protection. If substantial parts of the leaked codebase were authored by an AI model rather than human engineers, Anthropic's copyright claim over that code sits on uncertain legal ground.

This paradox — an AI company trying to assert copyright over code written by its own AI — has not been lost on legal commentators. The outcome could set precedent for the entire industry.

Concurrent Supply Chain Attack

Compounding the chaos, a malicious actor exploited the confusion around the leak to publish compromised versions of the axios HTTP library to npm. Anyone who installed or updated Claude Code between 00:21 and 03:29 UTC on March 31 may have pulled in axios versions 1.14.1 or 0.30.4, both of which contained a Remote Access Trojan (RAT). Anthropic and npm acted quickly to yank the malicious packages, but the incident highlighted how supply chain attacks can piggyback on moments of organizational disruption.
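A minimal audit against the reported bad versions can be sketched as follows. The compromised version numbers come from the incident reports; the flat name-to-version mapping stands in for a real walk of package-lock.json:

```python
# Versions reported as compromised in the March 31 incident.
COMPROMISED: dict[str, set[str]] = {"axios": {"1.14.1", "0.30.4"}}

def audit(installed: dict[str, str]) -> list[str]:
    """Return names of installed packages pinned to a known-bad version.

    `installed` is a flat {package: version} mapping; a real check
    would walk the full lockfile dependency tree.
    """
    return [name for name, ver in installed.items()
            if ver in COMPROMISED.get(name, set())]

print(audit({"axios": "1.14.1", "left-pad": "1.3.0"}))  # -> ['axios']
```

Anyone who installed in the affected window could run a check of this shape over their lockfile and rotate credentials if it flags a hit.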

Political Fallout

The leak drew attention from Washington. Representative Josh Gottheimer pressed Anthropic directly on the incident, requesting a detailed briefing on the company's source code management practices and safety protocols. The inquiry reflects growing Congressional scrutiny of AI companies' operational security, particularly as these tools become embedded in critical software infrastructure.

The Aftermath

Despite Anthropic's legal efforts, the practical reality is that the 512,000 lines of Claude Code's source are irrevocably public. Mirrors exist on platforms outside DMCA jurisdiction, the Python rewrite circulates freely, and the technical revelations (the feature flags, KAIROS, the model codenames) cannot be un-disclosed.

For Anthropic, the damage is primarily strategic rather than security-related. Competitors now have a detailed blueprint of Claude Code's architecture. For the broader developer community, the leak provided a rare, unfiltered look at how one of the world's most-used AI coding tools actually works under the hood. Whether that transparency ultimately helps or hurts Anthropic remains an open question.


AI-assisted reporting. Generated by gentic.news from 2 verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.
