On March 31, 2026, Anthropic accidentally shipped the complete source code of Claude Code to the public npm registry. A 59.8 MB JavaScript source map file, intended strictly for internal debugging, was bundled into version 2.1.88 of the @anthropic-ai/claude-code package. Within hours, the roughly 512,000-line TypeScript codebase had been mirrored across GitHub and dissected by thousands of developers worldwide.
How the Leak Happened
The discovery was first reported at 4:23 AM ET by Chaofan Shou, an intern at Solayer Labs, who posted the finding on X (formerly Twitter). The source map file — a standard debugging artifact that maps minified JavaScript back to original TypeScript source — should never have been included in the production npm package. Anthropic later confirmed it was "a release packaging issue caused by human error, not a security breach," emphasizing that no customer data or API credentials were exposed.
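The reason a single .map file can expose an entire codebase is the source map format itself: version 3 source maps may carry a sourcesContent array that embeds the full original text of every source file, not just position mappings. A minimal sketch of recovering files from a parsed map (the interface is a simplified subset of the spec; the example data is invented):

```typescript
// A source map is plain JSON. The optional "sourcesContent" field embeds
// the complete original source text, one entry per path in "sources" --
// which is why shipping a .map file can leak everything.
interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original source for each path
  mappings: string;           // VLQ-encoded position mappings
}

// Recover original files from a parsed map.
function recoverSources(map: SourceMap): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((path, i) => {
    out[path] = map.sourcesContent?.[i] ?? "";
  });
  return out;
}

// Synthetic example, not from the leaked file.
const example: SourceMap = {
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const hello = () => 'hi';"],
  mappings: "AAAA",
};
console.log(recoverSources(example)["src/agent.ts"]);
```

Tooling like `source-map-explorer` automates exactly this recovery, which is how the leaked package could be unpacked into a browsable codebase within hours.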
The root cause was mundane: a misconfigured build pipeline that failed to exclude .map files from the published package. It is the kind of mistake that any engineering team can make, but the scale of the consequences was anything but ordinary.
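The standard guard against this class of mistake is npm's files allowlist in package.json: only the listed paths are included in the published tarball, so build artifacts like .map files never ship even if the build emits them. A minimal sketch (package name and paths hypothetical):

```json
{
  "name": "@example/cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "cli.js"
  ]
}
```

Running `npm pack --dry-run` prints the exact file list that would be published without publishing anything, which makes a cheap CI check against accidental inclusions.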
What the Source Code Revealed
The leaked codebase gave the developer community an unprecedented look inside one of the most commercially successful AI products of the year. Several revelations stood out:
44 Feature Flags: Buried in the code were 44 distinct feature flags covering capabilities that are fully implemented but not yet shipped to users. These are not placeholders or aspirational TODOs — they represent compiled, functional code sitting behind boolean toggles.
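The pattern described — compiled, functional code hidden behind boolean toggles — is a standard one. A minimal sketch of how such a gate typically works (flag names invented for illustration, not taken from the leaked source):

```typescript
// Hypothetical feature-flag gate. The gated code path is fully compiled
// and shipped; the boolean simply decides whether users can reach it.
const FLAGS: Record<string, boolean> = {
  backgroundDaemon: false, // implemented, but off for all users
  newDiffViewer: true,
};

function isEnabled(flag: string): boolean {
  return FLAGS[flag] ?? false;
}

function runDaemon(): string {
  if (!isEnabled("backgroundDaemon")) {
    return "daemon disabled"; // the feature exists; the toggle hides it
  }
  return "daemon running";
}

console.log(runDaemon());
```

This is why flags of this kind read as a roadmap to anyone with the source: flipping a boolean is all that separates an unreleased capability from a shipped one.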
Project KAIROS: The most discussed discovery was a feature flag referenced over 150 times in the source, internally called "KAIROS." Analysis of the surrounding code suggests it represents an autonomous daemon mode — a capability that would allow Claude Code to operate as an always-on background agent, monitoring codebases, running scheduled tasks, and proactively making changes without explicit user prompts.
Internal Model Codenames: The source confirmed long-rumored internal naming conventions. "Capybara" maps to a Claude 4.6 variant, "Fennec" maps to Opus 4.6, and a previously unknown codename "Numbat" appears to reference a model still in testing.
Architecture Insights: Developers noted Claude Code's extensive use of tree-sitter for code parsing, a sophisticated caching layer for conversation context, and a multi-agent orchestration system that allows Claude Code to spawn sub-agents for complex tasks — all implemented in TypeScript running on Node.js.
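The multi-agent orchestration pattern noted above can be sketched in a few lines: a parent agent fans a complex task out to sub-agents running concurrently, then merges their results. Everything here is illustrative — the names, types, and structure are assumptions, not the leaked implementation:

```typescript
// Hypothetical orchestration sketch: a parent agent delegates tasks to
// sub-agents and awaits all of them. Not the actual leaked code.
type Task = { id: string; prompt: string };

async function runSubAgent(task: Task): Promise<string> {
  // Stand-in for a real model call by a spawned sub-agent.
  return `result:${task.id}`;
}

async function orchestrate(tasks: Task[]): Promise<string[]> {
  // Sub-agents run concurrently; the parent collects every result.
  return Promise.all(tasks.map(runSubAgent));
}

orchestrate([
  { id: "lint", prompt: "check style" },
  { id: "test", prompt: "run the suite" },
]).then((results) => console.log(results.join(", ")));
```

The appeal of the pattern is that each sub-agent gets its own context, so a long-running refactor or test run does not pollute the parent conversation.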
The DMCA Takedown Fiasco
Anthropic's legal response became a story in its own right. The company filed a DMCA takedown notice with GitHub to remove repositories containing the leaked source. According to GitHub's public records, the notice was executed against approximately 8,100 repositories — a staggering overreach that inadvertently took down legitimate forks of Anthropic's own officially open-sourced Claude Code repository.
Boris Cherny, Anthropic's head of Claude Code, publicly acknowledged the mistake and retracted the bulk of the takedown notices, limiting the action to one repository and 96 specific forks that contained the accidentally released source map.
The heavy-handed approach drew sharp criticism from the open-source community. Gergely Orosz, author of The Pragmatic Engineer, noted that a Python port of the leaked code appeared within 24 hours and is effectively DMCA-proof, since a clean-room rewrite in a different language constitutes a new creative work under copyright law.
The AI Copyright Question
The DMCA strategy raised a deeper legal question. Anthropic's own CEO Dario Amodei has publicly stated that significant portions of Claude Code were written by Claude itself. The DC Circuit held in March 2025 that AI-generated work does not carry automatic copyright protection. If substantial parts of the leaked codebase were authored by an AI model rather than human engineers, Anthropic's copyright claim over that code sits on uncertain legal ground.
This paradox — an AI company trying to assert copyright over code written by its own AI — has not been lost on legal commentators. The outcome could set precedent for the entire industry.
Concurrent Supply Chain Attack
Compounding the chaos, a malicious actor exploited the confusion around the leak to publish compromised versions of the axios HTTP library to npm. Anyone who installed or updated Claude Code between 00:21 and 03:29 UTC on March 31 may have pulled in axios versions 1.14.1 or 0.30.4, both of which contained a Remote Access Trojan (RAT). Anthropic and npm acted quickly to yank the malicious packages, but the incident highlighted how supply chain attacks can piggyback on moments of organizational disruption.
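For teams that installed during the exposure window, the practical remediation is a denylist check against the lockfile. The version numbers below are the ones named in the incident; the lockfile shape is simplified for illustration (real package-lock.json entries are keyed by path, e.g. node_modules/axios):

```typescript
// Flag any resolved dependency that matches a compromised release.
// Versions are from the incident report; everything else is a sketch.
const COMPROMISED: Record<string, string[]> = {
  axios: ["1.14.1", "0.30.4"],
};

interface LockEntry {
  version: string;
}

function findCompromised(
  packages: Record<string, LockEntry>,
): string[] {
  return Object.entries(packages)
    .filter(([name, entry]) =>
      (COMPROMISED[name] ?? []).includes(entry.version))
    .map(([name, entry]) => `${name}@${entry.version}`);
}

// Example against a minimal, synthetic lockfile fragment.
const hits = findCompromised({
  axios: { version: "1.14.1" },
  lodash: { version: "4.17.21" },
});
console.log(hits); // hits === ["axios@1.14.1"]
```

Pinning exact versions and verifying lockfile integrity in CI are the broader defenses; a denylist like this only catches attacks after they are publicly identified.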
Political Fallout
The leak drew attention from Washington. Representative Josh Gottheimer pressed Anthropic directly on the incident, requesting a detailed briefing on the company's source code management practices and safety protocols. The inquiry reflects growing Congressional scrutiny of AI companies' operational security, particularly as these tools become embedded in critical software infrastructure.
The Aftermath
Despite Anthropic's legal efforts, the practical reality is that the 512,000 lines of Claude Code's source are irreversibly public. Mirrors exist on platforms outside DMCA jurisdiction, the Python rewrite circulates freely, and the technical revelations — feature flags, KAIROS, model codenames — cannot be un-disclosed.
For Anthropic, the damage is primarily strategic rather than security-related. Competitors now have a detailed blueprint of Claude Code's architecture. For the broader developer community, the leak provided a rare, unfiltered look at how one of the world's most-used AI coding tools actually works under the hood. Whether that transparency ultimately helps or hurts Anthropic remains an open question.