On the morning of 31 March 2026, someone at Anthropic, the company behind the Claude AI assistant, made a mistake that developers have been warned about for years. A routine software release accidentally included a debug file that exposed the entire source code of Claude Code, the company's popular AI coding tool. Around 512,000 lines of code, never meant to be public, were suddenly sitting on a public software registry for anyone to download.
Anthropic acknowledged it quickly. They called it human error, pulled the release, and confirmed that no customer data or user credentials were involved. The kind of mistake that happens when a single configuration file is overlooked. Embarrassing, but contained.
What happened next was not Anthropic's mistake. It was someone else's decision.
What actually happened
Claude Code is a software tool that lets developers use Claude's AI capabilities directly from their computer's command line. Like most modern software, it is distributed through a public registry — in this case npm, where developers download packages to build their applications.
Version 2.1.88 of Claude Code was published in the early hours of 31 March. Included by accident was a source map file, a type of debug file that maps minified production code back to the original readable source. It is the software equivalent of accidentally attaching your working notes to a finished document before sending it out. A security researcher spotted the file within hours and shared the discovery publicly. Tens of thousands of developers downloaded and mirrored it across GitHub before the release could be pulled.
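For readers unfamiliar with source maps, here is roughly what that debug file involves. The file names below are illustrative, not Anthropic's actual ones; the mechanism is standard to the source map format:

```js
// The last line of a typical minified production bundle is a comment
// pointing to its source map (illustrative file name):
//# sourceMappingURL=cli.js.map

// The .map file itself is JSON. If the build embeds sources, its
// optional "sourcesContent" array carries the original files verbatim:
// {
//   "version": 3,
//   "sources": ["src/cli.ts"],
//   "sourcesContent": ["...the full, readable original source..."],
//   "mappings": "AAAA,..."
// }
```

When `sourcesContent` is populated, shipping that one `.map` file is, in effect, shipping the readable source, which is how a single debug artifact can expose hundreds of thousands of lines of code.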
Anthropic confirmed the cause: a misconfigured build file meant the debug file was included in the public release when it should have been excluded. The company described it as a release packaging issue caused by human error. No model weights, safety systems, or user data were in the leak — just source code that competitors and researchers were not meant to see.
Who was affected, and how?
Anthropic's own users were not directly harmed by the original leak. No passwords, payment details, or personal data were exposed. The damage to Anthropic was reputational and competitive — proprietary code, internal feature flags, and product roadmap details now visible to anyone who wanted them.
The people who were harmed were those who went looking for the leaked code and downloaded it from the wrong place.
Within a short time of the leak becoming public, criminals had created a fake GitHub repository designed to look like an unofficial mirror of the leaked Claude Code source. It ranked near the top of the search results that anyone hunting for the leaked code would see, and its hundreds of forks and stars gave it the appearance of legitimacy. Its README even claimed the code had been rebuilt into a working fork with enterprise features unlocked and message limits removed, exactly what a curious developer might want to find.
Anyone who downloaded the repository and ran the file inside received two pieces of malware alongside it. The first was Vidar, a credential stealer that collects saved passwords, browser history, and payment card data. The second was GhostSocks, which quietly turns an infected computer into a relay point for criminal network traffic, using the victim's machine and internet connection to mask the attackers' real location.
What the headlines got wrong
Some coverage framed this as Anthropic being hacked, or as a security breach that put users at risk. Neither is accurate.
The original leak was human error, not a breach. No attacker got into Anthropic's systems. No user data was taken. The code that leaked was proprietary software: valuable as intellectual property, but not a risk to the people who use Claude.
The malware campaign was a completely separate act, by completely separate people, who chose to exploit the public attention around the leak. The two stories share a timeline but not a cause. Anthropic made a mistake. Criminals made a decision.
Why does this kind of thing happen?
Accidental leaks of this kind are more common than most people realise, and the mechanism is almost always the same: a build configuration file that was supposed to exclude certain files from a public release was misconfigured, incomplete, or not updated after a tooling change. One overlooked line in one file. That is all it takes.
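To make that concrete: npm decides what goes into a published package either from a `files` allowlist in package.json or from exclusion patterns in a `.npmignore` file. Anthropic's actual build configuration has not been published, so the sketch below is purely illustrative of the mechanism:

```
# .npmignore: illustrative exclusion rules for a published package
src/
test/
*.tsbuildinfo
# If this one pattern is missing, misspelled, or lost in a tooling
# change, every source map ships with the public release:
*.map
```

The safer design is the allowlist: a `files` field in package.json that names only what should ship, so a forgotten rule fails closed rather than open.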
What makes this incident instructive is what followed. The moment something becomes publicly notable — a leaked file, a high-profile breach, a well-known tool — criminals move fast. The same two-malware combination used in the fake Claude Code repository had already been deployed weeks earlier in a fake installer for a different AI tool. The lure was new. The trap was ready and waiting.
This is a pattern worth understanding. Criminals do not need to create the story. They just need to attach themselves to it. A leaked file, a news event, a product launch — anything that makes people search for something urgently creates an opportunity. The urgency lowers people's guard. The official-looking repository, the promising README, the high fork count — all of it is designed to make the trap look like the solution.
What does this mean for me?
If you use Claude Code, update it through official channels only — Anthropic's own website or the official installer. Do not download it from GitHub repositories claiming to offer the leaked source, unlocked features, or anything other than the official release.
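Concretely, the official distribution is the npm package `@anthropic-ai/claude-code` (worth verifying against Anthropic's own documentation); updating looks like this, and anything that involves cloning a GitHub mirror is the wrong path:

```sh
# Update Claude Code from the official npm registry entry
npm install -g @anthropic-ai/claude-code@latest
```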
For anyone curious about leaked or unofficial software more broadly: the period immediately after a high-profile leak is exactly when criminals are most active. Fake mirrors, trojanised forks, and unofficial builds are most dangerous in the first hours and days, when search results are full of them and people are actively looking. If something is not from the official source, treat it as untrusted regardless of how many stars or forks it has — those can be manufactured or accumulated by curious people who did not check what they were downloading.
The distinction matters: the leak itself was an accident. The malware campaign was a choice. Two very different kinds of human behaviour, happening in the same moment, producing very different kinds of harm. The first was a mistake that a checklist could have prevented. The second required someone to deliberately decide to use other people's curiosity against them.
The broader lesson
There is a recurring theme in the stories we have covered over recent weeks. The supply chain attack on litellm started with a misconfigured security scanner. The M&S breach started with a phone call to a helpdesk. The Stryker attack rippled into NHS hospitals through a supplier's logistics systems. And now a misconfigured build file at one of the world's most prominent AI companies created the conditions for a criminal campaign within hours.
In each case, technology is not the villain. People are — sometimes through error, sometimes through deliberate choice. The accidental leak and the deliberate trap are two ends of the same human story.
The lesson for organisations is the same one that keeps appearing: build processes matter. A checklist that verifies what is included in a software release would have prevented the original leak. The lesson for individuals is equally consistent: when something is in the news and you go looking for it, you are doing exactly what criminals are counting on.
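For release engineers, that checklist can be as small as one command. npm will list exactly what a tarball would contain without publishing anything, which makes an automated pre-publish gate straightforward; the gate below is our sketch, not a standard tool:

```sh
# List every file that would be included in the published package
npm pack --dry-run

# A minimal CI gate: refuse to publish if a source map is about to ship
if npm pack --dry-run 2>&1 | grep -q '\.map$'; then
  echo "Refusing to publish: source maps found in package contents" >&2
  exit 1
fi
```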
🧠 The Human Factor
| | |
| --- | --- |
| Technology involved | npm software registry, source map debug files, GitHub repositories, and malware payloads (Vidar credential stealer and GhostSocks proxy tool) |
| Root cause | Two separate human decisions: first, a build configuration oversight that accidentally exposed proprietary source code; second, criminals choosing to weaponise public curiosity about the leak by creating convincing fake repositories |
| What was at risk | For Anthropic: proprietary code and product roadmap. For people who downloaded the fake repository: saved passwords, browser history, payment card data, and the use of their machine as criminal proxy infrastructure |
| Prevention | Build pipeline checklists that verify what is included in public releases; downloading software only from official verified sources; treating unofficial mirrors with extreme caution, particularly in the immediate aftermath of a public leak or news event |
References and sources
- Anthropic statement confirming the leak as human error — via The Register (31 March 2026)
- Zscaler ThreatLabz analysis of the trojanised Claude Code repository — zscaler.com (2 April 2026)
- Fake Claude Code source downloads actually delivered malware — The Register (2 April 2026)
- Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms — The Hacker News (April 2026)
Last updated: 5 April 2026. We update breaking stories as new information becomes available.