Anthropic, the AI company behind the widely used Claude Code, has suffered a significant data leak: human error during an npm packaging process exposed roughly 512,000 lines of internal source code across nearly 2,000 files.
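Leaks of this kind typically happen when a published npm tarball bundles files the maintainers never intended to ship. The report does not say exactly what went wrong in Anthropic's pipeline, but a common safeguard, sketched below as an assumption rather than a description of their setup, is an explicit `files` allowlist in `package.json` so that only named paths end up in the published package:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ],
  "main": "dist/index.js"
}
```

With an allowlist in place, internal sources outside `dist/` are excluded by default, and running `npm pack --dry-run` before publishing prints the exact file list that would go into the tarball, making an accidental inclusion visible before release.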
The incident, the company's second leak in a short span, has sparked serious security concerns and puts its reputation for prioritizing safety at risk.
The exposed code may give competitors a rare look at Claude Code's inner workings, potentially undermining Anthropic's market position.
The leaked code also references unreleased features, including a "Tamagotchi" element and a "Proactive" mode, hinting at upcoming additions to the tool.
While Anthropic says no customer data was compromised, the company now faces the task of managing the fallout and restoring user trust.
The leak raises broader questions about the security practices of AI companies and underscores how difficult it is to safeguard sensitive information in a fast-moving industry.