Anthropic Is Learning That There Are No Take-backs On The Internet
illustration by Michael M. Santiago/Getty Images
- This post originally appeared in the Business Insider Today newsletter.
- You can sign up for Business Insider's daily newsletter here.
AI giant Anthropic accidentally exposed some of Claude Code's internal source code, and developers quickly noticed.
BI's Henry Chandonnet and Brent D. Griffiths have the play-by-play on how it all happened.
With so much going on (and at stake), let's unpack it all:
Give me a quick summary of the Claude Code leak. Anthropic accidentally leaked some of its source code for its popular AI coding tool during an update. And before you get any ideas about rogue AI or a hack, a spokesperson told BI it was "human error, not a security breach."
How bad was the leak? Customer data and parts of the core model (the secret sauce) weren't leaked. (Good!) But it's giving rivals a look at Anthropic's product roadmap. (Bad!)
Don't just take my word for it. I asked the big three chatbots — ChatGPT, Gemini, and Claude — for their takes. ChatGPT described it as a "meaningful but not catastrophic leak." Gemini said it was a "major reputational and intellectual property blow" but "low-risk event for users." Claude said it was a "moderately significant incident."
That doesn't sound so bad. True, but Anthropic has used its safety pledge to set itself apart in a crowded, competitive field. That makes this a tough look, and it's arguably not the worst of it.
It gets worse?? Anthropic is still feuding with the Pentagon. After the company refused certain military uses of its AI, the Defense Department labeled it a "supply chain risk," effectively blacklisting it from some federal work. The AI giant responded by suing, and a federal judge temporarily blocked the designation. This is bound to add more fuel to the fire.
How has Anthropic responded to the leak? Not great, considering it's trying to delete something off the internet, which … good luck. It has issued a copyright takedown request for copies of the code on GitHub, but developers have stayed a step ahead by reposting it in different programming languages. An AI giant upset about others using its proprietary data is also just a tad ironic.
What did we actually learn from the leak? Building a powerful AI system is about a lot more than just having a strong model. The leaked code shows the work Anthropic has put into building a robust framework around the model so it runs smoothly. Think of it like a race car: A powerful engine is crucial, but it only wins if you build the rest of the car to get the most out of it.
Is there a bright side for Anthropic in all this? It could be the next Supreme Court Justice?