
Irony passed away quietly this week while Anthropic CEO Dario Amodei was in Australia lecturing local politicians on the importance of trust and safety in the artificial intelligence age.
The source code for Anthropic’s AI chatbot, Claude, leaked.
It was apparently “caused by human error, not a security breach”, the company said, reassuring everyone that trust and safety are in good hands with the tech giants in charge.
But it did sound a little like someone walked out of Fort Knox, while telling everyone how safe their gold was, and left the front door open.
The code quickly found its way to GitHub and the curious were suddenly trawling the more than 2,000 files containing 512,000 lines of code.
So developers did something very Anthropic – gobbled it up and began sharing it with others, often in reworked versions.
Anthropic’s response was a “come to copyright” moment. The company began issuing takedown notices to GitHub, using the Clinton-era Digital Millennium Copyright Act (DMCA).
“We issued a DMCA takedown against one repository hosting leaked Claude Code source code and its forks,” the company said.
All’s fair use in love and AI
It seems that to Anthropic, one man’s AI meat is another’s poison, all of a sudden.
Just two years ago Amodei was using the “fair use” argument to avoid copyright restrictions in a New York Times interview.
“I think everyone agrees the models shouldn’t be verbatim outputting copyrighted content. For things that are available on the web, for publicly available, our position — and I think there’s a strong case for it — is that the training process, again, we don’t think it’s just hoovering up content and spitting it out, or it shouldn’t be spitting it out,” he said.
“It’s really much more like the process of how a human learns from experiences. And so, our position that that is sufficiently transformative, and I think the law will back this up, that this is fair use.”
In that respect he was right. In June 2025, a US court found in Anthropic’s favour, concluding that while it was trained on copyrighted books, the generative AI was “quintessentially transformative” and qualified as copyright “fair use”.
But the judge also concluded that Anthropic deliberately chose to “steal” books rather than seek licensing agreements because Amodei considered doing things the proper way a “legal/practice/business slog”.
TL;DR: CBF. Just do it, and to hell with the consequences. I’m a billionaire and it’s cheaper this way.
His cofounder, Ben Mann, downloaded nearly 200,000 books in 2021, knowing they were unauthorised copies – “that is, pirated”, the judge said.
Three months later Anthropic agreed to settle a class action involving more than 500,000 authors for US$1.5 billion — the largest copyright settlement in US history. That’s around US$3,000 an author – and the total cost is around 20% of Amodei’s estimated $7bn net worth.
The ‘arrangement’
His message to Canberra’s legislators about copyright was conciliatory. His shtick nowadays is a combination of have your AI cake and eat it too, with a few dark clouds on the horizon amid the goodies and baddies – and guess which side Anthropic’s on.
“On copyright, in particular, I know there has been a robust debate in this country, and we’re not here to try and convince you to change your mind on this,” Amodei told the politicians.
“I do think rights holders do have legitimate claims here. We’re more here to talk about how can we arrive at an arrangement that works for everyone and leaves everyone better off.”
It’s worth recalling that Tech Council of Australia chair Scott Farquhar said last August, in a discussion about copyright, that he was fine with people using Atlassian’s IP without paying for it if we all get better software.
So surely if I get my hands on Anthropic’s Claude code and rework it in a “quintessentially transformative” way, I get to keep the puppy, don’t I, Dario?
I’ll call him Fair Use. And even keep the front gate shut so he doesn’t get out and start humping the neighbour’s AI.




