To be very clear on this point - this is not related to model training.
It’s important in the fair use assessment to understand that the training itself is fair use; the pirating of the books is the issue at hand here, and it is what Anthropic “whoopsied” its way into when acquiring the training data.
Buying used copies of books, scanning them, and training on it is fine.
Rainbows End was prescient in many ways.
> Buying used copies of books, scanning them, and training on it is fine.
But nobody was ever going to do that, not when there are billions in VC dollars at stake for whoever moves fastest. Everybody will simply risk the fine, which tends not to be anywhere close to large enough to have a deterrent effect.
That is like saying Uber would not have had any problems if they had just entered into licensing contracts with taxi medallion holders. It was faster to put unlicensed taxis on the streets and use investor money to pay fines and lobby for favorable legislation. In the same way, it was faster for Anthropic to load up its models with un-DRM'd PDFs and EPUBs from wherever instead of licensing them publisher by publisher.
> Rainbows End was prescient in many ways.
Agreed. Great book for those looking for a read: https://www.goodreads.com/book/show/102439.Rainbows_End
The author, Vernor Vinge, is also responsible for popularizing the term 'singularity'.
I think the jury is still out on how fair use applies to AI. Fair use was not designed for what we have now.
I could read a book, but it's highly unlikely I could regurgitate it, much less months or years later. An LLM, however, can. While we can say "training is like reading", it's also not like reading at all due to permanent, perfect recall.
Not only does an LLM have perfect recall, it also has the ability to distribute plagiarized ideas at a scale no human can. There are a lot of questions to be answered about where fair use starts and ends for these LLM products.
To be even more clear - this is a settlement, it does not establish precedent, nor admit wrongdoing. This does not establish that training is fair use, nor that scanning books is fine. That's somebody else's battle.
> Buying used copies of books
It remains deranged.
Everyone has more than a right to freely read everything that is stored in a library.
(Edit: in fact I initially wrote 'is supposed to' in place of 'has more than a right to', meaning that "knowledge is there, we made it available: you are supposed to access it, with the fullest encouragement".)
I wonder what Aaron Swartz would think if he lived to see the era of libgen.
I don't believe that's true. Most work I've read on fair use suggests the use has to be a small amount, selectively used, substantially transformed, and not compete with the content creators. These AIs' training is the opposite of all that. I was surprised by a ruling like this, but Alsup is a unique judge.
Additionally, sharing copyrighted works without permission... the data sets or data lakes... is its own tort. You're liable just for sharing copies, before any training happens. Some copyrighted works are also commercial, copyrighted with a ban on others' commercial use, or patented. Some are NDA'd but leaked by third parties. Sources like Common Crawl probably contain plenty of such content.
Additionally, there are often contractual terms of use governing access to the content. Even Singapore's and other countries' laws allowing training on copyrighted content usually require that you lawfully accessed that content in the first place. The terms of use are the weakest link there.
I'd like to see these two issues turned by law into a copyright exception that no contract can override. It needs to specifically allow sharing scraped, publicly visible content: anything you can just view or download that the copyright owner put up. The law might impose or allow limits on daily scraping quantity, volume, etc., to avoid the damage scrapers are doing.
Google scanned many books quite a while ago, probably way more than LibGen. Are they good to use them for training?
> pirating of the books is the issue
I have an author friend who felt like this was just adding insult to injury.
So not only had his work been consumed into this machine that is being used to threaten his day job as a court reporter, not only was that done without seeking his permission in any way, but they didn’t even pay for a single copy.
Really embodies raising your middle finger to the little guy while you steamroll him.
Yes, the ruling was a massive win for generative AI companies.
The settlement was a smart decision by Anthropic to remove a huge uncertainty. $1.5B is not small, but it won't stop them or slow them significantly.
> It’s important in the fair use assessment to understand that the training itself is fair use
IIUC this is very far from settled, at least in US law.
The Librareome project was about simply scanning books, not training AI with them. And it was a matter of trying to stop corporations from literally destroying the physical books in the process. I don't know that this is applicable.
This is excellent news because it means that folks who pay for printed books and scan them also can train with their content. It's been said already that we've already trained on "the entire (public) internet." Printed books still hold a wealth of knowledge that could be useful in training models. And cheap, otherwise unwanted copies make great fodder for "destructive" scanning where you cut the spine off and feed it to a page scanner. There are online services that offer just that.
> It’s important in the fair use assessment to understand that the training itself is fair use
Is this completely settled legally? It is not obvious to me it would be so
> Buying used copies of books, scanning them, and training on it is fine.
Awesome, so I just need enough perceptrons to overfit every possible copyrighted work then?
It should not be fine to train on them, because you are creating derivative works, exactly like when you deal with music.
> It’s important in the fair use assessment to understand that the training itself is fair use,
I think that this is a distinction many people miss.
If you take all the works of Shakespeare and reduce them to tokens and vectors, is it Shakespeare, or is it factual information about Shakespeare? It is the latter, and as much as organizations like the MLB might want to be able to copyright a fact, you simply cannot do that.
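To make that concrete, here is a minimal sketch (in Python) of what "reduce to tokens and vectors" means. The vocabulary and embedding values below are invented toys, not any real model's:

    # Toy sketch: text -> token ids -> vectors. Real models use learned
    # subword vocabularies and huge embedding matrices; these values are made up.
    vocab = {"to": 0, "be": 1, "or": 2, "not": 3}
    embedding = {0: [0.12, -0.40], 1: [0.88, 0.05],
                 2: [-0.33, 0.27], 3: [0.51, -0.19]}

    text = "to be or not to be"
    token_ids = [vocab[word] for word in text.split()]  # [0, 1, 2, 3, 0, 1]
    vectors = [embedding[i] for i in token_ids]
    # What training operates on is this numeric soup, not the prose itself.
    print(token_ids)
    print(vectors[0])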
Take this one step further. If you buy the work and vectorize it, that's fine. But if you feed in the vectors for Harry Potter so many times that the model can reproduce half of the book, it becomes a problem when it spits out that copy.
And what about all the other stuff that LLMs spit out? Who owns that? Well, at present, no one. If you train a monkey or an elephant to paint, you can't copyright that work because they aren't human, and neither is an LLM.
If you use an LLM to generate your code at work, can you leave with that code when you quit? Does GPLv3 or something like the Elasticsearch license even apply if there is no copyright?
I suspect we're going to be talking about court cases a lot for the next few years.
> the training itself is fair use
Sure, training by itself isn't worth anything.
Distributing and collecting payment for the usage of a trained model which may violate copyright, etc. that's still an open legal question and worth billions as well.
The RIAA should step in and get the money that publishers deserve. Talking millions per book and extra to make sure the pirates learned their lesson. And prison for the management.
I keep thinking: if they had bought ebooks, would that be fine, or is this required to be paper books? If it doesn't work with ebooks, the world is going to be a nightmare.
Meta pirated basically all the books in Anna's Archive, but if I remember correctly they just whispered a quiet sorry and it ended at that. Why aren't they also being asked to pay?
Nevertheless, a crime is a crime.
I'm so over this shift in America's business model.
Original Silicon Valley model, and generally the engine of American innovation/growth/wealth equality for 200 years: Come up with a cool technology, build it in your garage, get people to fund it and sell it because it's a better mousetrap.
New model: Still come up with a cool idea, still get it funded and sold, but the idea involves committing crime at a staggering scale (Uber, Google, AirBnB, all AI companies, long list here), and then paying your way out of the consequences later.
Look, some of these laws may have sucked, but having billionaires organize a private entity that systematically breaks them and gets off with a slap on the wrist is not the solution. For one thing, if innovation requires breaking the law, only the rich will be able to innovate, because only they can pay their way out of the law. For another, obviously no one should be able to pay their way out of following the law! This is basic "foundations of society" stuff that the vast majority of humans agree on in terms of what feels fair and just, and what doesn't.
Go to a country which has really serious corruption problems, like is really high on the corruption index, and ask the people there what they think about it. I mean I live in one and have visited many others so I can tell you, they all hate it. It not only makes them unhappy, it fills them with hopelessness about their future. They don't believe that anything can ever get better, they don't believe they can succeed by being good, they believe their own life is doomed to an unappealing fate because of when and where they were born, and they have no agency to change it. 25 years ago they all wanted to move to America, because the absence of that crushing level of corruption was what "the land of opportunity" meant. Now not so much, because America is becoming more like their country.
This timeline ends poorly for all of us, even the corrupt rich who profit from it, because in the future America will be more like a Latin American banana republic where they won't be able to leave their compounds for fear of getting Luigi'ed. We normal people get poverty, they get fear and death, everyone loses. The social contract is collapsing in front of our eyes.
Then shouldn’t they be liable for at least 25 times this amount?
Yes, but the cat is out of the bag now. Welcome to the era of every piece of creative work coming with an EULA that you cannot train on it. It will be like clearing samples.
Has it been decided that training models is fair use? Has it been decided in all jurisdictions?
You can't grab pirated stuff and then hope fair use magically sanitizes it
Wdym Rainbows End was prescient?
Do they actually need to scan the book?
Or can they buy the book, and then use the pirated copy?
It is related to scalable model training, however. Chopping the spine off books and putting the pages in an automated scanner is not scalable. And don't forget the cost of 1) finding, 2) purchasing, 3) processing, and 4) recycling that volume of books.
It's not settled whether AI training is fair use.
Okay, so the blame for the offense was laundered...
Paying $3,000 for pirating a ~$30 book seems disproportionate.
Thanks for the reminder that what the Internet Archive did in its case would have been legal if it was in service of an LLM.
I guess they must delete all models since they acquired the source illegally and benefitted from it, right? Otherwise it just encourages others to keep going and pay the fines later.
> Buying used copies of books, scanning them, and training on it is fine.
Buying used copies of books, scanning them, and printing them and selling them: not fair use
Buying used copies of books, scanning them, and making merchandise and selling it: not fair use
The idea that training models is considered fair use just because you bought the work is naive. Fair use is not a law to leave open usage as long as it doesn’t fit a given description. It’s a law that specifically allows certain usages like criticism, comment, news reporting, teaching, scholarship, or research. Training AI models for purposes other than purely academic fits into none of these.
Settlement Terms (from the case pdf)
1. A Settlement Fund of at least $1.5 Billion: Anthropic has agreed to pay a minimum of $1.5 billion into a non-reversionary fund for the class members. With an estimated 500,000 copyrighted works in the class, this would amount to an approximate gross payment of $3,000 per work. If the final list of works exceeds 500,000, Anthropic will add $3,000 for each additional work (a quick sketch of this formula follows the list).
2. Destruction of Datasets: Anthropic has committed to destroying the datasets it acquired from LibGen and PiLiMi, subject to any legal preservation requirements.
3. Limited Release of Claims: The settlement releases Anthropic only from past claims of infringement related to the works on the official "Works List" up to August 25, 2025. It does not cover any potential future infringements or any claims, past or future, related to infringing outputs generated by Anthropic's AI models.
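A quick sketch of the payout formula described in point 1, assuming only the $1.5B floor and the $3,000 per-work figure from the summary above:

    # Settlement payout per point 1: a $1.5B non-reversionary floor,
    # plus $3,000 for each work beyond the estimated 500,000.
    def settlement_total(n_works, floor=1_500_000_000,
                         per_work=3_000, base_count=500_000):
        return floor + per_work * max(0, n_works - base_count)

    print(settlement_total(500_000))  # 1,500,000,000 -> ~$3,000 gross per work
    print(settlement_total(600_000))  # 1,800,000,000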
Don't forget: NO LEGAL PRECEDENT! Which means anybody suing has to start all over. You only settle at this point if you think you'll lose.
Edit: I'll get ratio'd for this, but it's the exact same thing Google did in its lawsuit with Epic. They delayed while the public and courts focused on Apple (oohh, EVIL Apple). Apple lost, and Google settled at a disadvantage before there was a legal judgment that couldn't be challenged later.
So they can also keep models trained on the datasets? That seems pretty big too, unless the half life of models is so low it doesn't matter.
So... it would be a lot cheaper to just buy all of the books?
Thank you. I assumed it would be quicker to find the link to the case PDF here, but your summary is appreciated!
Indeed, it is not only the payout but also the destruction of the datasets. Although the article does quote:
> “Anthropic says it did not even use these pirated works,” he said. “If some other generative A.I. company took data from pirated source and used it to train on and commercialized it, the potential liability is enormous. It will shake the industry — no doubt in my mind.”
Even if true, I wonder how many cases we will see in the near future.
Only 500,000 copyrighted works?
I was under the impression they had downloaded millions of books.
I’m an author, can I get in on this?
If you are an author here are a couple of relevant links:
You can search LibGen by author to see if your work is included. I believe this would make you a member of the class: https://www.theatlantic.com/technology/archive/2025/03/searc...
If you are a member of the class (or think you are) you can submit your contact information to the plaintiff's attorneys here: https://www.anthropiccopyrightsettlement.com/
It’s pretty incredible that the vast majority of authors will make more money for their books from this settlement than they ever have from selling their books.
Thank you for posting this!
I suspected my work was in the dataset and it looks like it is! I reached out via the form.
Wild - I searched my name out of curiosity and my PhD research papers turned up. Worth submitting my contact details I guess
Thank you! I hadn't even thought that I could be affected, but I have written some programming books, and some of them show up on libgen. I've submitted my contact info, maybe something will come out of this...
wow, I found 8 of my books!
I can't help but feel like this is a huge win for Chinese AI. Western companies are going to be limited in the amount of data they can collect and train on, and Chinese (or any foreign AI) is going to have access to much more and much better data.
The West can end the endless pain and legal hurdles to innovation by limiting copyright terms. It can do so if there is the will to open the gates of information to everyone. The duration of 70 years after the death of the author, or 90 years for companies, is excessively long. It should be ~25 years. For software it should be 10 years.
And if AI companies want recent stuff, they need to pay the owners.
However, the West wants to infinitely enrich the lucky old people and companies who benefited from the lax regulations at the start of the 20th century. Its people chose not to let the current generations acquire equivalent wealth, at least not without the old hags getting their cut too.
I think western companies will be just fine -- Anthropic is settling because they illegally pirated books from LibGen back in 2021 and subsequently trained models on them. They realized this was an issue internally and pivoted to buying books en masse and scanning them into digital formats, destroying the original copies in the process (they actually hired a former lead of the Google Books project to help them in this endeavor!). And a federal judge ruled a couple months ago that training on these legally-acquired scanned copies does constitute fair use -- that the LLM training process is sufficiently transformative.
So the data/copyright issue that you might be worried about is actually completely solved already! Anthropic is just paying a settlement here for the illegal pirating that they did way in the past. Anthropic is allowed to train on books that they legally acquire.
And sure, Chinese AI companies could probably scrape from LibGen just like Anthropic did without getting in hot water, and potentially access a bit more data that way for cheap, but it doesn't really seem like the buying/scanning process really costs that much in the grand scheme of things. And Anthropic likely already has legally acquired most of the useful texts on LibGen and scanned them into its internal library anyways.
(Furthermore, the scanning setup might actually give Anthropic an advantage, as they're able to digitize more niche texts that might be hard to find outside of print form)
This isn’t a race to the bottom. They could have bought these books instead of pirating them.
It's naive to think Chinese models have a free pass. Local censorship, language/data biases, and export restrictions cut both ways.
But most marginal training of Anthropic, OpenAI and Google models is done on LLM paraphrased user data on those platforms. That user data is proprietary and obviously way more valuable than random books.
Good. If AI is such a great thing, why wouldn't we want the 1.4 billion people of China to have it also?
True enough, but training on synthetic data now seems to be pushing SOTA.
Wait so they raised all that money just to give it to publishers?
Can only imagine the pitch: yes, please give us billions of dollars. We are going to make a huge investment, like paying off our lawsuits.
From the article:
> Although the payment is enormous, it is small compared with the amount of money that Anthropic has raised in recent years. This month, the start-up announced that it had agreed to a deal that brings an additional $13 billion into Anthropic’s coffers. The start-up has raised a total of more than $27 billion since its founding in 2021.
Isn't that how the whole system operates? Everyone is a conduit to allow rich people to enrich themselves further. The amount and quality of opportunities any individual receives are proportional to how well it serves existing capital.
So long as there is an excuse to justify money flows, that's fine, big capital doesn't really care about the excuse; so long as the excuse is just persuasive enough to satisfy the regulators and the judges.
Money flows happen independently, then later, people try to come up with good narratives. This is exactly what happened in this case. They paid the authors a lot of money as a settlement and agreed on a narrative which works for both sets of people; that training was fine, it's the pirating which was a problem...
It's likely why they settled; they preferred to pay a lot of money and agree on some false narrative which works for both groups rather than setting a precedent that AI training on copyrighted material is illegal; that would be the biggest loss for them.
You're joking, but that's actually a good pitch. There was a significant legal issue hanging over their heads, with some risk of a potentially business-ending judgment down the line. This makes it go away, which makes the company a safer, more valuable investment. Both in absolute terms and compared to peers who didn't settle.
They wanted to move fast and break things. No one made them.
Everything talks about a settlement with the 'authors'; is that meant to be shorthand for copyright holders? Because there are a lot of academic works in that library where the publisher holds exclusive copyright and the author holds nothing.
By extension, if the big publishers are getting $3000 per article, that could be a fairly significant windfall.
very unsurprisingly, the New York Times is going to frame this as a win for "the little guy", when in reality it's just multi-billion-dollar publishers, with a long, rich history of their own exploitative practices, hanging on for dear life against generative AI
Dunno if this matters, but I thought the copyright always remains with the creator/author, and they end up assigning the rights contractually. At least generally for books. Movies will be copyrighted by the studio.
Kinda like how patents will state the human “inventor” but Apple or whichever corp is assigned the rights.
After their recent change in tune to retain data for longer and to train on our data, I deleted my account.
Try to do that. There is no easy way to delete your account. You need to reach out to their support via email. Incredibly obnoxious dark pattern. I hate OpenAI, but everything with Anthropic also smells fishy.
We need more and better players. I hope that XAi will give them all some good competition, but I have my doubts.
X doesn't smell fishy to you??
I think it's fair to call out a dark pattern for account deletion (which, for better or worse, is common practice) - but the data training and data retention thing can both be disabled...I was much more surprised that they DIDN'T train on data as long as they did, when every other LLM provider was sucking in as much data as they could (OpenAI, Google, Meta, and xAI - although Meta gets a pass for providing the open-weight models in my head).
Anthropic has made AI safety a central pillar of their ethos and have shared a lot of information about what they're doing to responsibly train models...personally I found a lot of corporate-speak on this topic from OpenAI, but very little information.
Their logo is a butt hole.
This is sad for open source AI, piracy for the purpose of model training should also be fair use because otherwise only the big companies who can afford to pay off publishers like Anthropic will be able to do so. There is no way to buy billions of books just for model training, it simply can't happen.
Fair use isn't about how you access the material; it's about what you can do with it after you legally access it. If you don't legally access it, the question of fair use is moot.
This implies training models is some sort of right.
I wonder how much it would cost to buy every book that you'd want to train a model on.
I don't know if I agree with it, but you could argue that if a model was built for purely academic purposes, and then used for purely academic purposes, it could meet requirements for fair use.
Setting aside whether or not I think it should be fair use, you’re only going to be training a new foundation model these days if you have billions of dollars to spend on the endeavor anyway. Nobody is training Llama 5 in their garage.
(Half joking but) I wonder if musicians need to worry if they learned to play by listening to cassette mixtapes.
This is a settlement. It does not set a precedent nor even admit to wrongdoing.
> otherwise only the big companies who can afford to pay off publishers like Anthropic will be able to do so
Only well funded companies can afford to hire a lot of expensive engineers and train AI models on hundreds of thousands of expensive GPUs, too.
Something tells me many of the grassroots LLM training people are less concerned about the legality of their source training set than the big companies anyway.
I wish the hn rules were more flexible because I would write the best comment to you right now.
See kids? It's okay to steal if you steal more money than the fine costs.
They're paying $3000 per book. It would've been a lot cheaper to buy the books (which is what they actually did end up doing too).
That metaphor doesn't really work. It's a settlement, not a punishment, and this is payment, not a fine. Legally it's more like "The store wasn't open, so I took the items from the lot and paid them later".
It's not the way we expect people to do business under normal circumstances, but in new markets with new products? I guess I don't see much actually wrong with this. Authors still get paid a price they were willing to accept, and Anthropic didn't need to wait years to come to an agreement (again, publishers weren't actually selling what AI companies needed to buy!) before training their LLMs.
The Silicon Valley dream: If you’re not getting sued left and right by people with every right to, you didn’t disrupt hard enough.
After the book publishers burned Google Books' Library of Alexandria, they are now making it impossible to train an LLM unless you engage in the medieval process of manually buying paper copies of works just to scan and destroy them...
If they wanted a copyright-free world, maybe they should publish all their models as copyright-free as well. But they are not doing that, are they?
There are nondestructive methods of scanning. I bought an edge scanner to scan collectible public domain books for Project Gutenberg.
for recent books, they could buy digital versions of the books and use them for training, though.
It will be interesting to see how this impacts the lawsuits against OpenAI, Meta, and Microsoft. Will they quickly try to settle for billions as well?
It’s not precedent setting but surely it’ll have an impact.
I’m sure this’ll be misreported and wilfully misinterpreted because of the current fractious state of the AI discourse, but given the lawsuit was to do with piracy, not the copyright-compliance of LLMs, and in any case, given they settled out of court, thus presumably admit no wrongdoing, conveniently no legal precedent is established either way.
I would not be surprised if investors made their last round of funding contingent on settling this matter out of court precisely to ensure no precedents are set.
Anthropic certainly seems to be hoping that their competitors will have to face some consequences too:
>During a deposition, a founder of Anthropic, Ben Mann, testified that he also downloaded the Library Genesis data set when he was working for OpenAI in 2019 and assumed this was “fair use” of the material.
Per the NYT article, Anthropic started buying physical books in bulk and scanning them for their training data, and they assert that no pirated materials were ever used in public models. I wonder if OpenAI can say the same.
Maybe, though this lawsuit is different with respect to the piracy issue. Anthropic is paying the settlement because they pirated the books, not because training on copyrighted books isn't fair use; that piracy element isn't necessarily present in the other cases.
That was my first thought. While not legal precedent, it does sort of open the flood gates for others.
One thing that comes to mind is...
Is there a way to make your content on the web "licensed" in a way where it is only free for human consumption?
I.e. effectively making the use of AI crawlers pirating, thus subject to the same kind of penalties here?
Yes to the first part. Put your site behind a login wall that requires users to sign a contract to that effect before serving them the content... get a lawyer to write that contract. Don't rely on copyright.
I'm not sure to what extent you can specify damages like these in a contract, ask the lawyer who is writing it.
I'd argue you don't actually want this! You're suggesting companies should be able to make web scraping illegal.
That curl script you use to automate some task could become infringing.
Maybe some kind of captcha-like system could be devised that could be considered a security measure under the DMCA and not allowed to be circumvented. Make the same content available under a licence fee through an API.
I'm sure one can try, but copyright has all kinds of oddities and carve-outs that make this complicated. IANAL, but I'm fairly certain that, for example, if you tried putting in your content license "Free for all uses public and private, except academia, screw that ivory tower..." that's a sentiment you can express but universities are under no obligation legally to respect your wish to not have your work included in a course presentation on "wild things people put in licenses." Similarly, since the court has found that training an LLM on works is transformative, a license that says "You may use this for other things but not to train an LLM" couldn't be any more enforceable than a musician saying "You may listen to my work as a whole unit but God help you if I find out you sampled it into any of that awful 'rap music' I keep hearing about..."
The purpose of the copyright protections is to promote "sciences and useful arts," and the public utility of allowing academia to investigate all works(1) exceeds the benefits of letting authors declare their works unponderable to the academic community.
(1) And yet, textbooks are copyrighted and the copyright is honored; I'm not sure why the academic fair-use exception doesn't allow scholars to just copy around textbooks without paying their authors.
No. Neither legally nor technically possible.
Is this legal: scan billions of pirated books, train an LLM on them, and generate a billion public domain books with it so that nobody ever needs copyrighted books anymore?
Also, if there is a software library with an annoying Stallman-style license, can one use an LLM to generate a compatible library in the public domain or with a commercial license? So that nobody needs to respect software licenses anymore? Can we also generate a free Photoshop, Linux kernel, and Windows this way?
Maybe I would think differently if I were a book author, but I can't help but think that this is ugly yet actually quite good for humanity in some perverse sense. I will never, ever read 99.9% of these books, presumably, but I will use Claude.
That's the worst AI news I've ever read.
Even mighty AI companies with billions must kneel to the copyright industry. We are forever doomed. Human culture will never be free from the grasp of rent seeking.
I wonder who will be the first country to make an exception to copyright law for model training libraries to attract tax revenue like Ireland did for tech companies in the EU. Japan is part of the way there, but you couldn't do a common crawl type thing. You could even make it a library of congress type of setup.
This is already a thing in several places.
EU has copyright exemptions for AI training. You don't need to respect opt outs if you are doing research.
South Korea and Japan have some exemptions too, I think?
Singapore has very strong copyright exemptions for AI training. You can completely ignore opt-outs legally, even if doing it commercially.
Just search up "TDM laws globally".
As long as you're not distributing, it's legal in Switzerland to download copyrighted material. (Switzerland was on the naughty US/MPAA list for a while, might still be)
How do legal penalties and settlements work internationally? Are entities in other countries somehow barred from filing similar suits with more penalties?
I think that one under-discussed effect of settlements like this is the additional tax on experimentation. The largest players can absorb a $1.5B hit or negotiate licensing at scale. Smaller labs and startups, which often drive breakthroughs, may not survive the compliance burden.
That could push the industry toward consolidation: fewer independent experiments, more centralized R&D inside big tech. I feel that this might slow the pace of unexpected innovations and increase dependence on incumbents.
This definitely raises the question: how do we balance fair compensation for creators with keeping the door open for innovation?
"That could push the industry toward consolidation"
Based on history this is not a possibility but a certainty.
The larger players - who grew because of limited regulations - will start supporting stricter regulation and compliance structures in order to increase the barrier of entry with the excuse of "Oh we learned our lesson, you are right". The hypocrisy is crazy but it makes sense from a capitalistic perspective.
This was a very tactical decision by Anthropic. They have just received Series F funding, and they can now afford to settle this lawsuit.
OpenAI and Google will follow soon now that the precedent has been set, and will likely pay more.
It will be a net win for Anthropic.
As a published author who had works in the training data, can I take my settlement payout in the form of Claude Code API credits?
TBH I'm just going to plow all that money back into Anthropic... might as well cut out the middleman.
I wonder if Anthropic's lawyers have enough of a sense of humor to take you up on that if you sent them an email asking...
So… when can I expect my cheque?
Seriously, how will this money propagate to the authors (if at all) or will it just stay with the publishers?
This is exactly the kind of thing that could impede LLM training datasets in the Western world, which will mechanically lead to "richer" LLM training datasets in countries where IP law does not wall off that data from training.
But then, the countries with the freedom to add everything to the training dataset would have to distribute their weights for free in IP-walled countries (because the weights would be plainly 'illegal' and would be "blocked" over there unless free as in free beer, I guess); basically, only the DeepSeek approach could work.
If powerful LLM hardware becomes somewhat affordable (look at Nvidia's massive push on LLM-specific hardware), "local" companies may run those 'foreign-trained' LLM models at reasonable speed, but "here".
Wooo, I sure could use $3k right now and I've got something in the pirate libraries they scraped. Nice.
It is a good opportunity to ask: is it true that Anthropic can demand indemnification from users whose actions, related to the use of Claude, end up getting the company sued? Even for a mere accusation. The user has to cover the lawyers' bills and the costs of the proceedings. Anthropic also takes control of the legal process and can handle it as it pleases, settle or not, with the user footing the bill. Without limit. Whether the user is an individual or an organization doesn't matter.
Sounds harsh, if true. It would make Claude practical only for hobby projects, basically, where you keep the results entirely to yourself (be it information, a product using Claude, or a product made by using Claude). Difficult to believe; I hope I heard it wrong.
> the law allowed the company to train A.I. technologies using the books because this transformed them into something new.
Unless, of course, the transformation malfunctioned and you got the good old verbatim source, with many examples compiled in similar lawsuits.
This notably wasn't one of the allegations levied against Anthropic, as Claude was accompanied by software that filtered any infringing outputs. From the relevant opinion finding Anthropic's use of the books to be fair use:
> When each LLM was put into a public-facing version of Claude, it was complemented by other software that filtered user inputs to the LLM and filtered outputs from the LLM back to the user. As a result, Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service.
(from Bartz v. Anthropic in the Northern District of California)
$3,000 per work isn't a bad price for Anthropic. It seems insulting to the copyright holder.
It's better to ask for forgiveness than for permission.
Taken right from the VC's handbook.
"$3,000 per work" seems like an incredibly good deal to license a book.
If the case is about piracy rather than use (which is I think the case?), wouldn’t the comparison be to buying all the books? $3000 each would be a pretty bad deal for that.
Illegal with a fine is legal with a fee.
Silicon Valley's unofficial motto for the last 15 years
I get the "Welcome to nginx!" error when I try to visit the archive.ph site, here is an archive.org version: https://web.archive.org/web/20250906042130/http://web.archiv...
What about OpenAI and Meta? Are they going to face similar lawsuits?
On a related thought: when I listen to Suno, when I create "Epic Power Metal", the singer is very often indistinguishable from the famous Hansi Kürsch of Blind Guardian.
https://en.wikipedia.org/wiki/Hansi_K%C3%BCrsch
I'm not sure if he even knows, but those are almost certainly his tracks they trained on.
I'm wondering: if they had purchased all the books in the pirate stash, in physical or DRM-free ebook form, could they have stayed out of trouble? Use the stash because it's already pre-digitized and accessible, and give money to the publishers.
It would take time, sure, to compile the lists and make bulk orders, but wouldn't it be cheaper in the end than the settlement?
This settlement highlights the growing pains of the AI industry as it scales rapidly. While $1.5B is significant, it's a fraction of Anthropic's valuation and funding. It underscores the need for better governance in AI development to avoid future legal pitfalls. Interesting to see how this affects their competition with OpenAI.
From a systems design perspective, $3,000 per book makes this approach completely unscalable compared to web scraping. It's like choosing between a O(n) and O(n²) algorithm - legally compliant data acquisition has fundamentally different scaling characteristics than the 'move fast and break things' approach most labs took initially.
I don't know if anyone has actually read the article or the ruling, but this is about pirating books.
Anthropic went back and bought->scanned->destroyed physical copies of them afterward... but they pirated them first, and that's what this settlement is about.
The judge also said:
> “The training use was a fair use,” he wrote. “The technology at issue was among the most transformative many of us will see in our lifetimes.”
So you don't need to pay $3,000 per book you train on unless you pirate them.
Isn't a flat price per book quite plainly O(n)? If not, what's n?
more of a large difference in constant factor, like a galactic algorithm for data trawling
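To put rough numbers on the constant-factor point: both strategies are linear in the number of books, and only the per-book constant differs. A sketch where every per-book cost is an assumption, not a real figure:

    # Cost is c * n for both acquisition strategies; only c changes.
    def total_cost(n_books, cost_per_book):
        return n_books * cost_per_book

    n = 500_000
    print(total_cost(n, 3_000))  # settlement rate: $1.5B
    print(total_cost(n, 250))    # hypothetical buy-and-scan rate: $125M
    print(total_cost(n, 0.05))   # hypothetical download-only rate: $25K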
This will be paid to rights holders, not authors. Published authors sign away the rights to financial exploitation of their books under the terms of the contracts offered. I expect some authors will sue publishers in turn. This has happened before, when authors realised that they were not getting paid royalties on sales of ebooks.
So this is a straight-up victory for Anthropic, right?
They pay out (relative) chump change as a penalty for explicitly pirating a bunch of ebooks, and in return they get a ruling that they can train on copyrighted works forever, for the purchase price of the book (not the price that would be needed to secure the rights!)
I thought the opposite - they set a precedent indicating that reproduction of a copyrighted text by an LLM is infringement. If authors refuse to sell to them (via legal terms indicating LLMs aren't allowed), it's infringement. No?
I'd be curious to hear from a legal professional...
They also agreed to destroy the pirated books. I wonder how large of a portion of their training data comes from these shadow libraries, and if AI labs in countries that have made it clear they won't enforce anti-piracy laws against AI companies will get a substantial advantage by continuing to use shadow libraries.
Perhaps they'll quickly rent the whole contents of a few physical libraries and then scan them all
They already, prior to this lawsuit, prior to serving public models, replaced this data set with one they made by scanning purchased books. Destroying the data set they aren't even using should have approximately zero effect.
So the article notes Anthropic states they never publicly released a frontier model that was trained on the downloaded copyrighted material. So were Claude 2 and 3 only trained on legally purchased and scanned books, or do they now use a different training system that does not rely on books at all?
I assumed they were literally just lying.
it sounds like the former
(Sorry, meta question: how do we insert in submissions that "'Also' <link> <link>..." below the title and above the comment input? The text field in the "submit" page creates a user's post when the "url" field is also filled. I am missing something.)
I do not believe authors will see any of this money. I will change my mind when I see an email or check.
So if a startup wants to buy book PDFs legally to use for AI purposes, any suggestions on how to do that?
Reach out to the publishers or resellers (like Amazon, for instance).
Give them this order: "I want to buy all your books as EPUB".
Pay and fetch the stuff.
That's all.
It is a very good deal for them: they did not have to acquire the books, had them in a very convenient format (no digitization), saved tons of time (5+ years), and got access to rare books. And the LLM is not considered a derived work, when it actually clearly is one.
> "The technology at issue was among the most transformative many of us will see in our lifetimes"
A judge making on a ruling based on his opinion of how transformative a technology will be doesn't inspire confidence. There's an equivocation on the word "transformative" here -- not just transformative in the fair use sense, but transformative as in world-changing, impactful, revolutionary. The latter shouldn't matter in a case like this.
> Companies and individuals who willfully infringe on copyright can face significantly higher damages — up to $150,000 per work
Settling for 2% is a steal.
> “In June, the District Court issued a landmark ruling on A.I. development and copyright law, finding that Anthropic’s approach to training A.I. models constitutes fair use,” Aparna Sridhar, Anthropic’s deputy general counsel, said in a statement.
This is the highest-order bit, not the $1.5B in settlement. Anthropic's guilty of pirating.
The printing press, audio recording, movies, radio, and television were also transformative. They did not get rid of copyright; if anything, they brought it about.
I feel it is insane that authors do not receive some sort of standard compensation for each training use. Say a few hundred to a few thousand dollars, depending on the complexity of their work.
I feel like there could be a business opportunity for authors here: selling their books to LLM companies. For the LLM companies, it could be cheaper than a lawsuit, and the authors get paid.
I don’t understand how training an LLM on a book and then selling its contents via subscriptions is fine but using a probabilistic OCR to read a book and then selling its contents is a crime that deserves jail time.
It's not a crime. It is civil lawsuit.
> A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars.
It has been admitted, and Anthropic knew that this trial could totally bankrupt them had they maintained their innocence and continued to fight the case.
But of course, there's too much money on the line, which means that even though Anthropic settled (admitting guilt and profiting off of pirated books), they knew there was no way they could win that case, and it was not worth taking that risk.
> The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
The first of many.
If it was a sure thing, then the rights holders wouldn't have accepted a settlement deal for a measly couple billion. Both sides are happier to avoid risking losing the suit.
Wait, DID they admit guilt? A lot of times companies settle without admitting guilt.
They would only be wiped out if the court awarded the maximum statutory damages (or close to it). There was never any chance of that happening.
I wonder how many authors will see real money out of this (if any). The techbros prayed to the new king of America with the best currency they had: money. So the king may intervene, like he has many times.
Any models trained on the ill-gotten data should now be public domain.
What about the neural networks already fed with those books? If the court chooses to protect the writers, those models should be deleted and retrained with all of this material removed.
It doesn't set precedent, but the message to other AI companies is clear: if you're going to bet your model on gray-area data, have a few billion handy for settlements
You or I would go to jail.
OT: Is anybody else seeing that Datadome is blocking their IP?
I haven't had this in a while, but I always hate it when I'm blocked by Cloudflare/Datadome/etc.
I hope this leads to the big AI companies pushing for copyright reform that makes access to DRM-free digital content better for everyone.
It's the concentration of power and monopolies that is driving this trend of ignoring fines and punishments. The fine system was not designed for these monstrous beasts. Legal codes were designed to deter the common man from wrongdoing. They did not anticipate technological superpowers doing winner-takes-all in a highly connected world and growing beyond the control of law. Basically, it's the law of the jungle for these companies. Law and punishment will never have any effect on them as long as they can grab enough market share and customer base. Same as any mafia.
We are entering a world filled with corporate mafias that are above the law (since the damage the law can inflict on them is insignificant). These mafias will grip the world, providing the essential services that make up the future world. The State will become much weaker, as policy makers can be bought by lobbying and punishments can be offset by VC funding.
It is all part of the playbook.
So that's 10% of their latest series?
How did Meta get away without a scratch?
Does anyone know which models were trained on the pirated books? I would like to avoid using those models.
Anyone have a link to the class action? I published a book and would love to know if I'm in the class.
Deep research on Claude perhaps for some irony if you will.
I thought $1.5B was the penalty for one torrent, not for a couple million torrents.
At least if you're a regular citizen.
Make sure to grab the mother-of-all-torrents I guess if you're going to go that path. That way you get more bang for your 1.5B penalty.
$150,000 statutory damages for willful infringement.
A million torrents would cost 1,500 each.
Why are they paying $3,000 per book? Does anyone think these authors sell their books for that amount?
Copies of these books are for sale for much less than that - very very few books demand a price that high.
They're paying much more than the actual damages because US copyright law comes with statutory damages for infringement of registered works on top of actual damages, between $200 and $150,000 per work. And the two sides negotiated this as a fair settlement to reduce the risk of an unfavourable outcome.
If you acquire something illegally of course the judgement against you has to be much higher than the legal price. Why would anyone purchase anything if the worst thing that could happen to you for stealing it was just paying the retail price?
They are not paying for reading the book, they are paying for redistributing the book in perpetuity presumably.
For legal observers, Judge William Haskell Alsup’s razor-sharp distinction between usage and acquisition is a landmark precedent: it secures fair use for transformative generative AI while preserving compensation for copyright holders. In a just world, this balance would elevate him to the highest court of the land, but we are far from a just world.
(Everyone say it with me)
That's a weird way for Anthropic to announce they're going out of business.
So if you buy the content legally and fine tune using it that's fair use?
Yes. Or download it legally (e.g. web content not behind a paywall).
This shouldn't be allowed to be settled outside courts
Fair use just in legal terms, not ethical.
This weirdly seems like it's the best mechanism to buy this much data.
Imagine going to 500k publishers to buy it all individually. $3k per book is way cheaper. The copyright system is turning into a data marketplace in front of our eyes.
I suspect you could acquire and scan every readily purchasable book for much less than $3k each. Scanhouse for instance charges $0.15 per page for regular unbound (disassembled) books, plus $0.25 for supervised OCR, plus another dollar if the formatting is especially complex; this comes out to maybe $200-300 for a typical book. Acquiring, shipping, and disposing of them all would of course cost more, but not thousands more.
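Plugging in those quoted rates (a sketch: the per-page prices are the Scanhouse figures above, the page counts are assumptions, and acquisition, shipping, and disposal are excluded):

    # Per-book scanning cost at the quoted rates: $0.15/page scanning,
    # $0.25/page supervised OCR, +$1.00/page for especially complex formatting.
    def scan_cost(pages, complex_formatting=False):
        per_page = 0.15 + 0.25 + (1.00 if complex_formatting else 0.0)
        return pages * per_page

    print(scan_cost(300))        # $120.00 for a typical 300-page book
    print(scan_cost(500))        # $200.00 for a longer one
    print(scan_cost(300, True))  # $420.00 when formatting is complex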
The main cost of doing this would be the time - even if you bought up all the available scanning capacity it would probably take months. In the meantime your competition who just torrented everything would have more high-quality training data than you. There are probably also a fair number of books in libgen which are out of print and difficult to find used.
It's a tiny amount of data relatively speaking. Much more expensive per token than almost any data source imaginable
Isn't this basically what Spotify did originally?
I wrote a book, can I get my 1 dollar cheque?
You can check, see: https://news.ycombinator.com/item?id=45144261
Wait, I’m a published author, where’s my check
The court has to give preliminary approval to the settlement first. After that there should be a notice period during which the lawyers will attempt to reach out and tell you what you need to do to receive your money. (Not a lawyer, not legal advice).
You can follow the case here: https://www.courtlistener.com/docket/69058235/bartz-v-anthro...
You can see the motion for settlement (what the news article is about) here: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
Here's some money, now piss off and let us get back to taking everyone else's.
Same racket the media cartels and patent trolls have been forcing for 40-50 years.
Reminder that just recently, Anthropic raised a $13 billion Series F at a $183 billion post-money valuation.
In March, they were worth $61.5 billion
In six months they've created $120 billion in value. That's almost 700 million dollars per day. Avoiding being slowed down by even a few days is worth a billion dollar payout when you are on this trajectory. This lawsuit, and any lawsuit AI model companies are likely to get, will be a rounding error at the end of the fiscal year.
They know that superintelligent AI is far larger than money, and even so, the money they'll make on the way there is hefty enough for copyright law to not be an issue.
Do they even have that much cash on hand?
They just raised $13B, so yes
https://www.anthropic.com/news/anthropic-raises-series-f-at-...
I can see a price hike incoming.
It was coming regardless of the case results.
... in one economy, and for specific authors and publishers. But the offence is global in its impact on authors worldwide, and the consequences for other IPR laws remain to be seen.
“Agrees” is a funny old word
Smart move: now that they're an established player, and they have a few billion of investors' money to spend, they reinforce a jurisprudence that stealing IP to train your models is a billion-dollar offense.
What a formidable moat against newcomers, definitely worth the price!
Why only Anthropic?
was the latest round for paying off fines?
5 days ago GamersNexus did a piece on Meta having the same problems, but resolving it differently:
https://www.youtube.com/watch?v=sdtBgB7iS8c
Somehow, excuses like "we torrented it, but we configured low seeding", "the temptation was too strong because there was money to be made", "we tried getting a license, but then ignored it", and other even more ludicrous excuses actually worked.
Internal Meta emails seemed to point to people knowing about the blatant breach of copyright, and yet Meta won the case.
I guess there are tiers of laws even between billionaire companies.
Honestly, this is a steal for Anthropic.
Ha, this gave me a ripping good laugh.
$1.5B is nothing but a handslap for the big gold rush companies.
It's less than 1% of Anthropic's valuation -- a valuation utterly dependent on all the hoovering up of others' copyrighted works.
AFAICT, if this settlement signals that the typical AI foundation model company's massive-scale commercial theft doesn't result in judgments that wipe out the company (and its execs), then we have confirmation that it's a free-for-all for all the other AI gold rush companies.
Then making deals to license rights, in sell-it-to-us-or-we'll-just-take-it-anyway deals, becomes only a routine and optional corporate cost-reduction exercise, not anything the execs will lose sleep over if it's inconvenient.
> It's less than 1% of Anthropic's valuation
The settlement is real money though. Valuation is imaginary.
There are alternatives to wiping out the company that could be fair. For example, a judgment resulting in shares of the company, or revenue shares in the future, rather than a one-time payoff.
Writers were the true “foundational” piece of LLMs, anyway.
A terrible precedent that guarantees China a win in the AI race
Nobody is winning the AI race.
Because everyone is expecting AGI now and it's not happening with our current tech.
Now how about Meta and their questionable means of acquiring tons of content?
Maybe it's time to get some Llama models copied before an overzealous court rules badly.
Let us not forget that this one is the good, ethical AI company. The one founded by splinter AI safety cultists who thought that OpenAI wasn't deep enough in the safety cult for their liking. And here they are, keeping the humans safe. By robbing them.
Because it turns out that nobody in the whole safety cult cares a whit for the human mind, the human experience, human art. Maybe for something they call "human values" in some abstract thought experiment, but never for any human decency. No, the human mind is just ones and zeros, just like a computer, no soul and no spark, to people in the cult. The cult thinks that an LLM reading a book is just the same mechanically as a human reading it.
Your brain is just emergence, your honor. Fair use. Blah blah Dennett Hofstadter Yudkowsky.
Do you feel safe?
I don’t think that copyright is related to safety at all. Copyright is something that doesn’t really exist, except as a social agreement, while safety is something that exists whether or not society believes it does.
I'm excited for the moment when these models are able to use copyrighted work in a fair-use way that pays out to authors the way Spotify does when you listen to a song. Why? Because authors receiving royalties for their works when they get used in some prompt would likely make them far more accepting of LLMs.
Also, passing the cost on to consumers of generated content, since companies would now need to pay royalties on the back end, should likely increase the cost of generating slop and hopefully push back against that trend.
This shouldn't just be books, but all written content, like scholarly journals and essays, news articles and blogs, etc.
I realize this is just wishful thinking, but there's got to be some nugget of aspirational desire to pay it forward.
Great. Which rich person is going to jail for breaking the law?
This isn't a criminal case so zero people of any financial position would end up in prison.
Well their seed funder SBF went to jail, but not for bankrolling this particular theft. He did a theft of his own. Still, SBF and the Anthropic guys got their "ethics" from the same shitty blogs, and it shows.
No one, rich or poor, goes to jail for downloading books.
This settlement could be a landmark moment, I guess. $1.5 billion is a staggering figure, and I hope it sends a clear signal that AI companies can't just treat creative work as free training data.
All the AI companies are still using books as training data. They're just finding the cheapest scanned copies they can get their hands on to cover their asses.
I mean, the ruling does in fact find that training on this particular kind of creative work qualifies as fair use.
I'm gonna say one thing. If you agree that something was unfairly taken from book authors, then the same thing was taken from people publishing on the web, and on a larger scale.
Book authors may see some settlement checks down the line. So might newspapers and other parties that can organize and throw enough $$$ at the problem. But I'll eat my hat if your average blogger ever sees a single cent.
The blogger’s content was freely available, this fine is for piracy.
Books aren't hosted publicly online, free for anyone to access. The court seems to think buying a book and scanning it is fair use; just using pirated books is forbidden. Blogs weren't accessed via piracy.
The settlement was for downloading the pirated books, not training on them. Unless they're paywalled, it would be hard to argue the same for blogs.