The AI company Anthropic, which used copyrighted books to train its chatbot without the authors’ consent, has won a case in federal court.
In a ruling with the potential to set precedent, District Court Judge William Alsup ruled on Monday that Anthropic’s training of its artificial intelligence model Claude on millions of copyrighted books was permitted under the “fair use” doctrine of US copyright law.
In his opinion, Alsup wrote, “The use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use.”
Generative AI is powered by the enormous amounts of data needed to train complex language models.
Musicians, book authors, visual artists, and news organizations have filed lawsuits against AI companies that used their work without permission or payment.
Alsup’s decision in favor of Anthropic is a first in the US and could serve as precedent for other AI firms defending themselves in court.
AI companies generally defend their practices by claiming fair use, arguing that training on large data sets fundamentally transforms the original content and is essential for innovation.
Although the majority of these lawsuits are still in their early stages, the outcome of each one may have a significant impact on how the AI industry will develop.
According to court documents, Anthropic purchased copyrighted books, scanned the pages, and stored them digitally, in addition to downloading millions of books from websites that offer pirated works.
The judge said in his ruling that Anthropic’s goal was to “assemble a library of all the books in the world” and to use them to train AI models as needed.
Alsup also ruled that “Anthropic had no right to use pirated copies for its central library,” and ordered that the authors’ claims over those pirated copies proceed to trial.
Anthropic, which is valued at $61.5 billion, was founded in 2021 by former executives of OpenAI, the creator of ChatGPT.
The business, which is well-known for its Claude chatbot and AI models, claims to be committed to responsible development and AI safety.
Source: Channels TV