Anthropic to pay authors $1.5 billion to settle lawsuit over pirated books used to train AI chatbots

Anthropic logo is seen in this illustration created on May 20, 2024. (REUTERS/Illustration/File Photo)
Updated 20 sec ago
  • The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement
  • Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments

NEW YORK: Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.
The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.
The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.
“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”
A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

Thriller novelist Andrea Bartz is photographed in her home, in the Brooklyn borough of New York, on Sept. 4, 2025 (AP)

A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.
“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said Thomas Long, a legal analyst for Wolters Kluwer.
US District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.
Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”
“We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.
As part of the settlement, the company has also agreed to destroy the original book files it downloaded.
Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.
Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.
The debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.
Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.
The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the US Copyright Office.
“On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.
On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.
“It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.
The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.
Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology in the expectation of future payoffs.
The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.
“This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.
The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under US copyright law because it was “quintessentially transformative.”
Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”
But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.
With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.


Fire breaks out at former BBC headquarters in west London, broadcaster reports

Updated 06 September 2025
  • The London Fire Brigade said firefighters were mobilized after it was called to a fire at a nine-story building soon after 3 a.m. UK time (0200 GMT)

Around 100 firefighters were called to a blaze on Saturday at the former BBC headquarters Television Center in London’s White City, the British public broadcaster reported.
The London Fire Brigade said firefighters from Hammersmith, North Kensington, Kensington, Chiswick and surrounding fire stations were mobilized after it was called to a fire at a nine-story building soon after 3 a.m. UK time (0200 GMT).
The blaze is affecting floors toward the top of the structure, with a restaurant, external decking and ducting all currently alight, the fire brigade said. An unknown number of flats have also potentially been affected, it added.
The cause of the fire is currently unknown. There are no reports of injuries or deaths. 


British TV presenter Adil Ray reveals death threats amid rising anti-Muslim sentiment in UK

Updated 05 September 2025
  • The 51-year-old said prominent Muslim politicians had also received violent threats, and members of the public told him they were living in fear.

LONDON: British TV presenter Adil Ray revealed on Friday that he has received “horrendous” threats and racist abuse amid a reported surge in Islamophobic incidents across the UK.

Ray, who co-hosts ITV’s “Good Morning Britain,” said the wave of anti-Muslim hostility came as tensions escalated over the government’s handling of asylum-seekers and immigration.

“I’ve experienced it, I’ve had people DM me on Instagram, talk about remigration,” said Ray, who is of Pakistani Muslim background, during his show appearance. “I’ve had threats to watch myself on the streets.”

The 51-year-old said prominent Muslim politicians had also received violent threats, and members of the public told him they were living in fear.

“People who work here, and several friends of theirs who are Muslim, don’t want to go to the mosque anymore,” he added.

His remarks follow a rise in anti-Muslim incidents reported across the country. Several mosques have been vandalized in recent weeks.

Ray criticized the lack of political response. “The thing that strikes me about this is no one seems to be talking about it. These are anti-Muslim hate crimes,” he said. “There doesn’t seem to be any politician that’s standing up and reassuring millions of Muslims in this country that the country is behind them.”

He warned that this silence was “deeply concerning,” adding: “We’re seeing a rise in anti-Muslim hate crime.”


Saudi Arabia orders Roblox to suspend in-game chats, company to appoint Arabic moderators

Updated 04 September 2025
  • Roblox: This step reflects our commitment to working closely with GCAM to build a platform that serves the needs of the gaming and creative community in the Kingdom
  • GCAM emphasized that these measures are part of Saudi Arabia’s broader efforts to protect children and society from the negative effects of online gaming

RIYADH: Roblox has confirmed that it has complied with the requirements of the General Commission for Audiovisual Media in Saudi Arabia, which include suspending voice and text chats in the game throughout the Kingdom. The move is intended to enhance digital safety for children and young users.

In a statement, the company said: “Following discussions with several government entities, including the General Commission for Audiovisual Media in Saudi Arabia, we are committed to enhancing our communication and content moderation capabilities in Arabic to ensure a safer experience for players.”

Roblox noted that it will rely on advanced artificial intelligence technologies as well as specialized Arabic-speaking moderators to oversee content, while the suspension of chat features in the Kingdom will remain in place temporarily until more effective tools are developed.

“This step reflects our commitment to working closely with GCAM to build a platform that serves the needs of the gaming and creative community in the Kingdom,” the company added, “while also supporting skills development, education, and the growing creator economy.”

GCAM emphasized that these measures are part of Saudi Arabia’s broader efforts to protect children and society from the negative effects of online gaming. The commission said the agreement with Roblox has resulted in providing a safer digital environment for children, teenagers, and youth in the Kingdom and the wider region, fostering positive values and preventing harmful behavioral outcomes.

It emphasized that Saudi Arabia has demonstrated its regional and global influence through Roblox’s compliance, including the blocking of inappropriate and indecent search results on the platform. Protecting young people from digital risks, GCAM added, is a top priority for the Kingdom, which continues to advance strategic plans for monitoring content, developing reporting mechanisms, and encouraging positive uses of modern technologies.

The commission pointed out that these measures reflect the Kingdom’s strong regulatory impact in removing harmful content while at the same time preserving the creative features that help young people develop skills, learn, and contribute to the growth of the creative economy. The move is part of an ongoing series of initiatives aimed at securing a safe digital environment that empowers future generations to innovate, create, and manage content effectively.

GCAM further explained that the suspension of chat features was introduced as a temporary measure until more effective tools for the moderation of Arabic digital content are finalized. It said the actions taken demonstrate Saudi Arabia’s commitment to building effective partnerships with global platforms to create a digital space that aligns with Saudi and Arab culture while meeting the needs of players and creators.

The decision sparked mixed reactions among parents, with some welcoming the move and others expressing reservations.

Noor Fadel, a mother of two, said: “I have a different perspective. The game’s beauty lies in the interaction — voice, visuals, and writing. With proper parental supervision, children can learn communication, language, and writing. But I do understand this decision for the greater good.”

Mashael Al Sahli, whose daughters are in elementary school, supported the measure, saying: “Children spend long hours on these games, and suspending chats reduces risks, especially since many parents cannot monitor everything all the time.”

Haneen Said, a mother of two teenagers, considers the move a positive one but says it should remain temporary: “I support regulation, not banning. I hope chat features will return once effective monitoring tools are in place, because our kids also need interaction and learning through these platforms.”


Google hit with $425M fine in the US for invading users’ privacy, $381M in France for a similar offense

Updated 04 September 2025
  • US jury found the tech giant guilty of violations by continuing to collect data for millions of users who had switched off a tracking feature in their Google account
  • France’s data protection authority in Paris gives Google six months to ensure ads are no longer displayed between emails in Gmail users’ inboxes without prior consent

SAN FRANCISCO/PARIS: Alphabet’s Google was told by a federal jury in the US on Wednesday to pay $425 million for invading users’ privacy and slapped with a fine of 325 million euros ($381 million) in France for a similar offense.

The jury found the tech giant guilty of violations by continuing to collect data for millions of users who had switched off a tracking feature in their Google account.

The verdict comes after a trial in the federal court in San Francisco over allegations that Google over an eight-year period accessed users’ mobile devices to collect, save, and use their data, violating privacy assurances under its Web & App Activity setting.
The users had been seeking more than $31 billion in damages.
The jury found Google liable on two of the three claims of privacy violations brought by the plaintiffs, but found that Google had not acted with malice, meaning the plaintiffs were not entitled to any punitive damages.
A spokesperson for Google confirmed the verdict. Google had denied any wrongdoing.
The class action lawsuit, filed in July 2020, claimed Google continued to collect users’ data even with the setting turned off through its relationship with apps such as Uber, Venmo and Meta’s Instagram that use certain Google analytics services.
At trial, Google said the collected data was “nonpersonal, pseudonymous, and stored in segregated, secured, and encrypted locations.” Google said the data was not associated with users’ Google accounts or any individual user’s identity.
US District Judge Richard Seeborg certified the case as a class action covering about 98 million Google users and 174 million devices.
Google has faced other privacy lawsuits, including one earlier this year where it paid nearly $1.4 billion in a settlement with Texas over allegations the company violated the state’s privacy laws.
Google in April 2024 agreed to destroy billions of data records of users’ private browsing activities to settle a lawsuit that alleged it tracked people who thought they were browsing privately, including in “Incognito” mode. 

In France, the data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), ordered Google to pay 325 million euros ($381 million) for improperly displaying ads to Gmail users and using cookies, both without Google account users’ consent.
The CNIL also gave Google six months to ensure ads are no longer displayed between emails in Gmail users’ inboxes without prior consent, and that users give valid consent to the creation of a Google account for the placement of ad trackers.
Failing that, Google and its Irish subsidiary would both have to pay a penalty of 100,000 euros per day of delay, CNIL said in a statement.
A Google spokesperson said the company was reviewing the decision, adding that users have always been able to control the ads they see in its products.
In the past two years, Google has made updates to address the commission’s concerns, including an easy way to decline personalized ads when creating a Google account, and changes to the way ads are presented in Gmail, the spokesperson said.

 


Musa Al-Sadr family rejects BBC’s AI image claim in disappearance investigation

Updated 03 September 2025
  • AI facial recognition analysis carried out by the BBC compared a 2011 photograph of a decomposed corpse from Tripoli’s Al-Zawiya hospital with archived images of Musa Al-Sadr, indicating a ‘high probability’ of resemblance
  • Leader of the Amal Movement, Al-Sadr disappeared on Aug. 31, 1978, in Libya alongside two companions, a week after they arrived to meet with then-Libyan leader Muammar Qaddafi

LONDON: The family of the missing Lebanese cleric Imam Musa Al-Sadr has dismissed a recent BBC documentary that suggested he died in Libya, condemning the use of an artificial intelligence-generated image claimed to be him.

In a statement released Tuesday by the Imam Musa Al-Sadr Research and Studies Center, the family said the BBC shared the AI facial recognition analysis — a comparison between a decomposed corpse photo from Tripoli’s Al-Zawiya hospital in 2011 and archival images of Al-Sadr and relatives — with them and Lebanon’s official follow-up committee without consent.

“During filming, as the Imam’s family and the follow-up committee, we confirmed that the image is not of the Imam due to evident differences in the shape of the face, hair color, and other obvious distinctions,” the family said, adding that they had the confirmation “the moment we saw the video clip.”

Al-Sadr, founder of the Amal Movement, disappeared on Aug. 31, 1978, in Libya alongside two companions, a week after they arrived to meet with Libyan government officials. They were last seen leaving a Tripoli hotel in a government vehicle.

Despite various claims, including Libyan assertions that he traveled to Rome — which have been widely disproved — his fate remains unknown.

Many Lebanese Shia believe that then-Libyan leader Muammar Qaddafi ordered Al-Sadr’s killing, a claim that Libya has consistently denied.

The case has fueled deep political tensions between Lebanon and Libya and remains a highly sensitive and unresolved matter.

Some experts contend that Al-Sadr, an influential Iranian-Lebanese cleric, was on the verge of using his influence to guide Iran — and by extension the region — toward a more moderate path when he vanished on the eve of the Iranian revolution.

The BBC film, part of its Eye Investigations series, centers on testimony from Swedish-Lebanese reporter Kassem Hamade, who during the 2011 Arab Spring uprising claimed to have photographed a tall corpse in a secret Tripoli morgue resembling Al-Sadr. He argued that, despite decomposition, the skin tone, hair and facial features of the body still resembled those of Al-Sadr, who stood at 1.98 meters.

He also took a hair sample, which was reportedly handed to the office of Lebanese parliamentary speaker Nabih Berri, but officials later said the sample was lost due to a “technical error.”

With what the family described as its “full cooperation,” including providing photos, documents and resources, the BBC submitted Hamade’s photograph for AI analysis.

According to Professor Ugail, who carried out the facial recognition analysis for the BBC, the software indicated a “high probability” that the body was either Al-Sadr or a close relative, a claim firmly denied by Al-Sadr’s family.

His son, Sayyed Sadreddine Sadr, said the 2011 morgue photograph was “evident(ly)” not his father.

“It also contradicts the information we have after this date 
 that he is still alive, held in a Libyan jail,” he said — though no evidence was ever offered to support this claim.

To address further questions, Judge Hassan Shami, representing the official committee and the family, is scheduled to appear on BBC Arabic to provide clarification.