
DeepSeek: Chinese AI firm sending shock waves through US tech

Updated 28 January 2025

  • The program has shaken up the tech industry and hit US titans including Nvidia, the AI chip juggernaut that saw nearly $600 billion of its market value erased, the most ever for one day on Wall Street

BEIJING: Chinese firm DeepSeek’s artificial intelligence chatbot has soared to the top of the Apple Store’s download charts, stunning industry insiders and analysts with its ability to match its US competitors.
The program has shaken up the tech industry and hit US titans including Nvidia, the AI chip juggernaut that saw nearly $600 billion of its market value erased, the most ever for one day on Wall Street.
Here’s what you need to know about DeepSeek:
DeepSeek was developed by a start-up based in the eastern Chinese city of Hangzhou, known for its high density of tech firms.
Available as an app or on desktop, DeepSeek can do many of the things that its Western competitors can do — write song lyrics, help work on a personal development plan, or even write a recipe for dinner based on what’s in the fridge.
It can communicate in multiple languages, though it told AFP that it was strongest in English and Chinese.
It is subject to many of the limitations seen in other Chinese-made chatbots like Baidu’s Ernie Bot — asked about leader Xi Jinping or Beijing’s policies in the western region of Xinjiang, it implored AFP to “talk about something else.”
But from writing complex code to solving difficult sums, industry insiders have been astonished by just how well DeepSeek’s abilities match the competition.
“What we’ve found is that DeepSeek... is the top performing, or roughly on par with the best American models,” Alexandr Wang, CEO of Scale AI, told CNBC.
That’s all the more surprising given what is known about how it was made.
In a paper detailing its development, the firm said the model was trained using only a fraction of the chips used by its Western competitors.
Analysts had long thought that the United States’ critical advantage over China when it comes to producing high-powered chips — and its ability to prevent the Asian power from accessing the technology — would give it the edge in the AI race.
But DeepSeek researchers said they spent only $5.6 million developing the latest iteration of their model — peanuts when compared with the billions US tech giants have poured into AI.
Shares in major tech firms in the United States and Japan have tumbled as the industry takes stock of the challenge from DeepSeek.
Chip making giant Nvidia — the world’s dominant supplier of AI hardware and software — closed down seventeen percent on Wall Street on Monday.
And Japanese firm SoftBank, a key backer of the new $500 billion venture announced by US President Donald Trump to build infrastructure for artificial intelligence in the United States, lost more than eight percent.
Venture capitalist Marc Andreessen, a close adviser to Trump, described it as “AI’s Sputnik moment” — a reference to the Soviet satellite launch that sparked the Cold War space race.
“DeepSeek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen,” he wrote on X.
Like its Western competitors ChatGPT, Meta’s Llama and Claude, DeepSeek is built on a large language model — a system trained on massive quantities of text to handle everyday language.
But unlike Silicon Valley rivals, which have developed proprietary LLMs, DeepSeek is open source, meaning anyone can access the app’s code, see how it works and modify it themselves.
“We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive — truly open, frontier research that empowers all,” Jim Fan, a senior research manager at Nvidia, wrote on X.
DeepSeek said it “tops the leaderboard among open-source models” — and “rivals the most advanced closed-source models globally.”
Scale AI’s Wang wrote on X that “DeepSeek is a wake up call for America.”
Beijing’s leadership has vowed to be the world leader in AI technology by 2030 and is projected to spend tens of billions in support for the industry over the next few years.
And the success of DeepSeek suggests that Chinese firms may have begun leaping the hurdles placed in their way.
Last week DeepSeek’s founder, hedge fund manager Liang Wenfeng, sat alongside other entrepreneurs at a symposium with Chinese Premier Li Qiang — highlighting the firm’s rapid rise.
Its viral success also sent it to the top of the trending topics on China’s X-like Weibo website Monday, with related hashtags pulling in tens of millions of views.
“This really is an example of spending a little money to do great things,” one user wrote.


Saudi Arabia orders Roblox to suspend in-game chats, company to appoint Arabic moderators

Updated 04 September 2025

  • Roblox: This step reflects our commitment to working closely with GCAM to build a platform that serves the needs of the gaming and creative community in the Kingdom
  • GCAM emphasized that these measures are part of Saudi Arabia’s broader efforts to protect children and society from the negative effects of online gaming

RIYADH: Roblox has confirmed that it has complied with the requirements of the General Commission for Audiovisual Media in Saudi Arabia, which include suspending voice and text chats in the game throughout the Kingdom. The move is intended to enhance digital safety for children and young users.

In a statement, the company said: “Following discussions with several government entities, including the General Commission for Audiovisual Media in Saudi Arabia, we are committed to enhancing our communication and content moderation capabilities in Arabic to ensure a safer experience for players.”

Roblox noted that it will rely on advanced artificial intelligence technologies as well as specialized Arabic-speaking moderators to oversee content, while the suspension of chat features in the Kingdom will remain in place temporarily until more effective tools are developed.

“This step reflects our commitment to working closely with GCAM to build a platform that serves the needs of the gaming and creative community in the Kingdom,” the company added, “while also supporting skills development, education, and the growing creator economy.”

GCAM emphasized that these measures are part of Saudi Arabia’s broader efforts to protect children and society from the negative effects of online gaming. The commission said the agreement with Roblox has resulted in providing a safer digital environment for children, teenagers, and youth in the Kingdom and the wider region, fostering positive values and preventing harmful behavioral outcomes.

It emphasized that Saudi Arabia has demonstrated its regional and global influence through Roblox’s compliance, including the blocking of inappropriate and indecent search results on the platform. Protecting young people from digital risks, GCAM added, is a top priority for the Kingdom, which continues to advance strategic plans for monitoring content, developing reporting mechanisms, and encouraging positive uses of modern technologies.

The commission pointed out that these measures reflect the Kingdom’s strong regulatory impact in removing harmful content while at the same time preserving the creative features that help young people develop skills, learn, and contribute to the growth of the creative economy. The move is part of an ongoing series of initiatives aimed at securing a safe digital environment that empowers future generations to innovate, create, and manage content effectively.

GCAM further explained that the suspension of chat features was introduced as a temporary measure until more effective tools for the moderation of Arabic digital content are finalized. It said the actions taken demonstrate Saudi Arabia’s commitment to building effective partnerships with global platforms to create a digital space that aligns with Saudi and Arab culture while meeting the needs of players and creators.

The decision sparked mixed reactions among parents, with some welcoming the move and others expressing reservations.

Noor Fadel, a mother of two, said: “I have a different perspective. The game’s beauty lies in the interaction — voice, visuals, and writing. With proper parental supervision, children can learn communication, language, and writing. But I do understand this decision for the greater good.”

Mashael Al Sahli, whose daughters are in elementary school, supported the measure, saying: “Children spend long hours on these games, and suspending chats reduces risks, especially since many parents cannot monitor everything all the time.”

Haneen Said, a mother of two teenagers, considers the move positive but says it should remain temporary: “I support regulation, not banning. I hope chat features will return once effective monitoring tools are in place, because our kids also need interaction and learning through these platforms.”


Google hit with $425M fine in the US for invading users’ privacy, $381M in France for a similar offense

Updated 04 September 2025

  • US jury found the tech giant guilty of violations for continuing to collect data from millions of users who had switched off a tracking feature in their Google account
  • France’s data protection authority in Paris gives Google six months to ensure ads are no longer displayed between emails in Gmail users’ inboxes without prior consent

SAN FRANCISCO/PARIS: Alphabet’s Google was told by a federal jury in the US on Wednesday to pay $425 million for invading users’ privacy and slapped with a fine of 325 million euros ($381 million) in France for a similar offense.

The jury found the tech giant guilty of violations for continuing to collect data from millions of users who had switched off a tracking feature in their Google account.

The verdict comes after a trial in the federal court in San Francisco over allegations that Google over an eight-year period accessed users’ mobile devices to collect, save, and use their data, violating privacy assurances under its Web & App Activity setting.
The users had been seeking more than $31 billion in damages.
The jury found Google liable on two of the three claims of privacy violations brought by the plaintiffs. The jury found that Google had not acted with malice, meaning it was not entitled to any punitive damages.
A spokesperson for Google confirmed the verdict. Google had denied any wrongdoing.
The class action lawsuit, filed in July 2020, claimed Google continued to collect users’ data even with the setting turned off through its relationship with apps such as Uber, Venmo and Meta’s Instagram that use certain Google analytics services.
At trial, Google said the collected data was “nonpersonal, pseudonymous, and stored in segregated, secured, and encrypted locations.” Google said the data was not associated with users’ Google accounts or any individual user’s identity.
US District Judge Richard Seeborg certified the case as a class action covering about 98 million Google users and 174 million devices.
Google has faced other privacy lawsuits, including one earlier this year where it paid nearly $1.4 billion in a settlement with Texas over allegations the company violated the state’s privacy laws.
Google in April 2024 agreed to destroy billions of data records of users’ private browsing activities to settle a lawsuit that alleged it tracked people who thought they were browsing privately, including in “Incognito” mode. 

In France, the data protection authority Commission Nationale de l’Informatique et des Libertés (CNIL) ordered Google to pay 325 million euros ($381 million) for improperly displaying ads to Gmail users and using cookies, both without Google account users’ consent.
The CNIL also gave Google six months to ensure ads are no longer displayed between emails in Gmail users’ inboxes without prior consent, and that users give valid consent to the creation of a Google account for the placement of ad trackers.
Failing that, Google and its Irish subsidiary would both have to pay a penalty of 100,000 euros per day of delay, CNIL said in a statement.
A Google spokesperson said the company was reviewing the decision and said that users have always been able to control the ads they see in their products.
In the past two years, Google has made updates to address the commission’s concerns, including an easy way to decline personalized ads when creating a Google account, and changes to the way ads are presented in Gmail, the spokesperson said.



Musa Al-Sadr family rejects BBC’s AI image claim in disappearance investigation

Updated 03 September 2025

  • AI facial recognition analysis carried out by the BBC compared a 2011 photograph of a decomposed corpse from Tripoli’s Al-Zawiya hospital with archived images of Musa Al-Sadr, indicating a ‘high probability’ of resemblance
  • Leader of the Amal Movement, Al-Sadr disappeared on Aug. 31, 1978, in Libya alongside two companions, a week after they arrived to meet with then-Libyan leader Muammar Qaddafi

LONDON: The family of the missing Lebanese cleric Imam Musa Al-Sadr has dismissed a recent BBC documentary that suggested he died in Libya, condemning the use of an artificial intelligence-generated image claimed to be him.

In a statement released Tuesday by the Imam Musa Al-Sadr Research and Studies Center, the family said the BBC shared the AI facial recognition analysis — a comparison between a decomposed corpse photo from Tripoli’s Al-Zawiya hospital in 2011 and archival images of Al-Sadr and relatives — with them and Lebanon’s official follow-up committee without consent.

“During filming, as the Imam’s family and the follow-up committee, we confirmed that the image is not of the Imam due to evident differences in the shape of the face, hair color, and other obvious distinctions,” the family said, adding that they had the confirmation “the moment we saw the video clip.”

Al-Sadr, founder of the Amal Movement, disappeared on Aug. 31, 1978, in Libya alongside two companions, a week after they arrived to meet with Libyan government officials. They were last seen leaving a Tripoli hotel in a government vehicle.

Despite various claims, including Libyan assertions that he traveled to Rome — which have been widely disproved — his fate remains unknown.

Many Lebanese Shia believe that then-Libyan leader Muammar Qaddafi ordered Al-Sadr’s killing, a claim that Libya has consistently denied.

The case has fueled deep political tensions between Lebanon and Libya and remains a highly sensitive and unresolved matter.

Some experts contend that Al-Sadr, an influential Iranian-Lebanese cleric, was on the verge of using his influence to guide Iran — and by extension the region — toward a more moderate path when he vanished on the eve of the Iranian revolution.

The BBC film, part of its Eye Investigations series, centers on testimony from Swedish-Lebanese reporter Kassem Hamade, who during the 2011 Arab Spring uprising claimed to have photographed a tall corpse in a secret Tripoli morgue resembling Al-Sadr. He argued that, despite decomposition, the skin tone, hair and facial features of the body still resembled Al-Sadr’s — the cleric stood at 1.98 meters.

He also took a hair sample, which was reportedly handed to the office of Lebanese parliamentary speaker Nabih Berri, but officials later said the sample was lost due to a “technical error.”

With what the family described as “full cooperation” with the BBC, providing photos, documents and resources, the outlet submitted Hamade’s photograph for AI analysis.

According to Professor Ugail, who carried out the analysis, the software indicated a “high probability” that the body was either Al-Sadr or a close relative, a claim firmly denied by Al-Sadr’s family.

His son, Sayyed Sadreddine Sadr, said the 2011 morgue photograph was “evident(ly)” not his father.

“It also contradicts the information we have after this date … that he is still alive, held in a Libyan jail,” he said — though no evidence was ever offered to support this claim.

To address further questions, Judge Hassan Shami, representing the official committee and the family, is scheduled to appear on BBC Arabic to provide clarification.


ChatGPT to get parental controls after teen’s death

Updated 04 September 2025

  • Parents Matthew and Maria Raine have filed a lawsuit alleging that a chatbot helped their 16-year-old son steal vodka and provided instructions for a noose he used to take his own life
  • OpenAI announced new safety tools, including age-appropriate response controls and notifications for detecting acute distress in children

PARIS: American artificial intelligence firm OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenaged son to kill himself.
“Within the next month, parents will be able to... link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the generative AI company said in a blog post.
Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.
Matthew and Maria Raine argue in a lawsuit filed last week in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.
The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it “could potentially suspend a human.”
Adam was found dead hours later, having used the same method.
“When a person is using ChatGPT it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint.
“These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers,” Dincer said.
Product design features set the scene for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said.
Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.
“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.
“It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”
The Raines’ case was just the latest in a string that have surfaced in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots — prompting OpenAI to say it would reduce models’ “sycophancy” toward users.
“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations... to a reasoning model” that puts more computing power into generating a response.
“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.


CNN launches new series spotlighting global trends and innovation

Updated 03 September 2025

  • ‘Seasons’ explores evolving tastes shaping global culture across fashion, travel, food, technology, design and art
  • First season focusing on Japan’s cultural influence debuts on Sept. 6

LONDON: CNN announced on Wednesday the launch of Seasons, a new series exploring shifting trends shaping global culture across fashion, travel, food, technology, design and art.

The series will highlight some of the world’s most sought-after products and experiences, going behind the brands to examine the craftsmanship, innovation and strategies driving demand.

Ellana Lee, CNN’s group senior vice president and global head of productions, said that Seasons aimed to capture “what’s resonating right now,” reflecting evolving tastes that prioritize rarity, relevance and storytelling.

“Encapsulating the trends sweeping the world, audiences can stay up-to-date through short video explainers of what’s capturing the moment or enjoy a beautiful, in-depth TV show.”

Hosted by Japanese model and creative director Hikari Mori, the first season focuses on Japan’s cultural influence, blending its pop art heritage with traditional crafts that have found a place in luxury fashion.

Early episodes will explore the art of haiku, local food culture, and traditional fabrics and materials, and will feature an interview with Japanese pop artist Takashi Murakami and an exclusive visit to his Tokyo studio.

The series includes short video explainers designed for social media alongside a deeper 30-minute show premiering on Sept. 6 on CNN International.