
Investigators find black boxes from crashed Russia plane

In this photo taken from video released by the Russian Investigative Committee on July 24, 2025, the crash site of the An-24 passenger plane of Siberia-based Angara Airlines, which was carrying 49 passengers, about 15 kilometers south of Tynda in Russia's far eastern Amur region. (AP)
Updated 25 July 2025

  • Investigators are looking into whether the crash was caused by technical malfunction or human error
  • Russian authorities have also launched an investigation into the plane’s operator

MOSCOW: Investigators have recovered flight data recorders from the wreckage of a plane that crashed in Russia’s far east, killing 48 people, and will send them for analysis, Russian authorities said Friday.

The aircraft, an Antonov-24 operated by Angara Airlines, was making a second attempt to land in the remote Siberian town of Tynda when it disappeared from radar around 1:00 p.m. local time (0400 GMT) on Thursday.

A rescue helicopter later spotted the burning fuselage of the plane on a forested mountain slope about 15 kilometers (nine miles) south of Tynda’s airport.

Prosecutors have not commented on what may have caused the crash, but a rescuer quoted by the TASS news agency said the twin-propeller plane — almost 50 years old — was attempting to land in thick cloud.

Investigators are looking into whether the crash was caused by technical malfunction or human error, the agency reported.

“The flight recorders have been found at the crash site and will be delivered to Moscow for decryption in the near future,” Russia’s transport ministry said in a statement.

Russian authorities have also launched an investigation into the plane’s operator, Angara Airlines, and whether it complied with regulations, it added.

“Based on the findings, a decision will be made on the company’s future operations,” the ministry said.

Angara Airlines, a small regional carrier based in the Siberian city of Irkutsk, said it was doing “everything possible to investigate the circumstances of the accident.”

The company’s CEO, Sergei Salamanov, told Russia’s REN TV channel on Thursday that it was the plane’s captain — an experienced pilot with 11,000 hours of flight time — who decided to make the flight.

“The weather forecast was unfavorable,” he said.

The plane came down in a hard-to-reach area and it took a ground rescue team hours to reach the site.

Russia’s transport ministry said the families of the 48 killed — six of whom were crew — would receive five million rubles ($63,000) in compensation each.

The number killed could have risen to 49 if Marina Avalyan, who was already seated on the plane, had not been urgently asked by her daughter to get off and return home, according to a report in the Argumenty i Fakty newspaper.

The daughter wanted Avalyan to look after her newborn baby, as she was taking her second child to a hospital, the daily said.

“I have no words to describe it: is this a miracle? Thank God she returned! My child has saved my mother,” the daughter, Zimina, told Argumenty i Fakty.


Denmark eyes new law to protect citizens from AI deepfakes


COPENHAGEN: In 2021, Danish video game live-streamer Marie Watson received an image of herself from an unknown Instagram account.
She instantly recognized the holiday snap from her Instagram account, but something was different: Her clothing had been digitally removed to make her appear naked. It was a deepfake.
“It overwhelmed me so much,” Watson recalled. “I just started bursting out in tears, because suddenly, I was there naked.”
In the four years since her experience, deepfakes — highly realistic artificial intelligence-generated images, videos or audio of real people or events — have become not only easier to make worldwide but also exponentially more realistic in look and sound. That’s thanks to technological advances and the proliferation of generative AI tools, including video generation tools from OpenAI and Google.
These tools give millions of users the ability to easily spit out content, including for nefarious purposes that range from fake depictions of celebrities such as Taylor Swift and Katy Perry to disrupting elections and humiliating teens and women.
Copyright law
In response, Denmark is seeking to protect ordinary Danes, as well as performers and artists who might have their appearance or voice imitated and shared without their permission. A bill that’s expected to pass early next year would change copyright law by imposing a ban on the sharing of deepfakes to protect citizens’ personal characteristics — such as their appearance or voice — from being imitated and shared online without their consent.
If enacted, Danish citizens would get the copyright over their own likeness. In theory, they then would be able to demand that online platforms take down content shared without their permission. The law would still allow for parodies and satire, though it’s unclear how that will be determined.
Experts and officials say the Danish legislation would be among the most extensive steps yet taken by a government to combat misinformation spread through deepfakes.
Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, said that he applauds the Danish government for recognizing that the law needs to change.
“Because right now, when people say ‘what can I do to protect myself from being deepfaked?’ the answer I have to give most of the time is: ‘There isn’t a huge amount you can do,’” he said, “without me basically saying, ‘scrub yourself from the Internet entirely.’ Which isn’t really possible.”
He added: “We can’t just pretend that this is business as usual for how we think about those key parts of our identity and our dignity.”
Deepfakes and misinformation
US President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped up regulations for social media platforms.
Danish Culture Minister Jakob Engel-Schmidt said that the bill has broad support from lawmakers in Copenhagen, because such digital manipulations can stir doubts about reality and spread misinformation.
“If you’re able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy,” he told reporters during an AI and copyright conference in September.
The right balance
The law would apply only in Denmark, and is unlikely to involve fines or imprisonment for social media users. But big tech platforms that fail to remove deepfakes could face severe fines, Engel-Schmidt said.
Ajder said Google-owned YouTube, for example, has a “very, very good system for getting the balance between copyright protection and freedom of creativity.”
The platform’s efforts suggest that it recognizes “the scale of the challenge that is already here and how much deeper it’s going to become,” he added.
Twitch, TikTok and Meta, which owns Facebook and Instagram, didn’t respond to requests for comment.
Engel-Schmidt said that Denmark, the current holder of the European Union’s rotating presidency, had received interest in its proposed legislation from several other EU members, including France and Ireland.
Intellectual property lawyer Jakob Plesner Mathiasen said that the legislation shows the widespread need to combat the online danger that’s now infused into every aspect of Danish life.
“I think it definitely goes to say that the ministry wouldn’t make this bill if there hadn’t been any occasion for it,” he said. “We’re seeing it with fake news, with government elections. We are seeing it with pornography, and we’re seeing it with famous people and everyday people — like you and me.”
The Danish Rights Alliance, which protects the rights of creative industries on the Internet, supports the bill, because its director says that current copyright law doesn’t go far enough.
Danish voice actor David Bateson, for example, was at a loss when AI voice clones were shared by thousands of users online. Bateson voiced a character in the popular “Hitman” video game, as well as Danish toymaker Lego’s English advertisements.
“When we reported this to the online platforms, they say ‘OK, but which regulation are you referring to?’” said Maria Fredenslund, an attorney and the alliance’s director. “We couldn’t point to an exact regulation in Denmark.”
‘When it’s online, you’re done’
Watson had heard about fellow influencers who found digitally altered images of themselves online, but never thought it might happen to her.
Delving into a dark side of the web where faceless users sell and share deepfake imagery — often of women — she said she was shocked how easy it was to create such pictures using readily available online tools.
“You could literally just search ‘deepfake generator’ on Google or ‘how to make a deepfake,’ and all these websites and generators would pop up,” the 28-year-old Watson said.
She is glad her government is taking action, but she isn’t hopeful it will be enough. She believes more pressure must be applied to social media platforms.
“It shouldn’t be a thing that you can upload these types of pictures,” she said. “When it’s online, you’re done. You can’t do anything, it’s out of your control.”