‘Universal’ detector spots AI deepfake videos with record accuracy


A deepfake video of Australian Prime Minister Anthony Albanese on a smartphone

Australian Associated Press / Alamy

A universal deepfake detector has achieved the best accuracy to date at identifying several types of videos that have been manipulated or entirely generated by artificial intelligence. The technology could help flag non-consensual AI-generated pornography, deepfake scams or election disinformation videos.

The widespread availability of cheap, AI-powered deepfake creation tools has fuelled an uncontrolled spread of synthetic videos online. Many depict women – including celebrities and even schoolgirls – in non-consensual pornography. Deepfakes have also been used to influence political elections and to enhance financial scams targeting both ordinary consumers and business executives.

But most AI models trained to detect synthetic video focus on faces – which means they are most effective at spotting one specific type of deepfake, in which a real person's face is swapped into an existing video. “We need a model that can detect face-manipulated videos as well as background-manipulated or entirely AI-generated videos,” says Rohit Kundu at the University of California, Riverside. “Our model addresses exactly this concern – we assume that the whole video can be synthetically generated.”

Kundu and his colleagues trained their AI-powered universal detector to monitor multiple elements of video footage, not just people's faces. It can identify subtle signs of spatial and temporal inconsistencies in deepfakes. As a result, it can detect inconsistent lighting on people who have been artificially inserted into face-swap videos, background discrepancies in fully AI-generated videos, and even signs of AI manipulation in synthetic videos that contain no human faces at all. The detector also flags realistic video-game footage, such as scenes from Grand Theft Auto V, which is not necessarily AI-generated.
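The researchers' actual model is not described in detail here, but the general idea of a temporal-inconsistency signal can be illustrated with a deliberately naive sketch: compare successive frames and flag transitions whose pixel-level change spikes far above the video's typical frame-to-frame variation, as a crude splice might produce. The function names, threshold and toy data below are illustrative assumptions, not the detector described in the article.

```python
# Toy temporal-inconsistency check (illustrative only, NOT the researchers' method):
# flag frame transitions whose mean absolute pixel difference greatly exceeds
# the video's median frame-to-frame change.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two equal-size frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def flag_temporal_inconsistencies(frames, spike_factor=3.0):
    """Return indices of transitions whose change exceeds spike_factor
    times the median transition (a crude anomaly rule)."""
    diffs = [mean_abs_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    baseline = sorted(diffs)[len(diffs) // 2]  # median frame-to-frame change
    return [i for i, d in enumerate(diffs)
            if d > spike_factor * max(baseline, 1e-9)]

# Toy "video": 8-pixel grayscale frames; the fifth frame is an abrupt outlier.
video = [
    [10, 10, 10, 10, 10, 10, 10, 10],
    [11, 10, 11, 10, 11, 10, 11, 10],
    [12, 11, 12, 11, 12, 11, 12, 11],
    [13, 12, 13, 12, 13, 12, 13, 12],
    [90, 80, 90, 80, 90, 80, 90, 80],  # inserted inconsistent frame
    [14, 13, 14, 13, 14, 13, 14, 13],
]
print(flag_temporal_inconsistencies(video))  # → [3, 4]
```

A real system would of course learn such signals from data across many cues (lighting, backgrounds, motion) rather than rely on a hand-set threshold; this sketch only shows what "temporal inconsistency" means at the simplest level.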

“Most existing methods handle AI-generated face videos – such as face swaps, lip-sync videos or facial reenactments that animate a single image,” says Siwei Lyu at the University at Buffalo in New York. “This method has a broader range of applicability.”

The universal detector achieved between 95 and 99 per cent accuracy at identifying four test sets of face-manipulated videos – better than any other published method for detecting this type of deepfake. When checking fully synthetic videos, it also produced more accurate results than any other detector evaluated to date. The researchers presented their work at the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on 15 June.

Several Google researchers also participated in developing the new detector. Google did not answer questions about whether this detection method could help identify deepfakes on its platforms, such as YouTube. But the company is among those backing watermarking tools that make it easier to identify content generated by their AI systems.

The universal detector could also be improved in the future. For example, it would be useful if it could detect deepfakes deployed during live video-conferencing calls, a trick that some scammers have already begun to use.

“How do you know that the person on the other side is authentic, or whether it is a deepfake-generated video – and can this be determined even as the video travels over a network and is affected by network characteristics, such as the available bandwidth?” says Amit Roy-Chowdhury at the University of California, Riverside. “This is another direction we are exploring in our lab.”
