Fooling deepfake detectors is possible according to American scientists

05:16 PM February 11, 2021

Image: Ekkasit919/Getty Images

Deepfakes still have a bright future ahead of them, it would seem. It is still possible to thwart even the most highly developed deepfake detectors, according to scientists at the University of California San Diego. By inserting “adversarial examples” into each frame of a video, the artificial intelligence behind these detectors can be fooled. It is an alarming observation for researchers who are pushing to improve detection systems to better spot these faked videos.

At WACV 2021 (Winter Conference on Applications of Computer Vision), held from January 5 to 9, scientists from the University of California San Diego demonstrated that deepfake detectors have a weak point. By embedding “adversarial examples” in every frame of a video, an attacker can cause the artificial intelligence to make a mistake and classify a deepfake video as real. These “adversarial examples” are inputs that have been slightly manipulated to make the artificial intelligence err. To recognize deepfakes, detectors focus on facial features, especially eye movements such as blinking, which are usually poorly reproduced in these fake videos.
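
To illustrate the core idea, below is a minimal sketch of an adversarial perturbation in Python, using the generic fast gradient sign method rather than the researchers’ actual attack. The detector model, the assumption that label 0 means “real,” and the pixel budget epsilon are all hypothetical placeholders.

# A minimal sketch of the "adversarial example" idea using the fast
# gradient sign method (FGSM). This is a generic illustration, NOT the
# researchers' actual attack: the detector, the label convention
# (0 = real), and the pixel budget `epsilon` are all assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, frame, epsilon=2/255):
    """Nudge each pixel of `frame` so `detector` leans toward 'real'.

    detector: any differentiable model mapping frames to [real, fake] logits.
    frame:    a (1, 3, H, W) tensor with values in [0, 1].
    epsilon:  maximum per-pixel change, kept tiny so the edit is invisible.
    """
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame)
    # Loss is high when the detector says "fake"; its gradient tells us
    # which pixel changes push the prediction toward "real" (label 0 here).
    target_real = torch.zeros(frame.size(0), dtype=torch.long)
    loss = F.cross_entropy(logits, target_real)
    loss.backward()
    # Step *against* the gradient to lower the loss toward the "real"
    # label, then clamp back into the valid pixel range.
    adversarial = frame - epsilon * frame.grad.sign()
    return adversarial.clamp(0, 1).detach()

Applied independently to every frame, a perturbation like this leaves the video visually unchanged while shifting the detector’s prediction.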

This method even works on videos that have been compressed, a process that until now had stripped out such manipulated elements, the American scientists said. Even without access to the detector model, deepfake creators who used these “adversarial examples” were able to thwart the vigilance of the most sophisticated detectors. This is the first work to demonstrate successful attacks on deepfake detectors, the scientists said.
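
The black-box setting, in which the attacker never sees the detector’s internals, can be pictured with a toy query-based search like the sketch below. This is only an illustration, not the researchers’ method: the score(frame) interface (returning the detector’s probability that a frame is real), the pixel budget epsilon, and the step count are all assumptions, and practical attacks use far more sample-efficient gradient estimators.

# A toy sketch of a black-box attack: the attacker can only query the
# detector for a "real" probability, with no access to gradients.
# Hypothetical interface: `score(frame)` returns that probability.
import numpy as np

def black_box_perturb(score, frame, epsilon=2/255, steps=500, rng=None):
    """Randomly search for a tiny perturbation that raises the 'real' score.

    score:   callable mapping an H x W x 3 array in [0, 1] to a float.
    frame:   the deepfake frame to disguise.
    epsilon: maximum per-pixel change, kept small to stay invisible.
    """
    rng = rng or np.random.default_rng(0)
    best_delta = np.zeros_like(frame)
    best_score = score(np.clip(frame + best_delta, 0.0, 1.0))
    for _ in range(steps):
        # Propose a small random tweak to the current perturbation,
        # kept inside the per-pixel budget.
        candidate = best_delta + rng.normal(scale=epsilon / 4, size=frame.shape)
        candidate = np.clip(candidate, -epsilon, epsilon)
        s = score(np.clip(frame + candidate, 0.0, 1.0))
        if s > best_score:  # keep the tweak only if it fools the detector more
            best_delta, best_score = candidate, s
    return np.clip(frame + best_delta, 0.0, 1.0)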

The scientists are sounding the alarm and recommending that detection software be trained against these specific modifications, and thus against this new breed of deepfakes: “To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote. NVG

RELATED STORIES:

A deepfake bot generated ‘nude’ pictures of over 100,000 women

Maine Mendoza denies involvement in scandalous video; camp to sue perpetrators of ‘deepfake’ manipulation

TOPICS: Artificial Intelligence, Deepfake

