
How to spot AI deepfake scams

10:30 AM October 05, 2023

What do Tom Hanks, Gayle King, and MrBeast have in common? They were recent victims of AI deepfake scams that impersonated them to trick people into buying products and services. In response, they swiftly warned the public that they had nothing to do with these fraudulent ads. However, similar schemes will inevitably emerge in the near future.

Creating these convincing scam videos and images has become much easier with the rise of generative artificial intelligence. Numerous free apps let anyone produce such material in seconds with a few word prompts. Fortunately, there are ways to distinguish AI-generated content from the genuine article. Even better, we’ll cover them in this article!

I will discuss how to detect AI deepfake scams based on advice from experts worldwide. Later, I will explain how these scams could significantly harm society, to illustrate why these tips matter.


How to detect AI deepfake scams


A recent study published in Proceedings of the National Academy of Sciences USA says detecting AI-generated scams is becoming more difficult. Yet telecom firm Telefónica says you can still spot them by looking for the following characteristics:

  • Blinking: Focus on how often the person in a video blinks. Deepfakes typically blink more frequently than real people. (A rough blink-counting sketch follows this list.)
  • Face and body: Simulating an entire human body with AI takes time and effort, so scammers usually focus on the face. If the body has odd proportions, that is another sign it could be an AI deepfake scam.
  • Video length: Most AI-generated scams are only a few seconds long, like TikTok clips.
  • Video sound: AI scams usually have lip movements that are out of sync with the audio.
  • Inside the mouth: AI image generators struggle to simulate the inside of the mouth, so they often blur it.
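
None of these checks require programming, but curious readers can roughly automate the blink tip. The sketch below is a minimal heuristic, not Telefónica’s method: it uses OpenCV’s stock Haar cascades and counts a blink each time the detector momentarily loses the eyes inside a detected face. The video filename is a placeholder, and the result is only a crude estimate to compare against a normal resting human rate of roughly 15 to 20 blinks per minute.

# Rough blink-rate heuristic (illustrative only, not a production deepfake detector).
# Counts a "blink" whenever the eye detector stops finding eyes inside a detected face.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blinks_per_minute(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back to 30 fps if metadata is missing
    frames, blinks, eyes_closed = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue                           # no detectable face in this frame
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.3, 5)
        if len(eyes) == 0 and not eyes_closed:
            blinks += 1                        # eyes just vanished: count one blink
            eyes_closed = True
        elif len(eyes) > 0:
            eyes_closed = False                # eyes visible again
    cap.release()
    minutes = (frames / fps) / 60 if frames else 1
    return blinks / minutes

print(estimate_blinks_per_minute("suspect_clip.mp4"))  # placeholder filename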


Singaporean experts also shared their insights on Channel News Asia (CNA). Trend Micro’s Singapore country manager, David Ng, tells people to watch out for atypical facial movements or blinking patterns around the face.

National University of Singapore (NUS) Associate Professor Terence Sim advised viewers to listen for inconsistent audio. He also told CNA that people should be wary of three factors:

  • Physical artifacts: Professor Sim says these are visual imperfections or glitches. (A simple error-level analysis sketch follows this list.)
  • Semantic features: These are unnatural behaviors exhibited by a video’s subject. For example, a speaker might look away from the camera when stating facts.
  • Content: The professor cited a video in which an AI-generated Elon Musk talked about “working for the company” while sharing his experiences. “You don’t say this if you’re the owner” of the business, Sim noted.
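
For still photos, one rough way to surface the physical artifacts Professor Sim describes is error-level analysis: re-save the picture as a JPEG and amplify the difference with the original, since pasted or generated regions often recompress differently. The sketch below uses the Pillow library; the filenames and quality setting are placeholders, and bright patches in the output are only a hint worth a closer look, not proof of tampering.

# Simple error-level analysis (ELA) sketch: regions that recompress differently
# from the rest show up brighter in the output and may point to edited areas.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, out_path="ela_result.png", quality=90):
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)   # recompress once
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)          # per-pixel error
    max_diff = max(hi for _, hi in diff.getextrema()) or 1   # brightest error value
    ImageEnhance.Brightness(diff).enhance(255.0 / max_diff).save(out_path)

error_level_analysis("suspect_image.jpg")  # then inspect ela_result.png for bright patches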

What are the effects of AI deepfakes?

Image depicting the societal impact of AI deepfakes. (Photo credit: voicebot.ai)

Celebrities feel the immediate impact of AI deepfake scams because the fakes can ruin their reputations. That is why impersonated figures clarify such situations immediately.

For example, Tom Hanks warned on Instagram, “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”


The first-ever US Senate hearing on artificial intelligence pointed out the potential of AI deepfake scams to spread misinformation. Senator Blumenthal played a deepfake of himself to illustrate that threat.

TechTalks said AI scams may also trigger a “Mandela Effect,” which refers to false beliefs that persist despite overwhelming evidence to the contrary. The term originated from a fake story claiming Nelson Mandela had died in prison.

In reality, he was released from prison in 1990 and became South Africa’s president. The fake story stuck in people’s minds, so most ignored the real one. AI deepfake scams could have a similar effect.


They are so lifelike that some people may not think to question their authenticity. Also, AI scams could facilitate social engineering, which manipulates people into taking specific actions.

For example, a convincing AI-generated video could fool people into sending money to fraudsters. More importantly, such fakes may erode public trust in any information.

If you live in a world without reliable information, you will likely ignore every source. As a result, AI could destroy healthy public discourse and disrupt collaboration on solving pressing issues.

Conclusion

AI deepfakes are becoming more prevalent worldwide. Nowadays, creating fake images and videos is easy using numerous free online services.

Fortunately, a trained eye can tell fakes from genuine content. Refer to the list above whenever you encounter a suspicious picture or video to avoid being misled.

Protecting yourself from the latest online scams requires constant learning as hackers develop new tricks. Follow Inquirer Tech for the latest digital tips and trends.

Frequently asked questions about AI deepfake scams

What are the negative effects of AI deepfakes?

Deepfakes could spread misinformation or defame prominent personalities. They could disrupt elections, public discourse, and other important aspects of society. They also make it harder to distinguish truth from falsehood. Believe it or not, Pope Francis issued a global message with those same warnings.

How do I spot deepfakes?

Look for strange behaviors and movements in the video’s subject to see if it’s fake. If the subject moves only their face and nothing else, the clip is likely an AI deepfake. Also, pay attention to what they’re saying to catch potential errors. Check out the list above for more tips.


What should I do if I spot an AI deepfake scam?

If you suspect you have found an AI deepfake scam, don’t share it with others. Otherwise, you would be helping the fraudster trick more people on the Internet. Most countries now have cybercrime divisions that handle such cases. For example, Filipinos may contact the Cybercrime Investigation and Coordinating Center (CICC) for assistance.

TOPICS: AI, Cybersecurity, how-to, interesting topics

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
