AI propaganda as effective as the real thing – study

06:14 PM February 22, 2024

The internet is where ideas from around the world converge, so it is unsurprising that a person might come to believe claims made by someone on the other side of the globe.

However, we must be more vigilant than ever, because artificial intelligence makes writing such falsehoods easier. We need to pay closer attention to online content, as a study finds that AI propaganda can be as effective as propaganda written by humans.

In other words, ChatGPT can write misinformation compelling and persuasive enough to change a person's perspective and behavior. Artificially generated falsehoods could, in turn, muddle people's decision-making and actions.

What do we know about AI propaganda?

The Oxford Academic study titled "How persuasive is AI-generated propaganda?" compared articles from real covert foreign propaganda campaigns with propaganda generated by GPT-3.

GPT-3 was the large language model that preceded the GPT-3.5 model behind ChatGPT. Lead author Joshua Goldstein and his team wanted to know how effective its output could be on US respondents.

The researchers instructed GPT-3 to base its outputs on false claims. These included accusations that the US produced false reports regarding Syria's use of chemical weapons and that Saudi Arabia funded the US-Mexico border wall.

Then the researchers showed the articles to 8,200 US adults. Before reading anything, 24.4% of respondents already believed the claims. After reading propaganda written by real people, 47.4% believed them, an increase of 23 percentage points.

In contrast, after reading AI-generated propaganda, 43.5% believed the claims, an increase of 19.1 percentage points. From this, the researchers concluded that AI propaganda can be as effective as the real thing.
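
For reference, the post-reading figures line up with the baseline belief rate plus the reported percentage-point increases (a quick back-of-the-envelope check, not part of the study's own presentation):

24.4% + 23.0 points = 47.4% (human-written propaganda)
24.4% + 19.1 points = 43.5% (AI-generated propaganda)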

After the experiment, the researchers told the respondents that the information they had read was false. "Our study shows that language models can generate text that is as persuasive as the content we see in real covert propaganda campaigns."

“AI text generation tools will likely make the production of propaganda somewhat easier in coming years,” Goldstein told The Debrief in an email.

“A propagandist does not need to be fluent in the language of their target audience or hire a ton of people to write content if a language model can do that for them.”

In other words, a malicious actor could spread propaganda in another country without hiring writers who speak that region's language; a tool like ChatGPT could generate the content at little to no cost.

"There is evidence that AI is being used in deceptive information campaigns online," Goldstein said, though he told The Debrief he was not aware of many real-life examples.
