AI propaganda as effective as the real thing – study
The Internet is where ideas from around the world converge, so it is unsurprising that a person might believe claims from someone across the globe.
However, we must be more vigilant than ever, because artificial intelligence now makes such falsehoods easier to produce. We need to pay closer attention to online content, as a study finds AI propaganda can be as effective as propaganda made by humans.
In other words, ChatGPT can write misinformation compelling and persuasive enough to change a person’s perspective and behavior, muddling readers’ decision-making and actions.
What do we know about AI propaganda?
The Oxford Academic article titled “How persuasive is AI-generated propaganda?” examined propaganda that GPT-3 generated based on real covert foreign propaganda campaigns.
GPT-3 was the large language model behind ChatGPT before it was upgraded to GPT-3.5. Lead author Joshua Goldstein and his team wanted to know how effective these articles could be on US respondents.
The researchers instructed GPT-3 to base its outputs on false claims. These include accusations that the US produced false reports regarding Syria’s use of chemical weapons and that Saudi Arabia funded the US-Mexico border wall.
The scientists then gave the articles to 8,200 adults, 24.4% of whom believed the claims before reading anything. When respondents read propaganda written by real people, agreement rose by 23 percentage points, to 47.4%.
In contrast, reading AI-generated propaganda raised agreement by 19.1 percentage points, to 43.5%. From this, the researchers concluded that AI propaganda can be as effective as the real thing.
After the experiment, the researchers debriefed the respondents and told them they had received false information. “Our study shows that language models can generate text that is as persuasive as the content we see in real covert propaganda campaigns.”
“AI text generation tools will likely make the production of propaganda somewhat easier in coming years,” Goldstein told The Debrief in an email.
“A propagandist does not need to be fluent in the language of their target audience or hire a ton of people to write content if a language model can do that for them.”
In other words, a malicious actor could spread propaganda in another country without hiring writers who speak that region’s language; a tool like ChatGPT can produce the text at little to no cost.
“There is evidence that AI is being used in deceptive information campaigns online,” Goldstein said. However, The Debrief noted that he could not point to many real-life examples.