OpenAI says it stopped AI influence operations


10:23 AM June 03, 2024

People fear that AI could enable more powerful covert influence operations capable of compromising essential parts of society.

For example, state-backed agents may use artificial intelligence to spread convincing deepfakes to shift public opinion during elections.


Fortunately, OpenAI and other AI firms remain vigilant in thwarting these schemes. OpenAI itself recently took down five such schemes originating from different countries.


What were the AI influence schemes OpenAI caught?

On May 30, 2024, OpenAI published a report detailing the covert influence operations (IOs) it took down over the previous three months.

IOs abuse AI systems like OpenAI’s to manipulate public opinion or influence political outcomes. The San Francisco-based firm says it disrupted the following:

  • Bad Grammar is a Russia-based operation that posts short political comments in English and Russian on Telegram. It targets users in Ukraine, Moldova, the Baltic States, and the United States.
  • Doppelganger is another Russia-based AI influence operation that generates comments on X and 9GAG, a popular meme website. It also generates headlines and converts news articles into Facebook posts.
  • Spamouflage is a Chinese network that uses OpenAI’s models to research public social media activity. It then generates text for X, Medium, Blogspot, and other platforms and debugs code for database management.
  • The International Union of Virtual Media (IUVM) used AI to generate and translate long articles, which it posted on iuvmpress(.)co, the website of this Iranian threat actor.
  • Zero Zeno is an AI influence operation run by the Israel-based company STOIC. OpenAI makes this distinction because it disrupted only the IO, not the company itself. Zero Zeno used AI to generate articles and comments posted across multiple platforms.

OpenAI says it uncovered the usual AI methods for influencing the public. Because AI-powered online schemes operate worldwide, everyone should learn these common techniques:

  • Content generation: Threat actors use AI to generate text for nefarious schemes.
  • Mixing old and new: Online criminals merge novel and conventional methods, such as combining manually written text with AI-generated memes.
  • Faking engagement: Some AI influence networks use the technology to make it seem like they have many viewers and users. For example, these networks may generate replies to their own posts, creating the appearance of active engagement.
  • Productivity gains: Threat actors use AI to facilitate tasks like summarizing social media posts or debugging code.

Fortunately, AI firms are gaining new techniques to beat back these online threats. Here’s how they keep AI users safe worldwide:

  • Defensive design: AI firms design their models to refuse to generate potentially harmful text and media (see the sketch after this list).
  • AI-enhanced investigation: AI companies have also adjusted their models to make it easier to detect and analyze abusive usage.
  • Distribution: OpenAI says these IOs failed to attract a substantial audience despite posting on multiple platforms.
  • Industry sharing: AI firms share findings to keep the industry updated on the newest threats.
  • Human element: OpenAI explained that the humans behind these online schemes still make mistakes that artificial intelligence does not eliminate.
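
To make the defensive-design idea concrete, here is a minimal sketch of the general pattern: screening AI-generated text with OpenAI’s public moderation endpoint before it is published anywhere. This is an illustration of the technique, not OpenAI’s internal safeguards; the `screen_text` helper name and the example string are hypothetical, and the sketch assumes the official `openai` Python package with an `OPENAI_API_KEY` environment variable.

```python
# Sketch of "defensive design": refuse to publish generated text that a
# safety classifier flags. Assumes the official `openai` Python package
# and an OPENAI_API_KEY environment variable; `screen_text` is a
# hypothetical helper name used only for this illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_text(text: str) -> bool:
    """Return True if the text passes moderation, False if it is flagged."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    if result.flagged:
        # Record which policy categories were triggered for later analysis.
        categories = result.categories.model_dump()
        flagged = [name for name, hit in categories.items() if hit]
        print(f"Blocked: flagged for {flagged}")
        return False
    return True


if __name__ == "__main__":
    if screen_text("An example comment drafted by a language model."):
        print("Safe to publish.")
```

In practice, providers build such refusals into the models and APIs themselves; the point of the sketch is only the pattern of checking generated content against a safety classifier before it reaches an audience.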

