OpenAI says it stopped AI influence operations
Many people fear AI could enable more powerful covert influence operations that compromise essential parts of society.
For example, state-backed agents may use artificial intelligence to spread convincing deepfakes to shift public opinion during elections.
Fortunately, OpenAI and other AI firms remain vigilant in thwarting these schemes. OpenAI, for its part, recently took down five such operations originating from different countries.
What were the AI influence schemes OpenAI caught?
On May 30, 2024, OpenAI published a report detailing covert influence operations (IOs) it disrupted over the previous three months.
IOs abuse AI systems like OpenAI’s to manipulate public opinion or influence political outcomes. The San Francisco-based firm says it disrupted the following:
- Bad Grammar is a Russia-based operation that posts short political comments in English and Russian on Telegram. It targets users in Ukraine, Moldova, the Baltic States, and the United States.
- Doppelganger is another Russia-based AI influence operation that generates comments on X and 9GAG, a popular meme website. It also generates headlines and converts news articles into Facebook posts.
- Spamouflage is a China-based network that uses OpenAI’s models to research public social media activity. It then generates text for X, Medium, Blogspot, and other platforms, and debugs code for database management.
- The International Union of Virtual Media (IUVM) is an Iranian threat actor that used AI to generate and translate long-form articles, which it then posted on its website, iuvmpress(.)co.
- Zero Zeno is an AI influence operation run by the Israel-based company STOIC. OpenAI makes this distinction because it disrupted only the operation, not the company itself. Zero Zeno used AI to generate articles and comments posted across multiple platforms.
Offensive and defensive AI influence trends
OpenAI says these operations relied on familiar AI techniques for influencing the public. AI-powered online schemes operate worldwide, so everyone should learn to recognize these common methods:
- Content generation: Threat actors use AI to generate large volumes of text for their schemes.
- Mixing old and new: Online criminals merge novel and conventional methods, such as mixing manually written texts with AI-generated memes.
- Faking engagement: Some AI influence networks use the technology to appear as though they have many viewers and users. For example, these networks may generate replies to their own posts, creating the illusion of active engagement.
- Productivity gains: Threat actors use AI to facilitate tasks like summarizing social media posts or debugging code.
Fortunately, AI firms are gaining new techniques to beat back these online threats. Here’s how they keep AI users safe worldwide:
- Defensive design: AI firms build their models to refuse to generate potentially harmful text and media.
- AI-enhanced investigation: AI companies have also built AI-powered tools that make it easier to detect and analyze abusive usage.
- Distribution: OpenAI says these IOs failed to attract a substantial audience despite posting across multiple platforms.
- Industry sharing: AI firms share findings to keep the industry updated on the newest threats.
- Human element: OpenAI notes that the humans behind these schemes are still prone to mistakes that AI does not fix; for example, some accidentally posted their AI model’s refusal messages on social media.