Tech firms unite against deceptive AI in elections
Artificial intelligence promises a new technological revolution that will boost productivity and expand human potential further than ever. However, AI can also become a powerful tool for spreading falsehoods and disrupting essential systems. In particular, malicious actors could use AI to create and spread fake, disparaging content about election candidates.
Elections are the backbone of democracies because they allow societies to change course by majority vote. That is why the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” was announced at the Munich Security Conference, bringing together 20 prominent technology companies to work against AI-generated misinformation.
How will tech firms fight against AI in elections?
The Tech Accord to Combat Deceptive Use of AI in 2024 Elections unites 20 large tech corporations, including Adobe, Google, IBM, and OpenAI, against AI-generated election misinformation. The signatories outlined eight commitments:
- Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate.
- Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content.
- Seeking to detect the distribution of this content on their platforms.
- Seeking to appropriately address this content detected on their platforms.
- Fostering cross-industry resilience to Deceptive AI Election Content.
- Providing transparency to the public regarding how the company addresses it.
- Continuing to engage with a diverse set of global civil society organizations and academics.
- Supporting efforts to foster public awareness, media literacy, and all-of-society resilience.
These commitments cover AI-generated audio, video, and images that fake or alter the appearance, voice, or actions of political candidates and other key figures. The accord’s scope also covers AI-generated content that spreads false information about voting logistics.
What are the accord’s critical pillars?
Microsoft’s Vice Chair and President, Brad Smith, wrote a blog post explaining the accord further. He listed and elaborated on three critical pillars:
- First, the accord’s commitments will make it harder for bad actors to use legitimate tools to create deepfakes. Signatories will deploy safety measures in AI services and attach provenance signals, such as metadata and watermarks, to AI-generated content.
- Second, the accord brings the tech sector together to detect and respond to deepfakes in elections. Microsoft will deploy its AI for Good Lab and Threat Analysis Center to improve detection of deceptive AI content in elections. Also, the Microsoft-2024 Elections webpage will let political candidates report deepfakes of themselves.
- Third, the accord will help advance transparency and build societal resilience to deepfakes in elections. Microsoft will publish an annual transparency report and support public awareness campaigns to help people spot deceptive AI in elections.
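To make the metadata idea in the first pillar concrete, here is a minimal sketch of how provenance metadata can be cryptographically bound to a piece of AI-generated media, in the spirit of the content-credential approach the accord references. This is an illustrative toy, not any signatory's actual implementation; the function names and the record fields are hypothetical.

```python
# Illustrative sketch only: binding a provenance record to media bytes
# via a content hash, so any later edit to the media invalidates the record.
import hashlib


def make_provenance_record(media_bytes: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record tied to the media's SHA-256 hash."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }


def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check that the media has not been altered since the record was created."""
    return record["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()


# Fake media bytes, purely for demonstration.
image = b"\x89PNG fake image bytes for illustration"
record = make_provenance_record(image, generator="example-image-model")

assert verify_provenance(image, record)             # untouched media passes
assert not verify_provenance(image + b"x", record)  # any edit breaks the hash
```

Real schemes such as C2PA additionally sign the record with the generator's private key so the metadata itself cannot be forged; the hash binding shown here is only the first layer.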
It might seem that artificial intelligence only harms societies, but it can also improve government services. Check out my other article to learn more about this AI application.