AI tools easily create poll lies in politicians’ voices – study

NEW YORK — As high-stakes elections approach in the United States (US) and European Union (EU), publicly available artificial intelligence (AI) tools can be easily weaponized to churn out convincing election lies in the voices of leading political figures, a digital civil rights group said Friday.

Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false statements about elections in the voices of eight prominent American and European politicians.

In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. In one clip, a fake US President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls.

The findings reveal a remarkable gap in safeguards against the use of AI-generated audio to mislead voters, a threat that increasingly worries experts as the technology has become both advanced and accessible. While some of the tools have rules or tech barriers in place to stop election disinformation from being generated, the researchers found many of those obstacles were easy to circumvent with quick workarounds.

Only one of the companies whose tools were used by the researchers responded after multiple requests for comment. ElevenLabs said it was constantly looking for ways to boost its safeguards.


With few laws in place to prevent abuse of these tools, the companies’ lack of self-regulation leaves voters vulnerable to AI-generated deception in a year of significant democratic elections around the world. EU voters head to the polls in parliamentary elections in less than a week, and US primary elections are ongoing ahead of the presidential election this fall.

“It’s so easy to use these platforms to create lies and to force politicians onto the back foot denying lies again and again and again,” said the center’s CEO, Imran Ahmed. “Unfortunately, our democracies are being sold out for naked greed by AI companies who are desperate to be first to market … despite the fact that they know their platforms simply aren’t safe.”

FILE PHOTO: President Joe Biden speaks during a campaign event at Girard College, May 29, 2024, in Philadelphia. A group that monitors for misinformation found deep problems when it tested the most popular artificial intelligence voice-cloning tools and asked them to create audio of some of the world’s leading political figures. (AP Photo/Evan Vucci, File)

The center – a nonprofit with offices in the US, United Kingdom (UK), and Belgium – conducted the research in May. Researchers used the online analytics tool Semrush to identify the six publicly available AI voice-cloning tools with the most monthly organic web traffic: ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed.

Next, they submitted real audio clips of the politicians speaking. They prompted the tools to impersonate the politicians’ voices making five baseless statements.

One statement warned voters to stay home amid bomb threats at the polls. The other four were various confessions – of election manipulation, lying, using campaign funds for personal expenses and taking strong pills that cause memory loss.

In addition to Biden and Macron, the tools made lifelike copies of the voices of US Vice President Kamala Harris, former US President Donald Trump, UK Prime Minister Rishi Sunak, UK Labour Leader Keir Starmer, European Commission President Ursula von der Leyen and EU Internal Market Commissioner Thierry Breton.

“None of the AI voice cloning tools had sufficient safety measures to prevent the cloning of politicians’ voices or the production of election disinformation,” the report said.

Some of the tools – Descript, Invideo AI and Veed – require users to upload a unique audio sample before cloning a voice, a safeguard to prevent people from cloning a voice that isn’t their own. Yet the researchers found that barrier could be easily circumvented by generating a unique sample using a different AI voice cloning tool.


One tool, Invideo AI, not only created the fake statements the center requested but extrapolated them to create further disinformation.

When generating the clip in which Biden’s voice clone warns people of a bomb threat at the polls, the tool added several sentences of its own.

“This is not a call to abandon democracy but a plea to ensure safety first,” the fake audio clip said in Biden’s voice. “The election, the celebration of our democratic rights, is only delayed, not denied.”

Overall, in terms of safety, Speechify and PlayHT performed the worst of the tools, generating believable fake audio in all 40 of their test runs, the researchers found.

ElevenLabs performed the best and was the only tool that blocked the cloning of UK and US politicians’ voices. However, the tool still allowed for the creation of fake audio in the voices of prominent EU politicians, the report said.

FILE PHOTO: European Commission President Ursula von der Leyen speaks in the ballroom of Muenster Town Hall, in Muenster, Germany, May 28, 2024. A group that monitors for misinformation found deep problems when it tested the most popular artificial intelligence voice-cloning tools and asked them to create audio of some of the world’s leading political figures. (Jana Rodenbusch/Pool Photo via AP)

Aleksandra Pedraszewska, Head of AI Safety at ElevenLabs, said in an emailed statement that the company welcomes the report and the awareness it raises about generative AI manipulation.

She said ElevenLabs recognizes there is more work to be done and is “constantly improving the capabilities of our safeguards,” including the company’s blocking feature.

“We hope other audio AI platforms follow this lead and roll out similar measures without delay,” she said.

The other companies cited in the report didn’t respond to emailed requests for comment.

The findings come as AI-generated audio clips have already been used in attempts to sway voters in elections across the globe.

In fall 2023, just days before Slovakia’s parliamentary elections, audio clips resembling the voice of the liberal party chief were shared widely on social media. The deepfakes purportedly captured him talking about hiking beer prices and rigging the vote.

Earlier this year, AI-generated robocalls mimicked Biden’s voice and told New Hampshire primary voters to stay home and “save” their votes for November. A New Orleans magician who created the audio for a Democratic political consultant demonstrated to the AP how he made it, using ElevenLabs software.

Experts say AI-generated audio has been an early preference for bad actors, in part because the technology has improved so quickly. Only a few seconds of real audio are needed to create a lifelike fake.

Yet other forms of AI-generated media are also concerning to experts, lawmakers and tech industry leaders. OpenAI, the company behind ChatGPT and other popular generative AI tools, revealed on Thursday that it had spotted and disrupted five online campaigns that used its technology to sway public opinion on political issues.

Ahmed, the CEO of the Center for Countering Digital Hate, said he hopes AI voice-cloning platforms will tighten security measures and be more proactive about transparency, including publishing a library of audio clips they have created so they can be checked when suspicious audio is spreading online.

He also said lawmakers need to act. The US Congress has not yet passed legislation regulating AI in elections. While the EU has passed a wide-ranging artificial intelligence law set to go into effect over the next two years, it does not address voice-cloning tools specifically.

“Lawmakers need to work to ensure there are minimum standards,” Ahmed said. “The threat that disinformation poses to our elections is not just the potential of causing a minor political incident, but making people distrust what they see and hear, full stop.”
