Google AI research says AI may distort reality

You’d think that tech companies would always have glowing opinions of their latest innovations. Yet, a recent Google AI research paper reported that artificial intelligence may distort how we see reality.

The arXiv paper is titled “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data.”

It says artificial intelligence is enabling people to spread misinformation, whether intentionally or not.

Modern AI programs are far from perfect, but they’re becoming highly effective in creating convincing text, photos, videos, and other media.

As a result, people find it harder to tell what’s real from what’s fabricated.

What did the Google AI research find?

Google researchers learned more about the impacts of artificial intelligence by studying existing reports on Generative AI (GenAI) misuse.

Then, they identified patterns common across these reports.

The paper states: “The widespread availability, accessibility, and hyperrealism of GenAI outputs across modalities have also enabled new, lower-level forms of misuse that blur the lines between authentic presentation and deception.”

In layman’s terms, AI tools have become so prevalent that more people can misuse the technology. After all, you can use most of these powerful tools without paying a centavo.

AI programs have become so advanced that they can make photorealistic images and videos.

However, most people do not properly disclose that their images and other media were generated by artificial intelligence.

As a result, it becomes difficult to distinguish real information and content from fake. Worse, this confusion has profound effects on society.

For instance, AI-generated media about political candidates makes it difficult for the public to discern the truth about their potential leaders.

The Google AI research also says more people are resorting to the “liar’s dividend,” the phenomenon where high-profile individuals dismiss any unfavorable information as AI-fabricated.

More importantly, artificial intelligence could make it harder for everyone to obtain reliable information on anything. As a result, it may keep society from addressing its problems.

If we cannot agree on facts regarding any issue, how can we start planning solutions?

That is why the Google AI research paper concludes by recommending a “multi-faceted approach to mitigating GenAI misuse.” 

We need “policymakers, researchers, industry leaders, and civil society” to collaborate regarding this issue.

Fortunately, the Philippines has a pending AI Bill that may help the country do so.