
Can artificial intelligence encourage good behavior among internet users?

05:44 PM September 25, 2020


According to a study, nearly 30% of internet users modified potentially offensive comments after receiving a nudge from an algorithm. Image: Shutterstock/Khosro via AFP Relaxnews

Hostile and hateful remarks are thick on the ground on social networks in spite of persistent efforts by Facebook, Twitter, Reddit and YouTube to tone them down. Now researchers at the OpenWeb platform have turned to artificial intelligence to moderate internet users’ comments before they are even posted.

The study conducted by OpenWeb and Perspective API analyzed 400,000 comments that some 50,000 users were preparing to post on sites like AOL, Salon, Newsweek, RT and Sky Sports.


Some of these users received a feedback message, or nudge, from a machine learning algorithm warning that the text they were about to post might be insulting or against the rules of the forum they were using. Rather than rejecting comments it flagged as suspect, the moderation algorithm invited their authors to reformulate what they had written.
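The article does not detail OpenWeb's implementation, but the general pattern it describes — score a draft comment for toxicity before it is posted and, above some threshold, show the author a nudge instead of a rejection — can be sketched with the Perspective API used in the study. In this Python sketch the endpoint and request format follow Perspective's public documentation, while the 0.8 threshold, the function names and the nudge text are illustrative assumptions, not values reported by OpenWeb.

```python
# Minimal sketch of a pre-submission "nudge" check using Google's Perspective API.
# The threshold and messages below are illustrative assumptions.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for a draft comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def nudge_or_accept(draft: str, api_key: str, threshold: float = 0.8) -> str:
    """Show a nudge instead of rejecting the flagged comment outright."""
    if toxicity_score(draft, api_key) >= threshold:
        return ("Let's keep the conversation civil. "
                "Please remove any inappropriate language from your comment.")
    return "Comment accepted."
```

In a comment pipeline built along these lines, the nudge would appear in the submission form itself, giving the author a chance to edit before the comment reaches the moderation queue.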


“Let’s keep the conversation civil. Please remove any inappropriate language from your comment,” read one such prompt; another asked, “Some members of the community may find your comment offensive. Try again?”

In response to this kind of feedback, a third of internet users (34%) immediately modified their comments, while 36% went ahead and posted them anyway, accepting the risk that the moderating algorithm would reject them. Even more surprisingly, some users made modifications that did not necessarily make their comments kinder or less hostile.


Using tricks to get around the algorithm


While close to 30% of users opted to accept the feedback message and delete potentially offensive text from their comments, more than a quarter (25.8%) attempted to dupe the moderating algorithm.


Deliberate spelling errors and adding spaces between letters were just two of the tricks they used to modify the form of their comments while leaving their content unchanged.
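The study does not show these evasion tricks in code, but a minimal sketch, assuming a hypothetical blocklisted word and a deliberately naive keyword filter, illustrates why spacing and character swaps can slip past a surface-level check and how basic text normalization narrows the gap:

```python
# Illustrative sketch (not from the study): spacing and character substitutions
# evade a naive keyword filter; normalizing the text first catches them again.
import re

BLOCKLIST = {"idiot"}  # hypothetical blocklisted term, for illustration only


def naive_flag(text: str) -> bool:
    """Flag a comment only if a blocklisted word appears verbatim."""
    return any(word in text.lower() for word in BLOCKLIST)


def normalized_flag(text: str) -> bool:
    """Strip spaces/punctuation and undo common character swaps before checking."""
    squashed = re.sub(r"[\s\W_]+", "", text.lower())
    squashed = squashed.translate(str.maketrans("013", "oie"))  # 0->o, 1->i, 3->e
    return any(word in squashed for word in BLOCKLIST)


print(naive_flag("what an i d i o t"))       # False: extra spaces evade the filter
print(normalized_flag("what an i d i o t"))  # True: normalization catches it
```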

The 400,000 comments analyzed in the study are, however, a mere drop in the ocean when compared to the millions that are posted daily on the internet, some of which carry offensive and insulting language.


Faced with this situation, tech giants are boosting their efforts to combat online hate more effectively. It is a fight in which artificial intelligence can make a useful but, for now at least, imperfect contribution. RGA


Initially published on September 25, 2020. Updated on August 19, 2023.
TOPICS: Artificial Intelligence, Behavior, hate speech, machine learning, Social Media
