
With AI, Facebook can now proactively detect, remove harmful content

By: - Reporter / @KHallareINQ
/ 12:07 PM November 18, 2020


MANILA, Philippines — With the help of artificial intelligence, Facebook said it can now detect and remove content that goes against its community standards faster.

But first, what is artificial intelligence or AI?


This was best explained by Facebook community engineer Chris Palow in a virtual press briefing on Tuesday.


“It’s largely machines, intelligent machines, making decisions and doing some things that humans can do as well,” Palow explained. “Some decision-making that humans could do as well.”

“We use it in particular to solve the problem of whether or not a post, or an account, page or group, violates our community standards,” he added, explaining how the social media giant uses AI.

With that, how does the use of AI make the detection and removal of harmful content faster?

According to Ryan Barnes, Facebook community integrity project manager, there are three teams at Facebook that ensure the safety and security of the social media site.

First is content policy, which Barnes said is “the team that effectively writes our community standards. These are the rules on what is and isn’t allowed on Facebook.”

“This team includes people with expertise in areas like terrorism, child safety and human rights. They come from diverse fields, such as academia, law enforcement and even government,” Barnes said.


Next is global operations, the team that enforces these rules through human review, Barnes said.

“We have about 15,000 content reviewers, collectively speaking 50 languages and more, and they are based in sites in every major time zone so that they can review these reports 24/7,” she said of the global operations team.

And then there is the community integrity team, which Barnes said is responsible for “building the technology that enforces those community standards at scale across Facebook, Instagram, WhatsApp, Oculus and all of the different platforms that we support.”

In the early days of Facebook, Barnes said, harmful content was taken down manually, as the company relied heavily on user reports.

“So people would report content if they found it violating or problematic, and these reports were queued chronologically so that our content reviewers would review them, and if they were found violating, they were taken down,” she said.

But as Facebook started to invest in AI, taking down harmful content shifted to a proactive approach.

“We were trying to be more proactive in our approach and find content and remove it without relying on someone to report it to us,” Barnes said of the use of AI.

“And as the AI got better, we relied on automation more and more in this proactive approach. So technology would help us identify these problems, help make a decision and know which content violates or doesn’t violate,” she added.
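To picture the workflow Barnes describes, here is a minimal illustrative sketch, not Facebook's actual system: a hypothetical classifier (violation_score, with made-up thresholds and toy banned terms) scores posts proactively, auto-removes high-confidence violations and queues borderline cases for human reviewers.

```python
# Minimal sketch of proactive moderation (illustrative only; all names,
# thresholds and terms are hypothetical, not Facebook's real pipeline).
from dataclasses import dataclass
from queue import Queue

@dataclass
class Post:
    post_id: int
    text: str

def violation_score(post: Post) -> float:
    """Stand-in for a trained model: returns a rough probability
    that a post violates community standards."""
    banned_terms = {"spam-link", "scam-offer"}  # toy example only
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, hits / len(banned_terms))

REMOVE_THRESHOLD = 0.9   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.5   # send borderline posts to human review instead

review_queue: Queue = Queue()

def moderate(post: Post) -> str:
    score = violation_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"                # proactive takedown, no user report needed
    if score >= REVIEW_THRESHOLD:
        review_queue.put(post)          # human reviewers make the final call
        return "queued_for_review"
    return "allowed"

if __name__ == "__main__":
    print(moderate(Post(1, "Check out this scam-offer and spam-link now!")))  # removed
    print(moderate(Post(2, "Photos from our weekend hike")))                  # allowed
```

The two thresholds reflect the division of labor in the article: automation handles clear-cut cases at scale, while uncertain content still goes to the human review queue.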

Thanks to AI, the team also won’t have to spend too much time reviewing the same things on Facebook over and over, Barnes said.

RELATED STORIES:

AI and fact-checking: How Facebook deals with misinformation amid COVID-19


How to find legit COVID-19 info on Facebook, Messenger, Instagram, WhatsApp

TAGS: AI, Artificial Intelligence, Facebook, Social Media

