MANILA, Philippines—The spread of the novel coronavirus and misinformation seem to go hand in hand as the world deals with the COVID-19 pandemic. As Facebook struggles with a limited workforce amid the global lockdown, the social media company has shifted to automated content moderation and tapped third-party fact-checkers.
Since mid-March, when its content review workforce was sent home, Facebook has been relying more on machine learning to remove thousands of pieces of content that violate its policies, the social media giant revealed in a virtual briefing on Tuesday.
Despite this, Facebook’s internal employees remain in charge of reviewing the most sensitive types of content, which include child exploitation, terrorism, suicide and self-harm, as well as COVID-19-related misinformation that could contribute to imminent physical harm.
Facebook’s community standards have also been updated in response to COVID-19, with pandemic-related content that falls under coordinating harm, hate speech, or bullying and harassment now subject to removal.
The updated policies read as follows:
- Coordinating Harm: We remove content that encourages self-injury or real-world harm. In the COVID-19 context, this includes content that encourages the further spread of COVID-19, for example, events encouraging people to break quarantine and spread COVID-19. Content in all of these areas will be removed when we receive an escalation via news reports, trusted partners or multiple user reports.
- Hate Speech: We are removing content that claims a protected characteristic has, is spreading, or is responsible for the existence of, COVID-19 or that mocks a protected characteristic for having COVID-19.
- Bullying and Harassment: We remove content that targets people maliciously, including content that claims a private individual has COVID-19 or is not following guidance to self-isolate, when such information is not publicly available or self-declared.
As the situation evolves, Facebook said it will continue to look at content on the platform, assess speech trends, and engage with experts, and will provide additional policy guidance when appropriate to keep members of its community safe during the crisis.
Facebook’s ads policy has also been updated to prevent people from exploiting the pandemic, banning ads for products that claim to have COVID-19 healing properties, as well as ads for overpriced essential items such as face masks, disinfectants and hand sanitizers.
Fact-checking and machine learning
Aside from working with established health authorities like the World Health Organization (WHO) and the Department of Health (DOH), the social media giant has tapped 60 fact-checking partners that cover over 50 languages worldwide.
Facebook product policy manager Alice Budisatrijo noted that in March alone, the platform removed “hundreds of thousands” of pieces of content that could incite harm, while fact-checkers monitored less harmful content such as conspiracy theories.
Once a post has been fact-checked and rated false, Facebook reduces its distribution on the platform so that fewer users see it. The post is also labeled with a warning that it is false, with a link to a debunking article written by the fact-checker.
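To illustrate the mechanics, here is a minimal sketch of how such a rate-demote-and-label step could work; the class, the demotion factor and the label text are assumptions for this example, not details Facebook has disclosed.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names, the demotion factor and the label
# text are assumptions for this example, not Facebook's published internals.

@dataclass
class Post:
    post_id: str
    rating: Optional[str] = None        # e.g. "false", set by a fact-checker
    distribution_weight: float = 1.0    # 1.0 = normal reach in feed ranking
    warning_label: Optional[str] = None
    debunk_url: Optional[str] = None    # link to the fact-checker's article

def apply_fact_check(post: Post, rating: str, debunk_url: str) -> None:
    """Record a fact-checker's rating, demote the post and attach a warning."""
    post.rating = rating
    if rating == "false":
        post.distribution_weight *= 0.2  # hypothetical demotion factor
        post.warning_label = "False information. Checked by independent fact-checkers."
        post.debunk_url = debunk_url

post = Post(post_id="12345")
apply_fact_check(post, "false", "https://example.org/debunk/hot-water-claim")
print(post.distribution_weight, post.warning_label)
```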
In March, around 40 million COVID-19-related posts were found to be false and given warning labels, based on some 4,000 articles written by Facebook’s independent fact-checkers. In the Philippines, Facebook’s third-party fact-checking partners are Agence France-Presse, Rappler, and Vera Files.
“We make sure that they don’t go viral. So that’s how, in the end, we really limit the spread of misinformation on the platform,” Budisatrijo noted.
Facebook has also provided a $2-million grant to the International Fact-Checking Network to expand its capacity to monitor content during the COVID-19 outbreak.
“For every piece of content rated false by a fact-checker, we put it under a similarity detection method to find other content that match that kind of content, essentially duplicate content. And we apply the false ratings on those content as well,” Budisatrijo explained.
“And the way machine learning works is, it works based on what is fed into the machine. And so, with all these content that are rated false under the similarity detection method, we feed it back into the machine, so that the machine gets better in identifying other misinformation in relation to COVID-19,” she added.
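As a rough illustration of the similarity matching Budisatrijo describes, the sketch below propagates a false rating to near-duplicate posts using word-shingle overlap; the shingle-and-Jaccard approach, the threshold and all names here are assumptions, since Facebook has not published the details of its production models.

```python
import re

# Simplified illustration of similarity-based duplicate detection: a post
# rated false is compared against new posts, and close matches inherit the
# rating. The shingle/Jaccard approach is an assumption for this example,
# not Facebook's actual model.

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word chunks for fuzzy matching."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a and b else 0.0

def propagate_false_rating(debunked: str, candidates: list, threshold: float = 0.8) -> list:
    """Return candidate posts similar enough to inherit the 'false' rating."""
    reference = shingles(debunked)
    return [c for c in candidates if jaccard(reference, shingles(c)) >= threshold]

debunked = "Drinking hot water cures the virus, doctors confirm"
posts = [
    "BREAKING: drinking hot water cures the virus, doctors confirm",  # near-duplicate
    "Local hospital opens a new testing center downtown",             # unrelated
]
print(propagate_false_rating(debunked, posts))
```

In a production system, the matched duplicates would then be fed back as labeled training data so the model improves at spotting COVID-19 misinformation, as Budisatrijo describes.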
Facebook’s family of apps also offers convenient ways to find reliable information, directing users to legitimate sources worldwide such as the WHO, the DOH and the United Nations Children’s Fund.