GRAPPLING WITH FAKE NEWS, HATE SPEECH

Curbing disinformation: Facebook removes 3 billion fake accounts

05:38 AM May 25, 2019

SAN FRANCISCO — Facebook removed more than 3 billion fake accounts from October to March, twice as many as the previous six months, the company said on Thursday.

Nearly all of them were caught before they had a chance to become “active” users of the social network.

In a new report, Facebook said it saw a “steep increase” in the creation of abusive, fake accounts. While most of these fake accounts were blocked “within minutes” of their creation, the use of computers to generate millions of accounts at a time meant not only that Facebook caught more of the fake accounts, but that more of them slipped through.

As a result, the company estimates that 5 percent of its 2.4 billion monthly active users are fake accounts, or about 119 million. This is up from an estimated 3 percent to 4 percent in the previous six-month report.

Objectionable material

The increase shows the challenges Facebook faces in removing accounts created by computers to spread spam, fake news and other objectionable material. Even as Facebook’s detection tools get better, so do the efforts by the creators of these fake accounts.

The new numbers come as the company grapples with challenge after challenge, ranging from fake news to Facebook’s role in election interference, hate speech and incitement to violence in the United States, Myanmar, India and elsewhere.

Facebook also said on Thursday that it removed 7.3 million posts, photos and other pieces of material because they violated its rules against hate speech. That’s up from 5.4 million in the prior six months.

The company said it found more than 65 percent of the hate speech it removed on its own, before users reported it, during the first three months of 2019. That’s an improvement from 52 percent in the third quarter of 2018.

Facebook is under growing pressure to combat hate on its platform, as material continues to slip through even with recent bans of popular extremist figures such as Alex Jones and Louis Farrakhan.

Thorny issue

Facebook employs thousands of people to review posts, photos, comments and videos for violations. Some material is also detected without human involvement, using artificial intelligence (AI). Both humans and AI make mistakes, and Facebook has been accused of political bias as well as ham-fisted removals of posts discussing—rather than promoting—racism.

A thorny issue for Facebook is its lack of procedures for authenticating the identities of those setting up accounts. Only in instances where a user has been booted off the service and won an appeal to be reinstated does it ask to see ID documents.

While some have argued for stricter authentication on social media services, the issue is thorny. People, including UN free expression rapporteur David Kaye, say it’s important to allow pseudonymous speech online for human rights activists and others whose lives could otherwise be endangered.

Dipayan Ghosh, a former Facebook employee and White House tech policy adviser who is currently a Harvard fellow, said without greater transparency from Facebook, there was no way of knowing whether its improved automated detection was doing a better job of containing the disinformation problem.

“We lack public transparency into the scale of disinformation operations on Facebook in the first place,” he said.

And even if just 5 million accounts slipped through the cracks, Ghosh added, how much hate speech and disinformation are they spreading through bots “that subvert the democratic process by injecting chaos into our political discourse?”

“The only way to address this problem in the long term is for government to intervene and compel transparency into these platform operations and privacy for the end consumer,” he said.

Zuckerberg call

Facebook CEO Mark Zuckerberg has called for government regulation to decide what should be considered harmful content and on other issues. But at least in the United States, government regulation of speech could run into First Amendment hurdles.

And what regulation might look like—and whether the companies, lawmakers, privacy and free speech advocates and others will agree on what it should look like—is not clear. —AP
