FB cracks down on harmful real networks, using playbook against fakes | Inquirer Technology


/ 08:15 AM September 17, 2021


FILE PHOTO: Attendees walk past a Facebook logo during Facebook Inc’s F8 developers’ conference in San Jose, California, U.S., on April 30, 2019. REUTERS/Stephen Lam/File Photo

Facebook is taking a more aggressive approach to shut down coordinated groups of real-user accounts engaging in certain harmful activities on its platform, using the same strategy its security teams take against campaigns using fake accounts, the company told Reuters.

The new approach, reported here for the first time, uses the tactics usually taken by Facebook's security teams for wholesale shutdowns of networks engaged in influence operations that use false accounts to manipulate public debate, such as Russian troll farms.


It could have major implications for how the social media giant handles political and other coordinated movements breaking its rules, at a time when Facebook's approach to abuses on its platforms is under heavy scrutiny from global lawmakers and civil society groups.


Facebook said it now plans to take this same network-level approach to groups of coordinated real accounts that systemically break its rules, whether through mass reporting (where many users falsely report a target's content or account to get it shut down) or brigading (a type of online harassment in which users coordinate to target an individual through mass posts or comments).

In a related change, Facebook said on Thursday that it would be taking the same type of approach to campaigns of real users that cause "coordinated social harm" on and off its platforms, as it announced a takedown of the German anti-COVID restrictions Querdenken movement.


These expansions, which a spokeswoman said were in their early stages, mean Facebook's security teams could identify the core movements driving such behavior and take more sweeping actions than removing posts or individual accounts as the company otherwise might.


In April, BuzzFeed News published a leaked Facebook internal report about the company’s role in the January 6 riot on the U.S. Capitol and its challenges in curbing the fast-growing “Stop the Steal” movement, where one of the findings was Facebook had “little policy around coordinated authentic harm.”


Facebook's security experts, who are separate from the company's content moderators and handle threats from adversaries trying to evade its rules, started cracking down on influence operations using fake accounts in 2017, following the 2016 U.S. election in which U.S. intelligence officials concluded Russia had used social media platforms as part of a cyber-influence campaign, a claim Moscow has denied.

Facebook dubbed this banned activity by the groups of fake accounts “coordinated inauthentic behavior” (CIB), and its security teams started announcing sweeping takedowns in monthly reports. The security teams also handle some specific threats that may not use fake accounts, such as fraud or cyber-espionage networks or overt influence operations like some state media campaigns.


Sources said teams at the company had long debated how it should intervene at a network level against large movements of real user accounts systemically breaking its rules.

In July, Reuters reported on the Vietnam army's online information warfare unit, whose members engaged in actions including mass reporting of accounts to Facebook but often used their real names. Facebook removed some accounts over these mass reporting attempts.

Facebook is under increasing pressure from global regulators, lawmakers, and employees to combat wide-ranging abuses of its services. Others have criticized the company over allegations of censorship, anti-conservative bias or inconsistent enforcement.

An expansion of Facebook's network disruption models to affect authentic accounts raises further questions about how the changes might impact public debate, online movements, and campaign tactics across the political spectrum.

“A lot of the time problematic behavior will look very close to social movements,” said Evelyn Douek, a Harvard Law lecturer who studies platform governance. “It’s going to hinge on this definition of harm … but obviously people’s definitions of harm can be quite subjective and nebulous.”


High-profile instances of coordinated activity around last year’s U.S. election, from teens and K-pop fans claiming they used TikTok to sabotage a rally for former President Donald Trump in Tulsa, Oklahoma, to political campaigns paying online meme-makers, have also sparked debates on how platforms should define and approach coordinated campaigns.

TOPICS: Facebook, Social Media


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
