Tech firms struggle to police content while avoiding bias

01:42 PM August 28, 2019

WASHINGTON — Take the post down. Put it back up. Stop policing speech. Start silencing extremists.

That’s just a sampling of the intense, often contradictory demands facing tech companies and their social media platforms as they try to oversee internet content without infringing on First Amendment rights. The pendulum has swung recently toward restricting hateful speech that could spawn violence, following a mass shooting in Texas where the suspect had posted a racist screed online.

For Facebook, Google, Twitter, and others, it’s a no-win whipsaw, amplified by a drumbeat of accusations from President Donald Trump and his allies that their platforms are steeped in anti-conservative bias. With lawmakers and regulators in Washington poring over their business practices, the tech companies are anxious to avoid missteps – but finding criticism at every turn.

“There’s a thin line between disgusting and offensive speech, and political speech you just don’t like. People are blurring the lines,” said Jerry Ellig, a professor at George Washington University’s Regulatory Studies Center who was a policy official at the Federal Trade Commission.

Companies operating social media platforms have long enjoyed broad legal immunity for posted content. Under Section 230 of the 1996 Communications Decency Act, they have a legal shield both for content they carry and for removing postings they deem offensive. Be it social media posts, uploaded videos, user reviews of restaurants or doctors, or classified ads – the shelter from lawsuits and prosecution has been a tent pole of social networking, and undoubtedly contributed to its growth.

But in the current climate of hostility toward Big Tech, that legal protection is getting a second look.

Legislation proposed last spring by Republican Senator Josh Hawley of Missouri, an outspoken conservative critic, would require the companies to prove to regulators that they’re not using political bias to filter content. Failing to secure a bias-free audit from the government would mean a social media platform loses its immunity from legal action. It remains to be seen whether such a system could pass muster under the First Amendment.

Hawley’s legislation drew pushback from Michael Beckerman, who heads the major trade group Internet Association. He said it forces the platforms “to make an impossible choice: either host reprehensible, but First Amendment-protected speech, or lose legal protections that allow them to moderate illegal content like human trafficking and violent extremism. That shouldn’t be a tradeoff.”

The bias issue has dogged Silicon Valley for years, though there’s been no credible evidence that political leanings factor into Google’s search algorithms or what users see on Facebook, Twitter or YouTube.

That’s done little to silence critics on the right, including at the White House, where Trump promised at a “social media summit” last month to explore “all regulatory and legislative solutions to protect free speech and the free-speech rights of all Americans.”

While no details were given, the remark hinted at an approach similar to Hawley’s bill.

Some critics of Big Tech say the industry’s woes are partly of its own making. Because the companies championed their commitment to free speech, the argument goes, users weren’t prepared for the reality that content would, at times, be restricted.

“They were insisting they were neutral, or just technology platforms,” said Eric Goldman, a law professor at Santa Clara University and co-director of its High Tech Law Institute.

That argument was persuasive, until the disappointment set in. “It eventually blew up and caused consumers to lose trust in them,” Goldman said.

Others noted that the industry has well-documented problems that can’t be blamed on Washington. Tech companies have faced criticism over diversity, their treatment of women and how they address sexual harassment and discrimination, both online and off. Protests from tech employees, many of them highly paid engineers, have sometimes boiled over into dramatic actions like the global walkout and street demonstrations by Google employees last November. In that case, the company responded by changing the way it investigates misconduct claims and simplifying the complaint process.

Then there are the scandals surrounding lax data privacy and rampant foreign influence, which have consumed much of Washington’s attention since the 2016 election.

A massive Russian influence campaign used phony Facebook and other social media postings, seeking to sow discord among the millions of Americans who viewed them. Under pressure from lawmakers, tech companies are now working to devise protections against “deepfake” videos – bogus but realistic-seeming clips – and other online manipulations that could be used to influence the 2020 election.

Called before Congress, executives from Facebook, Twitter and Google have detailed their policies: live streaming banned for those who have violated rules, accounts suspended for breaches related to promoting terrorism, deceptive conduct prohibited in search, news, and video.

“Our efforts include deploying multiple teams that identify and take action against malicious actors,” Derek Slater, Google’s director of information policy, told lawmakers at a House hearing. “At the same time, we have to be mindful that our platforms reflect a broad array of sources and information, and there are important free-speech considerations. There is no silver bullet, but we will continue to work to get it right.”

Perhaps no company has faced louder criticism for its content policies than Twitter, Trump’s social media platform of choice.

Faced with complaints that Trump is able to post incendiary messages that would otherwise be removed, Twitter has sought a middle ground. Under a new policy announced in June, tweets that the service deems to involve matters of public interest, but which violate its rules, will be obscured by a warning explaining the violation. Users will have to tap through the warning to see the underlying message.

It’s a fine line that may not satisfy anyone. Calling someone a “lowlife,” a “dog” or a “stone-cold LOSER,” as Trump has done, may not by itself be a violation. But repeated insults against someone might amount to prohibited harassment.

Twitter said Trump’s recent tweets questioning how people could live in a “disgusting” and “rodent-infested” Baltimore didn’t violate its rules on “dehumanizing language” targeted at specific ethnic groups, as opposed to people living in a given place.

“It’s a step in the right direction,” said Keegan Hankes, research analyst for the Southern Poverty Law Center’s intelligence project, who focuses on far-right extremist propaganda. But, he added, Twitter is essentially arguing “that hate speech can be in the public interest. I am arguing that hate speech is never in the public interest.”

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
