Twitter moves to actively seek out terrorist supporters

02:02 PM February 06, 2016

In this May 21, 2013 file photo, a view of an iPhone in Washington showing the Twitter app among others. Twitter is now using spam-fighting technology to seek out accounts that might be promoting terrorist activity and is proactively looking at other accounts related to those flagged for possible removal, the company announced Friday. AP FILE PHOTO

WASHINGTON — Twitter is now using spam-fighting technology to seek out accounts that might be promoting terrorist activity and is examining other accounts related to those flagged for possible removal, the company announced Friday.

The move signaled efforts by Twitter to automatically identify tweets supporting terrorism, reflecting increased pressure from the US government on social media companies to respond to abuse more proactively. Child pornography has so far been the only type of abuse automatically flagged for human review on social media, using a different kind of technology that matches uploads against a database of known images.
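
The article does not describe that image-matching system in detail; industry tools such as Microsoft's PhotoDNA compare uploads against a database of hashes of already-identified images. The sketch below is purely illustrative of that lookup pattern, using an exact cryptographic hash where real systems use robust perceptual hashes, and a made-up one-entry database.

```python
# Illustrative only: real systems (e.g. PhotoDNA) use perceptual hashes that
# survive resizing and re-encoding; the database here is a hypothetical stand-in.
import hashlib

# Hypothetical database of hashes of known prohibited images.
KNOWN_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def image_hash(image_bytes: bytes) -> str:
    """Return a hex digest used as the lookup key into the known-image database."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_for_human_review(image_bytes: bytes) -> bool:
    """Flag an upload for human review if its hash matches a known image."""
    return image_hash(image_bytes) in KNOWN_IMAGE_HASHES

if __name__ == "__main__":
    print(flag_for_human_review(b"test"))   # True: matches the sample entry
    print(flag_for_human_review(b"other"))  # False: unknown content is not flagged
```

The point of the database approach is that a match against known material is near-certain evidence, which is why such hits can be routed straight to human review; as the White House memo quoted below acknowledges, terrorist content lacks such a reference set and is far harder to identify automatically.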

Twitter also said Friday it has suspended more than 125,000 accounts for threatening or promoting terrorist acts, mainly related to Islamic State militants, in the last eight months. Social media has increasingly become a tool for recruitment and radicalization that’s used by the Islamic State group and its supporters, who by some reports have sent tens of thousands of tweets per day.

Tech companies are dedicating more and more resources to tracking reports of violent threats. Twitter said Friday that it has increased the size of its team reviewing reports to reduce its response time “significantly.” The San Francisco-based company also changed its policy in April, adding language to make clear that “threatening or promoting terrorism” specifically counts as abusive behavior and violates its terms of use.

READ: Could Twitter stop the next terrorist attack?

In January, the White House made good on President Barack Obama’s promise to reach out to Silicon Valley to tackle the use of social media by violent extremist groups. Those particularly include the Islamic State group, which inspired attackers who killed 14 in San Bernardino, California, last December.

A post on one of the killers’ Facebook pages that appeared around the time of the attack included a pledge of allegiance to the leader of the Islamic State group.

Facebook found the post — which was under an alias — the day after the attack. The company removed the profile from public view and informed law enforcement. But such a proactive effort is fairly uncommon.

As part of that outreach, the Obama administration sent several top officials to a meeting with technology executives in San Jose, California, including FBI Director James Comey, Attorney General Loretta Lynch and National Security Agency Director Mike Rogers.

Among issues discussed was how to use technology to help speed the identification of “terrorist content,” according to a copy of the White House briefing memo obtained by The Associated Press.

“We recognize that identifying terrorist content that violates terms of service is far more difficult than identifying images of child pornography, but is there a way to use technology to quickly identify terrorist content? For example, are there technologies used for the prevention of spam that could be useful?” the memo stated.

READ: US recruits tech leaders to help disrupt Islamic State group

In late 2015, Twitter began using “proprietary spam-fighting tools” to find accounts that might be violating its terms of service by promoting terrorism, sending them to be reviewed by a team at Twitter. That team now also looks into accounts similar to those reported to it by other users.
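
Twitter has not disclosed how those proprietary tools work. As a purely hypothetical sketch of the general pattern the paragraph describes, automated scoring that only queues accounts for a human team rather than suspending them outright, consider the following; every signal name and threshold here is invented for illustration.

```python
# Hypothetical sketch; Twitter's actual signals, models and thresholds are not public.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    recent_tweets: list[str] = field(default_factory=list)
    links_to_flagged_accounts: int = 0  # invented network signal

# Invented content signals; a real system would use trained models, not keyword lists.
SUSPICIOUS_TERMS = {"pledge allegiance", "join the caliphate"}

def review_score(account: Account) -> float:
    """Combine simple content and network signals into a review-priority score."""
    text = " ".join(account.recent_tweets).lower()
    term_hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return term_hits + 0.5 * account.links_to_flagged_accounts

def queue_for_review(accounts: list[Account], threshold: float = 1.5) -> list[Account]:
    """Return accounts whose score crosses the threshold; humans make the final call."""
    return [a for a in accounts if review_score(a) >= threshold]
```

The design point, consistent with Twitter's statement, is that automation only prioritizes accounts for review; the suspension decision stays with human reviewers.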

Twitter said it has already seen results, “including an increase in account suspensions and this type of activity shifting off of Twitter.”

But it also noted that there is no “magic algorithm” for identifying terrorist content, which is why even the humans reviewing the material are ultimately making judgment calls “based on very limited information and guidance.” Free speech considerations and local laws can also complicate matters.

“Like most people around the world, we are horrified by the atrocities perpetrated by extremist groups. We condemn the use of Twitter to promote terrorism,” Twitter said in a statement released Friday. It said it would continue to “engage with authorities and other relevant organizations to find solutions to this critical issue and promote powerful counter-speech narratives.”

READ: Twitter out to crack down on abusive tweets

TAGS: Social Media, Social network, Spam, technology, Terrorism, Twitter
