How are Twitter users responding to the platform’s ‘read the news story’ prompts?


Twitter revealed that a third of its users (33%) retweeted more press articles after having read them. Image: AFP/Olivier Douliery.

Twitter recently noted that many internet users rarely read the news stories they retweet. That is why the tech giant began issuing banner warnings this summer encouraging users to read stories before sharing them. The move seems to be paying off: users shown the prompt opened news links 40% more often.

Twitter recently shared the encouraging results of an experiment it launched last May among Android users. “Headlines don’t tell the whole story. You can read the article on Twitter before Retweeting,” read the Twitter banner.

After seeing these warnings, users clicked 40% more often on the news links they were about to share. While a click does not necessarily mean the user read the article in its entirety, Twitter revealed that a third of those involved in the experiment (33%) went on to retweet articles after reading them.

The study also showed that some users changed their minds about sharing certain stories after reading them.

“It’s easy for articles to go viral on Twitter. At times, this can be great for sharing information, but can also be detrimental for discourse, especially if people haven’t read what they’re Tweeting,” Suzanne Xie, Twitter’s director of product management, told TechCrunch.

Twitter has announced that the feature will be rolled out to all users in the coming weeks.

Reducing harmful and misleading content

Researchers have shown in the past that Twitter users rarely read the links they share. A recent Columbia University study conducted with Microsoft found that 59% of the links included in tweets had never been clicked by the users who shared them.

Twitter’s new feature is rolling out at a time when the tech giants are under fire for the harmful content, fake medical information and conspiracy theories spreading on their platforms. For instance, half a million clicks on Facebook last April were linked to fake medical information.

“This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts,” an Avaaz report said.
