As part of a limited experiment, Twitter will be asking users if they are sure they want to post that mean tweet before they hit publish.
The company announced this on Tuesday, officially describing it as “a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
The experiment closely resembles steps Instagram began taking last year to reduce bullying on its platform. When a user writes a comment or post that could be offensive to others, Instagram asks whether they are sure they want to publish it and gives them a chance to edit it. The platform flags content as potentially harmful if it resembles messages that have previously been reported.
For now, the Twitter experiment is limited to iOS devices; however, depending on how successful it is at reducing the number of harmful tweets published on the platform, it could soon be expanded to other operating systems. RGA