SAN FRANCISCO – Facebook is rolling out technology to make it easier to find and remove intimate pictures and videos posted without the subject’s consent, often called “revenge porn.”
Currently, Facebook users or victims of revenge porn have to report the inappropriate pictures before content moderators will review them.
The company has also suggested that users send their own intimate images to Facebook so that the service can identify any unauthorized uploads.
Many users, however, balked at the notion of sharing revealing photos or videos with the social-media giant, particularly given its history of privacy failures.
The company’s new machine learning tool is designed to find and flag the pictures automatically, then send them to humans to review.
Facebook and other social-media sites have struggled to monitor and contain the objectionable content that users upload, from violent threats to conspiracy theories to exploitative photos.
Facebook has faced harsh criticism for allowing offensive posts to stay up too long, for not removing posts that don’t meet its standards and sometimes for removing images with artistic or historical value.
Facebook has said it’s been working on expanding its moderation efforts, and the company hopes its new technology will help catch some inappropriate posts.
The technology, which will be used across Facebook and Instagram, was trained using pictures that Facebook has previously confirmed were revenge porn.
It is trained to recognize a “nearly nude” photo — a lingerie shot, perhaps — coupled with derogatory or shaming text that would suggest someone uploaded the photo to embarrass or seek revenge on someone else.
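The article does not describe the model's internals, but the two-signal idea it outlines (a "nearly nude" image score coupled with shaming text) can be loosely illustrated with a toy sketch. The score threshold, keyword list, and function name below are hypothetical stand-ins, not Facebook's actual system:

```python
# Toy illustration of flagging a post only when BOTH signals fire:
# a high "nearly nude" image score AND derogatory caption text.
# The keyword list and threshold are invented for illustration.

SHAMING_TERMS = {"slut", "cheater", "exposed", "revenge"}  # illustrative only

def flag_for_review(nudity_score: float, caption: str,
                    threshold: float = 0.8) -> bool:
    """Return True to queue the post for human moderators."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    has_shaming_text = bool(words & SHAMING_TERMS)
    return nudity_score >= threshold and has_shaming_text
```

Requiring both signals mirrors the article's point: a lingerie photo alone, or an insult alone, is not enough; it is the combination that suggests an upload meant to embarrass someone.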
At least 42 states have passed laws against revenge porn. Many of those laws were enacted in the past several years, as the posting of non-consensual images and videos has proliferated.
New York’s law, which passed in February, allows victims to file lawsuits against perpetrators and makes the crime a misdemeanor.
Facebook has been working to combat the spread of revenge porn on its site for years, but until now has largely relied on users proactively reporting the content.
But that means by the time it’s reported, someone else has already seen it, chief operating officer Sheryl Sandberg said in an interview with The Associated Press.
And it’s often tough and embarrassing for a victim to report a photo of themselves.
“This is about using technology to get ahead of the problem,” Sandberg said.
Facebook still sees user-contributed photos as one way to address the problem, and says it plans to expand that program to more countries.
It allows people to use encrypted links to send in photos they fear might be circulated. Facebook then creates a digital code of the image so it can tell if a copy is ever uploaded, and deletes the original photo from its servers.
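The "digital code" the article describes is an image fingerprint. As a simplified sketch, an exact cryptographic hash shows the idea: only the fingerprint is retained, and any byte-identical re-upload matches it. (Production matching systems use perceptual hashes that also tolerate resizing and re-encoding, which this toy does not; all names here are illustrative.)

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Derive a fixed-length code from the image; the image itself
    # need not be stored afterward.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical: a user submitted this photo, Facebook kept only the hash.
blocked_fingerprints = {fingerprint(b"submitted-photo-bytes")}

def is_blocked(upload: bytes) -> bool:
    # Any later upload is hashed and checked against the stored codes.
    return fingerprint(upload) in blocked_fingerprints
```

The design point is privacy: checking future uploads requires keeping only the short code, which is why the service can delete the original photo from its servers.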