
Nightshade breaks AI programs to protect artists

01:52 PM October 25, 2023

University of Chicago researchers developed an open-source tool that corrupts AI image generators. Computer science professor Ben Zhao and his colleagues said artists could use it to prevent these programs from training on their artworks. As a result, it could encourage the owners of these programs to respect artists' intellectual property rights.

Many artists have been suing AI companies, claiming these firms scraped their personal information and copyrighted material without permission. Such disputes are likely to become more frequent as more people use AI image generators. In response, MIT and other groups are creating ways to protect artists.

This article will discuss how the Nightshade tool “poisons” AI databases. Later, I will cover other artificial intelligence projects similar to this one.


How does the Nightshade tool work?

Nightshade is a well-known poisonous plant. Similarly, its namesake tool poisons AI image generators, causing them to produce inaccurate results.


AI image generators rely on numerous pictures in a database to turn your text prompts into your desired image. The University of Chicago researchers exploit this dependence by feeding them corrupted images.

MIT Technology Review says it can cause AI programs to mistake hats for cakes and handbags for toasters. As a result, their tool can break AI image generators.
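To make the idea concrete, here is a minimal sketch of concept-level data poisoning, not the researchers' actual code. The encoder `encode`, the function name `poison`, and the perturbation budget are all hypothetical stand-ins; the sketch assumes a CLIP-style image encoder of the kind generators use to link pictures with concepts. The loop nudges a photo's embedding toward a different concept (say, from "dog" to "cat") while capping the pixel change so the edit stays invisible:

```python
import torch
import torch.nn.functional as F

def poison(image: torch.Tensor, target_feat: torch.Tensor,
           encode, steps: int = 200, budget: float = 0.03) -> torch.Tensor:
    """Perturb `image` so its embedding matches `target_feat`.

    `encode` is a stand-in for a CLIP-style image encoder; `budget`
    caps the per-pixel change so the edit stays imperceptible.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)
    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        # Pull the poisoned image's embedding toward the target concept.
        loss = 1 - F.cosine_similarity(encode(poisoned), target_feat, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change invisible
    return (image + delta).clamp(0, 1).detach()
```

A model that later trains on such images pairs pictures of one thing with the features of another, which is how hats can come out as cakes.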


MIT Technology Review said the poisoned data is difficult to remove because tech companies must find and delete each corrupted sample individually. The experts tested the tool on Stable Diffusion, one of the most popular AI image generators.


They fed Stable Diffusion 50 distorted images of dogs, causing it to produce dogs with cartoonish faces and too many limbs. When they submitted 300 altered pictures, Stable Diffusion began making dog images that looked like cats.


Researcher Ben Zhao admits attackers could abuse Nightshade's data poisoning method for malicious purposes. However, they would have to submit thousands of poisoned images to break larger, stronger AI models because those models train on billions of data samples.
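A quick back-of-envelope calculation shows why scale protects these models. The figures below are illustrative, since the article gives no exact dataset sizes:

```python
# Illustrative numbers only: "thousands" of poisoned images versus a
# training set in the billions, as described above.
poisoned_samples = 5_000
dataset_size = 5_000_000_000
print(f"Poison fraction: {poisoned_samples / dataset_size:.7%}")
# -> Poison fraction: 0.0001000%
```

At that ratio, only one in every million training samples is poisoned, so an attack would need far more images, or much cleverer targeting, to move a large model.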



Nonetheless, University of Waterloo assistant professor Gautam Kamath says Nightshade and similar tools may push the makers of prominent AI models to take defending artists more seriously.

Kamath said vulnerabilities “don’t magically go away for these new models, and in fact, only become more serious.” “This is especially true as these models become more powerful and people place more trust in them since the stakes only rise over time,” he added.

Junfeng Yang, a Columbia University computer science professor, shares the same sentiment. "It is going to make [AI companies] think twice because they have the possibility of destroying their entire model by taking our work without our consent," he said.

What are other similar AI projects?


Ben Zhao and his team are the same folks who created a popular AI protection tool called Glaze. They built it after the success of their anti-facial recognition tool, Fawkes.

Their SAND (Security, Algorithms, Networking, and Data) Lab released Fawkes in 2020 to prevent facial recognition models from identifying people in personal photos.

Fawkes slightly distorts facial features to deter recognition programs. However, many more characteristics define an artist's style, so artists needed a tool more powerful than Fawkes.

For example, you could tell an artist made a painting based on the color choices and brushstrokes. That is why the researchers built an AI model to beat these platforms. Here is how Glaze works (a rough code sketch follows the list):


  1. The SAND Lab built Style Transfer algorithms, which are similar to generative AI art models.
  2. Next, the researchers integrated those algorithms into Glaze.
  3. An artist cloaks an image with the software.
  4. Glaze uses the Style Transfer algorithms to recreate that picture in a different style, such as cubism or watercolor, without changing the content.
  5. Afterward, Glaze identifies the characteristics that changed from the original photo.
  6. It subtly distorts those features in the original image before the artist shares it.
  7. As a result, an AI art generator trained on the cloaked image learns the wrong style, while the picture looks unchanged to the human eye.
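Below is a minimal sketch of that cloaking loop, not the SAND Lab's actual implementation. The callables `style_transfer` and `extract_style` are hypothetical stand-ins for steps 4 and 5, and the perturbation budget is an assumed knob:

```python
import torch
import torch.nn.functional as F

def cloak(image: torch.Tensor, style_transfer, extract_style,
          steps: int = 100, budget: float = 0.02) -> torch.Tensor:
    # Step 4: recreate the picture in a decoy style (e.g., watercolor).
    with torch.no_grad():
        decoy_feat = extract_style(style_transfer(image))
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=5e-3)
    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        # Steps 5-6: push the cloaked image's style features toward the
        # decoy style while the pixels barely change.
        loss = F.mse_loss(extract_style(cloaked), decoy_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the edit imperceptible
    # Step 7: a person sees the original; a model trained on the result
    # learns the decoy style instead of the artist's.
    return (image + delta).clamp(0, 1).detach()
```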

Google created another image protection tool called SynthID. "With SynthID, users can add a watermark to their image, which is imperceptible to the human eye," said Google DeepMind.

Consequently, it helps distinguish AI-generated content from human-made content. Like Nightshade, it gives artists a way to push back against the misuse of their work by AI image generators.
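Google has not published SynthID's internals, so the snippet below is only a toy illustration of the general idea of a watermark the eye cannot see. It hides one bit per pixel in the least significant bit of an 8-bit image; a production watermark like SynthID's is far more robust to cropping, compression, and editing:

```python
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark `bits` (0 or 1, same shape as `image`) into the LSBs."""
    # Changing the least significant bit shifts a pixel by at most 1 of 255,
    # which is invisible to the human eye.
    return (image & 0xFE) | bits

def detect(image: np.ndarray) -> np.ndarray:
    """Read the watermark bits back out."""
    return image & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
mark = rng.integers(0, 2, (64, 64), dtype=np.uint8)    # watermark pattern
assert np.array_equal(detect(embed(img, mark)), mark)
```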

Conclusion

University of Chicago researchers created the Nightshade tool to prevent AI image generators from violating artists' intellectual property rights. It feeds corrupted data into their training databases, causing them to produce errors.

It may not be enough to break the more popular artificial intelligence programs. On the other hand, it may convince their makers to protect artists further as this technology continues to improve.


Nevertheless, everyone should learn how to use artificial intelligence to reap its benefits. Follow the latest digital tips and trends at Inquirer Tech.

TAGS: AI, interesting topics, Trending


