
Glaze program aims to protect artists from AI

08:09 AM September 11, 2023

Many artists fear generative artificial intelligence will take over their passions and livelihoods. In response, a team of University of Chicago computer scientists created a program to protect them. Glaze “cloaks” pictures so AI tools “incorrectly learn the unique features that define an artist’s style, thwarting subsequent efforts to generate artificial plagiarisms.”

Artificial intelligence is here to stay, but we must ensure it stays for the benefit of humanity. Specifically, we must ensure artists and other creative individuals retain their jobs as AI adoption expands. Fortunately, this technology also holds the key to making that possible. Soon, Glaze and other research projects could provide AI protection to all artists.

This article will discuss how the University of Chicago’s AI art protection works. Then, I will talk about a similar Google program called SynthID.


How does the Glaze anti-AI program work?


On February 14, 2023, UChicago News reported on the Glaze project. It said Neubauer Professors of Computer Science Ben Zhao and Heather Zheng created the program to defend artists from generative art platforms.

“Artists really need this tool; the emotional impact and financial impact of this technology on them is really quite real,” Zhao said. “We talked to teachers who were seeing students drop out of their class because they thought there was no hope for the industry, and professional artists who are seeing their style ripped off left and right.”


In 2020, their SAND (Security, Algorithms, Networking, and Data) Lab developed similar software called Fawkes. It cloaked personal photos so that facial recognition models couldn’t recognize them.


Fawkes became popular, earning coverage from the New York Times and other international outlets. Consequently, artists messaged the SAND Lab, hoping a similar program could protect them against generative AI.


You may also like: How to protect your data from Google AI

However, UChicago News said Fawkes isn’t enough to protect artists. It only slightly distorts facial features to deter recognition programs, while far more characteristics define an artist’s style.


For example, you could tell an artist made a painting based on the color choices and brushstrokes alone. Thus, the researchers created a tool to beat these platforms:

  1. The SAND Lab built on Style Transfer algorithms, which are similar to generative AI art models.
  2. Next, the researchers integrated those into Glaze.
  3. An artist cloaks an image with the software.
  4. Glaze uses Style Transfer to recreate that picture in a specific theme, such as cubism or watercolor, without changing the content.
  5. Afterward, Glaze identifies which characteristics changed between the original and the restyled version.
  6. It subtly shifts those features in the original image, producing the cloaked copy the artist publishes.
  7. As a result, AI models trained on the cloaked image learn the wrong style, while the picture looks virtually unchanged to human eyes.
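The steps above can be sketched in code. The real Glaze optimizes an imperceptible perturbation against a neural style encoder; the toy version below substitutes simple per-channel color statistics for that encoder, so the feature function, parameters, and images are all illustrative assumptions rather than Glaze’s actual method.

```python
import numpy as np

def style_features(img):
    # Toy stand-in for a neural style encoder: per-channel mean and std.
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def glaze_cloak(img, target_style_img, budget=0.05, steps=50, lr=0.5):
    """Nudge img's style features toward those of target_style_img
    while capping every per-pixel change at a small perceptual budget."""
    target = style_features(target_style_img)
    cloaked = img.copy()
    for _ in range(steps):
        diff = style_features(cloaked) - target
        # Crude update: shift all pixels slightly toward the target
        # channel means (Glaze instead optimizes against a learned
        # style encoder, which captures brushstrokes and texture).
        cloaked -= lr * diff[:3] / steps
        cloaked = np.clip(cloaked, img - budget, img + budget)
        cloaked = np.clip(cloaked, 0.0, 1.0)
    return cloaked

rng = np.random.default_rng(0)
art = rng.random((64, 64, 3))              # hypothetical original artwork
cubist_ref = rng.random((64, 64, 3)) * 0.5  # hypothetical target-style image
cloaked = glaze_cloak(art, cubist_ref)
# Max per-pixel change stays within the perceptual budget.
print(float(np.abs(cloaked - art).max()))
```

The key idea is the constraint: the cloak may move the style features as far as it can, but never any single pixel by more than the budget, which is why the cloaked image still looks like the original to a person.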

How does Google SynthID work?


Several companies have been developing ways to protect intellectual property. For example, Google announced SynthID, an invisible watermark for digital images.

Artists could apply it to their existing works to ensure their signatures remain despite AI adjustments. Also, the company says it is “detectable even if edited by common techniques like cropping and applying filters.”

You may also like: Artists defend generative AI in open letter

Pushmeet Kohli, head of DeepMind research, told the BBC the new system modifies pictures so covertly “that to you and me, to a human, it does not change.” He added, “You can change the color, you can change the contrast, you can even resize it [and DeepMind] will still be able to see that it is AI-generated.”

“With SynthID, users can add a watermark to their image, which is imperceptible to the human eye,” a Google DeepMind demo video said. Moreover, Google Cloud claims it “is the first cloud provider to offer a tool for creating AI-generated images responsibly and identifying them with confidence.”
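Google has not published how SynthID embeds its watermark, so the sketch below is a conceptual illustration only: the classic least-significant-bit technique hides a few bits in pixel values without visibly changing the image. Unlike SynthID, this naive approach would not survive cropping, filters, or recompression.

```python
import numpy as np

def embed_watermark(img, bits):
    # Hide each bit in the least significant bit of one pixel value.
    wm = img.copy()
    flat = wm.reshape(-1)  # view into wm, so writes modify the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return wm

def read_watermark(img, n_bits):
    # Recover the hidden bits from the same pixel positions.
    return [int(v & 1) for v in img.reshape(-1)[:n_bits]]

rng = np.random.default_rng(1)
photo = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_watermark(photo, mark)
print(read_watermark(stamped, len(mark)))  # [1, 0, 1, 1, 0, 1, 0, 0]
# Each pixel changes by at most 1 intensity level, invisible to the eye.
print(int(np.abs(stamped.astype(int) - photo.astype(int)).max()))
```

A production watermark like SynthID must be far more robust, spreading the signal across the whole image so that it survives the edits Kohli describes.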

Google plans to expand SynthID to other AI models. Wider adoption would make the service more effective and, more importantly, could help the company spot bugs and improve the system more quickly.

Conclusion

University of Chicago professors Ben Zhao and Heather Zheng created a tool that could protect artists from generative AI. Glaze feeds AI models incorrect information about an artist’s style to prevent them from plagiarizing manmade images.

Google also created a similar tool called SynthID, which ensures content keeps its artist’s signature regardless of editing techniques. However, both projects are still under development.


You can learn more about the Glaze anti-AI software by reading its arXiv research paper. Moreover, check out more digital tips and trends at Inquirer Tech.

TOPICS: AI, interesting topics, Science, Trending


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
