Glaze program aims to protect artists from AI
Many artists fear generative artificial intelligence will take over their passions and livelihoods. In response, a team of University of Chicago computer scientists created a program to protect them. Glaze “cloaks” pictures so AI tools “incorrectly learn the unique features that define an artist’s style, thwarting subsequent efforts to generate artificial plagiarisms.”
Artificial intelligence is here to stay, but we must ensure it develops for the benefit of humanity. Specifically, we must ensure artists and other creative individuals retain their jobs as AI adoption expands. Fortunately, this technology also holds the key to making that possible. Soon, Glaze and other research projects could provide AI protection to all artists.
This article will discuss how the University of Chicago’s AI art protection works. Then, I will talk about a similar Google program called SynthID.
How does the Glaze anti-AI program work?
On February 14, 2023, UChicago News reported on the Glaze project. It said Neubauer Professors of Computer Science Ben Zhao and Heather Zheng created the program to defend artists from generative art platforms.
“Artists really need this tool; the emotional impact and financial impact of this technology on them is really quite real,” Zhao said. “We talked to teachers who were seeing students drop out of their class because they thought there was no hope for the industry, and professional artists who are seeing their style ripped off left and right.”
In 2020, their SAND (Security, Algorithms, Networking, and Data) Lab developed a similar software called Fawkes. It cloaked personal photos so that facial recognition models couldn’t recognize them.
Fawkes became popular, earning coverage from the New York Times and other international outlets. Consequently, artists messaged the SAND Lab, hoping a similar program could protect them against generative AI.
You may also like: How to protect your data from Google AI
However, UChicago News said Fawkes isn’t enough to protect artists. Fawkes slightly distorts facial features to deter recognition programs, but far more characteristics define an artist’s style.
For example, you could tell an artist made a painting based on the color choices and brushstrokes. Thus, the researchers created an AI model to beat these platforms:
- The SAND Lab built style transfer algorithms, which are similar to generative AI art models.
- Next, the researchers integrated those algorithms into Glaze.
- To cloak an image, Glaze uses style transfer to recreate the picture in a specific theme, such as cubism or watercolor, without changing its content.
- Afterward, Glaze identifies which characteristics changed between the original photo and the style-transferred version.
- It then subtly distorts those features in the original image, keeping the changes nearly invisible to human eyes.
- As a result, AI models that train on the cloaked image learn the wrong style features, while the artwork looks intact to viewers.
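The steps above can be sketched in miniature. The toy code below is not the real Glaze algorithm (which perturbs images in the feature space of deep style transfer models); it only illustrates the core idea of nudging an image toward a target style while capping every pixel change so a human barely notices. All names (`extract_style`, `cloak`, `eps`) are illustrative assumptions.

```python
# Toy sketch of feature-space cloaking in the spirit of Glaze.
# Assumption: a 1-D "image" and a trivial stand-in for style features.

def extract_style(pixels):
    """Stand-in 'style feature': neighboring-pixel differences (edge profile)."""
    return [b - a for a, b in zip(pixels, pixels[1:])]

def cloak(pixels, target_pixels, eps=4):
    """Nudge each pixel toward the style-transferred target, but cap every
    change at +/- eps so the picture still looks the same to a human."""
    out = []
    for p, t in zip(pixels, target_pixels):
        delta = max(-eps, min(eps, t - p))
        out.append(p + delta)
    return out

original = [10, 12, 50, 52, 90, 91]   # a tiny 'image' as one pixel row
styled   = [20, 30, 40, 55, 70, 95]   # same content, different 'style'

cloaked = cloak(original, styled)

# Human-visible change is bounded...
assert all(abs(c - o) <= 4 for c, o in zip(cloaked, original))

# ...but the 'style features' a model would learn have moved toward the target.
dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
target = extract_style(styled)
assert dist(extract_style(cloaked), target) < dist(extract_style(original), target)
```

A real implementation would compute the perturbation with gradients through a neural feature extractor, but the constraint structure, a bounded change that shifts learned features, is the same.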
How does Google SynthID work?
Several companies have been developing ways to protect intellectual property. For example, Google announced SynthID, an invisible watermark for digital images.
Artists could apply it to their existing works to ensure their signatures remain despite AI adjustments. Also, the company says it is “detectable even if edited by common techniques like cropping and applying filters.”
You may also like: Artists defend generative AI in open letter
Pushmeet Kohli, head of DeepMind research, told the BBC the new system modifies pictures so covertly “that to you and me, to a human, it does not change.” He added, “You can change the color, you can change the contrast, you can even resize it [and DeepMind] will still be able to see that it is AI-generated.”
“With SynthID, users can add a watermark to their image, which is imperceptible to the human eye,” a Google DeepMind demo video said. Moreover, Google Cloud claims it “is the first cloud provider to offer a tool for creating AI-generated images responsibly and identifying them with confidence.”
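To make the idea of an “imperceptible” watermark concrete, here is a deliberately simple least-significant-bit sketch. This is not SynthID’s method: SynthID uses a learned deep watermark that survives cropping and filtering, which this toy version does not. The `embed` and `detect` functions are illustrative assumptions only.

```python
# Toy least-significant-bit watermark: shows how a mark can be embedded
# invisibly and later detected. Unlike SynthID's learned watermark, this
# simple scheme would NOT survive cropping, filtering, or re-encoding.

def embed(pixels, bits):
    """Write one watermark bit into the lowest bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def detect(pixels, bits):
    """Check whether the expected watermark bits are present."""
    return all((p & 1) == b for p, b in zip(pixels, bits))

image = [200, 131, 64, 77, 250, 18]   # a tiny 'image' as pixel values
mark  = [1, 0, 1, 1, 0, 1]            # the watermark pattern

stamped = embed(image, mark)

assert detect(stamped, mark)                                  # mark is present
assert all(abs(s - p) <= 1 for s, p in zip(stamped, image))   # imperceptible
```

The design trade-off SynthID addresses is exactly the weakness of this sketch: hiding the signal in raw pixel bits is fragile, so robust systems spread it through learned image features instead.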
Hence, Google plans to expand SynthID to other AI models. Wider adoption will make the service more effective and, more importantly, help the company spot bugs and improve the system more quickly.
University of Chicago professors Ben Zhao and Heather Zheng created an AI tool that could protect artists from generative AI. Glaze feeds AI models incorrect style information so they cannot mimic manmade images.
Google also created a similar tool called SynthID, which ensures content keeps its artist’s signature regardless of editing techniques. However, both projects are still undergoing further development.
You could learn more about Glaze anti-AI software by reading this arXiv research paper. Moreover, check out more digital tips and trends at Inquirer Tech.