Google SynthID helps distinguish AI content from human-made works

Google released a new tool to help content creators protect their works against AI content generators. SynthID adds an invisible digital watermark to AI-generated images to distinguish them from human-made content. It is available to a limited number of Vertex AI customers using Imagen, Google’s latest text-to-image model for producing photorealistic images.

Google admits its SynthID solution is not foolproof. Still, it is worth understanding how tech companies are developing ways to differentiate AI-made content from human-made work, so that future generations can rely on authentic information and avoid falling victim to misinformation. This issue becomes more important as the world adopts AI further.

This article will explain how Google SynthID functions and examine its flaws. Later, I will discuss why the world needs such methods more than ever.

How does Google SynthID work?

Google DeepMind posted a short YouTube video explaining how this service works. “With SynthID, users can add a watermark to their image, which is imperceptible to the human eye,” it says.

More importantly, the company says the watermark is “detectable even if edited by common techniques like cropping or applying filters.” As a result, SynthID could be more effective than conventional watermarking techniques.

Pushmeet Kohli, head of DeepMind research, told the BBC the new system modifies pictures so covertly “that to you and me, to a human, it does not change.” He added, “You can change the color, you can change the contrast, you can even resize it [and DeepMind] will still be able to see that it is AI-generated.”
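Google has not published SynthID’s internals, but the general idea of hiding a detectable signal directly in pixel values can be illustrated with a toy example. The sketch below uses classic spread-spectrum watermarking in Python; it is not Google’s actual algorithm (SynthID is based on deep learning), and the seed, amplitude, and threshold values are illustrative assumptions.

```python
# A minimal, illustrative sketch of pixel-level watermarking -- NOT Google's
# actual SynthID algorithm, which is based on deep learning. This toy
# "spread-spectrum" approach hides a low-amplitude pseudorandom pattern in
# the pixels and later detects it by correlation.
import numpy as np

SEED = 42          # secret key known only to the detector (assumption)
AMPLITUDE = 2.0    # small enough to be invisible to the human eye

def watermark_pattern(shape, seed=SEED):
    """Zero-mean pseudorandom +/-1 pattern derived from a secret seed."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image):
    """Add the pattern at low amplitude; visually imperceptible."""
    return np.clip(image + AMPLITUDE * watermark_pattern(image.shape), 0, 255)

def detect(image, threshold=1.0):
    """Correlate pixels with the secret pattern; a clean image scores near 0."""
    centered = image - image.mean()          # brightness shifts cancel out
    score = (centered * watermark_pattern(image.shape)).mean()
    return score > threshold                 # score is ~AMPLITUDE if marked

# Demo: the mark survives a brightness/contrast tweak, unlike a visible logo.
original = np.random.default_rng(0).uniform(0, 255, (256, 256))
marked = embed(original)
edited = np.clip(marked * 1.2 + 10, 0, 255)  # contrast and brightness edit
print(detect(original), detect(marked), detect(edited))  # -> False True True
```

Unlike SynthID, this toy mark would not survive cropping, because detection depends on pixel alignment; the crop- and resize-robustness Kohli describes requires far more sophisticated, learned embeddings.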

Google knows metadata is important in spotting AI content. That is why scammers strip it manually or edit content thoroughly to destroy it.

The most common watermarking method is placing a translucent signature or layer at the bottom of electronic images, similar to the handwritten signatures on paintings. However, some people simply crop out the bottom of the image to claim the work as theirs.

Consequently, Google designed SynthID to function without metadata: the watermark is embedded directly in an image’s pixels, so it survives even when the metadata is stripped. It can also complement third-party image identification methods that rely on metadata.
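To see why pixel embedding matters, here is a small sketch, assuming Pillow is installed and using hypothetical file names, of how easily metadata disappears while every pixel survives a re-save:

```python
# A small sketch (assuming Pillow is installed) of why metadata alone is a
# weak provenance signal: re-saving an image silently drops its EXIF tags,
# while the pixel values -- where SynthID hides its mark -- carry over.
from PIL import Image

img = Image.open("ai_generated.jpg")        # hypothetical input file
print(dict(img.getexif()))                  # provenance tags, if any exist

# Re-encoding without passing exif= discards the metadata entirely...
img.save("stripped.png")
print(dict(Image.open("stripped.png").getexif()))  # {} -- the tags are gone

# ...but every pixel survives the lossless round trip, so a pixel-level
# watermark like SynthID's would still be detectable in the stripped copy.
assert list(img.getdata()) == list(Image.open("stripped.png").getdata())
```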


Famous artists have many fans who can recognize their works, so fraudsters apply color or effect filters to the art, hoping that is enough to obscure its origin.

Google Cloud claims it “is the first cloud provider to offer a tool for creating AI-generated images responsibly and identifying them with confidence.” That is why it plans to expand SynthID to other AI models.

Wider adoption of this service will make it more effective for everyone. More users mean the company can spot bugs and improve the system sooner.

Why do we need to spot AI content?


Artificial intelligence can create nearly any text, image, video, or song we could imagine, so how can we distinguish our works from its results? That is a pressing question, especially for educators and artists.

Imagine you’re a teacher. You want your students to be prepared for adulthood in the real world. You must teach them basic subjects like mathematics and grammar, as well as more abstract skills like critical thinking.

Nowadays, your pupils can have an AI program create their homework. Consequently, they may become overly reliant on this technology, depriving them of the ability to think for themselves.

Some educators responded to this dilemma by banning AI content. Unfortunately, there is no 100% reliable AI content detector at the time of writing.


Believe it or not, OpenAI, the creator of ChatGPT, pulled its AI Classifier tool because of poor accuracy. More importantly, keeping students away from AI does not prepare them for a future dominated by its products and services.

Schools worldwide are looking for solutions, but there is no guaranteed way to preserve education’s integrity as AI adoption expands. On the other hand, Google seems to have found a way to help artists secure their work.

SynthID may not be perfect, but it could become a dependable way for artists to keep earning a living from their craft. Of course, Google and other companies will need further research and development to provide more reliable and practical solutions.

Conclusion

Google released the experimental SynthID to flag AI content. It will tag images created by its text-to-image model Imagen to help people use AI-generated content responsibly.

Soon, we may standardize a reliable system so that the world can use AI-generated material safely. Claire Leibowicz from the campaign group Partnership on AI told the BBC, “I think standardization would be helpful for the field.”

“There are different methods being pursued. We need to monitor their impact,” she added. Learn more about the latest digital tips and trends at Inquirer Tech.
