OpenAI and Meta to label AI-generated images

ChatGPT creator OpenAI and Facebook owner Meta announced they will label AI-generated images.

They said the labels will help individuals and companies identify imagery created with artificial intelligence. The two companies use different methods, but both admit that more collaboration and innovation are needed to reliably tag such pictures.

AI images are becoming realistic enough to fool even trained eyes. AI-generated phone calls and videos now exist as well, and they could exacerbate online disinformation. These tricks could prove especially destructive in upcoming elections and distort public understanding of current events.

How will OpenAI and Meta label AI-generated images?

On February 6, 2024, Meta announced it would publicly identify images made with its artificial intelligence tools. Post an image from Meta AI on Facebook, and it will carry the “Imagined with AI” tag.

It will also have a sparkle icon for quick identification. Moreover, Meta said it will work with other companies to place these tags. Companies that operate AI image generators must adopt these standards so the labels can be applied to future images.

Meta is also developing other AI labeling methods, such as visible markers and invisible watermarks. It is likewise working with OpenAI and other firms to implement the C2PA standard for AI-generated content.

VentureBeat reports that OpenAI made a similar announcement hours after Meta’s statement. OpenAI’s approach centers on the C2PA standard, named for the Coalition for Content Provenance and Authenticity.

The coalition is a non-profit group backed by Adobe, Intel, Microsoft, and other large corporations. C2PA itself is an open technical standard that lets publishers, companies, and others embed metadata into media to verify its origin.
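
To see the basic idea of provenance data traveling inside the file itself, consider this minimal Python sketch using Pillow's PNG text chunks. The field names are hypothetical, and real C2PA manifests are cryptographically signed structures produced by dedicated tooling, so treat this only as a conceptual stand-in:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Toy example: embed provenance-style metadata in PNG text chunks.
    # Real C2PA manifests are signed binary structures; this only shows
    # the general idea of carrying origin information inside the file.
    img = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image

    meta = PngInfo()
    meta.add_text("provenance:generator", "example-image-model")  # hypothetical field
    meta.add_text("provenance:created", "2024-02-06T00:00:00Z")   # hypothetical field
    img.save("labeled.png", pnginfo=meta)

    # Any tool that inspects the file can now read the claim back.
    print(Image.open("labeled.png").text)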

You may also like: How to beat AI detectors

On Windows, you can view an image's metadata by right-clicking it and selecting Properties. Embedding C2PA metadata lets people use sites like Content Credentials Verify to check whether an image came from DALL-E 3.

DALL-E 3 is OpenAI’s image generation model. Unfortunately, both firms admit their methods aren’t foolproof.

READ: DALL-E 3 is OpenAI’s secret ‘insane’ AI image generator

Meta’s technique only works on pictures made with its proprietary tools. Meanwhile, people can strip OpenAI’s metadata by uploading images to social media, which often re-encodes them, or simply by taking screenshots.
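
A quick sketch shows why re-encoding is enough to break metadata-based labels. Re-saving the labeled.png from the earlier example without explicitly copying its text chunks silently drops them, mirroring what many upload pipelines do:

    from PIL import Image

    # Re-encoding often drops metadata that isn't explicitly copied over,
    # and a screenshot never had any provenance data to begin with.
    original = Image.open("labeled.png")
    print(original.text)              # provenance fields are present

    original.save("reencoded.png")    # plain re-save: pnginfo not passed
    print(Image.open("reencoded.png").text)  # {} -- the claim is gone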

How do other AI labeling methods work?

Google also has a proprietary AI labeling technique called SynthID. It embeds an invisible watermark directly into an image’s pixels, which detection programs can later spot.

Unlike Meta’s and OpenAI’s methods, Google says editing an image or applying filters won’t erase SynthID.
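
Google has not published SynthID’s internals, but the classic least-significant-bit technique below illustrates what a pixel-level, invisible watermark means. Note that LSB marks are easily destroyed by edits, which is exactly the weakness SynthID claims to avoid; this is an illustration of the concept, not Google’s method:

    import numpy as np

    def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
        # Hide one bit in the least significant bit of each pixel value.
        return (pixels & 0xFE) | bits

    def read_watermark(pixels: np.ndarray) -> np.ndarray:
        return pixels & 1  # recover the hidden bit pattern

    image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    mark = np.random.randint(0, 2, size=(4, 4), dtype=np.uint8)

    stamped = embed_watermark(image, mark)
    assert np.array_equal(read_watermark(stamped), mark)
    # Each pixel changes by at most 1/255, far below what the eye can see.
    print(np.abs(stamped.astype(int) - image.astype(int)).max())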

READ: Google SynthID helps detect AI and manmade content

The University of Chicago also developed an image-cloaking method that prevents AI tools from exploiting the pictures they scrape. Here’s how the Glaze program works (a conceptual sketch follows the list):

  1. SAND Lab created Style Transfer algorithms, which are similar to generative AI art models.
  2. Next, the researchers integrated those algorithms into Glaze.
  3. They then cloaked an image with that software.
  4. The program uses the Style Transfer algorithms to recreate the picture in a specific style, such as cubism or watercolor, without changing its content.
  5. Afterward, Glaze identifies which characteristics changed from the original photo.
  6. It subtly distorts those features before AI art generators see the image.
  7. As a result, the AI model gains little to nothing from the picture, and the original stays intact.
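
The actual optimization behind Glaze is described in the SAND Lab paper; the toy Python sketch below only mirrors the loop the list describes. The style_features() extractor is a hypothetical stand-in, and the “style-transferred” target is faked with random data:

    import numpy as np

    def style_features(image: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in: real systems use deep feature extractors.
        return np.fft.fft2(image).real

    def cloak(original, target_style, budget=0.03, steps=100):
        # Nudge the image's style features toward a different style while
        # keeping every pixel within a small perceptibility budget.
        cloaked = original.copy()
        for _ in range(steps):
            delta = style_features(target_style) - style_features(cloaked)
            step = np.fft.ifft2(delta).real * 0.01
            cloaked = np.clip(cloaked + step, 0.0, 1.0)
            cloaked = np.clip(cloaked, original - budget, original + budget)
        return cloaked

    art = np.random.rand(32, 32)          # stand-in for the artwork
    watercolor = np.random.rand(32, 32)   # fake style-transferred copy

    protected = cloak(art, watercolor)
    print(np.abs(protected - art).max())  # stays within the budget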

Neubauer Professors of Computer Science Ben Zhao and Heather Zheng created the program to defend artists from generative art platforms.

You may also like: How to join the Bluesky social network

“Artists really need this tool; the emotional impact and financial impact of this technology on them is really quite real,” Zhao said. 

“We talked to teachers who were seeing students drop out of their class because they thought there was no hope for the industry, and professional artists who are seeing their style ripped off left and right,” Zhao added.

These AI labeling methods still have flaws, but continued development is essential to prevent rampant misinformation driven by artificial intelligence.