News / AI & GENERATIVE AI

OpenAI, Adobe & Microsoft to Require Labeling of AI Materials

The three tech giants see serious problems with AI-generated images and video that are not labeled with their origin.

In May, Kamera & Bild reported that the EU's Council of Ministers had approved the proposal in the so-called EU AI Act requiring certain tools, programs, and systems to label AI-generated images and video.

Now the three major companies working with and developing generative AI - OpenAI, Adobe, and Microsoft - have announced their support for a new California bill that would require watermarks in the metadata of not only images but also video clips and audio files.

One reason is that AI-generated material can spread disinformation when used as propaganda across various social media channels. AI-generated images have also become far more realistic in recent years, making it increasingly difficult for viewers to tell whether an image was created with AI or is a photograph of a real event.

An example of this was seen in Kamera & Bild's competition "AI image or photograph?", where no contestant answered everything correctly when asked to guess whether eight images were AI-generated or real photographs.

(Image caption: The image is AI-generated.)

"New technology and standards can help people understand the origin of the content they find online and avoid confusion between human-generated content and photorealistic AI-generated content.", writes OpenAI's Chief Strategy Officer Jason Kwon.

The bill, known as AB 3211, will go to a vote in the California Senate and could then be signed by Governor Gavin Newsom at the end of September.

A common misconception is that the publisher of AI-generated material must label it for viewers when posting it, for example on social media. That is not the case: the requirement is that the system or program providing the service must inform the viewer that the material is AI-generated, which is what both the EU AI Act and this new bill address.
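As a rough illustration of the kind of disclosure the bills describe, a generating service could attach a machine-readable label to each output it produces. The sketch below writes the label as a JSON sidecar file; the field names and file layout are my own assumptions, since real systems would embed this in the media file's own metadata (for example a C2PA manifest) rather than a separate file.

```python
import json
from pathlib import Path

def label_ai_output(media_path: str, generator: str) -> Path:
    """Write a sidecar JSON file declaring the media as AI-generated.

    Illustrative sketch only: field names are assumptions, and real
    systems embed this in the file's own metadata, not a sidecar.
    """
    label = {
        "source_file": Path(media_path).name,
        "ai_generated": True,    # the disclosure the bills require
        "generator": generator,  # e.g. the model or service name
    }
    sidecar = Path(media_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar

# Usage: label a freshly generated image file (placeholder bytes)
Path("sunset.png").write_bytes(b"\x89PNG placeholder")
print(label_ai_output("sunset.png", "example-image-model"))
```

The key point the legislation makes is that this labeling burden sits with the system producing the content, not with whoever later reposts it.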

One platform that has taken a stance on the publication of AI-generated material is YouTube, whose updated terms of service state that those who publish videos must disclose if a clip contains AI-created material, as a matter of transparency toward viewers. The requirement applies to realistic-looking content; as an example, they cite a generated realistic scene that never actually happened.

Camera manufacturer Sony also sees the problems and helped create the collaboration known as C2PA, the Coalition for Content Provenance and Authenticity, which aims to verify the authenticity of images through attached provenance data. The collaboration consists of Sony, Adobe, BBC, Google, Intel, Microsoft, Truepic, and Publicis, which have developed a shared specification for the verification.
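The core idea behind this kind of authenticity verification can be sketched in a few lines: hash the media bytes at capture time, sign the hash, and later check that the signature still matches. This is a deliberately simplified stand-in, not the actual C2PA specification, which uses public-key signatures and certificate chains rather than the shared secret assumed here.

```python
import hashlib
import hmac

# Assumption: a stand-in shared key; real provenance systems use
# public-key cryptography with certificates, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_media(media: bytes) -> str:
    """Hash the media bytes and sign the digest."""
    digest = hashlib.sha256(media).hexdigest()
    return hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    """Re-derive the signature; any edit to the bytes invalidates it."""
    return hmac.compare_digest(sign_media(media), signature)

original = b"camera sensor data"
sig = sign_media(original)
print(verify_media(original, sig))          # True: bytes untouched
print(verify_media(original + b"x", sig))   # False: bytes were altered
```

Even this toy version shows why tampering is detectable: changing a single byte of the image changes the hash, so the stored signature no longer verifies.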

"The spread of false information and images has a real social impact that harms not only our photojournalists and news agency partners, but society as a whole," stated Salmon-Legagneur when the partnership with C2PA was made official.

You can read more about how to use AI-generated material legally in our article "Copyright & AI Images - What the Law Says About Generated Works".