What’s the news: Images generated with ChatGPT on the web and through OpenAI’s API serving the DALL·E 3 model will include Coalition for Content Provenance and Authenticity (C2PA) metadata, the company said on its blog on February 6, 2024. OpenAI added that the update will roll out to all mobile users by February 12.
What is C2PA? Coalition for Content Provenance and Authenticity is “an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as ‘provenance’ information,” an MIT Tech Review post explains. Via C2PA, publishers, companies, and others embed metadata (origin details) in media so that its origin and related information can be verified. “C2PA isn’t just for AI-generated images – the same standard is also being adopted by camera manufacturers, news organizations, and others to certify the source and history (or provenance) of media content,” the OpenAI blog post further explained. Currently, however, only images generated with ChatGPT or OpenAI’s API serving DALL·E 3 will contain the C2PA metadata.
Images to get signatures and additional information: Images produced through OpenAI’s API will contain a signature (watermark) indicating they were generated by DALL·E 3, while images produced within ChatGPT will contain an additional manifest showing the content was created using ChatGPT. According to the blog, this creates a dual-provenance lineage (a record of the origin, history, and ownership of the content).
Users can verify images on Content Credentials: Users who want to verify which tools were used to create the image can use websites like Content Credentials Verify.
“This should indicate the image was generated through our API or ChatGPT unless the metadata has been removed,” said the blog.
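For readers curious what such a check involves under the hood: the C2PA specification embeds its manifests in JPEG files inside APP11 segments as JUMBF boxes labeled “c2pa”. As a purely illustrative sketch (not OpenAI’s or Content Credentials Verify’s actual implementation, which also cryptographically validates the manifest), a minimal byte-level probe for the presence of that metadata might look like this:

```python
def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Return True if any APP11 segment of the JPEG contains a 'c2pa' label.

    This only detects the *presence* of an embedded manifest; it does not
    verify the cryptographic signature, which real tools must do.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker
        raise ValueError("not a JPEG")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A real verifier would go on to parse the JUMBF box structure and check the manifest’s signature chain; this sketch only answers “is there C2PA metadata here at all?”.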
C2PA is not a foolproof measure
The company added that such metadata “is not a silver bullet” for determining which AI tools were used to create an image. Entities like social media platforms can strip the metadata from uploaded images, and actions like taking a screenshot can also remove it. This means that an image lacking this metadata may still have been generated with ChatGPT.
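This fragility is easy to see at the byte level: because the provenance data lives in ordinary JPEG metadata segments, any re-encoding step that rebuilds the file without copying those segments silently discards it. The sketch below (an illustration of the failure mode, not anyone’s actual pipeline) rewrites a JPEG while dropping its APP11 segments, which is in effect what many upload pipelines and screenshot tools do:

```python
def strip_app11(jpeg_bytes: bytes) -> bytes:
    """Rebuild a JPEG with all APP11 (C2PA-carrying) segments removed.

    Demonstrates how provenance metadata is lost when a file is rewritten
    without preserving its metadata segments.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker
        raise ValueError("not a JPEG")
    out = bytearray(jpeg_bytes[:2])
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: copy the rest of the stream verbatim
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xEB:  # keep every segment except APP11
            out += segment
        i += 2 + length
    out += jpeg_bytes[i:]  # image data and trailer
    return bytes(out)
```

The stripped file still decodes and displays identically; only the provenance record is gone, which is why OpenAI cautions that an image without C2PA metadata cannot be assumed human-made.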
During MediaNama’s ‘Deep Fakes and Democracy’ event, Gautham Koorma, a machine learning engineer and researcher from UC Berkeley, noted that every watermarking technique has been broken by miscreants. This is not to say that watermarking should not be done, but that the guardrail can easily be defeated by “a sophisticated adversary” using something as common as a photo-editing tool. He therefore advised individuals to also keep an eye out for other telltale signs in a piece of content, such as irregular lighting or visual distortion at certain points.