Meta has updated its policy for labeling social media posts as AI-generated after photos posted by some users were inaccurately tagged “Made with AI” when they had merely been edited, not generated.
Distinguishing Edited and AI-Generated Content
In many cases, social media content was flagged as AI-generated simply because it had been edited in Adobe Photoshop, not because it was generated from scratch by a neural network such as Stable Diffusion or DALL-E. When an image is exported from Photoshop, the file carries provenance metadata describing how it was produced, and because some Photoshop tools (Generative Fill, for example) are powered by generative AI, even a light retouch can mark the image as AI-assisted. Once Meta’s automated systems read those markers and applied the “Made with AI” label, the user could no longer remove it. But a fully AI-generated image and a minor edit to a real photo are not the same thing, and labeling both the same way tells audiences very little.
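To make the failure mode concrete, here is a minimal sketch in Python of this kind of metadata check. It is not Meta’s actual pipeline: it simply scans a file’s raw bytes for the IPTC digital source type markers that Adobe tools can write into XMP metadata, and the file name is hypothetical. Note that the “composite” marker, which indicates a real photo with some AI-assisted editing, trips the check just as surely as the marker for a fully generated image.

```python
from pathlib import Path

# IPTC DigitalSourceType markers that Adobe tools can embed in a file's
# XMP metadata. The second one denotes a real photo with AI-assisted
# edits, which is exactly the case a naive detector conflates with
# fully synthetic images.
AI_SOURCE_TYPES = (
    b"digitalsourcetype/trainedAlgorithmicMedia",               # fully AI-generated
    b"digitalsourcetype/compositeWithTrainedAlgorithmicMedia",  # AI-assisted edit
)

def xmp_ai_flags(path: str) -> list[str]:
    """Return any AI-related source-type markers found in the file.

    Deliberately crude: it scans raw bytes instead of parsing the XMP
    packet, mirroring the blunt pass/fail nature of the original labels.
    """
    data = Path(path).read_bytes()
    return [marker.decode() for marker in AI_SOURCE_TYPES if marker in data]

if __name__ == "__main__":
    flags = xmp_ai_flags("edited_photo.jpg")  # hypothetical file name
    print(flags if flags else "no AI markers found in metadata")
```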
Meta has decided to separate AI-assisted editing from fully AI-generated images. “We’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context,” the company said, adding that it is updating the “Made with AI” label to “AI info,” which people can click on for more information.
Implementing the C2PA Standard
To help its users, the company decided to label AI-generated content, but the system it deployed relied on the Content Credentials standard developed by the Coalition for Content Provenance and Authenticity (C2PA). The standard works by embedding metadata in the file (Adobe products add it for any degree of AI-assisted editing), and that data is easily removed: taking a screenshot and pasting the image into a new document, for example, produces a copy with no provenance information at all. There is still no robust, universal way to identify AI-created material, so Internet users are advised to remain vigilant.
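As a quick illustration of that fragility, the sketch below (using the Pillow imaging library; file names are hypothetical) re-encodes a JPEG without carrying its metadata over, which is roughly what taking a screenshot does: the resulting copy contains no Content Credentials for any detector to find.

```python
from PIL import Image

# Re-encoding the pixels without explicitly passing the original metadata
# along drops the XMP/C2PA payload (Pillow does not copy it by default),
# much as a screenshot captures pixels only. File names are hypothetical.
with Image.open("edited_photo.jpg") as im:
    im.save("stripped_copy.jpg", "JPEG", quality=95)

# The byte-scan check from the earlier sketch would now come up empty:
# xmp_ai_flags("stripped_copy.jpg") -> []
```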