Meta, the parent company of Facebook, has announced major changes to its policies on digitally altered media ahead of the upcoming U.S. elections, which will test its ability to police misleading content created by new artificial intelligence technologies.
Meta said it will begin applying “Made with AI” labels in May to AI-generated videos, images, and audio shared on its platforms, expanding a policy that previously covered only a narrow slice of doctored videos. Monika Bickert, Vice President of Content Policy, explained in a blog post that Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk” of deceiving the public, regardless of how the content was created.
The new approach marks a shift from removing a limited set of posts to leaving the content up while giving viewers information about how it was made. Meta had previously announced plans to detect images created with third-party AI tools by reading invisible markers embedded in the files, though it gave no start date at the time. The updated labeling rules will apply to content posted on Facebook, Instagram, and Threads; other Meta services, such as WhatsApp and its Quest virtual reality headsets, are covered by separate rules.