Helge O. Svela, CEO of Media Cluster Norway, author of this guest post.
The following is a guest post from Helge O. Svela, CEO of Media Cluster Norway. Media Cluster Norway joined the IPTC as an Associate Member in 2024, and Helge leads the Provenance Best Practices and Implementation Working Group, where news publishers share their progress in implementing C2PA and the IPTC Verified News Publisher programme within their organisations.
In this article, Helge introduces Project Reynir, an initiative to bring secure media signing technology to the Norwegian media industry.
The journalistic institution must rethink how it develops and applies technology if society is to stand a chance against the deluge of fake images and video from generative AI. Never before in human history has it been easier to produce realistic, but fake, images and video, and spread them around the world. The rapid technological development of generative artificial intelligence has turbocharged the engines of disinformation, and caught both society and journalism off-guard. Never have we been more vulnerable.
Disinformation is destabilising our democracies and spreading erroneous information, with potentially severe consequences both for democratic processes and for the public in the face of natural disasters and other crises. The first round of the Romanian elections in 2024 was annulled due to what was dubbed an “algorithmic invasion” of social media disinformation. In the aftermath of the earthquake in Myanmar in March, AI-generated videos of the destruction, shared by so-called “engagement farmers” likely acting with financial motives, received millions of views on social media.
Generative AI has given humanity the ability to create realistic videos and images simply by typing a few words into a website. However, these tools also quickly became part of the arsenal of the enemies of democracy. As a result, disinformation is becoming more prevalent, appearing more professional and costing almost nothing to produce. Generative AI is an industrial revolution for the troll factories in Russia too, and for others who seek to manipulate our perception of the world and sow doubt about what is true.
This is not a media problem. It is a democratic problem, and a dangerous one at that. Disinformation created by troll factories and generative artificial intelligence and spread by bots poses an immediate threat to our democracies. We might end up doubting absolutely everything. When anyone can claim anything is generated and fake, the liars come out on top. This could destroy the foundation of our democracies: trust in each other and in our institutions. Current news is one thing; history is another. Imagine a dictator using fake historical footage of a famine in order to justify the ethnic cleansing of a minority, or an internet flooded with claims and visual “proof” of what happened in the past, all of which looks authentic. As a result of generative AI, this is no longer just a dystopian science fiction scenario. It is a real possibility. Never before have we needed editorial media more. However, the signal strength of editorial media risks being drowned out by an ever-growing cacophony of junk content and disinformation.
Project Reynir is our response to the threat generative AI poses. Because generative AI makes it so easy to fake both content and sender, editorial media are under threat on two fronts. In Project Reynir, we aim to address this problem with technical solutions.
The goal is to create something that makes it easier for ordinary people to distinguish between what is fake and what is real. By using cryptographically secured images and video, based on the open C2PA specification, both newsrooms and regular media users can be confident that the images they are seeing have not been tampered with on their journey from the photographer’s lens to the mobile screen. Moreover, using the same technology, authenticity markers can be added to the images and videos that news publishers post on social media and other third-party platforms, guaranteeing that content that appears to come from the BBC or AFP really is from those news organisations and not from someone impersonating them. If we succeed, we will be a significant step closer to solving the problem of artificially created noise for our present moment. Project Reynir unites newsrooms, media technology companies and academic researchers in the fight against disinformation. Our goal is 80 percent adoption in the Norwegian news ecosystem, and to serve as a beacon of best practices for the rest of the world of news.
We believe that time is critical, and that all good forces now must unite. Technological development has moved rapidly in the last few years, and the adoption of technology has sometimes been irresponsible. If our democracies are to stand firm in the face of the disinformation tsunami we are facing, quality journalism must be empowered. Only then can we enable citizens to make informed choices, free of manipulation and interference, in an environment where facts can be easily distinguished from lies. We call on the democratic governments of the world to invest in innovation in the news media space. The time for responsible tech innovation, made with resilient democracies in mind, is now.
This article was originally published in the report Seeking Truth, Ensuring Quality: Journalistic Weapons in the Age of Disinformation, published by the University of Bergen in collaboration with Media Cluster Norway, as a part of the Journalistic Weapons conference organised in Brussels on April 28 2025. The full report, including articles from Faktisk, the European Federation of Journalists, London School of Economics, the Center for Investigative Journalism Norway and others, is available at https://www.uib.no/sites/w3.uib.no/files/attachments/publication_seeking_truth_ensuring_quality.pdf.
Broadcast and entertainment companies including Disney, Sony Pictures, Gracenote, Sinclair and Amazon MGM heard from the IPTC’s Pam Fisher this week at the 2025 EIDR Annual Participant Meeting.
List of speakers at the 2025 EIDR Annual Participant Meeting, including Pam Fisher of the IPTC Video Metadata Working Group among others from Walt Disney Company, Sinclair Broadcast Group, Amazon MGM Studios, Gracenote, Warner Bros Discovery, Sony Pictures Entertainment and more.
The IPTC Verified News Publishers List is of particular interest to EIDR members, who are keen to sign their content and ensure provenance and authenticity, much as news publishers are. Pam gave a status update on the Verified News Publishers List, telling attendees that we now have between 10 and 20 publishers either holding certificates or in the process of obtaining them.
Pam’s message to EIDR members, and the entertainment media industry in general, was to be patient: media provenance using C2PA is worth supporting, and broad adoption will become feasible over the next 6 to 18 months. The IPTC is figuring out a way forward using the news media industry as a test case. The goal is that once we have a system that works well in the news industry, we will be able to scale up to other types of media providers and publishers.
For more information on IPTC’s Media Provenance work or the Verified News Publisher List, please contact IPTC.
At the 2024 IPTC Photo Metadata Conference, James Lockman of Adobe’s Digital Media Services division demonstrated the Custom Metadata Panel, a tool that allows users to create their own user interface for editing sets of metadata fields. Since that time, the IPTC has worked with James and his team to make the tool even more useful, supporting the full set of IPTC properties and even enabling IPTC Photo Metadata as the default view in the tool.
Adobe’s Custom Metadata Panel offers the full set of IPTC Photo Metadata properties to users of Adobe Bridge, Photoshop, Premiere Pro and Illustrator.
The Custom Metadata Panel also supports IPTC’s equivalent standard for video content, IPTC Video Metadata Hub. We will add guidance in the future for how it can be used to edit Video Metadata Hub metadata from within Adobe Premiere Pro.
The IPTC thanks James and his team for their work on the panel and for enhancing it so well over the past 12 months to turn it into a real power tool for media managers who want the full power of IPTC Photo Metadata at their fingertips.
Hands-on metadata workshop in Juan les Pins, France in May 2025
IPTC Managing Director Brendan Quinn will run a workshop on Wednesday 14th May showing users how to activate the plugin and how to use it to edit metadata for various purposes. This workshop will take place as part of the IPTC Day at CEPIC 2025, and so will be accessible to attendees of CEPIC 2025 and of the IPTC 2025 Spring Meeting.
Helge O Svela of Media Cluster Norway speaks at the C2PA and Media Provenance Summit at AFP headquarters in Paris on April 3, 2025. (Photo by Kiran RIDLEY / AFP)
“It has never been more important to safeguard authentic news media,” say the organisers.
“We must strengthen our voice and hold our ground against the big tech players. It is critical that the industry works together,” said Fabrice Fries, Chief Executive Officer at AFP, in his opening remarks for the workshop in Paris.
“At AFP we are committed to ensuring that both news organisations and the general public can inspect the provenance of our images. This transparency builds trust,” said Eric Baradat, the global news deputy director for photo and archives at AFP.
AFP, BBC and Media Cluster Norway jointly organised the workshop, which was hosted by AFP and supported by the International Press Telecommunications Council (IPTC). The workshop focused on image metadata and how the C2PA standard, also known as Content Credentials, can safeguard it.
“The challenges the news industry are facing are so great that we can only succeed if we work together. Making sure the public can discern between authentic media and content made by generative AI is vital not only for news organisations, but for democratic societies,” said Helge O. Svela, CEO of Media Cluster Norway.
More than 40 people from over 20 news organisations participated in the full-day workshop. Among the presentations was a study commissioned by Media Cluster Norway’s Project Reynir on how media consumers respond to being shown more detailed information about an image. The study was conducted by MediaFutures at the University of Bergen, and built on a user study conducted by the BBC.
“Trust is earned. At the BBC we have seen that users really engage when we show them how their news was made. Extra media provenance details such as when and where an image was taken, or the steps used to verify it, make a real difference to how users trust their news. The C2PA standard can allow us to share this information with the users in a secure and trustworthy way,” said Judy Parnall, Principal Technologist, BBC Research and Development.
Among the participants in the workshop were CBC-Radio Canada, Deutsche Welle, France TV, ITV, NHK and Al Jazeera. Topics discussed included carrying provenance metadata from glass to glass versus adding it at the point of publishing, as well as the importance of redaction to the media industry and content provenance for media archives.
“It is vital that the needs of the news media ecosystem are heard as this technology and standards are further developed and refined,” said Brendan Quinn, Managing Director at IPTC.
The IPTC Media Provenance Committee works on several initiatives for implementing and furthering the development of the C2PA technology for the media industry. Many of the speakers and participants of the Paris workshop are actively involved in this work.
Chinese authorities issued guidelines on Friday requiring labels on all artificial intelligence-generated content circulated online, aiming to combat the misuse of AI and the spread of false information.
The regulations, jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, will take effect on Sept 1.
A spokesperson for the Cyberspace Administration said the move aims to “put an end to the misuse of AI generative technologies and the spread of false information.”
According to China Daily, “[t]he guidelines stipulate that content generated or synthesized using AI technologies, including texts, images, audios, videos and virtual scenes, must be labeled both visibly and invisibly” (emphasis added by IPTC). This potentially means that IPTC or another form of embedded metadata must be used, in addition to a visible watermark.
“Content identification numbers”
The article goes on to state that “[t]he guideline requires that implicit labels be added to the metadata of generated content files. These labels should include details about the content’s attributes, the service provider’s name or code, and content identification numbers.”
It is not clear from the article which particular identifiers should be used. There is currently no globally recognised mechanism for identifying individual pieces of content by identification numbers, although IPTC Photo Metadata does allow image identifiers to be included via the Digital Image GUID property, and Video Metadata Hub has a Video Identifier field based on Dublin Core’s generic dc:identifier property.
According to the article, “Service providers that disseminate content online must verify that the metadata of the content files contain implicit AIGC labels, and that users have declared the content as AI-generated or synthesized. Prominent labels should also be added around the content to inform users.”
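To make the idea of an “implicit label” concrete, here is a minimal sketch, in Python, of how such a label and a per-item identifier could be embedded using existing IPTC Photo Metadata properties (Digital Source Type and Digital Image GUID). It assumes ExifTool is installed and on the PATH; the file name is hypothetical, and this is an illustration of the general approach rather than an implementation of the Chinese guidelines.

```python
# Illustrative sketch: embed an "implicit" AI-generation label and a content
# identifier in an image via IPTC Photo Metadata, using ExifTool from Python.
import subprocess
import uuid

IMAGE = "generated-image.jpg"  # hypothetical file name

# IPTC Digital Source Type value indicating fully AI-generated media
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

subprocess.run(
    [
        "exiftool",
        f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
        # A per-item identifier, loosely analogous to the "content
        # identification numbers" the guidelines describe.
        f"-XMP-iptcExt:DigitalImageGUID={uuid.uuid4()}",
        IMAGE,
    ],
    check=True,
)
```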
Spain’s equivalent legislation on labelling AI-generated content
The Spanish proposal has been approved by the upper house of parliament but must still be approved by the lower house. The legislation will be enforced by the newly-created Spanish AI supervisory agency AESIA.
If companies do not comply with the proposed Spanish legislation, they could incur fines of up to 35 million euros ($38.2 million) or 7% of their global annual turnover.
On behalf of our memberships, IPTC and PLUS respectfully suggest that existing copyright law is sufficient to enable licensing of content to AI platforms. The “fair use” provision does not cover commercial AI training. Existing United States copyright law should be enforced.
IPTC and PLUS Photo Metadata provide a technical means for expressing the creator’s intent as to whether their creations may be used in generative AI training data sets. This takes the form of metadata embedded in image and video files. This solution, in combination with other solutions such as the Text and Data Mining Reservation Protocol, could take the place of a formal licence agreement between parties, making an opt-in approach technically feasible and scalable.
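As an illustration of how such an expression of intent can be embedded in practice, the sketch below sets the PLUS Data Mining property on an image from Python by calling ExifTool. It assumes an ExifTool version recent enough to recognise the XMP-plus:DataMining tag; the file name is hypothetical.

```python
# Minimal sketch: embed the PLUS "Data Mining" property to signal that an
# image may not be used for AI/ML training, using ExifTool from Python.
import subprocess

IMAGE = "photo.jpg"  # hypothetical file name

# PLUS controlled-vocabulary value prohibiting AI/ML training use
PROHIBIT_AI_TRAINING = "http://ns.useplus.org/ldf/vocab/DMI-PROHIBITED-AIMLTRAINING"

subprocess.run(
    ["exiftool", f"-XMP-plus:DataMining={PROHIBIT_AI_TRAINING}", IMAGE],
    check=True,
)
```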
It is true that our technical solutions would also be relevant if the US government chose to implement an opt-out based approach. However, an opt-out based approach does not currently protect owners’ rights well, due to the routine activity of “metadata stripping” – removing important rights and accessibility metadata that is embedded in media files, in the misguided belief that it will improve site performance. Metadata stripping is performed by many publishers and publishing systems – often inadvertently.
As a result, we can only recommend that the US adopts an opt-in approach. We request that the US government ensure that metadata embedded in media files is recognised as a core part of any technical mechanism for declaring content owners’ desire for their content to be included in or excluded from training data sets.
Content creators are a core part of the US economy and have a strong voice. We agree with their position, but we don’t simply come with another voice of complaint: we bring a viable, ready-made technical solution that can be used today to implement true opt-in data mining permissions and reservations.
Close-up screenshot of Pinterest’s label for AI-generated content.
As reported in Social Media Today, Pinterest has started using IPTC embedded Photo Metadata to signal when content in “Image Pins” has been generated by AI.
Reports emerged in February that Pinterest had started labelling AI-generated images. This has now been confirmed via an official update to Pinterest’s user documentation.
In the Pinterest documentation, a new section has recently been added that describes how it works:
Screenshot of Pinterest’s help pages showing how IPTC metadata is used to signal AI-generated content.
“Pinterest may display a label in the foreground of an image Pin when we detect that it has been generated or modified with AI. This is in accordance with IPTC standard for photo metadata. We’re working on ways to expand our capabilities to better identify GenAI content in the future through additional technologies.”
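For illustration only (this is not Pinterest’s actual implementation), the sketch below shows the kind of check a platform could perform: read the IPTC Digital Source Type property embedded in an image and flag it if the value is one of the generative-AI terms from the IPTC controlled vocabulary. It assumes ExifTool is installed; the file name is hypothetical.

```python
# Sketch: detect whether an image is labelled as AI-generated via the
# IPTC Digital Source Type property, read with ExifTool.
import json
import subprocess

# IPTC Digital Source Type terms that indicate generative AI involvement
GENAI_TERMS = ("trainedAlgorithmicMedia", "compositeWithTrainedAlgorithmicMedia")


def is_labelled_as_genai(path: str) -> bool:
    """Return True if the image declares a generative-AI Digital Source Type."""
    out = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    )
    value = json.loads(out.stdout)[0].get("DigitalSourceType", "")
    # Values are URIs such as
    # http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia
    return value.endswith(GENAI_TERMS)


if __name__ == "__main__":
    print(is_labelled_as_genai("image-pin.jpg"))  # hypothetical file name
```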
On behalf of our memberships, IPTC and PLUS respectfully suggest that existing UK copyright law is sufficient to enable licensing of content to AI platforms. There is no “fair use” provision in UK copyright law, and “fair dealing” does not cover commercial AI training. Existing copyright law should be enforced.
IPTC and PLUS Photo Metadata provide a technical means for expressing the creator’s intent as to whether their creations may be used in generative AI training data sets. This takes the form of metadata embedded in image and video files. This solution, in combination with other solutions such as the Text and Data Mining Reservation Protocol, could take the place of a formal licence agreement between parties, making an opt-in approach technically feasible and scalable.
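For reference, a Text and Data Mining Reservation Protocol declaration is itself very simple. The sketch below, based on our reading of the TDMRep draft, writes a /.well-known/tdmrep.json file reserving TDM rights for a publisher’s images; the paths and policy URL are illustrative assumptions.

```python
# Sketch: generate a TDM Reservation Protocol (TDMRep) declaration file.
import json
from pathlib import Path

# One rule covering the publisher's image paths: rights are reserved
# (opt-out of TDM), with a policy URL where licensing terms can be found.
tdmrep = [
    {
        "location": "/images/*",
        "tdm-reservation": 1,
        "tdm-policy": "https://example.com/licensing/tdm-policy.json",
    }
]

well_known = Path(".well-known")
well_known.mkdir(exist_ok=True)
(well_known / "tdmrep.json").write_text(json.dumps(tdmrep, indent=2))
```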
It is true that our technical solutions would also be relevant if the UK government chooses to implement an “opt-out” approach similar to that adopted in the EU. However, an opt-out-based approach does not currently protect owners’ rights well, due to the routine activity of “metadata stripping” – removing important rights and accessibility metadata that is embedded in media files, in the misguided belief that it will improve site performance. Metadata stripping is performed by many publishers and publishing systems – often inadvertently. (See our research on metadata stripping by social media platforms from 2019; very little has changed since then.)
As a result, we can only recommend that the UK adopts an opt-in approach. We request that the UK ensure that metadata embedded in media files is recognised as a core part of any technical mechanism for declaring content owners’ desire for their content to be included in or excluded from training data sets.
During the course of this consultation, it has become clear that content creators are a core part of the UK economy and have a strong voice. We agree with their position, but we don’t simply come with another voice of complaint: we bring a viable, ready-made technical solution that can be used today to implement true opt-in data mining permissions and reservations.
The IPTC is happy to announce that EIDR and IPTC have signed a liaison agreement, committing to work together on projects of mutual interest including media metadata, content distribution technologies and work on provenance and authenticity for media.
The Entertainment Identifier Registry Association (EIDR) was established to provide a universal identifier registry that supports the full range of asset types and relationships between assets. Members of EIDR include Apple, Amazon MGM Studios, Fox, the Library of Congress, Netflix, Paramount, Sony Pictures, Walt Disney Studios and many more.
EIDR’s primary focus is managing globally unique, curated, and resolvable content identification (which applies equally to news and entertainment media), via the Emmy Award-winning EIDR Content ID, and content delivery services, via the EIDR Video Service ID. In support of this, EIDR is built upon and helps promulgate the MovieLabs Digital Distribution Framework (MDDF), a suite of standards and specifications that address core aspects of digital distribution, including identification, metadata, avails, asset delivery, and reporting.
IPTC’s Video Metadata Hub standard already provides a mapping to EIDR’s Data Fields and the MDDF fields from related organisation MovieLabs. The organisations will work together to keep these mappings up-to-date and to work on future initiatives including making C2PA metadata work for both the news and the entertainment sides of the media industry. IPTC members have already started working in this area via IPTC’s Media Provenance Committee.
“In the Venn diagram of media, there is significant overlap between news and entertainment interests in descriptive metadata standards, globally-unique content identification, and media provenance and authenticity,” said Richard W. Kroon, EIDR’s director of technical operations. “By working together, we each benefit from the other’s efforts and can bring forth useful standards and practices that span the entire commercial media industry.
“Our hope here is to find common ground that can align our respective metadata standards to support seamless metadata management across the commercial media landscape.”
Helge, Judy, Bruce and Charlie speaking at the Origin Media Provenance Summit in October 2024.
In October 2024, 70 people representing 30 organisations from 15 countries across four continents gathered at the BBC building in Salford for the Origin Media Provenance Seminar. The seminar was organised by BBC R&D with partners from Media Cluster Norway (MCN) in Bergen.
Media provenance is a way to record digitally signed information about the provenance of imagery, video and audio – information (or signals) that shows where a piece of media has come from and how it’s been edited. Like an audit trail or a history, these signals are called ‘content credentials’, and are developed as an open standard by the C2PA (Coalition for Content Provenance and Authenticity). Content credentials have just been selected by Time magazine as one of their ‘Best Inventions of 2024’.
Attendees came from all over the world, including the US, Japan, many parts of Europe, and sub-Saharan Africa.
According to the BBC blog post:
In order for news organisations to show their consumers that they really are looking at some content from the real “BBC”, content credentials use the same technology as websites – digital certificates – to prove who signed it. The International Press Telecommunications Council (IPTC) has created a programme called “Origin Verified News Publishers”, which allows news organisations to register to get their identity checked. Once their ID has been verified, they can get a certificate, which gives consumers assurance that the content certifiably comes from the organisation they have chosen to trust.
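As a rough illustration of what checking such a signature looks like in practice, the sketch below uses the open-source c2patool CLI from Python to print the Content Credentials attached to a file, including details of the signing certificate. It assumes c2patool is installed; the file name is hypothetical and the exact output format may vary between versions.

```python
# Sketch: inspect the Content Credentials (C2PA manifest store) of a media
# file by shelling out to the c2patool CLI.
import subprocess

result = subprocess.run(
    ["c2patool", "news-photo.jpg"],  # hypothetical file name
    capture_output=True, text=True, check=True,
)
# c2patool reports the file's manifest store as JSON, including information
# about the signing certificate, which is what a "verified publisher" check
# would inspect.
print(result.stdout)
```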