The Media Provenance Summit brought together leading experts, journalists and technologists from across the globe to Mount Fløyen in Bergen, Norway, to address some of the most pressing challenges facing news media today.
Hosted by Media Cluster Norway and organised together with the BBC, the EBU and IPTC, the full-day summit on September 23 convened participants from major news organisations, technology providers and international standards bodies to advance the implementation of the C2PA content provenance standard, also known as Content Credentials, in real-world newsroom workflows. The ultimate aim is to strengthen the signal of authentic news media content at a time when it is challenged by generative AI.
“We need to work together to tackle the big problems that the news media industry is facing, and we are very grateful for everyone who came together here in Bergen to work on solutions. I believe we made important progress,” said Helge O. Svela, CEO of Media Cluster Norway.
The program focused on three critical questions:
- How to preserve C2PA information throughout editorial workflows when not all tools yet support the technology.
- When to sign content as it moves through the workflow: at device level, organisational level, or both.
- How to handle confidentiality and privacy issues, including the protection of sources and sensitive material.
“We were very happy to see a focus on real solutions, with some great ideas and tangible next steps,” said IPTC’s Managing Director, Brendan Quinn. “With participants from across the media ecosystem, it was exciting to see vendors, publishers, broadcasters and service providers working together to address issues in practically applying C2PA to media workflows in today’s newsrooms.”
Speakers included Charlie Halford (BBC), Andy Parsons (CAI/Adobe), François-Xavier Marit (AFP), Kenneth Warmuth (WDR), Lucille Verbaere (EBU), Marcos Armstrong and Sébastien Testeau (CBC/Radio-Canada), and Mohamed Badr Taddist (EBU).

“The BBC welcomes this focus on protecting access to trustworthy news. We are proud to have been founder members of the media provenance work carried out under the auspices of C2PA and we are delighted to see it moving forward with such strong industry support,” said Laura Ellis, Head of Technology Forecasting at BBC Research.
Participants travelled from as far away as Japan, Australia, the US and Canada to attend the summit.
“We’re pleased to collectively have taken a few hurdles on the way to enabling a broader adoption of Content Provenance and Authenticity”, said Hans Hoffmann, Deputy Director at EBU Technology and Innovation Department. “The definition of common practices for signing content in workflows, retrieving provenance information thanks to soft binding, and better safeguards for the privacy of sources address important challenges. Public service media are committed to fight disinformation and improve transparency, and EBU members were well represented in Bergen. The broad participation from across the industry and globe smooths the path towards adoption. Thanks to Media Cluster Norway for hosting the event!”
The summit emphasised moving from problem analysis to solution exploration. Through structured sessions, participants defined key blockers, sketched practical solutions and developed action plans aimed at strengthening trust in digital media worldwide.
About the Summit
The Media Provenance Summit was organised jointly by Media Cluster Norway, the EBU, the BBC and IPTC, and made possible with the support of Agenda Vestlandet.
For more information, please contact: helge@medieklyngen.no
The IPTC has joined the BBC (UK), YLE (Finland), RTÉ (Ireland), ITV (UK), ITN (UK), EBU (Europe), AP (USA/Global), Comcast (USA/Global), ASBU (Africa and Middle East), Channel 4 (UK) and the IET (UK) as a “champion” in the Stamping Your Content project, run by the IBC Accelerator as part of this year’s IBC Conference in Amsterdam.
These “Champions” represent the content creator side of the equation. The project also includes “participants” from the vendor and integrator community: CastLabs, TCS, Videntifier, Media Cluster Norway, Open Origins, Sony, Google Cloud and Trufo.
This project aims to develop open-source tools that enable organisations to integrate Content Credentials (C2PA) into their workflows, allowing them to sign and verify media provenance. As interest in authenticating digital content grows, broadcasters and news organisations require practical solutions to assert source integrity and publisher credibility. However, implementing Content Credentials remains complex, creating barriers to adoption. This project seeks to lower the entry threshold, making it easier for organisations to embed provenance metadata at the point of publication and verify credentials on digital platforms.
The initiative has created a proof-of-concept open source ‘stamping’ tool that links to a company’s authorisation certificate, inserting C2PA metadata into video content at the time of publishing. Additionally, a complementary open-source plug-in is being developed to decode and verify these credentials, ensuring compliance with C2PA standards. By providing these tools, the project enables media organisations to assert content authenticity, helping to combat misinformation and reinforce trust in digital media.
This work builds upon the “Designing Your Weapons in the Fight Against Disinformation” initiative at last year’s IBC Accelerator, which mapped the landscape of digital misinformation. The current phase focuses on practical implementation, ensuring that organisations can start integrating authentication measures in real-world workflows. By fostering an open and standardised approach, the project supports the broader media ecosystem in adopting content provenance solutions that enhance transparency and trustworthiness.
Attend the project’s panel presentation session at the International Broadcasting Convention, IBC2025, in Amsterdam on Monday, September 15, from 09:45 to 10:45.
The speakers on the panel on Monday September 15 are all from IPTC member organisations:
- Henrik Cox, Solutions Architect – OpenOrigins
- Judy Parnall, Principal Technologist, BBC Research & Development – BBC
- Mohamed Badr Taddist, Cybersecurity Master graduate, content provenance and authenticity – European Broadcasting Union (EBU)
- Tim Forrest, Head of Content Distribution and Commercial Innovation – ITN
See more detail on the IBC Show site.
Many of the participating organisations are also IPTC members, so the work started in the project will continue after IBC through the IPTC Media Provenance Committee and its Working Groups.
We are already planning to carry this work forward at the next Media Provenance Summit, which will be held later in September in Bergen, Norway.

The IPTC is pleased to announce the full agenda for the 2025 IPTC Photo Metadata Conference, which will be held online on Thursday September 18th from 15.00 to 18.00 UTC. The focus this year is on how image metadata can improve real-world workflows.
We are excited to be joined by the following speakers:
- Brendan Quinn, IPTC Managing Director, presenting two sessions: IPTC’s AI Opt-Out Best Practices guidelines, and an update on IPTC’s work with C2PA and the Media Provenance Committee
- David Riecks, Lead of the IPTC Photo Metadata Working Group, presenting two sessions: the latest on IPTC’s proposed new properties for Generative AI, and also an update on the Adobe Custom Metadata Panel plugin and how it makes the complete IPTC Photo Metadata Standard available in Adobe products
- Paul Reinitz, consultant previously with Getty Images, discussing AI opt-out and copyright issues
- Ottar A. B. Anderson, previously a photographer with the Royal Norwegian Air Force and with over 15 years of experience as a commercial photographer, on proposals for metadata for image archiving and his work on the Digital Object Authenticity Working Group (DOAWG)
- Jerry Lai, previously a photographer for Getty Images, Reuters and Associated Press and now with Imagn, presenting a case study on using AI for captioning huge numbers of images for Super Bowl LIX
- Marcos Armstrong, Senior Specialist, Content Provenance at CBC/Radio-Canada, speaking about CBC’s project to map editorial workflows and identify where content authenticity technologies can be used in the newsroom
- Tim Bray, co-creator of XML and co-founder of OpenText Corporation, among many other accomplishments, speaking on his experiences with C2PA and his ideas for how it can be adopted in the future

This year’s conference promises to be a great one, with topics ranging from Generative AI and media provenance technology to the technical details of scanning historical documents, but always with a focus on how new technologies can be applied in the real world.
Registration is free and open to anyone.
See more information at the event page on the IPTC web site or simply sign up at the Zoom webinar page.
We look forward to seeing you there!
Google has announced the launch of its latest phone in the Pixel series, including support for IPTC Digital Source Type in its industry-leading C2PA implementation.
Many existing C2PA implementations focus on signalling AI-generated content, adding the IPTC Digital Source Type of “Generated by AI” to content that has been created by a trained model.
Google’s implementation in the new Pixel 10 phone differs by adding a Digital Source Type to every image created using the phone, using the “computational capture” Digital Source Type to denote photos taken by the phone’s camera. In addition, images edited using the phone’s AI manipulation tools show the “Edited using Generative AI” value in the Digital Source Type field.
Note that the Digital Source Type information is added using the “C2PA Actions” assertion in the C2PA manifest; unfortunately it is not yet added to the regular IPTC metadata section in the XMP metadata packet. So it can only be read by C2PA-compatible tools.
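To make the distinction concrete, here is a sketch of what a “C2PA Actions” assertion carrying an IPTC Digital Source Type might look like, modelled as a Python dict. The field names follow the C2PA specification’s actions assertion, but the exact manifest produced by any given device may differ; the helper function is our own illustration, not part of any SDK.

```python
# The IPTC Digital Source Type URI for photos produced by computational
# photography pipelines (e.g. multi-frame merging on a smartphone).
COMPUTATIONAL_CAPTURE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/computationalCapture"
)

# Sketch of a C2PA actions assertion as it might appear inside a manifest.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                # The asset was created by the device's camera pipeline.
                "action": "c2pa.created",
                "digitalSourceType": COMPUTATIONAL_CAPTURE,
            }
        ]
    },
}

def digital_source_types(assertion):
    """Collect every digitalSourceType URI recorded in an actions assertion."""
    return [
        action["digitalSourceType"]
        for action in assertion["data"]["actions"]
        if "digitalSourceType" in action
    ]
```

A C2PA validator reading the Pixel’s manifest would surface this URI; tools that only read the XMP packet would not see it, which is the limitation noted above.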
Background: what is “Computational Capture”?
The IPTC added Computational Capture as a new term in the Digital Source Type vocabulary in September 2024. It represents a “digital capture” that involves some additional algorithmic processing, as opposed to simply recording the encoded samples hitting the camera sensor, as with simple digital cameras.
For example, a modern smartphone doesn’t simply take one photo when you press the shutter button. Usually the phone captures several images from the phone sensor using different exposure levels and then an algorithm merges them together to create a visually improved image.
This is of course very different from a photo that was created by AI, or even one that was edited by AI at a human’s instruction, so we wanted to be able to capture this use case. Therefore we introduced the term “computational capture”.
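The multi-exposure merging described above can be illustrated with a toy sketch. Real computational-capture pipelines are far more sophisticated (frame alignment, tone mapping, denoising); this example simply merges three differently exposed “frames” (flat lists of pixel luminance values) using a weighted average that favours well-exposed frames.

```python
def merge_exposures(frames, weights=None):
    """Merge equally sized frames of 0-255 pixel values into one frame."""
    if weights is None:
        # Weight each frame by how close its pixels sit to mid-grey (128),
        # a crude stand-in for "well exposed".
        weights = [
            sum(1 - abs(p - 128) / 128 for p in frame) / len(frame)
            for frame in frames
        ]
    total = sum(weights)
    return [
        sum(w * frame[i] for w, frame in zip(weights, frames)) / total
        for i in range(len(frames[0]))
    ]

under = [20, 40, 60]      # underexposed frame
normal = [110, 130, 150]  # normally exposed frame
over = [200, 220, 240]    # overexposed frame

merged = merge_exposures([under, normal, over])
```

The merged result is an algorithmic blend of several captures rather than a single sensor readout, which is exactly the situation the “computational capture” term is meant to describe.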
For more information and examples, see the Digital Source Type guidance in the IPTC Photo Metadata User Guide.

The IPTC Photo Metadata Working Group is proposing a draft set of properties for recording details of images created using generative AI systems. The group presents a draft of these fields for your comments and feedback. After comments are reviewed, the group intends to add the new properties to a new version of the IPTC Photo Metadata Standard, which would be released later in 2025.
Use Cases
The proposals detailed here are intended to address these scenarios, among others:
- How do you know which system/model generated this image? For instance, if you wanted to compare how different systems—or versions of systems—interpret a given prompt, where would you look?
- How can you know what prompt text was entered, or what image was shared as a starting point? If you want to recreate similar images in the future with the same look, where should that information be stored?
- How can you tell who was involved in the creation of a generative AI image?
Example Scenario
You are the new designer for an organisation, and need to create an image for a monthly column. You are told to use generative AI, but your boss wants the end result to have the same “look and feel” as images used previously in the column. If you needed to find the images that were published previously, what information would be most useful in locating and retrieving them in your organisation’s image collection?
Proposed Properties:
- AI Model
  Name: AI Model
  Definition: The foundational model name and version used to generate this image.
  User Note: For example “DALL-E 2”, “Google Gemini 1.5 Pro”
  Basic Specs: Data type: Text / Cardinality: 0..1
- AI Text Prompt Description
  Name: AI Text Prompt Description
  Definition: The information that was given to the generative AI service as “prompt(s)” in order to generate this image.
  User Note: This may include negative [excludes] and positive [includes] statements in the prompt.
  Basic Specs: Data type: Text / Cardinality: 0..1
- AI Prompt Writer Name
  Name: AI Prompt Writer Name
  Definition: Name of the person who wrote the prompt used for generating this image.
  Basic Specs: Data type: Text / Cardinality: 0..1
- Reference Image(s)
  Name: Reference Image
  Definition: Image(s) used as a starting point to be refined by the generative AI system (sometimes referred to as the “base image”).
  Basic Specs: Data type: URI / Cardinality: 0..unbounded
All of these properties are optional rather than required, but we recommend that generative AI systems fill them in whenever possible.
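Since the IPTC has not yet published serialisation details (such as XMP property names) for these draft fields, the following sketch simply models the four proposed properties as a Python dict, with all field values invented for illustration, plus a small check of the draft cardinalities.

```python
# Hypothetical record using the four draft properties; every value here is
# made up for illustration (the "0..1" fields are single strings, the
# "0..unbounded" field is a list of URIs).
gen_ai_metadata = {
    "ai_model": "DALL-E 2",
    "ai_text_prompt_description": (
        "A watercolour city skyline at dusk; exclude people and vehicles"
    ),
    "ai_prompt_writer_name": "Jane Example",  # hypothetical person
    "reference_images": [
        "https://example.org/images/skyline-base.jpg"  # hypothetical URI
    ],
}

def validate(record):
    """Check the draft cardinalities: text fields 0..1, reference images 0..n."""
    single_valued = (
        "ai_model",
        "ai_text_prompt_description",
        "ai_prompt_writer_name",
    )
    for key in single_valued:
        if key in record and not isinstance(record[key], str):
            return False
    refs = record.get("reference_images", [])
    return isinstance(refs, list) and all(isinstance(u, str) for u in refs)
```

In the example scenario above, searching a collection on the `ai_model` and `ai_text_prompt_description` fields would be the most direct way to reproduce a previous “look and feel”.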
Request for Comment
The intent is for a new standard version including these fields to be proposed at the IPTC Autumn Meeting 2025 in October to be voted on by IPTC member organisations. If approved by members, the new version would be published in November 2025.
Please send your comments or suggestions for improvements using the IPTC Contact Us form or via a post to the public iptc-photometadata@groups.io discussion list by Friday 29th August 2025.

The IPTC participated in a “design team” workshop for the Internet Engineering Task Force (IETF)’s AI Preferences Working Group. Brendan Quinn, IPTC Managing Director attended the workshop in London along with representatives from Mozilla, Google, Microsoft, Cloudflare, Anthropic, Meta, Adobe, Common Crawl and more.
As per the group’s charter, “The AI Preferences Working Group will standardize building blocks that allow for the expression of preferences about how content is collected and processed for Artificial Intelligence (AI) model development, deployment, and use.” The intent is that this will take the form of an extension to the commonly used Robots Exclusion Protocol (RFC 9309), the document that defines how web crawlers should interact with websites.
The idea is that the Robots Exclusion Protocol would specify how website owners would like content to be collected, and the AI Preferences specification defines the statements that rights-holders can use to express how they would like their content to be used.
The Design Team is discussing and iterating the group’s draft documents: the Vocabulary for Expressing AI Usage Preferences and the “attachment” definition document, Indicating Preferences Regarding Content Usage. The results of the discussions will be taken to the IETF plenary meeting in Madrid next week.
Discussions have been wide-ranging, covering use cases for varying options of opt-in and opt-out; the ability to opt out of generative AI training while allowing search engine indexing; the difference between preferences for training and preferences for how content can be used at inference time (also known as prompt time or query time, such as RAG or “grounding” use cases); and the various mechanisms for attaching these preferences to content, such as a website’s robots.txt file, HTTP headers and embedded metadata.
The IPTC has already been looking at this area, having defined a data mining usage vocabulary in conjunction with the PLUS Coalition in 2023. There is a possibility that our work will change to reflect the vocabulary agreed at the IETF.
The work also relates to IPTC’s recently-published guidance for publishers on opting out of Generative AI training. Hopefully we will be able to publish a much simpler version of this guidance in the future because of the work from the IETF.

The IPTC Video Metadata Working Group has released version 1.6 of its Video Metadata Hub standard, including terms for rights usage, language, and content created by Generative AI models.
New properties
- Rights Usage Terms: The licensing parameters of the video expressed in free text. (Aligned with the equivalent term in IPTC Photo Metadata.)

Changed properties
- Language: Changed label and description to reflect that this represents the main language of the video. (Previously the term was called “language version”.)
- Source (Supply Chain): Changed description to reflect that changes may be made by a system (such as a Generative AI engine) as well as a person or organisation.
The specification for Video Metadata Hub is separated into two parts: IPTC Video Metadata Hub properties and IPTC Video Metadata Hub mappings showing how to apply these core properties in many existing video standards.
There is also a JSON Schema representation of Video Metadata Hub, which is used by some large media companies in managing their video content. The 1.6 version of the JSON schema reflects the latest changes.
The Video Metadata Hub User Guide and Video Metadata Hub Generator tool have also been updated to include the changes in version 1.6.
Please feel free to discuss the new version of Video Metadata Hub on the public iptc-videometadata discussion group, or contact IPTC via the Contact us form.
The latest version of NewsML-G2, version 2.35, has been released, adding support for the status of events.
Approved by the IPTC Standards Committee at the IPTC Spring Meeting, on 16th May, the new version adds a property eventStatus which matches the equivalent property in ninjs that was added in version 3.0.
eventStatus, within the eventDetails block, describes the status of an actual event – as opposed to occurenceStatus, which conveys the status of how likely it is that a future event will occur, and coverageStatus, which conveys the planned news coverage of a news event.
The recommended controlled vocabulary for eventStatus is http://cv.iptc.org/newscodes/eventstatus, which currently contains the terms “scheduled”, “in progress”, “completed”, “postponed” and “canceled”.
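To illustrate, here is a minimal sketch of an eventDetails fragment carrying the new eventStatus property, built with Python’s standard library. Namespace declarations and the full surrounding NewsML-G2 document are omitted for brevity, and the exact element placement should be checked against the NewsML-G2 2.35 specification; the QCode alias `eventstatus:` is assumed to be declared in the catalog.

```python
import xml.etree.ElementTree as ET

# Build a bare eventDetails block with the new eventStatus property.
event_details = ET.Element("eventDetails")

# QCodes resolve against http://cv.iptc.org/newscodes/eventstatus/
ET.SubElement(event_details, "eventStatus", qcode="eventstatus:completed")

# Real events would also carry their dates and other details here.
ET.SubElement(event_details, "dates")

xml_text = ET.tostring(event_details, encoding="unicode")
```

Contrast this with occurenceStatus, which would express how likely a *future* event is to happen rather than the state of one already scheduled.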
IPTC Catalog updated to version 41
The IPTC Catalog, the master list of internally- and externally-managed controlled vocabularies used and referenced by NewsML-G2, has been updated to version 41. It adds the PLUS Licence Data Format vocabulary, used extensively in IPTC Photo Metadata and now in other standards through the introduction of the Data Mining vocabulary.
The latest catalog is available at http://iptc.org/std/catalog/catalog.IPTC-G2-Standards_41.xml (note the plain http URL scheme. We don’t link directly to it here because clicking the link may trigger browser warnings about moving from https to http URLs.)
Find out more about NewsML-G2 2.35
All information related to NewsML-G2 2.35 is at https://iptc.org/std/NewsML-G2/2.35/.
The NewsML-G2 Specification document has been updated to cover the new version 2.35.
Example instance documents are at https://iptc.org/std/NewsML-G2/2.35/examples/.
Full XML Schema documentation is located at https://iptc.org/std/NewsML-G2/2.35/specification/XML-Schema-Doc-Power/
XML source documents and unit tests are hosted in the public NewsML-G2 GitHub repository.
The NewsML-G2 Generator tool has also been updated to produce NewsML-G2 2.35 files using the version 41 catalog.
For any questions or comments, please contact us via the IPTC Contact Us form or post to the public iptc-newsml-g2@groups.io mailing list. IPTC members can also raise questions at the weekly IPTC News Architecture Working Group meetings.

The IPTC is excited to announce the latest updates to ninjs, our JSON-based standard for representing news content metadata. Version 3.1 is now available, along with updated versions 2.2 and 1.6 for those using earlier schemas.
These releases reflect IPTC’s ongoing commitment to supporting structured, machine-readable news content across a variety of technical and editorial workflows.
What is ninjs?
ninjs (News in JSON) is a flexible, developer-friendly format for describing news items in a structured way. It allows publishers, aggregators, and news tech providers to encode rich metadata about articles, images, videos, and more, using a clean JSON format that fits naturally into modern content pipelines.
What’s new in ninjs 3.1, 2.2 and 1.6?
The new releases add support for the IPTC Digital Source Type property, which was first used with the IPTC Photo Metadata Standard but is now used across the industry to declare the source of media content, including content generated or manipulated by a Generative AI engine.
The new property (called digitalSourceType in 3.1 and digitalsourcetype in 2.2 and 1.6 to match the case conventions of each standard version) has the following properties:
- Name: the name of the digital source type, such as “Created using Generative AI”
- URI: the official identifier of the digital source type from the IPTC Digital Source Type vocabulary or another vocabulary, such as http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia (the official ID for generative AI content)
- Literal: an optional way to add new digital source types that are not part of a controlled vocabulary.
IPTC supports multiple versions of ninjs in parallel to ensure stability and continuity for publishers and platforms that depend on long-term schema support.
The new property is part of the general ninjs schema, and so can be used in the main body of a ninjs object to describe the main news item and can also be used in an “association” object which refers to an associated media item.
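Putting the pieces together, here is a sketch of a minimal ninjs 3.1 item using the new digitalSourceType property with the name and uri sub-properties described above. The item content itself (URI, headline) is invented for illustration, and a real item would carry many more fields.

```python
# Official Digital Source Type URI for generative AI content.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

# Hypothetical minimal ninjs 3.1 item (camelCase property names, per 3.x).
ninjs_item = {
    "uri": "https://example.org/news/2025-07-illustration",  # invented
    "type": "picture",
    "headline": "AI-generated illustration for feature story",
    "digitalSourceType": {
        "name": "Created using Generative AI",
        "uri": TRAINED_ALGORITHMIC_MEDIA,
    },
}

def is_generative(item):
    """True if the item declares a generative-AI digital source type."""
    dst = item.get("digitalSourceType") or {}
    return dst.get("uri") == TRAINED_ALGORITHMIC_MEDIA
```

The same object shape could appear inside an “association” to flag, say, an AI-generated picture attached to a human-written article.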
Access the schemas
All versions are publicly available on the IPTC website:
ninjs generator and user guide
The ninjs Generator tool has been updated to cover the latest versions. Fill in the form fields and see what that content looks like in ninjs format. You can switch between the schema versions to see how the schema changes between 1.6, 2.2 and 3.1.
The ninjs User Guide has also been updated to reflect the newly added property.
Why it matters
As the news industry becomes increasingly reliant on metadata for content distribution, discoverability, and rights management, ninjs provides a modern, extensible foundation that supports both human and machine workflows. It’s trusted by major news agencies, technology platforms, and AI developers alike.
Get involved
We welcome feedback from the community and encourage you to share how you’re using ninjs in your own products or platforms. If you would like to discuss ninjs, you can join the public mailing list at https://groups.io/g/iptc-ninjs.
If you’re interested in contributing to the development of IPTC standards, join us!

“As content becomes commoditised, there will be a trend towards authentic, human-created work,” said Scott Belsky at the 2025 Content Authenticity Summit, held last week on Roosevelt Island, New York.
More than 200 authenticity experts from over 150 companies gathered at the Cornell Tech campus in New York City on Wednesday 4th June to share the latest work of those implementing C2PA in the industry.
The theme of the event was real-world implementation of C2PA and spreading the word about Content Credentials, the user-facing brand of the C2PA technology.
The event was co-presented by IPTC along with the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA).
Highlights were:
- The launch of the C2PA Conformance programme, which will allow device and software implementers to be able to obtain certificates on the official C2PA Trust List (after the current Temporary Trust List is shut down later in 2025)
- A talk from Bruce MacCormack of CBC / Radio Canada, Chair of the IPTC Media Provenance Committee, on how the media industry is implementing C2PA, and the importance of publisher branding and organisational stamping of content at publish time to prevent brand hijacking and misattribution of news content
- An in-depth discussion of the IPTC Origin Verified News Publisher programme, including the launch of the IPTC guidance document helping news publishers to implement C2PA
- Another deep-dive workshop looking at which metadata fields should be included in C2PA-signed content. The discussion covered both metadata about the publisher and metadata about the content itself.
- Eight simultaneous tracks of breakout sessions covering device conformance, implementation in the news industry, real-world deployments on Amazon Web Services, work on standardisation with ISO and other bodies, and more
- A fast-paced and wide-ranging presentation from UC Berkeley professor Hany Farid on the importance of authenticity and the difficulty of keeping up with deepfake detection in our world of ever-improving generative AI models
- The many and varied discussions among attendees around their own efforts to implement C2PA technology within their newsrooms
The most common feedback we heard was that attendees would have liked to be at all of the breakout sessions at the same time!
The event was held under the Chatham House Rule, which means that detailed recordings will not be available, although anonymised workshop summaries will soon be made available to attendees.
For more information about C2PA, the Media Provenance Committee or the Verified News Publisher List, please contact IPTC directly.