Tim Bray speaking about C2PA at the IPTC Photo Metadata Conference 2025.

Software industry legend Tim Bray delivered a resounding call to IPTC and to others working on media provenance and C2PA. His verdict: while the specification and its implementations have issues, they are slowly being resolved, and he lauded the project’s goal of “making it harder for liars to lie and easier for truth tellers to be believed”.

The 2025 Photo Metadata Conference, held on September 18th, was a great success, with 280 registered attendees from hundreds of organisations around the world. Video recordings from the event are now available.

Speakers included:

  • David Riecks, lead of the IPTC Photo Metadata Working Group, describing some new IPTC Photo Metadata properties concerning Generative AI models and prompts that will be proposed for a vote at the next Standards Committee meeting to be held at the IPTC Autumn Meeting.
  • Brendan Quinn, Managing Director of IPTC, gave an update on the IPTC’s guidelines for opting out of Generative AI training and ongoing work to standardise AI training preferences at the Internet Engineering Task Force (IETF).
  • Ottar A. B. Anderson, Head of Photography at SEDAK, the GLAM imaging service of Møre og Romsdal County in Norway, spoke about Metadata for Image Quality in Galleries, Libraries, Archives and Museums (GLAM) and his work on the Digital Object Authenticity Working Group.
  • David Riecks gave an update on IPTC Photo Metadata Panel in Adobe Custom Metadata.
  • Jerry Lai, Senior Director of Content at Imagn Images, presented a case study on AI caption tagging for the Super Bowl. (Jerry’s slides are available to download.)
  • Paul Reinitz, previously with Getty Images and now a consultant on business and legal issues around copyright, spoke about recent developments in the area including updates in the US, EU and China
  • Brendan Quinn spoke again to give an update on the IPTC’s work with C2PA and the world of Media Provenance, including our work on the Verified News Publishers List
  • Tim Bray, creator of tech standards like XML and Atom and companies like OpenText, spoke about his experiences with C2PA and his thoughts on its future adoption.
  • Marcos Armstrong of CBC / Radio Canada spoke about his work on mapping image publishing workflows at CBC.

Videos of all sessions are publicly available from the event page at https://iptc.org/events/photo-metadata-conference-2025/.

Feedback from the event was almost universally positive:

  • “While I knew I wouldn’t understand all of the terms, I was so impressed by the amount of topics that were touched upon. I had no problem following along. I loved the passion and the openness to different perspectives”
  • “Great topic choices- perfect level of beginner/more advanced content presentation.”
  • “It was a good critical look at the pluses and minuses of various decisions being made, ultimately pointing to developing public trust about authorship.”
  • “Informative, I really liked the expertise all the speakers brought to the virtual table”
  • “Learning about strategies to protect from and tools for blocking AI, as well as metadata fields to record AI use”
  • “Informative, good presentations and presenters. Very relevant to today – AI.”
  • “Focus on Content Credentials and AI. Range of speaker roles provided different perspectives on the topic area. Excellent organization, presentation quality and management of the zoom space.”
  • “Three things in particular stood out. Tim Bray’s talk was great as it brought everything to my world as a photographer and is pretty much what I’ve found. Brendan Quinn’s opt out information was definitely worth knowing and now I’m going to look at it. Finally, David Riecks talk about Adobe’s Metadata Panel gave me more insight into it and if it should be included in my workflow but his information for the proposed new properties for Generative AI was very good to hear.”

Thanks to everyone who attended and to our speakers David, Brendan, Paul, Ottar, Jerry, Tim and Marcos.

Special thanks to David and the IPTC Photo Metadata Working Group for organising the event.

We look forward to seeing even more attendees next year!

To be sure of being notified about next year’s event, subscribe to the (very low volume) “Friends of IPTC Newsletter”.

Attendees at the Media Provenance Summit in Bergen, Norway on September 23 2025. Photo credit: Gunnbjørg Gunnarsdottir, Media Cluster Norway

The Media Provenance Summit brought together leading experts, journalists and technologists from across the globe to Mount Fløyen in Bergen, Norway, to address some of the most pressing challenges facing news media today.

Hosted by Media Cluster Norway, and organised together with the BBC, the EBU and IPTC, the full-day summit on September 23 convened participants from major news organisations, technology providers and international standards bodies to advance the implementation of the C2PA content provenance standard, also known as Content Credentials, in real-world newsroom workflows. The ultimate aim is to strengthen the signal of authentic news media content in a time where it is challenged by generative AI.

“We need to work together to tackle the big problems that the news media industry is facing, and we are very grateful for everyone who came together here in Bergen to work on solutions. I believe we made important progress,” said Helge O. Svela, CEO of Media Cluster Norway.

The program focused on three critical questions:

  1. How to preserve C2PA information throughout editorial workflows when not all tools yet support the technology.
  2. When to sign content as it moves through the workflow: at device level, organisational level, or both.
  3. How to handle confidentiality and privacy issues, including the protection of sources and sensitive material.

“We were very happy to see a focus on real solutions, with some great ideas and tangible next steps,” said IPTC’s Managing Director, Brendan Quinn. “With participants from across the media ecosystem, it was exciting to see vendors, publishers, broadcasters and service providers working together to address issues in practically applying C2PA to media workflows in today’s newsrooms.”

Speakers included Charlie Halford (BBC), Andy Parsons (CAI/Adobe), François-Xavier Marit (AFP), Kenneth Warmuth (WDR), Lucille Verbaere (EBU), Marcos Armstrong and Sébastien Testeau (CBC/Radio-Canada), and Mohamed Badr Taddist (EBU).

François-Xavier Marit of AFP speaking at the Media Provenance Summit in Bergen, Norway on 23 September 2025.

“The BBC welcomes this focus on protecting access to trustworthy news. We are proud to have been founder members of the media provenance work carried out under the auspices of C2PA and we are delighted to see it moving forward with such strong industry support,” said Laura Ellis, Head of Technology Forecasting at BBC Research.

Participants travelled from as far away as Japan, Australia, the US and Canada to take part in the summit.

“We’re pleased to collectively have taken a few hurdles on the way to enabling a broader adoption of Content Provenance and Authenticity”, said Hans Hoffmann, Deputy Director at EBU Technology and Innovation Department. “The definition of common practices for signing content in workflows, retrieving provenance information thanks to soft binding, and better safeguards for the privacy of sources address important challenges. Public service media are committed to fighting disinformation and improving transparency, and EBU members were well represented in Bergen. The broad participation from across the industry and globe smooths the path towards adoption. Thanks to Media Cluster Norway for hosting the event!”

The summit emphasised moving from problem analysis to solution exploration. Through structured sessions, participants defined key blockers, sketched practical solutions and developed action plans aimed at strengthening trust in digital media worldwide.

About the Summit
The Media Provenance Summit was organised jointly by Media Cluster Norway, the EBU, the BBC and IPTC, and made possible with the support of Agenda Vestlandet.

For more information, please contact: helge@medieklyngen.no

"Stamping Your Content (C2PA Provenance)" IBC Accelerator project social bannerThe IPTC has joined the BBC (UK), YLE (Finland), RTÉ (Ireland), ITV (UK), ITN (UK), EBU (Europe), AP (USA/Global), Comcast (USA/Global), ASBU (Africa and Middle East), Channel 4 (UK) and the IET (UK) as a “champion” in the Stamping Your Content project, run by the IBC Accelerator as part of this year’s IBC Conference in Amsterdam. 

These “Champions” represent the content creator side of the equation. The project also includes “participants” from the vendor and integrator community: CastLabs, TCS, Videntifier, Media Cluster Norway, Open Origins, Sony, Google Cloud and Trufo.

This project aims to develop open-source tools that enable organisations to integrate Content Credentials (C2PA) into their workflows, allowing them to sign and verify media provenance. As interest in authenticating digital content grows, broadcasters and news organisations require practical solutions to assert source integrity and publisher credibility. However, implementing Content Credentials remains complex, creating barriers to adoption. This project seeks to lower the entry threshold, making it easier for organisations to embed provenance metadata at the point of publication and verify credentials on digital platforms. 

The initiative has created a proof-of-concept open source ‘stamping’ tool that links to a company’s authorisation certificate, inserting C2PA metadata into video content at the time of publishing. Additionally, a complementary open-source plug-in is being developed to decode and verify these credentials, ensuring compliance with C2PA standards. By providing these tools, the project enables media organisations to assert content authenticity, helping to combat misinformation and reinforce trust in digital media.

This work builds upon the “Designing Your Weapons in the Fight Against Disinformation” initiative at last year’s IBC Accelerator, which mapped the landscape of digital misinformation. The current phase focuses on practical implementation, ensuring that organisations can start integrating authentication measures in real-world workflows. By fostering an open and standardised approach, the project supports the broader media ecosystem in adopting content provenance solutions that enhance transparency and trustworthiness.

Attend the project’s panel presentation session at the International Broadcasting Convention, IBC2025, in Amsterdam on Monday, September 15th from 09:45 to 10:45.

The speakers on the panel on Monday September 15 are all from IPTC member organisations:

  • Henrik Cox, Solutions Architect – OpenOrigins
  • Judy Parnall, Principal Technologist, BBC Research & Development – BBC
  • Mohamed Badr Taddist, Cybersecurity Master graduate, content provenance and authenticity – European Broadcasting Union (EBU)
  • Tim Forrest, Head of Content Distribution and Commercial Innovation – ITN

See more detail on the IBC Show site.

Many of the participating organisations are also IPTC members, so the work started in the project will continue after IBC through the IPTC Media Provenance Committee and its Working Groups.

We are already planning to carry this work forward at the next Media Provenance Summit, which will be held later in September in Bergen, Norway.

The IPTC is pleased to announce the full agenda for the 2025 IPTC Photo Metadata Conference, which will be held online on Thursday September 18th from 15.00 to 18.00 UTC. The focus this year is on how image metadata can improve real-world workflows.

We are excited to be joined by the following speakers:

  • Brendan Quinn, IPTC Managing Director, presenting two sessions: IPTC’s AI Opt-Out Best Practices guidelines, and an update on IPTC’s work with C2PA and the Media Provenance Committee
  • David Riecks, Lead of the IPTC Photo Metadata Working Group, presenting two sessions: the latest on IPTC’s proposed new properties for Generative AI, and also an update on the Adobe Custom Metadata Panel plugin and how it makes the complete IPTC Photo Metadata Standard available in Adobe products
  • Paul Reinitz, consultant previously with Getty Images, discussing AI opt-out and copyright issues
  • Ottar A. B. Anderson, previously a photographer with the Royal Norwegian Air Force and with over 15 years of experience as a commercial photographer, on proposals for metadata for image archiving and his work on the Digital Object Authenticity Working Group (DOAWG)
  • Jerry Lai, previously a photographer for Getty Images, Reuters and Associated Press and now with Imagn, presenting a case study on using AI for captioning huge numbers of images for Super Bowl LIX
  • Marcos Armstrong, Senior Specialist, Content Provenance at CBC/Radio-Canada, speaking about CBC’s project to map editorial workflows and identify where content authenticity technologies can be used in the newsroom
  • Tim Bray, creator of XML, founder of OpenText Corporation and veteran of many other standards and companies, speaking on his experiences with C2PA and his ideas for how it can be adopted in the future
Attendees at IPTC’s Photo Metadata Conference 2017 in Berlin.

This year’s conference promises to be a great one, with topics ranging from Generative AI and media provenance technology to the technical details of scanning historical documents, but always with a focus on how new technologies can be applied in the real world.

Registration is free and open to anyone.

See more information at the event page on the IPTC web site or simply sign up at the Zoom webinar page.

We look forward to seeing you there!

Google has announced the launch of its latest phone in the Pixel series, including support for IPTC Digital Source Type in its industry-leading C2PA implementation.

Many existing C2PA implementations focus on signalling AI-generated content, adding the IPTC Digital Source Type of “Generated by AI” to content that has been created by a trained model.

Google’s implementation in the new Pixel 10 phone differs by adding a Digital Source Type to every image created using the phone, using the “computational capture” Digital Source Type to denote photos taken by the phone’s camera. In addition, images edited using the phone’s AI manipulation tools show the “Edited using Generative AI” value in the Digital Source Type field.

Note that the Digital Source Type information is added using the “C2PA Actions” assertion in the C2PA manifest; unfortunately, it is not yet added to the regular IPTC metadata section in the XMP metadata packet, so it can only be read by C2PA-compatible tools.
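
For readers unfamiliar with C2PA internals, below is a simplified, hypothetical sketch of what such an assertion might contain. The digitalSourceType field on an action is defined by the C2PA specification, but the exact structure of Pixel 10 manifests may differ from this abbreviated example.

```python
import json

# Simplified, hypothetical sketch of a C2PA "c2pa.actions" assertion
# carrying an IPTC Digital Source Type -- not an actual Pixel manifest.
# The digitalSourceType field on an action comes from the C2PA spec;
# the surrounding manifest structure is abbreviated here.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                # "c2pa.created" marks the creation of the asset
                "action": "c2pa.created",
                # IPTC term for algorithm-assisted capture on a device
                "digitalSourceType": (
                    "http://cv.iptc.org/newscodes/"
                    "digitalsourcetype/computationalCapture"
                ),
            }
        ]
    },
}

print(json.dumps(actions_assertion, indent=2))
```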

Background: what is “Computational Capture”?

The IPTC added Computational Capture as a new term in the Digital Source Type vocabulary in September 2024. It represents a digital capture that involves additional algorithmic processing, as opposed to simply recording the samples hitting the camera sensor, as a simple digital camera does.

For example, a modern smartphone doesn’t simply take one photo when you press the shutter button. Usually the phone captures several images from the phone sensor using different exposure levels and then an algorithm merges them together to create a visually improved image.
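
As a toy illustration of the idea (and emphatically not any vendor’s actual pipeline), the sketch below merges three bracketed exposures with a weighted average that down-weights clipped pixels, a crude stand-in for the exposure-fusion step described above.

```python
# Toy illustration of multi-exposure merging -- not any vendor's actual
# pipeline. Three "exposures" of the same scene (single rows of 0-255
# luminance values) are combined with a weighted average that favours
# mid-range samples, a crude stand-in for HDR exposure fusion.

def merge_exposures(exposures):
    """Merge aligned exposures, each a list of 0-255 luminance values."""
    merged = []
    for samples in zip(*exposures):
        # Weight each sample by its distance from clipping (0 or 255),
        # so blown-out or crushed samples contribute less.
        weights = [1 + min(s, 255 - s) for s in samples]
        total = sum(s * w for s, w in zip(samples, weights))
        merged.append(round(total / sum(weights)))
    return merged

under  = [10, 40, 90, 200]    # underexposed frame
normal = [30, 90, 180, 250]   # normally exposed frame
over   = [80, 200, 250, 255]  # overexposed frame
print(merge_exposures([under, normal, over]))
```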

This is of course very different from a photo that was created by AI, or even one that was edited by AI at a human’s instruction, so we wanted to be able to capture this use case; hence the new term “computational capture”.

For more information and examples, see the Digital Source Type guidance in the IPTC Photo Metadata User Guide.

An image generated by Imagine with Meta AI, using the prompt “A cute robot penguin painting a picture of itself using a canvas mounted on a wooden easel, in the countryside.” The image contains IPTC DigitalSourceType metadata showing that it was generated by AI. This image would be a candidate for the new photo metadata properties being proposed here.

The IPTC Photo Metadata Working Group is proposing a draft set of properties for recording details of images created using generative AI systems, and presents the draft fields here for your comments and feedback. After comments are reviewed, the group intends to add the new properties to a new version of the IPTC Photo Metadata Standard, to be released later in 2025.

Use Cases

The proposals detailed here are intended to address these scenarios, among others:

  • How do you know which system/model generated this image? For instance, if you wanted to compare how different systems—or versions of systems—interpret a given prompt, where would you look?
  • How can you know what prompt text was entered, or which image was shared as a starting point? If you want to recreate similar images in the future with the same look, where should that information be stored?
  • How can you tell who was involved in the creation of a generative AI image? 

Example Scenario

You are the new designer for an organisation, and need to create an image for a monthly column. You are told to use generative AI, but your boss wants the end result to have the same “look and feel” as images used previously in the column. If you needed to find the images that were published previously, what information would be most useful in locating and retrieving them in your organisation’s image collection?

Proposed Properties:

  • AI Model
    Name: AI Model
    Definition: The foundational model name and version used to generate this image.
    User Note: For example “DALL-E 2”, “Google Gemini 1.5 Pro”
    Basic Specs: Data type: Text / Cardinality: 0..1

  • AI Text Prompt Description
    Name: AI Text Prompt Description
    Definition: The information that was given to the generative AI service as “prompt(s)” in order to generate this image.
    User Note: This may include negative [excludes] and positive [includes] statements in the prompt.
    Basic Specs: Data type: Text / Cardinality: 0..1

  • AI Prompt Writer Name
    Name: AI Prompt Writer Name
    Definition: Name of the person who wrote the prompt used for generating this image.
    Basic Specs: Data type: Text / Cardinality: 0..1

  • Reference Image(s)
    Name: Reference Image
    Definition: Image(s) used as a starting point to be refined by the generative AI system (sometimes referred to as “base image”).
    Basic Specs: Data type: URI / Cardinality: 0..unbounded

All of these properties are of course optional, not required, but we would recommend that AI engines fill them in whenever possible.
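
To make the proposal more concrete, here is a sketch of how a generation tool might populate the proposed properties for the penguin image shown above. The field names and the prompt writer’s name are illustrative placeholders only; the actual property names and serialisation will be defined when the new version of the standard is published.

```python
import json

# Sketch of the proposed Generative AI properties filled in for the
# penguin image above. Field names are illustrative placeholders; the
# final XMP property names will be defined in the published standard.
proposed_ai_metadata = {
    # AI Model: foundational model name and version
    "aiModel": "Imagine with Meta AI",
    # AI Text Prompt Description: the prompt(s) given to the service
    "aiTextPromptDescription": (
        "A cute robot penguin painting a picture of itself using a "
        "canvas mounted on a wooden easel, in the countryside."
    ),
    # AI Prompt Writer Name: a hypothetical name for illustration
    "aiPromptWriterName": "Jane Example",
    # Reference Image: URIs of base images (none used in this case)
    "referenceImage": [],
}

print(json.dumps(proposed_ai_metadata, indent=2))
```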

Request for Comment

The intent is for a new standard version including these fields to be proposed at the IPTC Autumn Meeting 2025 in October to be voted on by IPTC member organisations. If approved by members, the new version would be published in November 2025.

Please send your comments or suggestions for improvements using the IPTC Contact Us form or via a post to the public iptc-photometadata@groups.io discussion list by Friday 29th August 2025.

IPTC at the IETF AI Preferences Design Team workshop held in London in July 2025. The laptop screen shows the current public draft.

The IPTC participated in a “design team” workshop for the Internet Engineering Task Force (IETF)’s AI Preferences Working Group. Brendan Quinn, IPTC Managing Director, attended the workshop in London along with representatives from Mozilla, Google, Microsoft, Cloudflare, Anthropic, Meta, Adobe, Common Crawl and more.

As per the group’s charter, “The AI Preferences Working Group will standardize building blocks that allow for the expression of preferences about how content is collected and processed for Artificial Intelligence (AI) model development, deployment, and use.” The intent is that this will take the form of an extension to the commonly-used Robots Exclusion Protocol (RFC 9309), which defines the way that web crawlers should interact with websites.

The idea is that the Robots Exclusion Protocol specifies how website owners would like content to be collected, while the AI Preferences specification defines the statements that rights-holders can use to express how they would like their content to be used.
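
To illustrate the shape of the idea, here is a hypothetical sketch. The directive name and vocabulary values below (“AI-Usage”, “train-genai”, “search”) are placeholders invented for this example, not the syntax being agreed at the IETF; the draft documents discussed below are the authoritative source.

```python
# Hypothetical illustration only: the directive name and values below
# ("AI-Usage", "train-genai", "search") are placeholders, not the syntax
# being standardised at the IETF. The point is the shape of the idea:
# Robots Exclusion Protocol rules say who may crawl, and an attached
# preference says how the content may be used.

ROBOTS_TXT = """\
User-Agent: *
Allow: /
AI-Usage: train-genai=n, search=y
"""

def parse_preferences(robots_txt, directive="ai-usage"):
    """Collect key=value usage preferences from our placeholder directive."""
    prefs = {}
    for line in robots_txt.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == directive:
            for item in value.split(","):
                key, _, flag = item.strip().partition("=")
                prefs[key] = (flag == "y")
    return prefs

print(parse_preferences(ROBOTS_TXT))  # {'train-genai': False, 'search': True}
```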

The Design Team is discussing and iterating the group’s draft documents: the Vocabulary for Expressing AI Usage Preferences and the “attachment” definition document, Indicating Preferences Regarding Content Usage. The results of the discussions will be taken to the IETF plenary meeting in Madrid next week.

Discussions have been wide-ranging, covering use cases for varying options of opt-in and opt-out; the ability to opt out of generative AI training while still allowing search engine indexing; the difference between preferences for training and preferences for how content can be used at inference time (also known as prompt time or query time, as in RAG or “grounding” use cases); and the varying mechanisms for attaching these preferences to content, i.e. a website’s robots.txt file, HTTP headers and embedded metadata.

The IPTC has already been looking at this area, having defined a data mining usage vocabulary in conjunction with the PLUS Coalition in 2023. There is a possibility that our work will change to reflect the vocabulary agreed at the IETF.

The work also relates to IPTC’s recently-published guidance for publishers on opting out of Generative AI training. Thanks to the IETF’s work, we hope to be able to publish a much simpler version of this guidance in the future.

Hannes Schulz from Axel Springer showing their JSON-based implementation of IPTC Video Metadata Hub for an internal video management system at the IPTC Autumn Meeting 2024.

The IPTC Video Metadata Working Group has released version 1.6 of its Video Metadata Hub standard, including terms for rights usage, language, and content created by Generative AI models.

New properties

  • Rights Usage Terms: The licensing parameters of the video expressed in free text. (Aligned with the equivalent term in IPTC Photo Metadata.)

Changed properties

  • Language: Changed label and description to reflect that this represents the main language of the video. (Previously the term was called “language version”).

  • Source (Supply Chain): Changed description to reflect that changes may be made by a system (such as a Generative AI engine) as well as by a person or organisation.

The specification for Video Metadata Hub is separated into two parts: IPTC Video Metadata Hub properties and IPTC Video Metadata Hub mappings showing how to apply these core properties in many existing video standards.

There is also a JSON Schema representation of Video Metadata Hub, which is used by some large media companies in managing their video content. The 1.6 version of the JSON schema reflects the latest changes.
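
As a minimal sketch of how the JSON Schema might be used in practice, the snippet below validates a small document against the 1.6 schema, assuming you have saved the schema locally. The sample property names are illustrative guesses based on the Hub’s property labels, not guaranteed to match the schema’s exact field names.

```python
import json

from jsonschema import ValidationError, validate  # pip install jsonschema

# Minimal sketch: validate a Video Metadata Hub JSON document against
# the 1.6 JSON Schema, assumed to be saved locally. The property names
# in the sample are illustrative guesses based on the Hub's property
# labels, not guaranteed schema field names.
with open("video-metadata-hub-1.6.json") as f:
    schema = json.load(f)

sample = {
    "rightsUsageTerms": "Editorial use only; no resale.",  # new in 1.6
    "language": "nb-NO",  # main language of the video (changed in 1.6)
}

try:
    validate(instance=sample, schema=schema)
    print("Document is valid against the Video Metadata Hub schema")
except ValidationError as err:
    print(f"Invalid: {err.message}")
```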

The Video Metadata Hub User Guide and Video Metadata Hub Generator tool have also been updated to include the changes in version 1.6.

Please feel free to discuss the new version of Video Metadata Hub on the public iptc-videometadata discussion group, or contact IPTC via the Contact us form.

The latest version of NewsML-G2, version 2.35, has been released, adding support for the status of events.

Approved by the IPTC Standards Committee at the IPTC Spring Meeting on 16th May, the new version adds a property, eventStatus, which matches the equivalent property added to ninjs in version 3.0.

eventStatus, within the eventDetails block, describes the status of an actual event – as opposed to occurenceStatus, which conveys the status of how likely it is that a future event will occur, and coverageStatus, which conveys the planned news coverage of a news event.

The recommended controlled vocabulary for eventStatus is http://cv.iptc.org/newscodes/eventstatus, which currently contains the terms “scheduled”, “in progress”, “completed”, “postponed” and “canceled”.
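
Below is an illustrative sketch of how eventStatus might appear inside an eventDetails block, following common NewsML-G2 qcode conventions. It is a fragment only, not a complete schema-valid document, and the “eventstatus:” alias and code value are assumptions made for the example.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only -- not a complete, schema-valid NewsML-G2
# document. The qcode alias and code value ("eventstatus:inprogress")
# are assumptions for this example, following NewsML-G2 conventions.
event_details = ET.Element("eventDetails")
# The event is confirmed to be happening right now:
ET.SubElement(event_details, "eventStatus", qcode="eventstatus:inprogress")
# ... dates, location and other event details would follow here ...

print(ET.tostring(event_details, encoding="unicode"))
```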

IPTC Catalog updated to version 41

The IPTC Catalog, the master list of internally- and externally-managed controlled vocabularies used and referenced by NewsML-G2, has been updated to version 41. It adds the PLUS Licence Data Format vocabulary, used extensively in IPTC Photo Metadata and now in other standards through the introduction of the Data Mining vocabulary.

The latest catalog is available at http://iptc.org/std/catalog/catalog.IPTC-G2-Standards_41.xml (note the plain http URL scheme. We don’t link directly to it here because clicking the link may trigger browser warnings about moving from https to http URLs.)

Find out more about NewsML-G2 2.35

All information related to NewsML-G2 2.35 is at https://iptc.org/std/NewsML-G2/2.35/.

The NewsML-G2 Specification document has been updated to cover the new version 2.35.

Example instance documents are at https://iptc.org/std/NewsML-G2/2.35/examples/

Full XML Schema documentation is located at https://iptc.org/std/NewsML-G2/2.35/specification/XML-Schema-Doc-Power/

XML source documents and unit tests are hosted in the public NewsML-G2 GitHub repository.

The NewsML-G2 Generator tool has also been updated to produce NewsML-G2 2.35 files using the version 41 catalog.

For any questions or comments, please contact us via the IPTC Contact Us form or post to the iptc-newsml-g2@groups.io mailing list. IPTC members can ask questions at the weekly IPTC News Architecture Working Group meetings.

ninjs 3.1 schema featuring the new Digital Source Type property

The IPTC is excited to announce the latest updates to ninjs, our JSON-based standard for representing news content metadata. Version 3.1 is now available, along with updated versions 2.2 and 1.6 for those using earlier schemas.

These releases reflect IPTC’s ongoing commitment to supporting structured, machine-readable news content across a variety of technical and editorial workflows.

What is ninjs?

ninjs (News in JSON) is a flexible, developer-friendly format for describing news items in a structured way. It allows publishers, aggregators, and news tech providers to encode rich metadata about articles, images, videos, and more, using a clean JSON format that fits naturally into modern content pipelines.

What’s new in ninjs 3.1, 2.2 and 1.6?

The new releases add support for the IPTC Digital Source Type property, which was first used with the IPTC Photo Metadata Standard but is now used across the industry to declare the source of media content, including content generated or manipulated by a Generative AI engine.

The new property (called digitalSourceType in 3.1 and digitalsourcetype in 2.2 and 1.6, to match the case conventions of each standard version) has the following sub-properties:

  • Name: the name of the digital source type, such as “Created using Generative AI”
  • URI: the official identifier of the digital source type from the IPTC Digital Source Type vocabulary or another vocabulary, such as http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia (the official ID for generative AI content)
  • Literal: an optional way to add new digital source types that are not part of a controlled vocabulary.

IPTC supports multiple versions of ninjs in parallel to ensure stability and continuity for publishers and platforms that depend on long-term schema support.

The new property is part of the general ninjs schema, and so can be used in the main body of a ninjs object to describe the main news item, and can also be used in an “association” object, which refers to an associated media item.
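
Here is an abbreviated sketch of how this might look in a ninjs 3.1-style item, with the property on both the main item and an associated picture. The URI, association name and field selection are invented for the example, it is not a complete, validated document, and the exact associations structure varies between schema versions.

```python
import json

# Abbreviated ninjs 3.1-style item (not a complete, validated document)
# showing digitalSourceType on the main item and on an associated image.
# The associations structure varies between ninjs schema versions.
item = {
    "uri": "https://example.com/news/2025-09-01-penguin-feature",
    "type": "text",
    "digitalSourceType": {
        "name": "Created using Generative AI",
        "uri": ("http://cv.iptc.org/newscodes/digitalsourcetype/"
                "trainedAlgorithmicMedia"),
    },
    "associations": [
        {
            "name": "featureimage",
            "type": "picture",
            "digitalSourceType": {
                "name": "Original digital capture",
                "uri": ("http://cv.iptc.org/newscodes/digitalsourcetype/"
                        "digitalCapture"),
            },
        }
    ],
}

print(json.dumps(item, indent=2))
```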

Access the schemas

All versions are publicly available on the IPTC website.

ninjs generator and user guide

The ninjs Generator tool has been updated to cover the latest versions. Fill in the form fields and see what that content looks like in ninjs format. You can switch between the schema versions to see how the schema changes between 1.6, 2.2 and 3.1.

The ninjs User Guide has also been updated to reflect the newly added property.

Why it matters

As the news industry becomes increasingly reliant on metadata for content distribution, discoverability, and rights management, ninjs provides a modern, extensible foundation that supports both human and machine workflows. It’s trusted by major news agencies, technology platforms, and AI developers alike.

Get involved

We welcome feedback from the community and encourage you to share how you’re using ninjs in your own products or platforms. If you would like to discuss ninjs, you can join the public mailing list at https://groups.io/g/iptc-ninjs.

If you’re interested in contributing to the development of IPTC standards, join us!