Screenshot of the section of the C2PA 1.2 specification showing the new IPTC assertion definition.

We are happy to announce that IPTC’s work with C2PA, the Coalition for Content Provenance and Authenticity, continues to bear fruit. The latest development is that C2PA assertions can now include properties from both the IPTC Photo Metadata Standard and our video metadata standard, IPTC Video Metadata Hub.

Version 1.2 of the C2PA Specification describes how metadata from either the photo or video standard can be added, using each field’s XMP tag name as the property name in the JSON-LD markup for the assertion.

For IPTC Photo Metadata properties, the XMP tag name to be used is shown in the “XMP specs” row in the table describing each property in the Photo Metadata Standard specification. For Video Metadata Hub, the XMP tag can be found in the Video Metadata Hub properties table under the “XMP property” column.

We also show in the example assertion how the new accessibility properties can be added using the Alt Text (Accessibility) field, which is available in the Photo Metadata Standard and will soon be available in a new version of Video Metadata Hub.
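As a rough illustration of what such an assertion could look like, the Python sketch below assembles a small JSON-LD object using XMP tag names as property keys. The namespace URIs, property values and the “stds.iptc” assertion label are assumptions for illustration only; the authoritative example is in the C2PA 1.2 specification.

```python
import json

# A minimal sketch of a C2PA assertion carrying IPTC metadata as JSON-LD.
# Namespace URIs and property values are illustrative; check the C2PA 1.2
# spec and the IPTC Photo Metadata / Video Metadata Hub documentation.
iptc_assertion = {
    "@context": {
        "dc": "http://purl.org/dc/elements/1.1/",
        "photoshop": "http://ns.adobe.com/photoshop/1.0/",
        "Iptc4xmpCore": "http://iptc.org/std/Iptc4xmpCore/1.0/xmlns/",
        "Iptc4xmpExt": "http://iptc.org/std/Iptc4xmpExt/2008-02-29/",
    },
    # Property names are the XMP tag names from the IPTC specifications
    "dc:creator": ["Jane Photographer"],
    "photoshop:Credit": "Example Agency",
    "Iptc4xmpExt:DigitalSourceType":
        "https://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture",
    # New accessibility property: Alt Text (Accessibility)
    "Iptc4xmpCore:AltTextAccessibility":
        "A cyclist crosses a rain-soaked bridge at dusk",
}

# The assertion would sit inside a C2PA manifest under a label such as
# "stds.iptc" (shown here as an assumption, not a quote from the spec).
print(json.dumps({"label": "stds.iptc", "data": iptc_assertion}, indent=2))
```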

Alamy, a stock photo agency offering a collection of over 300 million images along with millions of videos, has recently launched a new Partnerships API, and has chosen IPTC’s ninjs 2.0 standard as the main format behind the API.

Alamy is an IPTC member via its parent company PA Media, and Alamy staff have contributed to the development of ninjs in recent years, leading to the introduction of ninjs 2.0 in 2021.

“When looking at a response format, we sought to adopt an industry standard which would aid in the communication of the structure of the responses but also ease integration with partners who may already be familiar with the standard,” said Ian Young, Solutions Architect at Alamy.

An example item on the Alamy Partnerships API in ninjs 2.0 format.

“With this in mind, we chose IPTC’s news-in-JSON format, ninjs,” he said. “We selected version 2 specifically due to its structural improvements over version 1, as well as its support for rights expressions.”

Young continued: “ninjs allows us to convey the metadata for our content, links to the media itself and the various supporting renditions as well as conveying machine readable rights in a concise payload.”

“We’ve integrated with customers who are both familiar with IPTC standards and those who are not, and each have found the API equally easy to work with.”
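The figure above shows a real item; as a rough, hedged sketch of the general shape of a ninjs 2.0 document, the Python snippet below builds an illustrative item and prints it as JSON. The property names and values are assumptions to verify against the published ninjs 2.0 schema, not a copy of an Alamy response.

```python
import json

# A rough sketch of a ninjs 2.0-style item (not an actual Alamy response).
# Property names should be verified against the published ninjs 2.0 schema.
item = {
    "uri": "https://example.com/items/abc123",  # required identifier
    "type": "picture",
    "versioncreated": "2022-06-01T10:00:00Z",
    "headlines": [
        {"role": "main", "value": "Cyclist crossing a bridge at dusk"}
    ],
    "descriptions": [
        {"role": "caption", "value": "A cyclist crosses a rain-soaked bridge."}
    ],
    # ninjs 2.0 moved renditions from an object to an array with a "name" field
    "renditions": [
        {"name": "thumbnail", "href": "https://example.com/r/abc123-thumb.jpg"},
        {"name": "highres", "href": "https://example.com/r/abc123-full.jpg"},
    ],
    # ninjs 2.0 added a slot for machine-readable rights; sub-fields assumed here
    "rightsinfo": {
        "langid": "en",
        "linkedrights": "https://example.com/licences/editorial",
    },
}

print(json.dumps(item, indent=2))
```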

Learn more about ninjs via IPTC’s ninjs overview pages, consult the ninjs User Guide, or try it out yourself using the ninjs generator tool.

Family Tree magazine has published a guide on using embedded metadata for photographs in genealogy – the study of family history.

Screenshot of the beginning of the article on FamilyTree.com describing how to use IPTC photo metadata for genealogy.

Rick Crume, a genealogy consultant and the article’s author, says IPTC metadata “can be extremely useful for savvy archivists […] IPTC standards can help future-proof your metadata. That data becomes part of the digital photo, contained inside the file and preserved for future software programs.”

Crume quotes Ken Watson from All About Digital Photos saying “[IPTC] is an internationally recognized standard, so your IPTC/XMP data will be viewable by someone 50 or 100 years from now. The same cannot be said for programs that use some proprietary labelling schemes.”

Crume then adds: “To put it another way: If you use photo software that abides by the IPTC/XMP standard, your labels and descriptive tags (keywords) should be readable by other programs that also follow the standard. For a list of photo software that supports IPTC Photo Metadata, visit the IPTC’s website.”
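To see what “readable by other programs” looks like in practice, here is a minimal sketch that calls the widely used ExifTool program from Python to print a photo’s embedded caption and keywords. The file name is hypothetical and ExifTool must be installed separately.

```python
import subprocess

# Minimal sketch: read the embedded IPTC/XMP caption and keywords from a photo
# using ExifTool (must be installed and on the PATH). The file name is made up.
result = subprocess.run(
    [
        "exiftool",
        "-XMP-dc:Description",     # description / caption (XMP)
        "-IPTC:Caption-Abstract",  # legacy IPTC IIM caption
        "-XMP-dc:Subject",         # keywords (XMP)
        "-IPTC:Keywords",          # legacy IPTC IIM keywords
        "grandma_1952.jpg",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```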

The article goes on to recommend particular software choices based on IPTC’s list of photo software that supports IPTC Photo Metadata. In particular, Crume recommends that users don’t switch from Picasa to Google Photos, because Google Photos does not support IPTC Photo Metadata in the same way. Instead, he recommends that users stick with Picasa for as long as possible, and then choose another photo management tool from the supported software list.

Similarly, Crume recommends that users should not move from Windows Photo Gallery to the Windows 10 Photos app, because the Photos app does not support IPTC embedded metadata.

Crume then goes on to investigate popular genealogy sites to examine their support for embedded metadata, something that we do not cover in our photo metadata support surveys.

The full article can be found on FamilyTree.com.

Attendees at the table area of CEPIC Congress 2022, held in Mallorca, Spain.

The IPTC took part in a panel on Diversity and Inclusion at the CEPIC Congress 2022, the picture industry’s annual get-together, held this year in Mallorca, Spain.

Google’s Anna Dickson hosted the panel, which also included Debbie Grossman of Adobe Stock, Christina Vaughan of ImageSource and Cultura, and photographer Ayo Banton.

Unfortunately Abhi Chaudhuri of Google couldn’t attend due to Covid, but Anna presented his material on Google’s new work surfacing skin tone in Google image search results.

Brendan Quinn, IPTC Managing Director, participated on behalf of the IPTC Photo Metadata Working Group, which put together the Photo Metadata Standard, including the new properties covering accessibility for visually impaired people: Alt Text (Accessibility) and Extended Description (Accessibility).

Brendan also discussed IPTC’s other Photo Metadata properties concerning diversity, including the Additional Model Information property, which can include material on “ethnicity and other facets of the model(s) in a model-released image”, and the characteristics sub-property of the Person Shown in the Image with Details property, which can be used to enter “a property or trait of the person by selecting a term from a Controlled Vocabulary.”

Some interesting conversations ensued around the difficulty of keeping diversity information up to date in an ever-changing world of diversity language, the pros and cons of using controlled vocabularies (pre-selected word lists) to cover diversity information, and the differences in covering identity and diversity information on a self-reported basis versus reporting by the photographer, photo agency or customer.

It’s a fascinating area and we hope to be able to support the photographic industry’s push forward with concrete work that can be implemented at all types of photographic organisations to make the benefits of photography accessible for as many people as possible, regardless of their cultural, racial, sexual or disability identity.

IPTC member delegates Phil Avner (AP), Pam Fisher (Individual Member / The Media Institute), Alison Sullivan (Individual Member / MGM Resorts) and Mark Milstein (Microstocksolutions / VAIsual) at NAB 2022 in Las Vegas.

The National Association of Broadcasters (NAB) Show wrapped up its first face-to-face event in three years last week in Las Vegas. In spite of the name, this is an internationally attended trade conference and exhibition showcasing equipment, software and services for film and video production, management and distribution. There were 52,000 attendees, down from a typical 90,000–100,000, with some reduction in booth density; overall, the show was reminiscent of pre-COVID days. A few IPTC members met while there: Mark Milstein (vAIsual), Alison Sullivan (MGM Resorts), Phil Avner (Associated Press) and Pam Fisher (The Media Institute). Kudos to Phil for working the show, showcasing ENPS on the AP stand, while the others walked the exhibition stands.

NAB is a long-running event and several major vendors have large ‘anchor’ booths. Some, such as Panasonic and Adobe, reduced their usual NAB booth size, while Blackmagic had its normal ‘city block’-sized presence, teeming with traffic. In some ways the reduced booth density was ideal for visitors: plenty of tables and chairs populated the open areas, making more meeting and refreshment space available. The NAB exhibition is substantially more widely attended than the conference, and this year several theatres were provided on the show floor for sessions that any ‘exhibits only’ attendee could watch. Some content is now available at https://nabshow.com/2022/videos-on-demand/

For the most part this was a show of ‘consolidation’ rather than ‘innovation’: exhibitors were enjoying welcoming their partners and customers face-to-face rather than launching significant new products. Codecs standardised during the past several years were finally reaching mainstream support, with AV1, VP9 and HEVC well represented across vendors. SVT-AV1 (Scalable Video Technology for AV1) was particularly prevalent, having been well optimised and made available licence-free by the standard’s contributors. VVC (Versatile Video Coding), a more recent and more advanced standard, is still too computationally intensive for commercial use, though a small number of exhibitors (e.g. Fraunhofer) mentioned it on their stands.

IP is now fairly ubiquitous within broadcast ecosystems.  To consolidate further, an IP Showcase booth illustrating support across standards bodies and professional organisations championed more sophisticated adoption. A pyramid graphic showing a cascade of ‘widely available’ to ‘rarely available’ sub-systems encouraged deeper adoption.

Super Resolution – raising the game for video upscaling

One of the show floor sessions – “Improving Video Quality with AI” – presented advances by iSIZE and Intel. The Intel technology may be particularly interesting to IPTC members, and concerns “Super Resolution.” Having followed the subject for over 20 years, for me this was a personal highlight of the show.

Super Resolution is a technique for creating higher resolution content from smaller originals.  For example, achieving a professional quality 1080p video from a 480p source, or scaling up a social media-sized image for feature use.
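Intel’s library was not yet published at the time of writing, so as context only, the sketch below shows a conventional (non-AI) upscale using FFmpeg’s Lanczos scaler – the kind of baseline that Super Resolution methods aim to improve upon. File names are hypothetical and FFmpeg must be installed.

```python
import subprocess

# Conventional (non-AI) upscaling baseline using FFmpeg's Lanczos scaler.
# File names are hypothetical. Super Resolution methods aim to produce
# noticeably sharper results than this kind of simple rescale.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input_480p.mp4",
        "-vf", "scale=1920:1080:flags=lanczos",
        "-c:a", "copy",  # leave the audio untouched
        "output_1080p.mp4",
    ],
    check=True,
)
```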

A representative from Intel explaining their forthcoming Super Resolution library and FFmpeg plugin for video upscaling.

A few years ago a novel and highly effective Super Resolution method was introduced (“RAISR”, see https://arxiv.org/abs/1606.01299); this represented a major discontinuity in the field, albeit with the usual mountain of investment and work needed to take the ‘R’ (research) to ‘D’ (development).

This is exactly what Intel have done, and the resulting toolsets will be made available at no cost at the company’s Open Visual Cloud repository at the end of May.

Intel invested four years in improving the AI/ML algorithms (having created a massive ground-truth library for learning), optimising for CPU performance and parallelisation, and then engineering the ‘applied’ tools developers need for integration (e.g. Docker containers, FFmpeg and GStreamer plug-ins). Performance will now be commercially robust.

The visual results are astonishing, and could have a major impact on the commercial potential of photographic and film/video collections needing to reach much higher resolutions or even to repair ‘blurriness’.

Next year’s event is the centennial of the first NAB Show and takes place from April 15th-19th in Las Vegas.

– Pam Fisher – Lead, IPTC Video Metadata Working Group

IPTC members will be appearing at the Society for Imaging Science and Technology (imaging.org) DigiTIPS 2022 meeting series tomorrow, April 26.

The session description is as follows:

Unmuting Your ‘Silent Images’ with Photo Metadata
Caroline Desrosiers, founder and CEO, Scribely
David Riecks and Michael Steidl, IPTC Photo Metadata Working Group

Abstract: Learn how embedded photo metadata can aid in a data-driven workflow from capture to publish. Discover what details exist in your images, and learn how you can affix additional information so that you and others can manage your collection of images. See how you can embed info to automatically fill in “Alt Text” for images shown on your website. Explore how you can test your metadata workflow to maximize interoperability.
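As a rough sketch of the “Alt Text” workflow mentioned in the abstract, the snippet below reads the Alt Text (Accessibility) field from an image with ExifTool and inserts it into an HTML img tag. The ExifTool tag name and file name are assumptions to check against your own toolchain.

```python
import html
import subprocess

# Sketch: pull the Alt Text (Accessibility) field out of an image with ExifTool
# and use it as the alt attribute of an HTML <img>. The ExifTool tag name and
# the file name are assumptions; ExifTool must be installed separately.
alt_text = subprocess.run(
    ["exiftool", "-s3", "-XMP-iptcCore:AltTextAccessibility", "photo.jpg"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f'<img src="photo.jpg" alt="{html.escape(alt_text)}">')
```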

Registration is still open. You can register at https://www.imaging.org/Site/IST/Conferences/DigiTIPS/DigiTIPS_Home.aspx?Entry_CCO=3#Entry_CCO

This image was generated from a set of captured digital images used to train a Generative Adversarial Network, so would be classified as “trainedAlgorithmicMedia” in the proposed new version of the Digital Source Type CV. Source: Public Domain via Wikimedia Commons

A hot topic in media circles these days is “synthetic media”. That is, media that was created either partly or fully by a computer. Usually the term is used to describe content created either partly or wholly by AI algorithms.

IPTC’s Video Metadata Working Group has been looking at the topic recently and we concluded that it would be useful to have a way to describe exactly what type of content a particular media item is. Is it a raw, unmodified photograph, video or audio recording? Is it a collage of existing photos, or a mix of synthetic and captured content? Was it created using software trained on a set of sample images or videos, or is it purely created by an algorithm?

We have an existing vocabulary that suits some of this need: Digital Source Type. This vocabulary was originally created to be able to describe the way in which an image was scanned into a computer, but it also represented software-created images at a high level. So we set about expanding and modifying that vocabulary to cover more detail and more specific use cases.

It is important to note that we are only describing the way a media object has been created: we are not making any statements about the intent of the user (or the machine) in creating the content. So we deliberately don’t have a term “deepfake”, but we do have “trainedAlgorithmicMedia” which would be the term used to describe a piece of content that was created by an AI algorithm such as a Generative Adversarial Network (GAN).

Here are the terms we propose to include in the new version of the Digital Source Type vocabulary. (New terms and definition changes are marked in bold text. Existing terms are included in the list for clarity.)

Term ID: digitalCapture
Term name: Original digital capture sampled from real life
Term description: The digital media is captured from a real-life source using a digital camera or digital recording device
Examples: Digital photo or video taken using a digital SLR or smartphone camera

Term ID: negativeFilm
Term name: Digitised from a negative on film
Term description: The digital media was digitised from a negative on film or any other transparent medium
Examples: Film scanned from a moving image negative

Term ID: positiveFilm
Term name: Digitised from a positive on film
Term description: The digital media was digitised from a positive on a transparency or any other transparent medium
Examples: Digital photo scanned from a photographic positive

Term ID: print
Term name: Digitised from a print on non-transparent medium
Term description: The digital image was digitised from an image printed on a non-transparent medium
Examples: Digital photo scanned from a photographic print

Term ID: humanEdited
Term name: Original media with minor human edits
Term description: Minor augmentation or correction by a human, such as a digitally-retouched photo used in a magazine
Examples: Video camera recording, manipulated digitally by a human editor

Term ID: compositeCapture
Term name: Composite of captured elements
Term description: Mix or composite of several elements that are all captures of real life
Examples:
* A composite image created by a digital artist in Photoshop based on several source images
* Edited sequence or composite of video shots

Term ID: algorithmicallyEnhanced
Term name: Algorithmically-enhanced media
Term description: Minor augmentation or correction by algorithm
Examples: A photo that has been digitally enhanced using a mechanism such as Google Photos’ “de-noise” feature

Term ID: dataDrivenMedia
Term name: Data-driven media
Term description: Digital media representation of data via human programming or creativity
Examples:
* A representation of a distant galaxy created by analysing the outputs of a deep-space telescope (as opposed to a regular camera)
* An infographic created using a computer drawing tool such as Adobe Illustrator or AutoCAD

Term ID: digitalArt
Term name: Digital art
Term description: Media created by a human using digital tools
Examples:
* A cartoon drawn by an artist using a digital pencil, a tablet and a drawing package such as Procreate or Affinity Designer
* A scene from a film/movie created using computer-generated imagery (CGI)
* An electronic music composition using purely synthesised sounds

Term ID: virtualRecording
Term name: Virtual recording
Term description: Live recording of a virtual event based on synthetic and, optionally, captured elements
Examples:
* A recording of a computer-generated sequence, e.g. from a video game
* A recording of a Zoom meeting

Term ID: compositeSynthetic
Term name: Composite including synthetic elements
Term description: Mix or composite of several elements, at least one of which is synthetic
Examples:
* A movie production using a combination of live-action and CGI content, e.g. using the Unreal engine to generate backgrounds
* A capture of an augmented reality interaction with computer imagery superimposed on a camera video, e.g. someone playing Pokémon Go

Term ID: trainedAlgorithmicMedia
Term name: Trained algorithmic media
Term description: Digital media created algorithmically using a model derived from sampled content
Examples:
* An image based on deep learning from a series of reference examples
* A “speech-to-speech” generated audio or “deepfake” video using a combination of a real actor and an AI model
* A “text-to-image” result using a text input to feed an algorithm that creates a synthetic image

Term ID: algorithmicMedia
Term name: Algorithmic media
Term description: Media created purely by an algorithm not based on any sampled training data, e.g. an image created by software using a mathematical formula
Examples:
* A purely computer-generated image such as a pattern of pixels generated mathematically, e.g. a Mandelbrot set or fractal diagram
* A purely computer-generated moving image such as a pattern of pixels generated mathematically

We propose that the following term, which exists in the current DigitalSourceType CV, be retired:

Term ID: softwareImage (proposed for retirement)
Term name: Created by software
Term description: The digital image was created by computer software
Note: We propose that trainedAlgorithmicMedia or algorithmicMedia be used instead of this term.
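As a concrete sketch of how one of these terms could be applied, the snippet below uses ExifTool from Python to write the full Digital Source Type CV URI into an image’s XMP metadata. The ExifTool tag name and file name are assumptions; the term URI follows the existing cv.iptc.org pattern.

```python
import subprocess

# Sketch: tag an AI-generated image with the proposed trainedAlgorithmicMedia
# term by writing the full Digital Source Type CV URI into its XMP metadata.
# The ExifTool tag name and file name are assumptions; ExifTool must be installed.
subprocess.run(
    [
        "exiftool",
        "-XMP-iptcExt:DigitalSourceType="
        "https://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
        "gan_generated_portrait.jpg",
    ],
    check=True,
)
```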

We welcome all feedback from across the industry to these proposed terms.

Please contact Brendan Quinn, IPTC Managing Director, at mdirector@iptc.org, or use the IPTC Contact Us form to send your feedback.

Marc Lavallee of The New York Times, Brendan Quinn of IPTC, Pascale Doucet of France Télévision and Scott Yates of JournalList spoke on a panel at the Project Origin event on February 22, 2022.

The IPTC has an ongoing project to help the news and media industry deal with content credibility and provenance. As part of this, we have started working with Project Origin, a consortium of news and technology organisations that have come together to fight misinformation through the use of content provenance technologies.

On Tuesday 22nd February, Managing Director of IPTC Brendan Quinn spoke on a panel at an invite-only Executive Briefing event attended by leaders from news organisations around the world.

Other speakers at the event included Marc Lavallee, Head of R&D for The New York Times, Pascale Doucet of France Télévision, Eric Horvitz of Microsoft Research, Andy Parsons of Adobe, and Laura Ellis, Jamie Angus and Jatin Aythora of the BBC.

The event marks the beginning of the next phase of the industry’s work on content credibility. C2PA has now delivered the 1.0 version of its spec, so the next phase of the work is for the news industry to get together to create best practices around implementing it in news workflows.

IPTC and Project Origin will be working together with stakeholders from all parts of the news industry to establish guidelines for making provenance work in a practical way across the entire news ecosystem.

Bill Kasdorf’s article on PublishersWeekly.com discusses the IPTC Photo Metadata Standard’s new properties, Alt Text (Accessibility) and Extended Description (Accessibility).

Bill Kasdorf, IPTC Individual Member, has written about IPTC Photo Metadata in his latest column for Publishers Weekly.

In the article, a double-page spread in the printed version of the 11/22/2021 issue of Publishers Weekly and an extended article online, Bill references Caroline Desrosiers of IPTC Startup member Scribely saying “if publications are born accessible, then their images should be born accessible, as well.”

The article describes how the new Alt Text (Accessibility) and Extended Description (Accessibility) properties in IPTC Photo Metadata can be used to make EPUBs more accessible.

Bill goes on to provide an example, supplied by Caroline Desrosiers, of how an image’s caption, alt text and extended description fulfil very different purposes, and mentions that it’s perfectly fine to leave alt text blank in some cases! For more details, read the full article on PublishersWeekly.com.

Carl-Gustav Linden of the University of Bergen on the use of IPTC metadata as a means of powering AI in journalism, speaking at the JournalismAI Festival on 30 November 2021.

“Metadata is the wheel in the digital business model,” according to Carl-Gustav Linden of University of Bergen in Norway. “We can use it to combine the right content with the right readers, listeners and viewers. That’s why metadata is so essential.”

Professor Linden was speaking at the JournalismAI Festival taking place this week, hosted by the Polis think-tank at the London School of Economics and Political Science. The JournalismAI project is a collaboration between Polis and newsrooms and institutes around the world, funded by the Google News Initiative.

We are very happy to see several mentions of IPTC standards and IPTC members, particularly the New York Times and iMatrics. The New York Times is seen as a forerunner in content classification, with Jennifer Parrucci (lead of the IPTC NewsCodes Working Group) giving presentations recently about their work. iMatrics supplies an automated content classification system based on IPTC Media Topics which can be used as part of editorial workflows.

One thing we would like to note is that Professor Linden mentions that the IPTC vocabularies are influenced by our background in US-based news organisations, citing as an example the school-related terms being focussed on the US system. We are happy to say that in a recent update to IPTC Media Topics we clarified our terms around school systems, making the label names and descriptions much more generic and basing them on international classifications of school systems.

This change was the result of many IPTC member organisations working together from different parts of the world, including Scandinavia, to come to a result that hopefully works for everyone (and of course, each user of Media Topics is welcome to extend the vocabulary for their own purposes if necessary). This is an example of the great work that takes place when our members work together.

The JournalismAI festival continues until Friday this week. All sessions from the festival are available on YouTube.

Thanks again to Polis and the JournalismAI team for giving us a mention!