Michael Steidl, Lead of the IPTC Photo Metadata Working Group, spoke on a workshop panel at the Perpignan Photojournalism Conference “Visa Pour L’Image”.

The panel session was pre-recorded and released this week.

This joint workshop from Google, IPTC and Alamy covers some product updates from Google Images, including Image Rights Metadata and the new features highlighting licensing information for images, which we announced earlier this week. The speakers share best practices and experience with these features.

The speakers are:

  • John Mueller, Senior Webmaster Trends Analyst, Google
  • Michael Steidl, Lead of the IPTC Photo Metadata Working Group
  • Roxana Stingu, Head of SEO, Alamy

View the presentation here (free registration required)

The IPTC is very happy to announce that, as a result of our collaboration with Google and CEPIC, the new licensable badge and other related features on Google Images are now live.

This means that when photo owners include a photograph’s Web Statement of Rights (also known as Copyright Info URL) in an image’s embedded metadata, Google will display a “Licensable” badge on the image in Google Images search results, and the image will appear when the “View all images with Commercial or other licenses” filter is selected. If the Licensor URL is also added, Google will feature a “get this image on” link that takes users directly to a page on the photo owner’s site where they can easily obtain a license to re-use the image elsewhere.
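
As an illustration, here is a minimal sketch of embedding these two fields with ExifTool driven from Python. The exact XMP tag names are our assumptions and should be checked against the IPTC Photo Metadata Standard and ExifTool’s tag documentation; the URLs are placeholders.

    import subprocess

    # Minimal sketch (not official IPTC guidance) of embedding the two
    # fields named above with ExifTool. Tag names are assumptions to
    # verify against the IPTC spec; URLs are invented placeholders.
    subprocess.run([
        "exiftool",
        # Web Statement of Rights (a.k.a. Copyright Info URL)
        "-XMP-xmpRights:WebStatement=https://example.com/licence-terms",
        # Licensor URL (part of the PLUS Licensor structure)
        "-XMP-plus:LicensorURL=https://example.com/get-a-licence",
        "photo.jpg",
    ], check=True)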

Example of how Google Images search results look, showing the IPTC fields used to provide the data.

The launch on Google Images comprises three different components:

  • “Licensable” badge on image search results for images that have the required metadata fields
  • Two new links in the Image Viewer (the panel that appears when a user selects an image result) for people to access the image supplier’s licensing information, namely:

    • A “License details” link. This directs users to a page defined by the image supplier explaining how they can license and use the image responsibly
    • A “Get this image on” link, which directs users to a page from the image supplier where users can directly take the necessary steps to license the image
  • A Usage Rights drop-down filter in Google Image search results pages to support filtering results for Creative Commons, commercial, and other licenses.

“As a result of a multi-year collaboration between IPTC and Google, when an image containing embedded IPTC Photo Metadata is re-used on a popular website, Google Images will now direct interested users back to the supplier of the image,” said Michael Steidl, Lead of the IPTC Photo Metadata Working Group. “This is a huge benefit for image suppliers and an incentive to add IPTC metadata to image files.”

The features have been in beta since February, and after extensive testing, refinement and discussion with IPTC, CEPIC and others, Google is rolling out the new features this week.

As we describe in the Quick guide to IPTC Photo Metadata and Google Images, image owners can choose from two methods to enable the Licensable badge and “Get this image” link: embedding IPTC metadata in image files, or including structured schema.org metadata in the HTML of the web page hosting the image.
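
As a sketch of the second method, the snippet below builds the kind of schema.org structured data Google’s published guidance describes for licensable images (an ImageObject with “license” and “acquireLicensePage” properties), as we understand it; all URLs are placeholders.

    import json

    # Hedged sketch of schema.org markup for a licensable image, to be
    # embedded in the host page in a <script type="application/ld+json">
    # tag. URLs are invented placeholders.
    image_markup = {
        "@context": "https://schema.org/",
        "@type": "ImageObject",
        "contentUrl": "https://example.com/photos/sunset.jpg",
        "license": "https://example.com/licence-terms",            # "License details" link
        "acquireLicensePage": "https://example.com/get-a-licence"  # "Get this image" link
    }
    print(json.dumps(image_markup, indent=2))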

Of the two approaches, using embedded IPTC metadata has two benefits. Firstly, the embedded metadata stays with the image even when it is re-used, so that the Licensable badge will appear even when the image is re-published on another website.

Secondly, the “Creator”, “Copyright” and “Credit” messages are only displayed in search results when they are declared using embedded IPTC metadata.

“The IPTC anticipates that this will lead to increased awareness of image ownership, copyright and licensing issues amongst content creators and users,” said Brendan Quinn, Managing Director of IPTC. “By providing direct leads to image owners’ websites, we hope that this leads to increased business for image suppliers both large and small.”

On July 1st 2020, IPTC was invited to participate in an online workshop held by the Arab States Broadcasting Union (ASBU).

In a joint presentation, Brendan Quinn (IPTC Managing Director) and Robert Schmidt-Nia (Chair of IPTC and consultant with DATAGROUP Consulting Services) spoke on behalf of IPTC and Jürgen Grupp (data architect with the German public broadcaster SWR) spoke on behalf of the European Broadcasting Union.

The invitation was extended to IPTC and EBU because ASBU is looking at creating a common framework for sharing content between ASBU member broadcasters.

Jürgen Grupp started with an overview of why metadata is important in broadcasting and media organisations, and introduced the EBU’s high-level architecture for media, the EBU Class Conceptual Data Model (CCDM), and the EBUCore metadata set. Jürgen then gave examples of how CCDM and EBUCore are implemented by some European broadcasters.

Next, Brendan Quinn introduced IPTC and the IPTC News Architecture, the underlying logical model behind all of IPTC’s standards. We then took a deep dive into some video-related NewsML-G2 constructs like partMeta (used to describe metadata for parts of a video such as the rights and descriptive metadata for multiple time-based shots within a single video file) and contentSet (used to link to multiple renditions of the same video in different formats, resolutions or quality levels).
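
To make these constructs concrete, here is an illustrative, non-normative NewsML-G2 fragment, wrapped in a small Python harness that checks it is well-formed; the qcodes, identifiers and URLs are invented for the sketch and should be validated against the real NewsML-G2 schema before use.

    import xml.etree.ElementTree as ET

    # Illustrative NewsML-G2 video item: partMeta describes one
    # time-based shot, contentSet links two renditions of the same
    # video. Identifiers and URLs are invented for this sketch.
    FRAGMENT = """
    <newsItem xmlns="http://iptc.org/std/nar/2006-10-01/"
              guid="urn:example:video:1234" version="1">
      <itemMeta>
        <itemClass qcode="ninat:video"/>
      </itemMeta>
      <partMeta partid="shot1" contentrefs="rendhigh rendlow">
        <timeDelim start="0" end="95" timeunit="timeunit:editUnit"/>
        <description>Opening shot: exterior of the parliament building</description>
      </partMeta>
      <contentSet>
        <remoteContent id="rendhigh" rendition="rnd:highRes"
                       href="https://example.com/video-high.mp4"/>
        <remoteContent id="rendlow" rendition="rnd:lowRes"
                       href="https://example.com/video-low.mp4"/>
      </contentSet>
    </newsItem>
    """
    ET.fromstring(FRAGMENT)  # sanity-check: the sketch is well-formed XML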

Then Robert Schmidt-Nia described some real-world examples of implementation of NewsML-G2 and the IPTC News Architecture at broadcasters and news agencies in Europe, in particular touching on the real-world issues of whether to “push” content or to create a “content API” that customers can use to select and download the content that they would like.

A common theme throughout our presentations was that the representation of the data in XML, RDF, JSON or some other format is relatively easy to change, but the important decision is what logical model to use and how to agree on the meaning (semantics) of terms and vocabularies.

A robust question and answer session touched on wide-ranging issues, from the choice between XML, RDF and JSON, to extending standardised models and vocabularies, to how to decide on the way forward.

This was one of the first meetings of ASBU on this topic and we look forward to assisting them further on their journey to metadata-based content sharing between their members.

Previously, we shared that Google was making image credits and usage rights information more visible on Google Images. Google now displays information about image copyright and ownership, alongside creator and credit info, when websites and photo owners make that information available for Google to crawl. Since the announcement there has been steady growth in the number of images containing these embedded metadata fields, which in turn has helped drive greater user awareness of copyright for images on the internet.

Up to now, users have seen the IPTC metadata information when they click on the “Image Credits” link in the Google Images viewer – the panel that appears when a user selects an image. Starting today, users will begin to see this information directly in the viewer, making this rights-related information even more visible.

You can see an example of what this looks like below:

The Google Images team has said in a statement: “We are committed to helping people understand the nature of the content they’re looking at on Google Images. This effort to make IPTC-related information more visible is one more step in that direction.”

For more information on how you can embed rights and credits metadata in your photos, please see our Quick Guide to IPTC Photo Metadata and Google Images.

If you create photo editing or manipulation software and are looking for more information, please consult the Quick Guide or contact us for more information and advice.

We are very happy to continue working with Google and our partner organisation CEPIC on this and other developments in this area. We look forward to making an announcement about the launch of the related “Licensable Images” feature over the summer.

The draft guidelines document

At the IPTC Autumn Meeting in Toronto in 2018, IPTC considered the issues of “trust and credibility” in news media. We looked at the existing initiatives and considered whether IPTC could contribute to the space.

We concluded that some existing efforts were doing great work and that we should not create our own trust and credibility standard. Instead, our resources could best be put towards working with those groups, and aligning IPTC’s standards — particularly our main news standards NewsML-G2 and ninjs — to work well with the outputs of those groups.

Since that time, the IPTC NewsML-G2 Working Group has been collaborating with several initiatives around trust and misinformation in the news industry. We have been working mainly with The Trust Project and the Journalism Trust Initiative from Reporters Without Borders, but have also been in communication with the Credibility Coalition, the Certified Content Coalition and others to identify all known means of expressing trust in news content.

Our aim is to make it easy for users of NewsML-G2 and ninjs to work with these standards to convey the trustworthiness of their content. This should make it easier for news publishers to translate trust information to something that can be read by aggregator platforms and user tools.

In particular, we want to make it as easy as possible for syndicated content to be distributed and published in alignment with trust principles.

A new IPTC Guideline document

To that end, we are publishing a “public draft” of a new IPTC guideline document: Expressing Trust and Credibility Information in IPTC Standards. While the document is not yet complete, we hope that it helps IPTC members and other users of our standards to understand how they can express trust indicators.

To go along with the draft, we are proposing some changes to existing IPTC standards, including updates to NewsML-G2 and to ninjs, and a new Trust Indicator taxonomy created as part of the IPTC NewsCodes.

New Genres in NewsCodes and changes to NewsML-G2 and ninjs

To accommodate the new work, we will be adding some new entries to the NewsCodes Genre vocabulary. Some genres required for this work, such as “Opinion” and “Special Report”, were already in the vocabulary, but we are proposing to add new genres including “Fact Check” and “Satire”, plus some genres to handle sponsored content: “Advertiser Supplied”, “Sponsored” and “Supported”.

We will also be making some small changes to the existing ninjs and NewsML-G2 standards to accommodate some new requirements, such as being able to associate a publisher with another organisation, to indicate membership of The Trust Project, Journalism Trust Initiative or a similar group.
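
As a hedged sketch of how such properties might surface in a ninjs item: the property names below, in particular “trustindicator”, are illustrative assumptions based on this draft, not the approved ninjs schema; please check the published draft for the real proposal.

    import json

    # Hypothetical ninjs item with trust-related extensions. The
    # "trustindicator" property and its fields are illustrative
    # assumptions, not the final ninjs schema.
    item = {
        "uri": "https://news.example.com/items/abc123",
        "headline": "Example syndicated story",
        "genre": [{"code": "genre:factcheck", "name": "Fact Check"}],
        "trustindicator": [{
            "scheme": "https://example.com/trustindicators/",
            "code": "ethicspolicy",
            "href": "https://news.example.com/ethics-policy"
        }]
    }
    print(json.dumps(item, indent=2))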

From trusted agency to publisher and then to a user

By following the guidelines, a news agency can add their own trust information to the news items that they distribute. A publisher can then take those trust indicators and convert them to the standard schema.org markup used to convey trust indicators in HTML pages (initially created via a collaboration between schema.org and The Trust Project in 2017).
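
Here is a minimal sketch of such markup, using trust-related properties that exist in schema.org today (publishingPrinciples on the article, ethicsPolicy and correctionsPolicy on a NewsMediaOrganization); the publisher name and all URLs are placeholders.

    import json

    # Sketch of schema.org trust markup for an article page, using
    # properties from the schema.org / Trust Project collaboration.
    # Publisher name and URLs are invented placeholders.
    article = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": "Example story",
        "publishingPrinciples": "https://news.example.com/standards",
        "publisher": {
            "@type": "NewsMediaOrganization",
            "name": "Example News",
            "ethicsPolicy": "https://news.example.com/ethics",
            "correctionsPolicy": "https://news.example.com/corrections"
        }
    }
    print(json.dumps(article, indent=2))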

The schema.org markup can then be read by search engines, platforms such as Facebook, and specialised trust tools such as the NewsGuard browser plugin, so that users can see the trust indicators and decide for themselves whether they can trust a piece of news.

Please give us your feedback

The document will not be final until after those changes have been approved by IPTC members at our next meeting in May.

We have published the draft to gather feedback from the community: suggestions on how we could improve our guidance, trust indicators that we may have missed, and implementation experience.

Please use the IPTC Contact Us form to send your feedback.

About the Trust Project

The Trust Project is a global network of news organizations working to affirm and amplify journalism’s commitment to transparency, accuracy and inclusion. The project created the Trust Indicators, a collaborative, journalism-generated standard for news that helps both regular people and technology companies’ machines easily assess the authority and integrity of news. The Trust Indicators are based on robust user-centered design research and respond to public needs and wants.

For more information, visit thetrustproject.org.

The Trust Project is funded by Craig Newmark Philanthropies, Democracy Fund, Facebook, Google and the John S. and James L. Knight Foundation.

About the Journalism Trust Initiative

The Journalism Trust Initiative aims at a healthier information space. It is developing indicators for the trustworthiness of journalism and thus promotes and rewards compliance with professional norms and ethics. JTI is led by Reporters Without Borders (RSF) in partnership with the European Broadcasting Union (EBU), the Global Editors Network (GEN) and Agence France Presse (AFP).

For more, visit https://jti-rsf.org/en/

We are excited to announce that the result of our latest collaboration with Google has been launched in a beta phase: Licensable Images.

This feature, which Google is exploring in this beta, will enable image owners not only to receive credit for their work but also to raise people’s awareness of licensing requirements for content found via Google Images.

Mockup of how licensable images might look on google.com when it launches to users later this year.

When image owners embed IPTC Photo Metadata fields into their images (or use schema.org markup), Google will place a badge on those licensable images in search results pages.

Under the image preview, Google will show embedded rights metadata (creator, copyright and credit fields). These have been displayed since IPTC’s collaboration with Google in 2018, but will now be given more prominence.

Along with the rights metadata, Google will now show links to the image’s usage licence and also a link to “Get this image”.

See the image for a mockup of how it might look.

If you embed IPTC Photo Metadata into your images, these links will be shown for images on your own website and also when your customers publish your images on their sites.

Along with the photo industry organisation CEPIC, IPTC has been working with Google on this project since the IPTC Photo Metadata Conference at CEPIC Congress in June 2019.

The user-facing side of the feature is planned to launch in the next few months. Google has released some developer documentation to encourage image owners to get ready for the launch.

Learn how to make licensable images work for your image collections

For IPTC members, we will be running a webinar today, Thursday 20 February at 15:00 GMT.

The webinar will explain how the licensable images feature works and what image owners can do to get ready for the launch.

The speakers will be Michael Steidl, Lead of the IPTC Photo Metadata Working Group, and Brendan Quinn, Managing Director of IPTC.

Please check your email for the announcement and information on how to join.

For non-members, we will be publishing a page on this site on Friday 21 February that will explain how to take advantage of the feature.

UPDATE: We have now updated our Quick Guide to IPTC Photo Metadata and Google Images to include information on how to embed rights and licensing metadata in your images.

We’re very pleased to see this launch. We look forward to seeing how our members will use this feature to draw more attention to the importance of image rights and licensing.

To support the work of IPTC in this and other areas, please consider joining IPTC.

We’re excited that the biggest week in the photo metadata calendar has arrived – the IPTC Photo Metadata Conference 2019 will be held in Paris this Thursday, 6 June.

We are looking forward to hearing from some IPTC members: Andreas Gnutzmann from Fotoware, Lúí Smyth from Shutterstock, Isabelle Wirth of Agence France Presse and Michael Steidl, Chair of the Photo Metadata Working Group and honorary member of IPTC. Stéphane Guerrilot, CEO of AFP Blue, will be chairing the event.

We will also be hearing from independent photographer Andrew Wiard, representing the British Press Photographers’ Association (BPPA), plus Anna Dickson, Visual Lead for Image Search at Google, who brings her expertise as one of Google’s experts on images along with a history of leading photography teams at Dow Jones and the Huffington Post. Mayank Sagar from Image Data Systems will be speaking about the latest developments in automatic image tagging, and Simon Brown of Deep3D will look at the photographer’s view of embedding metadata.

Michael Steidl and Sarah Saunders will be presenting the results of the 2019 Photo Metadata Survey, which gathered the views of image creators, publishers and software makers on embedded image metadata.

Brendan Quinn, Managing Director of IPTC, will be presenting the IPTC Photo Metadata Crawler, which examines the use of embedded photo metadata among news publishers.

We’re looking forward to analysing the world of photo metadata from the perspective of image creators and editors, software makers, publishers, search engines and end users.

There are still some tickets available, so please join us! Attendance is free for CEPIC Congress attendees, but if you just want to come for the IPTC event on Thursday afternoon you can register using this form for €100 + VAT.

See you there!

Brendan Quinn introducing IPTC standards at the DPP event in London, February 2019. Photo: Andy Read

We were proud to be involved in last week’s Metadata Exchange for News interoperability demo organised by DPP (formerly known as the Digital Production Partnership).

DPP’s “Metadata Exchange for News” is an industry initiative aimed at making the news production process easier.

The DPP team looked around for existing standards on which to base their work, and when they found IPTC’s NewsML-G2, they realised that it exactly matched their requirements. NewsML-G2’s generic PlanningItem and NewsItem structure meant that it could easily be used to manage news production workflows with no customisation required.

We were treated to a demo of a full news production workflow in the DPP’s offices at ITV in London on February 6th.

A full news production workflow

DPP Metadata for News Exchange workflow diagram

As you can see from the diagram, the workflow involves these steps:

  • An editor creates a planning record for a news item using Wolftech’s planning system, describing metadata for the planned story (see the planning item sketch after this list)
  • The system sends the planning item as NewsML-G2 to Sony’s XDCAM Air system which converts it to Sony’s proprietary planning metadata and sends it directly to a camera
  • XDCAM Air retrieves the footage from the camera and links it to the planning metadata using the NewsML-G2 IDs; simple custom web services then retrieve the linked footage and metadata from XDCAM Air
  • The web services send NewsML-G2 NewsItem metadata along with the MP4 video file to Ooyala’s Flex Media Platform via an Amazon Web Services S3 bucket
  • Ooyala Flex Media Platform sends the media and metadata to the platforms that require it, in this case the Reuters Connect video browsing and distribution platform.
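
As referenced in the first step above, here is a rough, non-normative sketch of the kind of NewsML-G2 planning item the workflow starts from, again in a small Python harness; the qcodes and values are invented and should be checked against the NewsML-G2 specification.

    import xml.etree.ElementTree as ET

    # Rough sketch of a NewsML-G2 planning item like the one the demo
    # workflow begins with. Identifiers are invented and the fragment is
    # not validated against the official NewsML-G2 schema.
    PLANNING = """
    <planningItem xmlns="http://iptc.org/std/nar/2006-10-01/"
                  guid="urn:example:plan:story-42" version="1">
      <itemMeta>
        <itemClass qcode="plinat:newscoverage"/>
      </itemMeta>
      <newsCoverageSet>
        <newsCoverage id="coverage1">
          <planning>
            <headline>PM visits flood-hit region</headline>
            <scheduled>2019-02-06T14:00:00Z</scheduled>
          </planning>
        </newsCoverage>
      </newsCoverageSet>
    </planningItem>
    """
    ET.fromstring(PLANNING)  # sanity-check well-formedness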

The NewsML-G2 integrations were built for the demo but the idea is that they will soon become standard features of the products involved. All parties reported that implementing NewsML-G2 was fast and fairly painless!

Thanks to all involved and special thanks to Abdul Hakim of DPP for leading the project and organising the demo day.

Look out for an IPTC Webinar on this topic soon!

… the image business in a changing environment

By Sarah Saunders

The web is a Wild West environment for images, with unauthorised uses on a massive scale, and a perception by many users that copyright is no longer relevant. So what is a Smart Photo in this environment? The IPTC Photo Metadata Conference 2018 addressed the challenges for the photo industry and looked at some of the solutions.

Isabel Doran, Chair of UK image library association BAPLA, kicked off the conference with some hard facts. The use of images – our images – has created multibillion-dollar industries for social media platforms and search engines, while revenues for the creative industry are diminishing in an alarming way. It has long been said that creators are the last to benefit from use of their work; the reality now is that creators and their agents are in danger of being squeezed out altogether.

Take this real example of image use: an image library licenses an image of a home interior to a company for use on their website. The image is right-click downloaded from the company’s site and uploaded to a social media platform. From there it is picked up by a commercial blog, which licenses the image to a US real estate newsfeed – without permission. Businesses make money from online advertising, but the image library and photographer receive nothing. The image is not credited, and there is no link to the site that licensed the image legitimately, or to the supplier agency, or to the photographer.

Social media platforms encourage sharing and deep linking (where an image is displayed through a link back to the social media platform where it is posted, so it is not strictly copied). Many users believe they can use images found on the web for free in any way they choose. The link to the creator is lost, and infringements, where found, are hard to pursue with social media platforms.

Tracking and enforcement – a challenge

The standard procedure for tracking and enforcement involves uploading images to the site of a service provider, which maintains a ‘registry’ of identified images (often using invisible watermarks) and runs automated matching against images on the web to identify unauthorised uses. After licensed images have been identified, the image provider has to decide how to enforce their rights over unauthorised uses in what can only be called a hostile environment. How can the tracking and copyright enforcement processes be made affordable for challenged image businesses, and who is responsible for the cost?

The Copyright Hub was created by the UK Government and now creates enabling technologies to protect copyright and encourage easier content licensing in the digital environment. Caroline Boyd from the Copyright Hub demonstrated the use of the Hub copyright icon for online images. Using the icon promotes copyright awareness, and the user can click on it for more information on image use and links back to the creator. Creating the icon involves adding a Hub Key to the image metadata. Abbie Enock, CEO of software company Capture and a board member of the Copyright Hub, showed how image management software can incorporate this process seamlessly into the workflow. The cost to the user should be minimal, depending on the software they are using.

Publishers can display the icon on images licensed for their web site, allowing users to find the creator without the involvement of – and risk to – the publisher.

Meanwhile, suppliers are working hard to create tracking and enforcement systems. We heard from Imatag, Copytrack, PIXRAY and Stockfood who produce solutions that include tracking and watermarking, legal enforcement and follow up.

Design follows devices

Images are increasingly viewed on phones and tablets as well as computers. Karl Csoknyay from Keystone-SDA spoke about responsive design and the challenges of designing interfaces for all environments. He argued that it is better to work from simple to complex, starting with design for the smartphone interface, and offering the same (simple) feature set for all environments.

Smart search engines and smart photos

Use of images in search engines was one of the big topics of the day, with Google running its own workshop as well as appearing in the IPTC afternoon workshop along with the French search engine QWANT.

Image search engines ‘scrape’ images from web sites for use in their image searches and display them in preview sizes. Sharing is encouraged, and original links are soon lost as images pass from one web site to the next.

CEPIC has been in discussion with Google for some time, and some improvements have been made, with general copyright notices more prominently placed, but there is still a way to go. The IPTC conference and Google workshop were useful, with comments from the floor stressing the damage done to photo businesses by use of images in search engines.

Attendees asked if IPTC metadata could be picked up and displayed by search engines. We at IPTC know the technology is possible, so the issue is one of will. Google appears to be taking the issue seriously; by their own admission, it is now in their interest to do so.

Google uses imagery to direct users to other non-image results, searching through images rather than for images. Users searching for ‘best Indian restaurant’, for example, are more likely to be attracted to click through by sumptuous images than by dry text. Google wants to ‘drive high quality traffic to the web ecosystem’ and visual search plays an important part in that. Their aim is to operate in a ‘healthy image ecosystem’ which recognises the rights of creators. More dialogue is planned.

Search engines could drive the use of rights metadata

The fact that so few images on the web have embedded metadata (3% have copyright metadata according to a survey by Imatag) is sad but understandable. If search engines were to display the data, there is no doubt that creators and agents would press their software providers and customers to retain the data rather than stripping it, which again would encourage greater uptake. Professional photographers generally supply images with IPTC metadata; to strip or ignore copyright data of this kind is the greatest folly. Google, despite initial scepticism, has agreed to look at the possibilities offered by IPTC data, together with CEPIC and IPTC. That could represent a huge step forward for the industry.

As Isabel Doran pointed out, there is no one single solution which can stand on its own. For creators to benefit from their work, a network of affordable solutions needs to be built up; awareness of copyright needs support from governments and legal systems; social media platforms and search engines need to play their part in upholding rights.

Blueprints for the Smart Photo are out there; the Smart Photo will be easy to use and license, and will discourage freeloaders. Now’s the time to push for change.

Tagging tool at The New York Times
The New York Times uses a software tool for rules-based categorization to assign metadata to content. This is followed by human supervised review and tagging. Source: The New York Times

By Jennifer Parrucci
Senior Taxonomist at The New York Times
Lead of IPTC’s NewsCodes Working Group

The New York Times has a proud history of metadata. Every article published since The Times’s inception in 1851 contains descriptive metadata. The Times continues this tradition by incorporating metadata assignment into our publishing process today so that we can tag content in real-time and deliver key services to our readers and internal business clients.

I shared an overview of The Times’s tagging process at a recent conference held by the International Press Telecommunications Council in Barcelona. One of the purposes of IPTC’s face-to-face meetings is for members and prospective members to gain insight on how other member organizations categorize content, as well as handle new challenges as they relate to metadata in the news industry.

Why does The New York Times tag content today?

The Times doesn’t tag content just for tradition’s sake. Tags play an important role in today’s newsroom. Tags are used to create collections of content and send out alerts on specific topics. In addition, tags help boost relevance in our site search and send a signal to external search engines, as well as inform content recommendations for readers. Tags are also used for tracking newsroom coverage, archive discovery, advertising and syndication.

How does The New York Times tag content?

The Times employs rules-based categorization, rather than purely statistical tagging or hand tagging, to assign metadata to all published content, including articles, videos, slideshows and interactive features.

Rules-based classification involves the use of software that parses customized rules, applies them to the text, and suggests tags based on how well the text matches the conditions of those rules. These rules might take into account the frequency of words or phrases in an asset; the position of words or phrases (for example, whether a phrase appears in the headline or lead paragraph); a combination of words appearing in the same sentence; or a minimum number of names or phrases associated with a subject appearing in an asset.
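
The toy sketch below is our illustration, not The Times’s actual software; it shows the flavour of such rules, each checking position, frequency or co-occurrence of phrases and suggesting a tag when its conditions match. The tag names and thresholds are invented.

    # Toy rules-based classifier: each rule inspects where and how often
    # phrases occur in an asset and suggests a tag when its conditions
    # hold. Tag names and thresholds are invented for illustration.
    def suggest_tags(headline, lead, body):
        text = " ".join([headline, lead, body]).lower()
        suggestions = []

        # Rule: a phrase appearing in the headline or lead paragraph
        if "election" in headline.lower() or "election" in lead.lower():
            suggestions.append("Elections")

        # Rule: a minimum frequency of a phrase anywhere in the asset
        if text.count("vaccine") >= 3:
            suggestions.append("Vaccination and Immunization")

        # Rule: two words appearing in the same sentence
        for sentence in body.split("."):
            s = sentence.lower()
            if "died" in s and "age" in s:
                suggestions.append("Deaths (Obituaries)")
                break

        return suggestions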

Unlike many other publications that use rules-based classification, The Times adds a layer of human supervision to tagging. While the software suggests the relevant subject terms and entities, the metadata is not assigned to the article until someone in the newsroom selects and assigns tags from that list of suggestions to an asset.

Why does The Times use rules-based and human supervised tagging?

This method of tagging allows for more transparency in rule writing to see why a rule has or has not matched. Additionally it gives the ability to customize rules based on patterns specific to our publication. For example, The Times has a specific style for obituaries, whereby the first sentence usually states someone died, followed by a short sentence stating his or her age. This language pattern can be included in the rule to increase the likelihood of obituaries matching with the term “Deaths (Obituaries).” Rules-based classification also allows for the creation of tags without needing to train a system. This option allows taxonomists to create rules for low-frequency topics and breaking news, for which sufficient content to train the system is lacking.

These rules can then be updated and modified as a topic or story changes and develops. Additionally, giving the newsroom rule suggestions and a controlled vocabulary to choose from ensures a greater consistency in tagging, while the human supervision of the tagging ensures quality.

What does the tagging process at The New York Times look like?

Once an asset (an article, slideshow, video or interactive feature) is created in the content management system, the categorization software is called. This software runs the text against the rules for subjects and then through the rules for entities (proper nouns). Once this process is complete, editors are presented with suggestions for each term type within our schema: subjects, organizations, people, locations and titles of creative works. The subject suggestions also contain a relevancy score. The editor can then choose tags from these suggestions to be assigned to an article. If a tag they know is in the vocabulary is not suggested, the editors have the option to search for that term within the vocabulary. If there are new entities in the news, the editors can request that they be added as new terms. Once the article is published or republished, the tags chosen from the vocabulary are assigned to the article and the requested terms are sent to the Taxonomy Team.

The Taxonomy Team receives all of the tag requests from the newsroom in a daily report. Taxonomists review the suggestions and decide whether they should be added to the vocabulary, taking into account factors such as: news value, frequency of occurrence, and uniqueness of the term. If the verdict is yes, then the taxonomist creates a new entry for the tag in our internal taxonomy management tool and disambiguates the entry using Boolean rules. For example, there cannot be two entries both named “Adams, John” for the composer and the former United States president of the same name. To solve this, disambiguation rules are added so that the software knows which one to suggest based on context.

John Adams,_IF:{(OR,”composer”,”Nixon in China”,”opera”…)}::Adams, John (1947- )
John Adams,_IF:{(OR,”federalist”,”Hamilton”,”David McCullough”…)}::Adams, John (1735-1826)
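
Read as pseudocode, each rule is an OR over context keywords. A toy Python equivalent of the two rules above (our illustration, not the actual tool) might look like this:

    # Toy equivalent of the disambiguation rules above: choose the entry
    # whose OR-list of context keywords matches the text.
    RULES = {
        "Adams, John (1947- )": ("composer", "nixon in china", "opera"),
        "Adams, John (1735-1826)": ("federalist", "hamilton", "david mccullough"),
    }

    def disambiguate(text):
        text = text.lower()
        for entry, keywords in RULES.items():
            if any(k in text for k in keywords):  # the OR condition
                return entry
        return None

    print(disambiguate("The opera by the American composer premiered in 1987."))
    # -> Adams, John (1947- )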

Once all of these new terms are added into the system, the Taxonomy Team retags all assets with the new terms.

In addition to these term updates, taxonomists also review a selection of assets from the day for tagging quality. Taxonomists read the articles to identify whether the asset has all the necessary tags or has been over-tagged. The general rule is to tag the focus of the article and not everything mentioned. This method ensures that the tagging really gets to the heart of what the piece is about. When doing this review, taxonomists will notice subject terms that either fail to be suggested or are suggested improperly. The taxonomist uses this opportunity to tweak the rules for that subject so that the software suggests the tag properly next time.

After this review, the Taxonomy Team compiles a daily report back to the newsroom that includes shoutouts for good tagging examples, tips for future tagging and a list of all the new term updates for that day. This email keeps the newsroom and the Taxonomy Team in contact and acts as a continuous training tool for the newsroom.

All of these procedures come together to ensure that The Times has a high quality of metadata upon which to deliver highly relevant, targeted content to readers.

Read more about taxonomy and the IPTC Media Topics standard.

Follow IPTC on LinkedIn and Twitter: @IPTC

Contact IPTC