… the image business in a changing environment

By Sarah Saunders

The web is a Wild West environment for images, with unauthorised uses on a massive scale, and a perception by many users that copyright is no longer relevant. So what is a Smart Photo in this environment? The IPTC Photo Metadata Conference 2018 addressed the challenges for the photo industry and looked at some of the solutions.

Isabel Doran, Chair of the UK image library association BAPLA, kicked off the conference with some hard facts. The use of images – our images – has created multibillion-dollar industries for social media platforms and search engines, while revenues for the creative industry are diminishing in an alarming way. It has long been said that creators are the last to benefit from use of their work; the reality now is that creators and their agents are in danger of being squeezed out altogether.

Take this real example of image use: An image library licenses an image of a home interior to a company for use on their website. The image is right-click downloaded from the company’s site, and uploaded to a social media platform. From there it is picked up by a commercial blog which licenses the image to a US real estate newsfeed – without permission. Businesses make money from online advertising, but the image library and photographer receive nothing. The image is not credited and there is no link to the site that licensed the image legitimately, or to the supplier agency, or to the photographer.

Social media platforms encourage sharing and deep linking (where an image is shown through a link back to the social media platform where the image is posted, so is not strictly copied). Many users believe they can use images found on the web for free in any way they choose. The link to the creator is lost, and infringements, where found, are hard to pursue with social media platforms.

Tracking and enforcement – a challenge

The standard procedure for tracking and enforcement involves upload of images to the site of a service provider, which maintains a ‘registry’ of identified images (often using invisible watermarks) and runs automated matches to images on the web to identify unauthorised uses. After licensed images have been identified, the image provider has to decide how to enforce their rights for unauthorised uses in what can only be called a hostile environment. How can the tracking and copyright enforcement processes be made affordable for challenged image businesses, and who is responsible for the cost?
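The matching half of such a registry can be sketched with a simple perceptual "average hash" over an 8×8 grayscale grid. This is an illustration only: commercial tracking services use far more robust invisible watermarks and fingerprinting, but the registry-and-match structure is similar.

```python
# Hypothetical registry sketch. An "average hash" reduces an image to a
# 64-bit fingerprint; near-identical copies (resized, lightly edited)
# usually produce fingerprints within a few bits of each other.

def average_hash(pixels):
    """64-bit fingerprint from an 8x8 grid of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# The registry maps fingerprints to rights-holder records.
registry = {}

def register(image_id, pixels):
    registry[average_hash(pixels)] = image_id

def lookup(pixels, threshold=5):
    """Return registered images whose fingerprint is close to this one."""
    h = average_hash(pixels)
    return [image_id for fp, image_id in registry.items()
            if hamming_distance(fp, h) <= threshold]
```

A crawler would reduce each image found on the web to the small grid, call `lookup`, and flag any matches for the rights holder to review.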

The Copyright Hub was created by the UK Government and now creates enabling technologies to protect copyright and encourage easier content licensing in the digital environment. Caroline Boyd from the Copyright Hub demonstrated the use of the Hub copyright icon for online images. Displaying the icon promotes copyright awareness, and the user can click on it for more information on image use and links back to the creator. Creating the icon involves adding a Hub Key to the image metadata. Abbie Enock, CEO of software company Capture and a board member of the Copyright Hub, showed how image management software can incorporate this process seamlessly into the workflow. The cost to the user should be minimal, depending on the software they are using.

Publishers can display the icon on images licensed for their web site, allowing users to find the creator without the involvement of – and risk to – the publisher.

Meanwhile, suppliers are working hard to create tracking and enforcement systems. We heard from Imatag, Copytrack, PIXRAY and Stockfood who produce solutions that include tracking and watermarking, legal enforcement and follow up.

Design follows devices

Images are increasingly viewed on phones and tablets as well as computers. Karl Csoknyay from Keystone-SDA spoke about responsive design and the challenges of designing interfaces for all environments. He argued that it is better to work from simple to complex, starting with design for the smartphone interface, and offering the same (simple) feature set for all environments.

Smart search engines and smart photos

Use of images in search engines was one of the big topics of the day, with Google running its own workshop as well as appearing in the IPTC afternoon workshop along with the French search engine QWANT.

Image search engines ‘scrape’ images from web sites for use in their image searches and display them in preview sizes. Sharing is encouraged, and original links are soon lost as images pass from one web site to the next.

CEPIC has been in discussion with Google for some time, and some improvements have been made, with general copyright notices more prominently placed, but there is still a way to go. The IPTC conference and Google workshop were useful, with comments from the floor stressing the damage done to photo businesses by use of images in search engines.

Attendees asked if IPTC metadata could be picked up and displayed by search engines. We at IPTC know the technology is possible; so the issue is one of will. Google appears to be taking the issue seriously. By their own admission, it is now in their interest to do so.

Google uses imagery to direct users to other non-image results, searching through images rather than for images. Users searching for ‘best Indian restaurant’, for example, are more likely to be attracted to click through by sumptuous images than by dry text. Google wants to ‘drive high quality traffic to the web ecosystem’ and visual search plays an important part in that. Their aim is to operate in a ‘healthy image ecosystem’ which recognises the rights of creators. More dialogue is planned.

Search engines could drive the use of rights metadata

The fact that so few images on the web have embedded metadata (3% have copyright metadata according to a survey by Imatag) is sad but understandable. If search engines were to display the data, there is no doubt that creators and agents would press their software providers and customers to retain the data rather than stripping it, which again would encourage greater uptake. Professional photographers generally supply images with IPTC metadata; to strip or ignore copyright data of this kind is the greatest folly. Google, despite initial scepticism, has agreed to look at the possibilities offered by IPTC data, together with CEPIC and IPTC. That could represent a huge step forward for the industry.
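As a rough illustration of what "embedded metadata" means in practice, the sketch below scans a file's raw bytes for an XMP packet and checks for a rights property. The detection is deliberately naive (no real JPEG parsing); the property names `dc:rights` and `xmpRights:Marked` are standard XMP, but a survey like Imatag's would use a full metadata parser such as ExifTool.

```python
# Naive sketch: look for an embedded XMP packet in a file's bytes and check
# whether it declares a copyright. The point is that the notice travels
# inside the file itself - and is lost if the metadata is stripped.
import re

XMP_START = b"<x:xmpmeta"
XMP_END = b"</x:xmpmeta>"

def extract_xmp(data):
    """Return the embedded XMP packet as text, or None if absent."""
    start = data.find(XMP_START)
    if start == -1:
        return None
    end = data.find(XMP_END, start)
    if end == -1:
        return None
    return data[start:end + len(XMP_END)].decode("utf-8", errors="replace")

def has_copyright_notice(data):
    """True if the XMP packet mentions a rights property."""
    xmp = extract_xmp(data)
    if xmp is None:
        return False
    # dc:rights carries the notice; xmpRights:Marked flags a copyrighted work.
    return bool(re.search(r"dc:rights|xmpRights:Marked", xmp))
```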

As Isabel Doran pointed out, there is no one single solution which can stand on its own. For creators to benefit from their work, a network of affordable solutions needs to be built up; awareness of copyright needs support from governments and legal systems; social media platforms and search engines need to play their part in upholding rights.

Blueprints for the Smart Photo are out there; the Smart Photo will be easy to use and license, and will discourage freeloaders. Now’s the time to push for change.

By Jennifer Parrucci, of IPTC member The New York Times

The NewsCodes Working Group has been working tirelessly on a project to review the definitions of all terms in the Media Topics vocabulary.

The motivation behind this work is feedback received from members using the vocabulary that some definitions are unclear, confusing or not grammatically correct. Additionally, some labels have been found to be outdated or insensitive and have been changed.

Changing these definitions and labels is not meant to completely change the usage of the terms. Definition and label changes are meant to refine and clarify the usage.

While reviewing each definition the members of the working group have considered various factors, including whether the definition is clear, whether the definition uses the label to define itself (not very helpful) and whether there are typos or grammatical errors in the definition. Additionally, definitions were made more consistent and examples were added when possible.

Once these changes were made in English, the French, Spanish and German translations were also updated.

Currently, updates are available for three branches of Media Topics:

  • arts, culture and entertainment
  • weather
  • conflict, war and peace

Updates can be viewed in controlled vocabulary (CV) format at cv.iptc.org or in the tree view at show.newscodes.org.

The working group plans to continue working on the definition review and periodically release more updates as they become available.

As announced in late April, Brendan Quinn is succeeding Michael Steidl in the position of Managing Director of IPTC in June 2018.

This change is happening right now: Brendan has taken over the responsibilities and actions of the Managing Director, and Michael will be available to advise Brendan until the end of June.

From now on Brendan should be contacted as Managing Director. He uses the same email address that Michael used before: mdirector@iptc.org.

Note from Brendan

I’m really looking forward to working with all IPTC’s members and friends, and hope that I can make a real difference to the future of the news and media industry through my work with IPTC.

I aim to speak with as many of you as possible over the next weeks and months to find out what you as members like and don’t like about what IPTC is doing, but in the meantime, please feel free to email me via mdirector@iptc.org or find me on the members-only Slack group, and I would be happy to say hello and to hear your thoughts.

Michael and I sat down to hand over the role in early June so I now have a sense of the task before me, and I hope that I can follow up on his work but also bring a fresh energy to the organisation.

I would especially like to thank Michael for all his efforts over the last 15 years in making IPTC what it is today.

Best regards, and see you in Toronto in October if not before!

Brendan.

 

Note from Michael

It was a great experience to help Brendan dive into the IPTC universe with all its members and workstreams, plus many contacts in the media industry. I wish you, Brendan, all the best for a great future at this organisation.

Dear all who showed interest in IPTC, it was a pleasure getting in touch with you, and I hope I was able to respond in a way that helped you solve problems.

I was invited to continue as Lead of the Photo Metadata and Video Metadata Working Groups and I’m happy to take on responsibility for these workstreams. If you want to contact me from now on, please use my own email address: mwsteidl@newsit.biz

Best,

Michael

An updated version 2.27 of NewsML-G2 is available as a Developer Release

  •  XML Schemas and the corresponding documentation are updated

Packages of version 2.27 files can be downloaded:

All changes in version 2.27 can be found on this page: http://dev.iptc.org/G2-Approved-Changes

The NewsML-G2 Implementation Guidelines are now a web document at https://www.iptc.org/std/NewsML-G2/guidelines

 

Reminder of an important decision taken for version 2.25 that also applies to version 2.27: the Core Conformance Level will not be developed any further, as all recent Change Requests were in fact aimed at features of the Power Conformance Level; changes to the Core Level were only a side effect.

The Core Conformance Level specifications of version 2.24 will stay available and valid; find them at http://dev.iptc.org/G2-Standards#CCLspecs

The new Video Metadata Hub Recommendation 1.2 supports videos as delivered by professional video cameras by mapping their key properties to the common properties of the VMHub – see https://iptc.org/std/videometadatahub/mapping/1.2 – the new mappings are shown in columns at the right end of the table.

IPTC developed the Video Metadata Hub as common ground for metadata across existing video formats, each with its own specific metadata properties. The VMHub comprises a single set of video metadata properties which can be expressed in multiple technical standards, with full reference implementations in XMP, EBU Core and JSON. These properties can be used for describing the visible and audible content, rights data, administrative details and technical characteristics of a video.
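To illustrate the JSON expression, a VMHub-style record might look like the sketch below. The property names here are invented for the example and are not the normative VMHub identifiers, which are defined on the Recommendation pages.

```python
import json

# Illustrative only: a JSON rendering of a few Video Metadata Hub-style
# properties, mixing descriptive, rights and technical information as the
# VMHub does. Property names are made up for this sketch.
video_metadata = {
    "title": "Harbour at dawn",
    "description": "Fishing boats leaving the harbour at sunrise.",
    "dateCreated": "2018-05-31T06:10:00Z",
    "copyrightNotice": "(c) 2018 Example News Agency",
    "creator": [{"name": "A. Photographer"}],
    "frameRate": 25,            # technical characteristics sit alongside
    "videoBitrate": 8_000_000,  # descriptive and rights data
}

payload = json.dumps(video_metadata, indent=2)
```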

Recommendation 1.2 adds new properties for the camera device used for recording a video and for referencing an item in a video planning system. All properties are shown at https://iptc.org/std/videometadatahub/recommendation/1.2

“Chasing the SmartPhoto” is the theme of the IPTC Photo Metadata Conference 2018. In the day-long conference, session speakers will examine the image business in a changing environment as new technologies, new devices and Artificial Intelligence will be game changers in the coming years. The Conference will be held in Berlin (Germany) on 31 May 2018. More details and how to register can be found at www.phmdc.org.

In the afternoon session titled “SmartPhotos and Smart Search Engines”, speakers from Google and Qwant will show how their search engines process photos found on the web and how they present search results. This session will include a discussion with conference participants about how photo businesses may critically perceive presentation of copyright protected photos.

“Protecting Images Against Infringements” is the topic of another conference session. Publishing a photo on the web opens up the possibility of anyone downloading and republishing it, without permission or payment of a licence fee. Speakers from photo businesses and service providers will show how to implement copyright protection and how to track downloaded and reused photos using blockchain and other technologies.

The Photo Metadata Conference is organised by the International Press Telecommunications Council (IPTC, iptc.org), the body behind the industry standard for administrative, descriptive, and copyright information about images. It brings together photographers, photo editors, metadata managers and system vendors to discuss how technical means can help improve the use of images. The Conference is held in conjunction with the annual CEPIC Congress (www.cepic.org).

The International Press Telecommunications Council (IPTC) has named Brendan Quinn as its new managing director.

Brendan Quinn

Quinn joins the IPTC with two decades of experience in managing technology for media companies. In June 2018, he will succeed Michael Steidl, who will retire this summer after 15 years with the organisation. IPTC made the announcement today at their Spring Meeting in Athens.

Quinn brings to the IPTC a vast well of real-life experience in media industry technology, including leading the team that crafted the Associated Press’ APVideoHub.com video syndication platform, implementing content management systems at Fairfax Media in Australia, and handling an array of architecture and R&D roles over nine years with the BBC.

“I’m very much looking forward to my new role as MD for IPTC,” Quinn said. “I have huge respect for the organisation, in fact one of my first open source projects as a developer was writing a Perl module for NewsML v1 back in 2001 while I was a developer in Australia. I’m very proud to now be able to take the lead on defining the role of the IPTC in the challenging environment now faced by the media industry.”

Stuart Myles, Chairman of the Board of IPTC and Director of Information Management at the Associated Press, said he was “thrilled” to welcome Quinn to the organisation.

“He brings with him a wealth of news technology experience, with organisations from around the world and of all sizes. He has a unique combination of strategic insight into the challenges faced by the news industry and the technical know-how to help guide our work in technical standards and beyond.”

IPTC develops technical standards that address challenges in the news and photo industries, and other related fields. Recent IPTC initiatives are the Video Metadata Hub for mapping metadata across multiple existing standards; a major revision of RightsML for expressing machine readable licenses, now aligned with the new W3C standard ODRL; and a comprehensive update of SportsML for covering more efficiently a wide range of sports results and statistics. The Media Topics taxonomy for categorizing news now provides descriptions in four major languages.

Quinn says he looks forward to meeting IPTC members and learning as much as he can about the organization’s standards and outreach work.

“From iconic standards such as IPTC Photo Metadata and NewsML-G2 to emerging standards work like the Video Metadata Hub,” he said, “the IPTC aims to stay relevant in a changing media climate.”

About IPTC:

The IPTC, based in London, brings together the world’s leading news agencies, publishers, and industry vendors. It develops and promotes efficient technical standards to improve the management and exchange of information between content providers, intermediaries, and consumers. The standards enable easy, cost-effective, and rapid innovation and include the Photo Metadata and the Video Metadata Hub standards, the news exchange formats NewsML-G2, SportsML-G2 and NITF, rNews for marking up online news, the rights expression language RightsML, and NewsCodes taxonomies for categorizing news.

IPTC: www.iptc.org
Twitter: @IPTC
Brendan Quinn: @brendanquinn
Stuart Myles: @smyles 

Tagging tool at The New York Times
The New York Times uses a software tool for rules-based categorization to assign metadata to content. This is followed by human supervised review and tagging. Source: The New York Times

 

By Jennifer Parrucci
Senior Taxonomist at The New York Times
Lead of IPTC’s NewsCodes Working Group

The New York Times has a proud history of metadata. Every article published since The Times’s inception in 1851 contains descriptive metadata. The Times continues this tradition by incorporating metadata assignment into our publishing process today so that we can tag content in real-time and deliver key services to our readers and internal business clients.

I shared an overview of The Times’s tagging process at a recent conference held by the International Press Telecommunications Council in Barcelona. One of the purposes of IPTC’s face-to-face meetings is for members and prospective members to gain insight on how other member organizations categorize content, as well as handle new challenges as they relate to metadata in the news industry.

Why does The New York Times tag content today?

The Times doesn’t tag content just for tradition’s sake. Tags play an important role in today’s newsroom. Tags are used to create collections of content and send out alerts on specific topics. In addition, tags help boost relevance on our site search and send a signal to external search engines, as well as inform content recommendations for readers. Tags are also used for tracking newsroom coverage, archive discovery, advertising and syndication.

How does The New York Times tag content?

The Times employs rules-based categorization, rather than purely statistical tagging or hand tagging, to assign metadata to all published content, including articles, videos, slideshows and interactive features.

Rules-based classification involves the use of software that parses customized rules which look at text and suggest tags based on how well they match the conditions of those rules. These rules might take into account the frequency of words or phrases in an asset; the position of words or phrases (for example, whether a phrase appears in the headline or lead paragraph); a combination of words appearing in the same sentence; or a minimum number of names or phrases associated with a subject appearing in an asset.
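The Times's actual rule language is internal, but the kinds of conditions described above can be sketched roughly in Python. The rule structure, weights and threshold below are invented for illustration:

```python
# Invented rule structure: each rule lists word-frequency conditions and
# headline cues, and suggests its tag when enough conditions match.
import re

def word_count(text, word):
    """Case-insensitive whole-word occurrences of `word` in `text`."""
    return len(re.findall(r"\b%s\b" % re.escape(word), text, re.IGNORECASE))

def suggest_tags(headline, body, rules):
    suggestions = []
    for rule in rules:
        score = 0
        for word, min_count in rule.get("min_frequency", {}).items():
            if word_count(body, word) >= min_count:
                score += 1
        for word in rule.get("in_headline", []):
            if word_count(headline, word) > 0:
                score += 2  # a headline match is a stronger signal
        if score >= rule["threshold"]:
            suggestions.append((rule["tag"], score))
    return sorted(suggestions, key=lambda s: -s[1])

rules = [
    {"tag": "Baseball",
     "min_frequency": {"inning": 2, "pitcher": 1},
     "in_headline": ["baseball", "Yankees"],
     "threshold": 2},
]
```

An editor would then pick from the returned, score-ranked suggestions rather than the software assigning tags directly.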

Unlike many other publications that use rules-based classification, The Times adds a layer of human supervision to tagging. While the software suggests the relevant subject terms and entities, the metadata is not assigned to the article until someone in the newsroom selects and assigns tags from that list of suggestions to an asset.

Why does The Times use rules-based and human supervised tagging?

This method of tagging allows for more transparency in rule writing to see why a rule has or has not matched. Additionally it gives the ability to customize rules based on patterns specific to our publication. For example, The Times has a specific style for obituaries, whereby the first sentence usually states someone died, followed by a short sentence stating his or her age. This language pattern can be included in the rule to increase the likelihood of obituaries matching with the term “Deaths (Obituaries).” Rules-based classification also allows for the creation of tags without needing to train a system. This option allows taxonomists to create rules for low-frequency topics and breaking news, for which sufficient content to train the system is lacking.

These rules can then be updated and modified as a topic or story changes and develops. Additionally, giving the newsroom rule suggestions and a controlled vocabulary to choose from ensures a greater consistency in tagging, while the human supervision of the tagging ensures quality.

What does the tagging process at The New York Times look like?

Once an asset (an article, slideshow, video or interactive feature) is created in the content management system, the categorization software is called. This software runs the text against the rules for subjects and then through the rules for entities (proper nouns). Once this process is complete, editors are presented with suggestions for each term type within our schema: subjects, organizations, people, locations and titles of creative works. The subject suggestions also contain a relevancy score. The editor can then choose tags from these suggestions to be assigned to an article. If they do not see a tag that they know is in the vocabulary suggested to them, the editors have the option to search for that term within the vocabulary. If there are new entities in the news, the editors can request that they be added as new terms. Once the article is published/republished the tags chosen from the vocabulary are assigned to the article and the requested terms are sent to the Taxonomy Team.

The Taxonomy Team receives all of the tag requests from the newsroom in a daily report. Taxonomists review the suggestions and decide whether they should be added to the vocabulary, taking into account factors such as news value, frequency of occurrence, and uniqueness of the term. If the verdict is yes, then the taxonomist creates a new entry for the tag in our internal taxonomy management tool and disambiguates the entry using Boolean rules. For example, there cannot be two entries both named “Adams, John” for the composer and the former United States president of the same name. To solve this, disambiguation rules are added so that the software knows which one to suggest based on context.

John Adams,_IF:{(OR,"composer","Nixon in China","opera"…)}::Adams, John (1947- )
John Adams,_IF:{(OR,"federalist","Hamilton","David McCullough"…)}::Adams, John (1735-1826)
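The syntax above is internal to The Times's tooling, but the logic it encodes can be sketched as follows. The data structure and the simple substring matching are assumptions made for illustration only:

```python
# Invented structure: each ambiguous name maps to candidate tags, each with
# context cues; the first candidate whose cues appear in the article wins.
DISAMBIGUATION = {
    "John Adams": [
        {"context": ["composer", "Nixon in China", "opera"],
         "tag": "Adams, John (1947- )"},
        {"context": ["federalist", "Hamilton", "David McCullough"],
         "tag": "Adams, John (1735-1826)"},
    ],
}

def disambiguate(name, article_text):
    text = article_text.lower()
    for candidate in DISAMBIGUATION.get(name, []):
        # Simple substring matching; a real system would be more careful.
        if any(cue.lower() in text for cue in candidate["context"]):
            return candidate["tag"]
    return None  # no cues found: leave the choice to a human taxonomist
```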

Once all of these new terms are added into the system, the Taxonomy Team retags all assets with the new terms.

In addition to these term updates, taxonomists also review a selection of assets from the day for tagging quality. Taxonomists read the articles to identify whether the asset has all the necessary tags or has been over-tagged. The general rule is to tag the focus of the article and not everything mentioned. This method ensures that the tagging really gets to the heart of what the piece is about. When doing this review, taxonomists will notice subject terms that either fail to suggest or suggest improperly. The taxonomist uses this opportunity to tweak the rules for that subject so that the software suggests the tag properly next time.

After this review of the tagging process at The New York Times, the Taxonomy Team compiles a daily report back to the newsroom that includes shoutouts for good tagging examples, tips for future tagging and a list of all the new term updates for that day. This email keeps the newsroom and the Taxonomy Team in contact and acts as a continuous training tool for the newsroom.

All of these procedures come together to ensure that The Times has a high quality of metadata upon which to deliver highly relevant, targeted content to readers.

Read more about taxonomy and the IPTC standard Media Topics.

Follow IPTC on LinkedIn and Twitter: @IPTC

Contact IPTC

The comprehensive NewsML-G2 Guidelines are available now in an updated version and anybody can read them on the web: https://www.iptc.org/std/NewsML-G2/guidelines/

What has been modified:

How to use the new Guidelines:

We welcome feedback on the format and the content of the Guidelines. Use the Contact Us form.

IPTC's AGM 2017 in Barcelona

Photo credit: Jill Laurinaitis

By Stuart Myles
Chairman of the Board of Directors, IPTC

IPTC holds face-to-face meetings in several locations throughout the year, although most of the detailed work of the IPTC is now conducted via teleconferences and email discussions. Our Annual General Meeting for 2017 was held in Barcelona in November. As well as being the time for formal votes and elections, the AGM is a chance for the IPTC to look back over the last year and to look ahead at what is in store. What follows is a slightly edited version of my remarks at IPTC’s AGM 2017 in Barcelona.

IPTC has had a good year – the 52nd year for the organization!

We’ve updated our veteran standards, Photo Metadata – our most widely-used standard – and NewsML-G2 – our most comprehensive XML standard, marking its 10th year of development.

We’re continuing to work in partnership with other organizations, to maximize the reach and benefits of our work for the news and media industry. In coordination with CEPIC we organized the 10th annual Photo Metadata Conference, looking to the future of auto tagging and search, examining advanced AI techniques – and considering both their benefits and their drawbacks for publishers. With the W3C we have crafted the ODRL rights standard and are launching plans to create RightsML as the official profile of the ODRL standard, endorsed by both the IPTC and W3C.

We’ve also tackled problems that matter to the media industry with technology solutions which are founded on standards, but go beyond them. The Video Metadata Hub is a comprehensive solution for video metadata management that allows exchange of metadata over multiple existing standards. The EXTRA engine is a Google DNI sponsored project to create an open source rules based classification engine for news.

We’ve had some changes in the make-up of IPTC. Johan Lindgren of TT joined the Board. Bill Kasdorf has taken over as the PR Chair. And we were thrilled to add Adobe as a voting member of IPTC, after many years of working together on photo metadata standards. Of course, with more mixed emotions, we have also learnt that Michael Steidl, the IPTC Managing Director for 15 years, will retire next summer. As has been clear throughout this meeting and, indeed, every day between meetings in numerous emails and phone calls, Michael is the backbone of the work of the IPTC. Once again, I ask you to join me in acknowledging the amazing contributions and dedication that Michael has shown to the IPTC.

Later today, we will discuss in detail our plans to recruit a successor for the crucial role of the Managing Director. And this is not the only challenge that the IPTC faces. We describe ourselves as “the global standards body of the news media” and that “we provide the technical foundation for the news ecosystem”. As such, just as the wider news industry is facing a challenging business and technical environment, so is the IPTC.

During this meeting, we’ve talked about some of the technical challenges – including the continuing evolution of file formats and supporting technologies, whilst many of us are still working to adopt the technologies from 5 or 10 years ago. We’ve also talked about the erosion of trust in media organizations and whether a combination of editorial and technical solutions can help.

But I thought I would focus on a particular shift in the business and technical environment for news that may well have a bigger impact than all of those. That shift can be traced back to 2014 which, by coincidence, is when I became Chairman of the IPTC. Last week, Andre Staltz published an interesting and detailed article called “The Web Began Dying in 2014, Here’s How”. If you haven’t read it, I recommend it. The article makes a number of interesting points and backs them up with numerous charts and statistics. I will not attempt to summarize the whole thing, but a few key points are worth highlighting.

Staltz points out that, prior to 2014, Google and Facebook accounted for less than 50% of all of the traffic to news publisher websites. Now those two companies alone account for over 75% of referral traffic. Also, through various acquisitions, Google and Facebook properties now share the top ten websites with news publishers – in the USA 6 of the 10 most popular websites are media properties. In Brazil it is also 6 out of 10. In the UK it is 5 out of 10. The rest all belong to Facebook and Google.

Both Facebook and Google reorganized themselves in 2014, to better focus on their core strengths. In 2014, Facebook bought WhatsApp and terminated its search relationship with Bing, effectively relinquishing search to Google and doubling down on social. Also in 2014, Google bought DeepMind and shut down Orkut, its most successful social product. This, along with the reorganization into Alphabet, meant that Google relinquished social to Facebook, allowing it to focus on search and – even more – artificial intelligence. Thus, each company seems happy to dominate its own massive part of the web.

But … does that matter to media companies? Well, Facebook said if you want optimal performance on our website, you must adopt Instant Articles. Meanwhile, Google requires publishers to use its Accelerated Mobile Pages or “AMP” format for better performance on mobile devices. And, worldwide, Internet traffic is shifting from the desktop to mobile devices.

Then, if you add in Amazon, Apple and Microsoft, it is clear that another huge shift is going on. All of the Frightful Five are turning away from the Web as a source of growth and instead turning to building brand loyalty via high end devices. Following the successful strategy of Apple, they are all becoming hardware manufacturers with walled gardens. Already we have Siri, Cortana, Alexa and Google Home. But also think about the investments going on by these companies in AR and VR as ways to dominate social interactions, e-commerce and machine learning over the Internet.

So, just as news companies must confront these shifts in the global business and technology environment, so must the IPTC. During this meeting, we’ve talked about our initial efforts to grapple with metadata for AR, VR and 360 degree imagery. We’ve also discussed techniques which are relevant to news taxonomy and classification, including machine learning and artificial intelligence. At the same time, Facebook, Google and others are not totally in control, as they – along with Twitter – found themselves having to explain the spread of disinformation on their platforms and came under increased government scrutiny, particularly in the EU. So, all of us, whether we describe ourselves as news publishers or not, are dealing with a rapidly changing and turbulent information, technical and business environment.

What does this mean for IPTC? IPTC is a news technology standards organization. But it is also unique in that we are composed of news companies from around the world. We know from the membership survey that both of these factors – influence over technical solutions and access to technology peers from competitors, partners, diverse organizations large and small – are very important to current members. In order to prosper as an organization, IPTC needs to preserve these unique benefits to members, but also scale them up. This means that we need to find ways to open up the organization in ways that preserve the value of the IPTC and fit with the mission, but also in ways that are more flexible. We need to continue to move beyond saying that the only thing we work on is standards and instead use standards as a component of the technical solutions we develop, as we are doing with EXTRA and the Video Metadata Hub. We need to work with diverse groups focused on solving specific business and journalistic problems – such as trust in the media – and in helping news companies learn the best ways to work with emerging technologies, whether it is voice assistants, artificial intelligence or virtual reality.

I’m confident that – working together – we can continue to reshape the IPTC to better meet the needs of the membership and to move us further forward in support of solving the business and editorial needs of the news and media industry. I look forward to working with all of you on addressing the challenges in 2018 and beyond.

Stuart Myles is the Director of Information Management at Associated Press.