Helge O. Svela, CEO of Media Cluster Norway, author of this guest post.

The following is a guest post from Helge O. Svela, CEO of Media Cluster Norway. Media Cluster Norway joined the IPTC as an Associate Member in 2024, and Helge leads the Provenance Best Practices and Implementation Working Group, where news publishers work together to discuss their progress in implementing C2PA and the IPTC Verified News Publisher programme within their organisations.

In this article, Helge introduces Project Reynir, an initiative to bring secure media signing technology to the Norwegian media industry.

The journalistic institution must rethink how it develops and applies technology if society is to stand a chance against the deluge of fake images and video from generative AI. Never before in human history has it been easier to produce realistic, but fake, images and video, and spread them around the world. The rapid technological development of generative artificial intelligence has turbocharged the engines of disinformation, and caught both society and journalism off-guard. Never have we been more vulnerable.

Disinformation is destabilising our democracies and spreading false information, with potentially severe consequences both for democratic processes and for the public in the face of natural disasters and other crises. The first round of the Romanian elections in 2024 was annulled due to what was dubbed an “algorithmic invasion” of social media disinformation. In the aftermath of the earthquake in Myanmar in March 2025, AI-generated videos of the devastating destruction, shared by so-called “engagement farmers” likely acting with financial motives, got millions of views on social media.

Generative AI has given humanity the ability to create realistic videos and images simply by typing a few words into a website. However, these tools have also quickly become part of the arsenal of the enemies of democracy. As a result, disinformation is becoming more prevalent, appearing more professional and costing almost nothing to produce. Generative AI is an industrial revolution for the troll factories in Russia and for all others who seek to manipulate our perception of the world and sow doubt about what is true.

This is not a media problem. It is a democratic problem, and a dangerous one at that. Disinformation created by troll factories and generative artificial intelligence, and spread by bots, poses an immediate threat to our democracies. We might end up doubting absolutely everything. When anyone can claim that anything is generated and fake, the liars come out on top. This could destroy the foundation of our democracies: trust in each other and in our institutions. Current news is one thing; history is another. Imagine a dictator using fake historical footage of a hunger catastrophe to justify the ethnic cleansing of a minority, or an internet flooded with claims and visual “proof” of what happened in the past, all of which looks authentic. As a result of generative AI, this is no longer just a dystopian science fiction scenario. It is a real possibility. Never before have we needed editorial media more. However, the signal strength of editorial media risks being drowned out by an ever-growing cacophony of junk content and disinformation.

Project Reynir is our response to the threat generative AI poses. Because generative AI makes it so easy to fake both content and sender, editorial media are under threat on two fronts. In Project Reynir, we aim to address both with technical solutions.

The goal is to create something that makes it easier for ordinary people to distinguish between what is fake and what is real. By using cryptographically secured images and video, based on the open C2PA specification, both newsrooms and regular media users can be confident that the images they see have not been tampered with on their journey from the photographer’s lens to the mobile screen. Moreover, using the same technology, authenticity markers can be added to images and videos from news publishers when they post stories on social media and other third-party platforms, guaranteeing that content that appears to come from the BBC or AFP actually is from those news organisations and not from someone impersonating them. If we succeed, we will be a significant step closer to solving the problem of artificially created noise in our present moment. Project Reynir unites newsrooms, media technology companies and academic researchers in the fight against disinformation. Our goal is 80 percent adoption in the Norwegian news ecosystem, and to serve as a beacon of best practices for the rest of the world of news.
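To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how a provenance claim could be bound to an image and later verified by a reader. It is not the actual C2PA implementation, which uses COSE signatures, certificate chains and manifests embedded in the media file itself; the file name, publisher string and build_manifest() helper below are invented for illustration.

```python
# Conceptual sketch only: binds a publisher claim to a hash of the exact image
# bytes and signs it, so any change to the image invalidates the claim. The
# real C2PA specification works differently (COSE, X.509, embedded manifests).

import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def build_manifest(image_bytes: bytes, publisher: str) -> bytes:
    """Hypothetical helper: a claim that `publisher` vouches for these exact bytes."""
    claim = {
        "claim_generator": publisher,
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    return json.dumps(claim, sort_keys=True).encode()


def verify(image_bytes: bytes, manifest: bytes, signature: bytes, public_key) -> bool:
    """Check the signature, then re-hash the received image to detect tampering."""
    try:
        public_key.verify(signature, manifest)
    except InvalidSignature:
        return False
    claim = json.loads(manifest)
    return claim["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest()


# The newsroom signs the manifest with its private key...
signing_key = Ed25519PrivateKey.generate()
image = open("photo.jpg", "rb").read()          # hypothetical file
manifest = build_manifest(image, publisher="Example Newsroom")
signature = signing_key.sign(manifest)

# ...and anyone holding the published public key can verify it.
print(verify(image, manifest, signature, signing_key.public_key()))
```

In the real ecosystem the public key would be anchored in a certificate issued to the news organisation, so that verification tells readers not only that the content is untampered but also who signed it, which is where verified publisher initiatives come in.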

We believe that time is critical, and that all good forces must now unite. Technology has developed rapidly in the last few years, and its adoption has sometimes been irresponsible. If our democracies are to stand firm against the disinformation tsunami, quality journalism must be empowered. Only then can we enable citizens to make informed choices, free of manipulation and interference, in an environment where facts can be easily distinguished from lies. We call on the democratic governments of the world to invest in innovation in the news media space. The time for responsible tech innovation, made with resilient democracies in mind, is now.

This article was originally published in the report Seeking Truth, Ensuring Quality: Journalistic Weapons in the Age of Disinformation, published by the University of Bergen in collaboration with Media Cluster Norway, as part of the Journalistic Weapons conference organised in Brussels on 28 April 2025. The full report, including articles from Faktisk, the European Federation of Journalists, the London School of Economics, the Center for Investigative Journalism Norway and others, is available at https://www.uib.no/sites/w3.uib.no/files/attachments/publication_seeking_truth_ensuring_quality.pdf.