Securing digital truths
How can manipulated videos and fake news be unmasked? Together with HAW Hamburg, Chainstep, founded in 2017, is developing technologies to detect manipulation in digital content and ensure its authenticity.
Mr. Graf, you are currently working with HAW Hamburg on a technology for detecting manipulated videos. How did the collaboration come about?
Since July 2024, we have been working on a research project that uses invisible watermarks to ensure the authenticity of digital media (images, audio and video). The approach is particularly relevant in the fight against fake news and election manipulation, and for protecting digital assets against unauthorized use.
The project is being funded as part of the "Data Innovation Sprint" of the Federal Ministry of Education and Research (BMBF), which has provided Chainstep and HAW Hamburg with 300,000 euros to develop this technology.
What is the aim of the project?
The aim is to guarantee the authenticity of digital media and ensure that content is not distributed or reused without permission, for example for the training of AI models. This is particularly relevant in the fight against fake news and election manipulation.
Can you explain the technology in more detail? How does an invisible watermark work?
The watermark is mathematically embedded in the structure of the file without changing the image, sound or video quality. It remains robust against common changes such as compression or color adjustments. As soon as an image or video is manipulated, whether by cropping or adding content, the watermark is damaged or destroyed, making the change detectable.
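The underlying principle, embedding a hidden pattern and later checking whether it survived, can be illustrated with a toy example. The following sketch hides a pseudo-random bit in each pixel's least significant bit; this is purely illustrative and not Chainstep's actual method, since a real watermark must also survive compression, which a simple LSB scheme does not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, seed: int = 42) -> np.ndarray:
    """Hide a pseudo-random bit pattern in each pixel's least significant bit."""
    rng = np.random.default_rng(seed)
    mark = rng.integers(0, 2, size=pixels.shape, dtype=pixels.dtype)
    return (pixels & ~np.uint8(1)) | mark  # clear the LSB, write the watermark bit

def watermark_intact(pixels: np.ndarray, seed: int = 42, threshold: float = 0.99) -> bool:
    """Check what fraction of watermark bits survived; edits destroy them."""
    rng = np.random.default_rng(seed)
    mark = rng.integers(0, 2, size=pixels.shape, dtype=pixels.dtype)
    match = np.mean((pixels & 1) == mark)
    return match >= threshold

# A random grayscale "image" stands in for real media.
image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image)
print(watermark_intact(marked))   # True: an untouched copy verifies

tampered = marked.copy()
tampered[:32, :] = 255            # an "overlay" pasted over the top half
print(watermark_intact(tampered)) # False: the watermark is destroyed
```

In this toy version any pixel edit scrambles roughly half of the affected watermark bits, so the verification fraction drops well below the threshold.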
What does this look like in a specific example?
For example, if a music video is edited by adding an overlay or cutting the video, our technology indicates that it has been manipulated. Platforms such as Instagram or YouTube could also use watermarks to verify the authenticity of uploaded content.
[Infographic: the share of US news consumers who unknowingly shared fake news or misinformation on social media in December 2020; the share of the world's population who say news organizations regularly spread false reports; and the estimated number of deepfakes spread on social media in 2023. Source: https://redline.digital/fake-news-statistics/]
If I brighten up a video, use a yellow filter or adjust the size - is that already manipulation? Where do you draw the line?
Such changes, i.e. adjusting the brightness, applying a filter or changing the size, count as modifications: adjustments that are generally not intended to deceive. Manipulations, on the other hand, go further. They deliberately distort the content of a video or image to create a certain effect or statement, for example by cropping, adding overlays or altering metadata.
How can your technology prevent fake news or election manipulation in the future?
There are two approaches. The first requires legal regulations that oblige platforms to check watermarks. This would ensure that only authentic content is distributed. The second approach would be voluntary: celebrities, politicians or CEOs could, for example, watermark their pictures and videos so that journalists can check their authenticity. Ultimately, however, legal backing is crucial if the technology is to be widely adopted.
How can the technology be used in elections?
In the US, deepfake videos showing political figures such as Donald Trump circulated during the last presidential election. With watermarking technology, platforms could be legally obliged to only publish content with intact watermarks. This would make manipulated content such as deepfakes immediately recognizable and could be flagged or removed. In addition, leading politicians could watermark their speeches or election campaign videos to guarantee their authenticity and strengthen trust in digital content.
How can this be implemented?
For the technology to spread effectively, cameras and smartphones would have to be equipped with watermarking technology as standard. This requires regulatory requirements, laws or business models that create incentives for companies.
How do Chainstep and HAW Hamburg divide up the project?
HAW concentrates on the mathematical principles and algorithms, while Chainstep translates this research into prototypes and tests their practicability in real applications. For example, we are developing our own image models to simulate manipulations and test the robustness of the watermarks.
What will 2025 bring?
We still have one year of funding ahead of us, until the end of 2025, during which we want to further develop and optimize the technology. At the same time, we are looking for partners from the media and music industries who are interested in testing and applying the technology in their own value chains.
Marie-Louise Schlutius works as a freelance communications consultant and journalist. She previously worked for the ZEIT publishing group and gained professional experience at the Progressive Center in Berlin, ZDF in New York and the Goethe-Institut in Paris.