WHY THIS MATTERS IN BRIEF
Some deepfakes are made for fun; others are meant to spread lies and disinformation and to weaponise society against itself, so new tools are needed to detect them.
Deepfakes, which can make people appear to say things in video that they never did, or perhaps never would, have so far been used to make ads, spread misinformation, discredit people, and even try to change the course of a war as a modern-day form of PsyOps.
While there have been several advances in the development of new tools that detect deepfakes, tech firms including Adobe, Arm, the BBC, Intel, Microsoft, Twitter, Sony, and Nikon have now formed an alliance to create an open standard to fight them.
The Future of Deepfakes and Synthetic Media, by keynote speaker Matthew Griffin
About a month ago, at the height of the conflict between Russia and Ukraine, a heavily manipulated video depicting Ukrainian President Volodymyr Zelenskyy circulated on social media and was even uploaded to a Ukrainian news website by hackers before being debunked and removed. The video is one of a growing number of deepfakes proliferating online at a rapid clip while a handful of technology firms try to fight the trend using blockchain technology.
Deepfakes are nothing new; deceptive technology has been around for years, and a number of free deepfake apps are just a Google search away. But especially in this post-pandemic era, we are at a collective inflection point. Even Ukraine’s military intelligence agency foresaw such incidents when it released a video last month explaining how state-sponsored deepfakes could be used to sow panic and confusion.
Aware of the potentially grave consequences, an alliance spanning software, chip, camera, and social media giants aims to create standards that ensure the authenticity of images and videos shared online. Known as the Coalition for Content Provenance and Authenticity (C2PA), the group’s ultimate aim is to fight deepfakes using blockchain technology, with Japanese camera makers Sony and Nikon joining to help develop an open standard, intended to work with any software, that shows evidence of tampering, as per Nikkei.
Adobe’s content authenticity initiative’s senior director Andy Parsons even told Nikkei that we’ll “see many of these [features] emerging in the market this year. And I think in the next two years, we will see many sorts of end-to-end [deepfake detection] ecosystems.”
C2PA unifies the efforts of the Adobe-led Content Authenticity Initiative (CAI), which focuses on systems that provide context and history for digital media, and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem. From here, the coalition also plans to reach out to more social media platforms, such as YouTube, to bring more of them on board with the standard.
In a statement from January this year, C2PA said the coalition empowers content creators and editors worldwide to create tamper-evident media by enabling them to selectively disclose information about who created or changed digital content and how it was altered.
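To make the idea of tamper-evident media concrete, here is a minimal sketch of how a provenance manifest can bind a media file to claims about who created it and how it was edited. This is a simplified illustration, not the real C2PA format (actual C2PA manifests use JUMBF containers and X.509 digital signatures); the field names and the HMAC stand-in for a signature are assumptions for the example.

```python
import hashlib
import hmac
import json

def make_manifest(media_bytes: bytes, creator: str, edits: list, key: bytes) -> dict:
    # Hypothetical, simplified provenance record: a hash of the media
    # plus claims about who created it and how it was altered.
    claim = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,   # who created the content
        "edits": edits,       # how it was altered
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # HMAC stands in for a real digital signature here.
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    # Recompute the hash and signature; any change to the media bytes
    # or to the claims invalidates the manifest -- tamper-evident.
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["media_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"demo-signing-key"
video = b"...raw video bytes..."
manifest = make_manifest(video, "newsroom@example.org", ["crop", "color-grade"], key)
print(verify_manifest(video, manifest, key))         # True: untouched media
print(verify_manifest(video + b"x", manifest, key))  # False: media was tampered with
```

The key design point, which carries over to the real standard, is that verification fails if either the media or the accompanying claims change, so edits must be disclosed in a new, re-signed manifest rather than made silently.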
“The C2PA’s work is the result of industry-wide collaborations focused on digital media transparency that will accelerate progress toward global adoption of content provenance,” it said.
Broad adoption across all of these platforms is key to the success of digital provenance, Parsons reckons, “so that users can be assured that when media is uploaded with content authenticity, that it is maintained throughout the entire chain of sharing [and] publishing creation, back and forth,” he added.
“We’ve only been at this for a couple of years so it’s relatively early in the life cycle. And we have a long way further to go to make sure that all platforms can adopt this,” Parsons concluded.