World leaders can rest easy for now after researchers find new ways to reveal DeepFakes

WHY THIS MATTERS IN BRIEF

We are now locked in a war as nefarious actors find new ways to weaponise deepfakes and fake news, while defenders try to figure out how to detect and flag them.


An Artificial Intelligence (AI) generated deepfake video could show Donald Trump saying or doing something extremely outrageous and inflammatory – just imagine that! Some people might find it believable, and in a worst-case scenario it might sway an election, trigger violence in the streets, or spark an international armed conflict.

Now though, a new digital forensics technique promises to protect President Trump, other world leaders, and celebrities against such deepfake chicanery – for now at least. The new method, devised by researchers, uses machine learning to analyze a specific individual’s style of speech and movement, building what the researchers call a “soft-biometric signature.”

The researchers, from UC Berkeley and the University of Southern California, used an existing tool to extract the face and head movements of individuals. They also created their own deepfakes for Donald Trump, Barack Obama, Bernie Sanders, Elizabeth Warren, and Hillary Clinton using Generative Adversarial Networks (GANs).

In their experiments the team then used machine learning to distinguish the head and face movements that characterize the real person. These subtle signals – the way Bernie Sanders nods while saying a particular word, perhaps, or the way Trump smirks after a comeback – are not currently modelled by deepfake algorithms and so their absence is a tell.
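
The idea can be sketched in a few lines – a minimal illustration, not the researchers' actual (unreleased) code. Assume each video clip is summarized as a vector of soft-biometric features, say correlations among facial movements and head-pose signals extracted with a face tracker; the feature values below are synthetic stand-ins. A one-class model is then trained only on authentic clips of the person, so deepfakes that lack the person's mannerisms show up as outliers:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-ins for per-clip "soft-biometric" feature vectors: e.g. pairwise
# correlations of facial movement and head-pose signals over a clip.
# Real clips of one speaker cluster tightly; fakes drift away from them.
real_clips = rng.normal(loc=0.6, scale=0.05, size=(200, 20))  # authentic footage
fake_clips = rng.normal(loc=0.3, scale=0.15, size=(50, 20))   # deepfake footage

# Train only on the real person's mannerisms (one-class learning):
# no deepfakes are needed at training time.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(real_clips)

# +1 = consistent with the person's signature, -1 = outlier (likely fake)
real_preds = detector.predict(real_clips)
fake_preds = detector.predict(fake_clips)
print("real clips flagged genuine:", (real_preds == 1).mean())
print("fake clips flagged outliers:", (fake_preds == -1).mean())
```

The one-class setup mirrors the key property described above: the detector models what the genuine person looks like, rather than what any particular deepfake generator gets wrong.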

See the new technique in action

In experiments the technique was at least 92% accurate in spotting several variations of deepfakes, including face swaps and ones in which an impersonator is using a digital puppet. It was also able to deal with artifacts in the files that come from recompressing a video, which can confuse other detection techniques. The researchers now plan to improve the technique by accounting for characteristics of a person’s speech as well. The research, which was presented at a computer vision conference in California this week, was funded by Google and DARPA, a research wing of the Pentagon, which is funding a parallel program to devise better detection techniques.

The problem facing world leaders, and everyone else for that matter, is that it has become ridiculously simple to generate video forgeries using AI, and false news reports, bogus social-media accounts, and doctored videos have already undermined political news coverage and discourse. Meanwhile politicians are especially concerned that fake media could be used to sow misinformation during the upcoming 2020 presidential election.

Some tools for catching deepfake videos have been produced already, but forgers have quickly adapted as you’d expect. For example, for a while it was possible to spot a deepfake by tracking the speaker’s eye movements, which tended to be unnatural in deepfakes. Shortly after this method was identified, however, deepfake algorithms were tweaked to include better blinking. And so the arms race continues.
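
The blink-based check can be sketched as follows – a simplified illustration, assuming six eye landmarks per frame from a face tracker (the landmark extraction itself, e.g. with a library like dlib, is omitted here). The eye aspect ratio (EAR) collapses when the eye closes, so counting sustained dips gives a blink count, and an implausibly low count over a clip was the tell:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks: (|p2-p6| + |p3-p5|) / (2|p1-p4|).
    An open eye gives roughly 0.3; a closed eye drops well below 0.2."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic EAR traces over 300 frames (about 10 s at 30 fps):
# a real speaker blinks a few times; an early deepfake barely blinks.
real_trace = np.full(300, 0.3)
for start in (40, 140, 250):          # three blinks, 3 frames each
    real_trace[start:start + 3] = 0.1
fake_trace = np.full(300, 0.3)        # no blinks at all

print("real blinks:", count_blinks(real_trace))   # 3
print("fake blinks:", count_blinks(fake_trace))   # 0
```

As the article notes, this particular tell no longer works once generators are trained to blink naturally, which is exactly why behaviour-level signatures like the one above it are harder to fool.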

“We are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California who helped develop the new technique. It’s for this reason that his team hasn’t yet released the code behind the method.

Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually.

“The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says.

Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves.

“Celebrities and political figures have been the main targets so far,” he says. “But I would not be surprised if in a year or two, digital humans that look indistinguishable from real ones can be synthesized by any end user.”

And he’s right on the latter point – let’s face it, at some point in the near future this tech is going to be democratised, like most tech, and shoved into an app on a smartphone – at which point we’ll be deluged by deepfakes and Donald Trump will be everywhere. Just imagine that…
