
US military and DARPA team up to develop tech to uncover fake news

WHY THIS MATTERS IN BRIEF

The advent of powerful AI tools, adaptable neural networks and the democratisation of high-definition rendering have taken fake news to a new level, and this program aims to unmask the fakes.

 

We’ve all seen the video in which Artificial Intelligence (AI) powered algorithms make it seem like Barack Obama is giving a speech, synthesising his voice and facial movements into a believable and, more importantly, credible video clip. Depending on which side of the fence you sit on, it’s amazing and thought provoking all at the same time. But if you thought you were the only one who felt that way, you weren’t; as it turns out it’s also provoked some similar thoughts within the ranks of the US Department of Defense’s (DOD) bleeding edge research organisation, DARPA.

 


 

Over the course of the summer DARPA announced that it will fund a contest in which participants compete to create the most believable fake AI generated photos, videos, and audio recordings, collectively referred to as “Deepfakes,” like the examples in the video below. The competition is also designed to go one step further and develop new, advanced tools to detect these Deepfakes, which are becoming increasingly cheap and simple to make, and more sophisticated, as people, from regular researchers through to cyber criminals, get better at creating AI algorithms designed to fool us.

 

We’ve already moved way beyond this crude technology…

 

In particular, DARPA is concerned by a relatively new class of AIs called Generative Adversarial Networks (GANs), sophisticated algorithms that pit two neural networks against each other, hence the word adversarial, until one of them homes in on the ability to create something indistinguishable from the real thing. In this case, that could be a world leader being made to say something in an AI generated fake news video, versus something they actually said in a speech.
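To make the adversarial idea concrete, here’s a minimal sketch of that two-network tug-of-war, assuming PyTorch; the tiny networks and the random stand-in for “real” data are illustrative placeholders, not anyone’s actual deepfake model.

```python
# Minimal GAN sketch: a generator forges samples from noise while a
# discriminator learns to label samples real (1) or fake (0). The sizes
# and data below are illustrative assumptions, not a real deepfake model.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: random noise in, synthetic "data" out.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: a sample in, estimated probability that it is real out.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for real training data
    fake = G(torch.randn(32, latent_dim))  # the generator's forgeries

    # Discriminator update: reward it for telling real from fake.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: reward it for making the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks improve in lockstep the forgeries get harder to spot, which is exactly why DARPA wants detection tools that don’t rely on the naked eye.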

That said, one of the ways to detect fakes might come from academia, specifically MIT, where researchers a few years ago found a way to use AI to extract someone’s heart rate from video, seen below. While using this technique might at first sound odd, Deepfakes don’t have a heartbeat, so this “old” and odd technique could, in the near term, help to quickly weed fakes out.

 

One way to root out the fakes, from an unexpected source
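For a flavour of how that heartbeat check could work, below is a minimal sketch of the underlying idea, often called remote photoplethysmography, assuming OpenCV, NumPy, and SciPy; the centre-of-frame “face” region, the frame rate, and the 0.7 to 4 Hz pulse band are illustrative assumptions, not MIT’s actual method.

```python
# Sketch: estimate a pulse from subtle skin-colour changes in video.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(video_path, fps=30.0):
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Crude stand-in for face detection: sample the centre of the frame.
        roi = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        # Blood flow subtly modulates skin colour; the green channel
        # carries the strongest pulse signal.
        means.append(roi[:, :, 1].mean())
    cap.release()

    signal = np.asarray(means) - np.mean(means)
    # Band-pass to the physiologically plausible range (roughly 42-240 bpm).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)

    # The dominant frequency in that band is the heart-rate estimate.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    return freqs[np.argmax(spectrum)] * 60.0  # beats per minute
```

A synthesised face has no blood pumping beneath it, so a flat or incoherent pulse spectrum would be one rough red flag that the footage is fake.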

 

It’s easy to see why the DOD is concerned. Right now the president of the US boasts about the nation’s nuclear arsenal over social media while the US and North Korea inch towards talks of disarmament, and what would definitely not help anyone right now would be a believable, fake video of President Trump or Supreme Leader Kim Jong Un saying they’re planning to launch missiles going viral. But it’s not just internet pranksters or malicious enemies of the state who are making these videos.

 


 

A quick scan through the libraries of Facebook and Google’s published AI research shows that both companies have invested in developing algorithms that can process, analyze, and alter photos and videos of people, so if DARPA wants to nip this potential threat in the bud, maybe it should look into what the tech giants are doing.

Even though some of these research projects are relatively benign, they could still be used to smooth out the glitches of an altered or fake video, like another of Google’s AI projects that’s designed to reduce the noise in videos and make them look more realistic. Some projects, though, are, well, creepier, like Google’s AI algorithm that creates a neutral, front facing photo of a person by analyzing other pictures of them.

The problem is that AI researchers often take a “Can we?” rather than a “Should we?” approach to making the coolest stuff possible. This is particularly relevant to a Facebook research project that found a way to animate the profile photos of its users. The researchers behind the project said that they did not consider any ethical issues or potential misuse of their work while they were building it; they just wanted to create as sophisticated a product as possible.

The problem for DARPA is that fixing this requires a change in attitude towards how technology is developed, and inevitably, as the agency finds ways to detect and combat the fakes, its opponents, whether they’re sovereign states or bedroom criminals, will, just like in any war, continue to find ways to outdo it. Arguably it’s a race they won’t win.
