Nvidia’s newest AI is creating scarily realistic photos of fake celebrities

WHY THIS MATTERS IN BRIEF

Artificial Intelligence is getting more creative and better at producing realistic images and photos that fool even the most skeptical experts, and it will help usher in a new era of content creation.


One of the more unexpected outcomes of today’s Artificial Intelligence (AI) revolution is just how good AIs are getting at producing high resolution, photo-grade fake images and video, something I’ve talked about before. It’s also increasingly clear that these new systems will, among many other things, help fuel the rapid rise of a new kind of online “creator community” whose members will soon have the power to combine these technologies with others, such as Lyrebird’s AI that can mimic and overlay anyone’s voice onto a video, to create new forms of quirky, entertaining video, as well as the next generation of fake news clips.

In this case, earlier this week Nvidia, who recently and proudly announced they’ve created a Virtual Reality copy of their head office “down to the photons in the air,” published a paper showing how their latest AI can create photorealistic pictures of fake celebrities. While generating fake celebrities isn’t in itself new, the researchers say these are the most convincing and detailed pictures of their type ever made. Looking at the results, I’d have to agree with them, and they knock the socks off the high resolution renders of a CGI schoolgirl produced last year by an expert team in Japan.

The video below shows Nvidia’s process in full, starting with the database of celebrity images the system was trained on.

The researchers used what’s known as a Generative Adversarial Network, or GAN, to make the pictures. GANs are made up of two separate networks: one that generates imagery based on the data it’s fed, and a second “discriminator” network, the adversary, that tries to guess whether the images it’s shown are real or fake.
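
To make that architecture concrete, here’s a minimal sketch of the two networks in PyTorch. To be clear, this isn’t Nvidia’s code, and their system is far more sophisticated; the layer sizes, the 128-value noise vector, and the small 64 by 64 pixel images are all illustrative assumptions. But the basic shape of any GAN is the same: a generator that turns random noise into an image, and a discriminator that scores an image as real or fake.

```python
import torch.nn as nn

# Generator: maps a random noise vector to a (fake) image.
# Layer sizes below are illustrative, not Nvidia's.
class Generator(nn.Module):
    def __init__(self, noise_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Discriminator: scores an image as real (high) or fake (low).
class Discriminator(nn.Module):
    def __init__(self, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 1),  # raw logit; the sigmoid lives in the loss
        )

    def forward(self, img):
        return self.net(img)
```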

 

Real or no real?

 

By pitting these two networks against each other, the system can produce some startlingly good fakes, and not just faces either: everyday objects and landscapes can also be created. The generator network produces the images, the discriminator checks them, and the generator then improves its output accordingly, so in effect the system teaches itself.
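
Continuing the sketch above, one round of that feedback loop might look something like the following, again purely illustrative rather than Nvidia’s actual method, with `real_images` standing in for a batch of flattened training photos:

```python
import torch
import torch.nn.functional as F

# One optimiser per network, continuing the sketch above.
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(real_images):  # real_images: (batch, 64*64*3) tensor
    batch = real_images.size(0)
    z = torch.randn(batch, 128)  # noise for the generator
    real_lbl = torch.ones(batch, 1)
    fake_lbl = torch.zeros(batch, 1)

    # 1. The discriminator learns to tell real images from fakes.
    fakes = gen(z).detach()  # detach: don't update the generator here
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_images), real_lbl)
              + F.binary_cross_entropy_with_logits(disc(fakes), fake_lbl))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. The generator improves by trying to make the discriminator
    #    label its output as real.
    g_loss = F.binary_cross_entropy_with_logits(disc(gen(z)), real_lbl)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Repeated over millions of batches, that tug of war is the whole trick: as the discriminator gets harder to fool, the generator is forced to produce ever more convincing fakes.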


There are limitations to this method though, as you’d expect. For starters, the pictures the networks create are extremely small by the standards of modern cameras, just 1,024 by 1,024 pixels, and there are quite a few telltale signs that they’re fake. For example, in some cases the results look a lot like the celebrities the system was trained on, check out the Beyoncé lookalike early on, and there are slight glitches in some images, like ears that dribble away into red mush.

That said, bearing in mind that only a year ago these networks were nowhere close to being able to fool anyone into thinking their images were real, and now, in many cases, they can, it’s a huge step forward. And those glitches? Well, they’ll be gone soon, and then how will you be able to tell the real celeb from the fake one?
