
Nvidia’s GauGAN turns your crappy sketches into masterpieces


Increasingly AI is being used as a tool to help democratise creativity and unlock people’s creative potential.


It’s long been known that I’m a crappy artist – despite my best efforts at school, it was evident early on that I was just never cut out to be the next Picasso or Rembrandt. The ability to transfer onto paper what one sees, either in one’s mind or with one’s eyes, is, after all, a skill that many people would love to have but, for one reason or another, just don’t.




Anyway, now, thanks to a breakthrough from those great folks at Nvidia, I no longer need to just dream about being a great artist. The company has announced GauGAN, so I can give up my aspirations of being a real futurist and become the artist I know I was always destined to be. And yes, the name is an intentional reference to the post-impressionist painter Paul Gauguin. Obs.


Goodbye crappy art!

GauGAN is an Artificial Intelligence (AI) Deep Learning model that lets anyone turn the most basic of sketches into photorealistic masterpieces, and it’s awesome.

So how does GauGAN take shapeless blobs of colour and turn them into mountains and shimmering Alpine landscapes, you might ask? By using a form of AI known as a Generative Adversarial Network (GAN) – the same type of AI that today is being used to create everything from films and fake celebrities to fake news, as well as the world’s first generations of “Creative Machines,” one of which just sold a painting for over $400,000, while others are helping design Amazon’s new clothing lines, invent products, and help people create their own videos just by writing what they’d like to see. And much more. In short, GANs are helping democratise creativity and innovation, and that makes them one of my top technologies to watch.




The best description of how a GAN works is this: “One neural network, called the generator, generates new data instances, while the other, the discriminator, evaluates them for authenticity; i.e. the discriminator decides whether each instance of data that it reviews belongs to the actual training dataset or not.”
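To make that tug-of-war concrete, here’s a deliberately tiny sketch of the adversarial loop on 1-D numbers rather than images – the generator is just a line, the discriminator a logistic classifier, and the update rules are hand-derived. This is purely illustrative of the generator-versus-discriminator idea, not anything resembling GauGAN’s actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "networks": generator G(z) = wg*z + bg, discriminator D(x) = sigmoid(wd*x + bd).
wg, bg = 1.0, 0.0          # generator starts out producing samples centred on 0
wd, bd = 0.0, 0.0
lr = 0.05
REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real data" the generator must learn to mimic

for step in range(3000):
    real = rng.normal(REAL_MEAN, REAL_STD)
    z = rng.normal()
    fake = wg * z + bg

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    wd += lr * ((1.0 - d_real) * real - d_fake * fake)
    bd += lr * ((1.0 - d_real) - d_fake)

    # Generator update: push D(fake) towards 1, i.e. fool the discriminator.
    d_fake = sigmoid(wd * fake + bd)
    grad_x = (1.0 - d_fake) * wd   # d log D(x) / dx evaluated at x = fake
    wg += lr * grad_x * z
    bg += lr * grad_x

samples = wg * rng.normal(size=1000) + bg
print(f"generated mean: {samples.mean():.2f} (real mean: {REAL_MEAN})")
```

The generator never sees the real data directly – it only gets the discriminator’s gradient telling it which direction looks “more real,” which is exactly the dynamic the quote above describes.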

In the case of GauGAN, the AI learned to create fantastic images by using a discriminator network to compare them to real images. As such, GauGAN “knows” what a field or forest should look like in whatever shape you provide. You make the sketch, tell GauGAN where everything should go, and the program fills in all the details for you. Congratulations, you’re an artist – or should I say artiste!?

“It’s like a colouring book picture that describes where a tree is, where the sun is, where the sky is,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. “And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colors, based on what it has learned about real images.”

Take a look at the video to see more, and you can try the online demo for yourself here.
