Nvidia unveiled a new AI engine that renders virtual worlds in real time


Arguably the most complex part of creating VR content is coding and building the environments. Nvidia just brought those barriers crashing down.


Nvidia has announced a new Artificial Intelligence (AI) deep learning model that “aims to catapult the graphics industry into the AI Age,” and the result is the first ever interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high definition virtual environments, which can be used to create Virtual Reality (VR) games and simulations, in real time. That’s big because it takes the effort and cost out of having to design and build those environments from scratch, which has all sorts of advantages.



In order to work their magic the researchers used what they call a conditional generative neural network as a starting point, then trained it to render new 3D environments. The breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to build them.
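The idea described above can be sketched in a few lines of code. The toy below is purely illustrative and is not Nvidia's actual model: it only shows the data flow of a conditional, frame-by-frame generator, where each output frame is produced from a semantic "sketch" of the scene (derived from real-world video) plus the previously generated frame, which is what keeps the rendered video coherent over time. All function names and the blending rule are assumptions for illustration.

```python
# Toy sketch of conditional, sequential video generation (illustrative
# only -- a real system replaces generate_frame with a trained network).

def generate_frame(semantic_map, prev_frame):
    """Stand-in for a trained conditional generative network.

    A real model maps a semantic map of the scene (labels such as
    'road', 'car', 'building') to a photorealistic image; here we just
    blend the label codes with the previous frame to show how each
    output is conditioned on both inputs.
    """
    return [0.7 * s + 0.3 * p for s, p in zip(semantic_map, prev_frame)]

def render_video(semantic_maps, initial_frame):
    """Roll the generator forward over a sequence of semantic maps."""
    frames = []
    prev = initial_frame
    for sem in semantic_maps:
        frame = generate_frame(sem, prev)
        frames.append(frame)
        prev = frame  # condition the next frame on this one
    return frames

# Example: three 4-"pixel" semantic maps drive three rendered frames.
maps = [[1, 2, 2, 3], [1, 2, 3, 3], [2, 2, 3, 3]]
video = render_video(maps, initial_frame=[0, 0, 0, 0])
print(len(video))  # one rendered frame per semantic map
```

Because the previous output is fed back in, the sequence behaves like video rather than a series of unrelated stills, which is the property that makes this usable for interactive worlds.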


Watch the tech in action


“NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network,” said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. “Neural networks – specifically generative models like these – are going to change the way graphics are created.”

“One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications, is that creating the content is time-consuming and expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world,” Catanzaro said.



“The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents,” the researchers wrote in their paper. “Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics.”

Furthermore, and as I discuss during my keynotes, when this technology is combined with new Brain Machine Interface technologies that can read people’s brainwave responses in real time, it could let those environments change and respond on the fly. That would let educators and corporate trainers easily create adaptable virtual worlds that help people learn new things much faster than they could before. And in a world where we often talk up the rise of automation and its ability to dead-end jobs, being able to use these kinds of technologies to help us learn at speed could be a big help in the years and decades to come, and one that helps us train future workforces for “what’s next.”
