Digital human Douglas is awesome, unsettling, and possibly brilliant


Today’s Digital Humans are often like yesterday’s satnavs – canned technology with some basic smarts, but DigiDoug and his AI operating system could help create the first “true” autonomous digital humans.


Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

There’s something off about Douglas. There’s always been something off about Douglas – ever since his first TED talk back in 2019. It probably has something to do with the fact that he’s not human, he’s a Digital Human. Kind of. While companies like Soul Machines use a mix of Artificial Intelligence (AI) and computer graphics to create their digital humans, the creators behind DigiDoug take an altogether different approach – one that, in this instance, looks like it has more in common with telepresence than with some of its competitors’ methods.




DigiDoug, as he’s known, is touted by his creators as the “most realistic real-time autonomous digital human in the world,” with “autonomous” being by far the most interesting word there. But if that claim is true, then sadly I don’t think the best the world has to offer is quite good enough yet.

Douglas is being developed by Digital Domain, a visual effects titan that has worked on movies including Titanic and the last two Avengers releases, as well as video games like Destiny and Assassin’s Creed Odyssey. He’s certainly an impressive creation visually, but once conversations get rolling, you can really tell that he’s an imposter.


Learn about DigiDoug and see him do a Zoom call


Digital Domain modelled Douglas on its senior director of software R&D, Doug Roble, capturing his facial structure, movements, and mannerisms from all angles, as well as his voice. By creating as realistic a model as possible, Digital Domain’s goal is for Douglas to make conversations between humans and machines feel easier and more natural.




While his face looks pretty good in the demo Zoom call, as you can see for yourself, his voice doesn’t quite match his mannerisms, his movements are slightly off, and he seems to really love gesturing with his hands. Instead of acting like a real human, the way he moves is more reminiscent of a modern video game non-player character (NPC) running through an animation cycle.


See DigiDoug’s first keynote from 2019


You’re on the right path, Digital Domain, but there’s still a disconnect here between us humans and the slightly off, vaguely unsettling appearance of Douglas, which sits somewhere inside the uncanny valley. Alarmingly, this technology is also adaptive, Digital Domain explains: with just 10 minutes of video and 30 minutes of audio, Douglas can change his voice and face like a chameleon – something that might be a great trick for the metaverse. Chilling.




Of course, you have to actually make the technology do that – it doesn’t just happen on a whim – which is a slight comfort. Still, this chameleon-like ability would be appealing to people who license the technology and want Douglas to take on a unique form.

Digital Domain is currently looking for investors, partners, and clients to help it bring Douglas into the wider world.
