WHY THIS MATTERS IN BRIEF
Social networks are awash with human influencers, but some virtual influencers are now out-earning their human counterparts.
One of the most in-demand influencers in the world today isn’t even human – well, not physically human at least. She’s released her own pop songs, promoted Samsung’s new Galaxy S10 phone, and has more than 1.6 million Instagram followers. She’s also made millions of dollars. Real dollars. Miquela “Lil Miquela” Sousa is what’s known as a digital human, a new “category” of human avatar that I’ve spoken about at length before, and digital humans are now involved in everything from teaching hundreds of thousands of children in New Zealand and staffing customer service, all the way through to helping Fortune CEOs “live” on from beyond the grave. No kidding. But in the online world of marketing Lil Miquela is the most famous example of the growing trend of virtual influencers.
As marketing continues to move online, more and more human influencers are commanding ever bigger audiences and salaries, so the ability to build a bespoke virtual influencer, complete with a custom “personality,” is a powerful thing indeed.
This is how people are made
And the benefits of virtual influencers, it seems, are numerous. Control is the most obvious positive, not only in the sense of a brand being able to mould and shape a campaign with complete flexibility and precision, but also to do so without the need to find and retain human models and the camera crews that go with them.
Courtesy: Siren, Cubic Motion
Removing the human factor also reduces the possibility of an influencer or spokesperson unintentionally reflecting badly on a brand, whether it’s a slip of the tongue, a poor decision, or dirt pulled from the influencer’s background. After all, virtual influencers don’t have skeletons in their closets… well, unless you want them there that is.
As you’d expect, creating digital humans that are indistinguishable from the real deal is a difficult, nuance-driven process, and this is where companies such as Cubic Motion come in. The company harnesses the power of machine vision to capture every intricate detail of the human face and then precisely transfer it onto their new digital characters, ensuring that nothing is lost along the way. So far they’ve animated the faces for PlayStation 4 smashes such as Spider-Man and God of War, and helped bring to life Siren, one of the most convincing digital humans to date, as you can see from the photos.
Believable digital humans must replicate every element of the human face as we understand it. We’re all experts on what a human face should look like, having seen an uncountable number of them over our lives – and if any part of the digital human falters or doesn’t live up to the quality of the rest, the entire illusion falls apart, a phenomenon known as the uncanny valley.
The key to delivering better digital humans, and as a result, improved virtual influencers, is machine vision, and Cubic Motion’s computer vision PhDs have been pushing the technology for more than 15 years, essentially teaching computers to understand an image in the same way that we all do. With the ability to recognise details and movements, a computer is better able to rapidly process the data fed to it and then transfer the result into any medium.
The capture process begins in the studio, with multiple cameras pointed at the actor’s face to ensure that every tiny nuance – even the ones we don’t consciously notice, but would miss if they weren’t there – is recorded and analysed. Thanks to computer vision, the company can quickly record complex human emotions and reproduce them in digital humans.
It all begins with the eyes – the windows to the soul. We’ve all seen video game cinematics and low-quality CGI animation with dead-looking eyes, and they completely break the effect of the digital human. By capturing blinking and pupil movements directly from the actor, animators can ensure that the digital human never reads as soulless or false. Computer vision also captures a full spectrum of facial animation to drive the performance of digital characters.
Incredibly detailed scans of real human beings amplify the effect, ensuring that no detail is left behind in the process. What completes the illusion is rigging – the process of solving the capture data and connecting the tracking of the human face to the computer model. It hands animators the proverbial strings of the puppet, letting them drive the performance in any way they see fit. It’s also possible to have a real person “perform” the digital human live via motion capture technology.
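Rigs like this are often implemented with blendshapes: the solver turns tracked facial data into a set of weights, and each weight blends a sculpted expression offset onto the neutral mesh. Here is a minimal sketch of that idea with made-up shape names and a toy four-vertex “mesh” – real rigs carry hundreds of shapes and far richer solvers:

```python
import numpy as np

# Neutral face "mesh": four 3D vertices stand in for many thousands.
neutral = np.zeros((4, 3))

# Each blendshape is a per-vertex offset from neutral (hypothetical shapes).
blendshapes = {
    "jaw_open":   np.array([[0, -1, 0], [0, -1, 0], [0, 0, 0], [0, 0, 0]], float),
    "smile_left": np.array([[0, 0, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0]], float),
}

def pose_face(weights: dict) -> np.ndarray:
    """Apply solved capture weights: neutral + sum of w_i * delta_i."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * blendshapes[name]
    return mesh

# One frame of "solved" capture data: the actor's jaw is half open.
frame = pose_face({"jaw_open": 0.5, "smile_left": 0.0})

# The "solving" step runs in the other direction: given a captured frame,
# recover the weights by least squares over the flattened shape deltas.
basis = np.stack([d.ravel() for d in blendshapes.values()], axis=1)
recovered, *_ = np.linalg.lstsq(basis, (frame - neutral).ravel(), rcond=None)
```

Live performance, as mentioned above, is just this loop run in real time: track, solve for weights, pose the mesh, render – frame after frame.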
It’s still early days for digital humans across films, games and online culture, but the next generation will be less of a novelty and much harder to distinguish from the real thing. In 10 to 20 years we may all have our own near-identical digital doubles representing us in online worlds and interactions. Virtual influencers are just the first step on that journey, and a fascinating glimpse of how digital humans could transform marketing and many other sectors in the future, so stay tuned.