
Edtech company Udacity uses deepfake tech to create educational videos automatically

WHY THIS MATTERS IN BRIEF

Most online educational courses are text and graphics based, but now Udacity is using deepfake tech to automatically generate educational videos from the content.

 


Producing content for Massive Open Online Course (MOOC) platforms like Coursera and EdX might be academically rewarding, and potentially lucrative, but it’s also hugely time consuming, particularly where videos are involved. So Udacity, in a nod to Soul Machines, who recently created “Will,” the world’s first avatar teacher, which has already taught over 250,000 children about energy, has been looking into ways to get Artificial Intelligence (AI) to produce those videos automatically, something that would be a game changer in the academic world.

 


 

After all, professional level lecture clips require not only a veritable studio’s worth of equipment, but also significant resources to transfer, edit, and upload footage of each lesson. That’s why research scientists at Udacity, an online learning platform with over 100,000 courses, are investigating a new machine learning framework that automatically generates lecture videos from audio narration alone. And, for now at least, the tech they’re developing isn’t a million miles away from other so-called synthetic content AI generators, like the ones I’ve discussed many times before that are being used to create deepfakes and next generation text-to-video content, among many other things.

 

[Video: An example of the tech]

 

They claim in a preprint paper (“LumièreNet: Lecture Video Synthesis from Audio”) on arXiv.org that their AI system, called LumièreNet, “can synthesise footage of any length by directly mapping between audio and corresponding visuals.”
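The first step in any such audio-to-visual mapping is turning the narration waveform into a sequence of feature frames the network can consume. The paper’s exact audio features aren’t given here, so the sketch below assumes MFCCs via librosa as a common stand-in:

```python
import librosa

def narration_to_features(audio_path, n_mfcc=20, hop_length=512):
    """Turn a narration recording into a (time, n_mfcc) feature sequence.

    MFCCs and the parameters below are illustrative assumptions; the
    paper's actual audio features may differ.
    """
    y, sr = librosa.load(audio_path, sr=16000)          # mono waveform at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                hop_length=hop_length)  # shape: (n_mfcc, time)
    return mfcc.T                                       # shape: (time, n_mfcc)
```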

 


 

“In current video production an AI that semi (or fully) automates lecture video production at scale would be highly valuable to enable agile video content development (rather than reshooting each new video),” wrote the paper’s co-authors. “To [this] end, we propose a new method to synthesise lecture videos from any length of audio narration: … A simple, modular, and fully neural network-based [AI] which produces an instructor’s full pose lecture video given the audio narration input, which has not been addressed before from [a] deep learning perspective, as far as we know.”

The researchers’ model has a pose estimation component that’s not too dissimilar from Nvidia’s latest GauGAN AI, or the so-called full body deepfake tech that recently came out of Japan. It synthesises body figure images from video frames extracted from a training data set, chiefly by detecting and localising major body points to create detailed surface-based human body representations.
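For a feel of what the frame-to-pose step involves, here’s a minimal sketch. Note that LumièreNet’s estimator builds dense, surface-based body representations, whereas this uses MediaPipe’s off-the-shelf keypoint detector as a simpler stand-in:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_pose_sequence(video_path):
    """Return one keypoint vector per frame: (x, y, visibility) per landmark.

    A stand-in for the paper's surface-based pose estimator, using
    MediaPipe's 33-landmark body keypoint model instead.
    """
    cap = cv2.VideoCapture(video_path)
    poses = []
    with mp_pose.Pose(static_image_mode=False) as estimator:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV decodes frames as BGR.
            result = estimator.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                poses.append([(lm.x, lm.y, lm.visibility)
                              for lm in result.pose_landmarks.landmark])
            else:
                poses.append(None)  # no person detected in this frame
    cap.release()
    return poses
```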

 


 

Meanwhile, a second module in the model, a bidirectional long short-term memory (BLSTM) network that processes data in sequence so that each output reflects the inputs that both precede and follow it, takes audio features as input and attempts to suss out the relationship between them and the visual elements.
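That audio-to-visual module can be sketched in a few lines of PyTorch. The layer sizes below are illustrative assumptions, not the paper’s actual hyperparameters:

```python
import torch
import torch.nn as nn

class AudioToVisualBLSTM(nn.Module):
    """Map a sequence of audio feature frames to a sequence of visual
    latent codes. Dimensions are illustrative, not taken from the paper."""

    def __init__(self, audio_dim=20, hidden_dim=256, latent_dim=128):
        super().__init__()
        # bidirectional=True lets each output see past and future context.
        self.blstm = nn.LSTM(audio_dim, hidden_dim, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.project = nn.Linear(2 * hidden_dim, latent_dim)

    def forward(self, audio_feats):          # (batch, time, audio_dim)
        out, _ = self.blstm(audio_feats)     # (batch, time, 2 * hidden_dim)
        return self.project(out)             # (batch, time, latent_dim)
```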

To test LumièreNet, the researchers filmed an instructor’s lecture video for around eight hours at Udacity’s in-house studio, which yielded roughly four hours of video and two narrations for training and validation. The researchers report that the trained AI system produces “convincing” clips with smooth body gestures and realistic hair, but note that its creations, two of which are here and here, likely won’t fool most observers. Because the pose estimator can’t capture fine details like eye motion, lips, hair, and clothing, the synthesised lecturers rarely blink and tend to move their mouths unnaturally. Worse, their eyes sometimes look in different directions and their hands always appear oddly blurry.

 


 

The team posits that the addition of “face keypoints” (i.e., fine facial details) might lead to better synthesis, and they note that, fortunately, their system’s modular design allows each component to be trained and improved independently.
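That modularity is easy to picture as glue code: the stages only talk through intermediate representations (audio features in, visual latent codes in the middle, frames out), so any one of them can be retrained or swapped without touching the rest. A minimal sketch, with every function name hypothetical:

```python
import torch

def synthesise_lecture(audio_path, feature_extractor, blstm, frame_decoder):
    """Hypothetical glue code: every stage is a swappable module.

    feature_extractor, blstm, and frame_decoder are placeholders for the
    independently trained components described above.
    """
    feats = torch.as_tensor(feature_extractor(audio_path),
                            dtype=torch.float32)    # (time, audio_dim)
    latents = blstm(feats.unsqueeze(0)).squeeze(0)  # (time, latent_dim)
    # Decode one visual latent code per time step into an output frame.
    return [frame_decoder(z) for z in latents]
```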

“[M]any future directions are feasible to explore,” wrote the researchers. “Even though our approach is developed with primary intents to support agile video content development, which is crucial in current online MOOC courses, we acknowledge there could be potential misuse of the technologies … We hope that our results will catalyse new developments of deep learning technologies for commercial video content production.”
