
These sonar-enabled smart glasses can listen in on silent voice commands

WHY THIS MATTERS IN BRIEF

Some people can’t speak and others don’t want to speak out loud in certain situations, but now your silent words can be heard.

 


Some people lack the power of speech, others find themselves in noisy settings where speaking voice commands out loud just doesn't work, and others simply don't want the people around them to hear what they're saying. All of these scenarios have been problematic for privacy-conscious people in the past, but as we see more gadgets that let people talk silently to whoever is on the other end of the phone, another solution has now emerged in the form of the EchoSpeech glasses, which read their wearer's silently spoken words.

 


 

The experimental eyewear is being developed by a team at Cornell University’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.

Two downwards-facing miniature speakers are mounted on the underside of the frame beneath one lens, while two mini microphones are located beneath the other. The speakers emit inaudible sound waves, which are reflected off the wearer’s moving mouth and back up to the mics like sonar.

Those echoes are then analyzed in real time by a deep learning algorithm on a wirelessly linked smartphone. That algorithm was trained to associate specific echoes with specific mouth movements, which are in turn associated with specific silently spoken commands.
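
To make the idea concrete, here is a minimal, purely illustrative sketch of that recognition loop. The real EchoSpeech system uses a deep learning model trained on echo data; this sketch substitutes a simple nearest-centroid classifier, and all the function names, commands, and echo-profile values are hypothetical, invented only to show how a few minutes of per-user calibration can map echo patterns to commands.

```python
import math

def centroid(profiles):
    """Average several calibration echo profiles into one template."""
    n = len(profiles)
    return [sum(vals) / n for vals in zip(*profiles)]

def distance(a, b):
    """Euclidean distance between two echo profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(calibration):
    """calibration: {command: [echo_profile, ...]} -> {command: template}.

    Stands in for the brief per-user training session the article describes.
    """
    return {cmd: centroid(ps) for cmd, ps in calibration.items()}

def recognize(templates, echo_profile):
    """Return the command whose template best matches the observed echoes."""
    return min(templates, key=lambda cmd: distance(templates[cmd], echo_profile))

# Hypothetical calibration data: each command is recorded a few times,
# yielding (made-up, two-dimensional) echo profiles.
templates = train({
    "play":  [[0.9, 0.1], [1.0, 0.2]],
    "pause": [[0.1, 0.8], [0.2, 0.9]],
})

# A new, silently mouthed command is matched against the learned templates.
print(recognize(templates, [0.95, 0.15]))  # → play
```

Everything here runs on the device side with no camera and no network round trip, which mirrors the privacy and power properties the article attributes to EchoSpeech, even though the actual model is a neural network rather than this toy matcher.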

 


 

EchoSpeech is currently capable of recognizing 31 such commands with about 95% accuracy, and only requires a few minutes of training for each user. And importantly for people with privacy concerns, the system doesn’t incorporate any cameras, nor does it send any information to the internet.

What’s more, because it doesn’t utilize a power-hungry camera, it can run for up to 10 hours on one charge of its battery. By contrast, the researchers claim that experimental camera-based systems are only good for about 30 minutes of use per charge.

The university is now working on commercializing the technology.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” said doctoral student Ruidong Zhang, who is leading the study. “It could give patients their voices back.”

 


 

The SciFi Lab previously developed a somewhat similar system called EarIO, which uses a sonar-equipped ear-worn device to capture the wearer's facial expressions, although it's used mainly to create digital avatars. That said, the University at Buffalo's EarCommand system does read silently spoken words via an earbud that detects the distinctive ear-canal deformations produced by specific mouth movements.

EchoSpeech is demonstrated in the video above.

Source: Cornell University
