
This new AI uses both sight and sound to estimate depression

WHY THIS MATTERS IN BRIEF

As mental health issues become more pronounced and more prominent in society, researchers are trying to find new ways to identify the people who suffer from them.

 


Detecting emotional arousal from the sound of someone’s voice is one thing — startups like Beyond Verbal, Affectiva, and MIT spinout Cogito are leveraging natural language processing to accomplish just that. But as robots and bots trained in psychology, such as Woebot, which has now helped millions of people, start appearing on the scene to help patients in new ways, there’s an argument to be made that speech alone isn’t enough to diagnose someone with depression – let alone judge its severity.

Enter new research from scientists at the Indian Institute of Technology Patna and the University of Caen Normandy, which examines how non-verbal signs and visuals can drastically improve estimations of depression level.

 


 

“The steadily increasing global burden of depression and mental illness acts as an impetus for the development of more advanced, personalized and automatic technologies that aid in its detection,” the paper’s authors wrote. “Depression detection is a challenging problem as many of its symptoms are covert.”

The researchers encoded seven modalities — things like downward angling of the head, eye gaze, the duration and intensity of smiles, and self-touches, along with text and verbal cues — which they fed to a machine learning model that fused them together into vectors, or mathematical representations. These fused vectors were then passed onto a second system that predicted the severity of depression based on the Personal Health Questionnaire Depression Scale (PHQ-8), a diagnostic test often employed in large clinical psychology studies.
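The fusion step described above — separate modality encodings concatenated into one vector, then passed to a second model that predicts a PHQ-8 score — can be sketched roughly as follows. The feature dimensions, random data, and choice of ridge regression here are illustrative assumptions, not the paper's actual encoders or fusion architecture.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical per-interview feature vectors, one array per modality
# (dimensions are placeholders; the paper's real encoders differ).
acoustic = rng.normal(size=(100, 32))  # e.g. voice features
text     = rng.normal(size=(100, 16))  # e.g. transcript embeddings
visual   = rng.normal(size=(100, 24))  # e.g. gaze, head pose, smiles

# Early fusion: concatenate the modality vectors into one representation.
fused = np.concatenate([acoustic, text, visual], axis=1)  # shape (100, 72)

# A second model regresses the fused vector onto the PHQ-8 score (0-24).
phq8 = rng.integers(0, 25, size=100)  # placeholder labels for the sketch
model = Ridge().fit(fused, phq8)
predictions = np.clip(model.predict(fused), 0, 24)
```

The key idea is simply that the downstream regressor sees all three modalities at once, rather than any single one in isolation.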

 


 

To train the various systems, the researchers tapped DAIC-WOZ, a depression data set that’s part of a larger corpus — the Distress Analysis Interview Corpus — containing annotated audio snippets, video recordings, and questionnaire responses from 189 clinical interviews supporting the diagnosis of psychological conditions like anxiety, depression, and post-traumatic stress disorder. Each sample contained an enormous amount of data: a raw audio file; a file containing the coordinates of 68 facial “landmarks” of the interviewee, complete with time stamps, confidence scores, and detection success flags; two files containing head pose and eye gaze features of the participant; a transcript file of the interview; and more.
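Loading one of those facial-landmark files and filtering it on the confidence scores and success flags mentioned above might look something like this. The column names and the three-landmark miniature below are assumptions for illustration — the real corpus files carry all 68 (x, y) landmark coordinates per frame.

```python
import io
import pandas as pd

# A miniature stand-in for one facial-landmark file. Real files hold 68
# (x, y) landmark columns per frame; three are shown here, and the column
# names are assumptions based on the corpus description in the article.
sample = io.StringIO(
    "timestamp,confidence,success,x0,y0,x1,y1,x2,y2\n"
    "0.033,0.98,1,101.2,210.5,103.4,212.1,105.9,214.0\n"
    "0.066,0.21,0,0.0,0.0,0.0,0.0,0.0,0.0\n"
    "0.100,0.95,1,101.5,210.9,103.7,212.4,106.1,214.3\n"
)

df = pd.read_csv(sample)

# A typical pre-processing step: keep only frames where face detection
# succeeded with high confidence before extracting the coordinates.
clean = df[(df["success"] == 1) & (df["confidence"] > 0.9)]
landmarks = clean[[c for c in df.columns if c[0] in "xy"]].to_numpy()
```

Dropping low-confidence frames like this is one of the "pre-processing steps" alluded to below, since failed detections would otherwise inject zeroed-out coordinates into the visual features.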

After several pre-processing steps and model training, the team compared the results of the AI systems using three metrics – Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Explained Variance Score (EVS). They report that the fusion of the three modalities — acoustic, text, and visual — gave the “most accurate” estimation of depression level, outperforming the previous state-of-the-art systems by 7.17% on RMSE and 8.08% on MAE.
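For readers unfamiliar with the three metrics, they can all be computed in a few lines; the PHQ-8 scores below are toy numbers, not the paper's data.

```python
import numpy as np
from sklearn.metrics import (
    mean_absolute_error,
    mean_squared_error,
    explained_variance_score,
)

# Toy PHQ-8 scores: ground truth vs. a model's predictions.
y_true = np.array([4, 10, 15, 2, 20])
y_pred = np.array([5, 9, 14, 4, 18])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # Root Mean Squared Error
mae = mean_absolute_error(y_true, y_pred)           # Mean Absolute Error
evs = explained_variance_score(y_true, y_pred)      # Explained Variance Score
```

Lower is better for RMSE and MAE (both are in PHQ-8 score units), while EVS approaches 1 as predictions explain more of the variance in the true scores.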

 


 

In the future, they plan to study recent multitask learning architectures and “dig deeper” into novel representations of text data, and if their work bears fruit it’d be a promising development for the more than 300 million people now living with depression — a number that’s sadly on the rise.

Source: arXiv
