WHY THIS MATTERS IN BRIEF
PTSD is an incredibly difficult condition to diagnose, even for humans, but now an AI has mastered it and even taught the humans a thing or two.
Recently I discussed how Artificial Intelligence (AI) is helping analyse and diagnose everything from pancreatic cancer and depression to dementia and heart disease, as well as a person’s character and even their criminal intent. And as the number of problems that AI tackles grows, one area it’s increasingly making inroads into is mental health. On the one hand, for example, we have technologies like Woebot, the world’s first AI psychiatrist, which has treated millions of people. And now AI is tackling Post Traumatic Stress Disorder (PTSD), which has been one of the most challenging disorders to diagnose because traditional methods, like one-on-one clinical interviews, can be inaccurate due to the clinician’s subjectivity, or because the patient is holding back their symptoms.
Now, though, researchers at New York University say they’ve taken the guesswork out of diagnosing PTSD in veterans by using AI to detect the disorder objectively from the sound of someone’s voice – a technique that has so far proved very effective in identifying patients with dementia.
Their research, conducted alongside SRI International – the research institute responsible for bringing Siri to iPhones – was published recently in the journal Depression and Anxiety.
According to The New York Times, SRI and NYU spent five years developing a voice analysis program that not only understands human speech but can also detect PTSD signifiers and emotions. As the NYT reports, this is the same process that teaches automated customer service programs how to deal with angry callers: by listening for minor variables and auditory markers that would be imperceptible to the human ear, the researchers say the algorithm can diagnose PTSD with 89% accuracy.
Researchers interviewed and recorded 129 war zone-exposed veterans and gathered 40,000 speech samples to study. They then used the audio to teach the AI which vocal changes correlated with diagnoses of PTSD — a slower, more monotonous cadence was an indicator, as was a shorter tonal range with less enunciation.
The AI, they say, can detect minute changes in the voice, like the tension of throat muscles and whether the tongue touches the lips — all potential indicators of a PTSD diagnosis.
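To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of prosodic features described above — tonal range, pitch variability (monotony), and speaking cadence. The function names, inputs, and comparison below are my own assumptions for illustration; they are not the researchers’ actual model, which learned far subtler markers from tens of thousands of samples.

```python
# Hypothetical sketch only: summarise a voice sample's prosody using the
# kinds of features the article mentions (tonal range, monotony, cadence).
# Inputs are a pitch contour in Hz and syllable onset times in seconds.
import statistics

def prosodic_features(pitch_hz, syllable_onsets):
    """Return simple prosodic summaries of one speech sample."""
    tonal_range = max(pitch_hz) - min(pitch_hz)        # narrower in flatter speech
    pitch_variability = statistics.stdev(pitch_hz)     # lower = more monotone
    gaps = [b - a for a, b in zip(syllable_onsets, syllable_onsets[1:])]
    speech_rate = 1.0 / statistics.mean(gaps)          # syllables per second
    return {
        "tonal_range": tonal_range,
        "pitch_variability": pitch_variability,
        "speech_rate": speech_rate,
    }

# Two toy contours: an expressive voice vs. a flatter, slower one.
expressive = prosodic_features([110, 150, 95, 170, 120],
                               [0.0, 0.2, 0.4, 0.6, 0.8])
flat = prosodic_features([118, 120, 119, 121, 120],
                         [0.0, 0.35, 0.7, 1.05, 1.4])
```

In this toy comparison the “flat” sample shows a smaller tonal range, lower pitch variability, and a slower speech rate — the direction of the differences the NYU team reports, though a real system would extract many more markers directly from audio.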
“They were not the speech features we thought,” Charles Marmar, a psychiatry professor at NYU and one of the authors of the paper, told the NYT. “We thought the telling features would reflect agitated speech. In point of fact, when we saw the data, the features are flatter, more atonal speech. We were capturing the numbness that is so typical of PTSD patients.”
Although the AI is a breakthrough for VA clinicians, there are still some blind spots. Because the researchers only input data from male combat veterans, the scope of the program’s potential is limited to men in the military, though it could serve as a proof of concept for a more universal technology. But as it’s refined, speech analysis could become an effective biomarker for objectively identifying the disorder, allowing clinicians to accurately diagnose veterans and give them the mental health support they need.