MIT researchers have detailed a neural-network model that can be run on raw text and audio from interviews to discover speech patterns indicative of depression. According to their paper, the method is context-free: the machine analyzes text or audio from a person and assigns a score indicating that person's level of depression.
Tuka Alhanai, lead author on the paper, said: “It’s not so much detecting depression, but it’s a similar concept of evaluating, from an everyday signal in speech, if someone has cognitive impairment or not.”
To test their AI, the researchers conducted an experiment in which 142 people being screened for depression answered a series of questions posed by a human-controlled virtual agent.
There were no A, B, C, D answers to choose from; the AI discerned depression from linguistic cues alone.
In the text version, the AI was able to predict depression after about seven question-and-answer sequences.
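The idea of scoring a conversation turn by turn and flagging only after enough question-and-answer sequences can be sketched in a few lines. This is a toy illustration, not the researchers' model: their system is a neural network trained on raw text and audio, while the cue lexicon, weights, and threshold below are entirely hypothetical stand-ins.

```python
# Toy sketch of turn-by-turn screening. The actual MIT system uses a
# neural network on raw text/audio; this lexicon is a hypothetical
# stand-in for a learned response encoder.
NEGATIVE_CUES = {"sad": 1.0, "tired": 0.6, "hopeless": 1.2, "alone": 0.8}

def score_response(text: str) -> float:
    """Average cue weight per word -- a placeholder for a learned score."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(NEGATIVE_CUES.get(w, 0.0) for w in words) / len(words)

def screen_conversation(responses, min_sequences=7, threshold=0.1):
    """Accumulate evidence across Q&A turns; only flag once at least
    `min_sequences` responses have been seen (mirroring the ~7-turn
    behavior described in the article). Returns (flagged, turns_used)."""
    scores = []
    for response in responses:
        scores.append(score_response(response))
        running_mean = sum(scores) / len(scores)
        if len(scores) >= min_sequences and running_mean >= threshold:
            return True, len(scores)
    return False, len(scores)
```

For example, a conversation whose answers repeatedly contain the hypothetical cue words would be flagged at the seventh turn, while neutral answers never trip the threshold. The design point the sketch captures is that the decision depends on accumulated evidence over a sequence, not on any single multiple-choice answer.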
In a scenario where, theoretically, a computer and a person listen to the same conversation and arrive at diametrically opposed depression diagnoses, who decides which is correct? Or, if you prefer: when a computer identifies potential depressives, does a human then run the same checks to ensure doctors aren’t treating patients the algorithm was wrong about? Because that would be redundant and wasteful.
Individuals suffering from depression are beset by debilitating sadness for weeks to years on end.
Passive automated monitoring of human communication may address the constraints of traditional clinical screening and provide better detection of depression.
Imagine losing out on a job because a company used a “depression detector” AI and decided you weren’t mentally stable enough during your interview, or having an algorithm’s interpretation of your answers to a lawyer’s questions admitted as evidence in your child custody case.