The voice is the key to diagnosing mental illness
Let me hear your voice, and I’ll give you a diagnosis. While this might sound like science fiction, it is not far from the goal of a research project at the Interacting Minds Centre. The idea rests on the observation that people suffering from depression tend to speak monotonously, autistic patients mechanically, and patients with schizophrenia tonelessly.
This information can be used to provide objective support for the diagnostic process, according to the two AU researchers Kristian Tylén and Riccardo Fusaroli, who are responsible for the research project alongside the linguist Ethan Weed. They are combining their knowledge of cognitive semiotics and psychiatry with a view to developing statistical programs that calculate the results automatically.
“We record patients’ voices and run the recordings through an algorithm that codes the patterns and calculates how much the rhythm and prosody – the pattern of word stress and breaks between sentences – deviate from the norm,” explains Tylén. The project involves collaboration between researchers from the Faculty of Arts, the Faculty of Health and Aarhus University Hospital, combining questions, methods and data from several different fields. It will pave the way for more objective diagnoses of mental illnesses that are not based solely on the assessments of individual doctors.
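The article does not disclose the researchers’ actual algorithm, but the core idea Tylén describes, summarising prosodic features of a recording and measuring how far they deviate from a norm, can be sketched in a few lines. Everything here is illustrative: the feature names, the normative values, and the z-score comparison are assumptions, not the project’s method.

```python
import statistics

# Hypothetical normative values for illustration only (NOT real clinical data):
# each entry maps a prosodic feature to an assumed (population mean, population sd).
NORMS = {
    "pitch_sd_hz": (28.0, 6.0),   # pitch variability in Hz; low values = monotonous speech
    "pause_mean_s": (0.45, 0.12), # mean pause length in seconds between utterances
}

def prosody_features(pitch_hz, pauses_s):
    """Summarise one recording: pitch variability and mean pause duration."""
    return {
        "pitch_sd_hz": statistics.stdev(pitch_hz),
        "pause_mean_s": statistics.mean(pauses_s),
    }

def deviation_scores(features, norms=NORMS):
    """z-score of each feature against the assumed normative distribution."""
    return {name: (value - norms[name][0]) / norms[name][1]
            for name, value in features.items()}

# Example: a flat, slow voice -- little pitch variation, long pauses.
feats = prosody_features(
    pitch_hz=[118, 120, 119, 121, 120, 118, 122],
    pauses_s=[0.8, 0.9, 0.7, 1.0],
)
scores = deviation_scores(feats)
```

In this sketch, a strongly negative `pitch_sd_hz` score would flag monotonous speech and a strongly positive `pause_mean_s` score unusually long breaks; a real system would extract pitch and pauses from audio and use clinically validated norms.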
“The method makes it possible to find out whether a patient is suffering from a particular mental illness, as well as the severity of the illness. For instance, there are many degrees of depression, and it’s important to distinguish between them,” says Fusaroli.
According to the researchers, the system could be ready for implementation at hospitals within a few years, giving patients a faster diagnostic process that also saves hospital resources.
Autistic American children
The researchers are also studying whether natural conversations can be used for the same purpose.
“We’re studying everyday conversations which take place not in the clinic, but in the home or elsewhere. This will give us a more natural and accurate basis for assessing developments in the voices and interaction patterns of participants, helping us to assess their symptoms,” explains Fusaroli.
The AU researchers are working alongside researchers from the University of Connecticut in the US, who have collected data from 60 autistic American children, as well as from a matched control group, interacting with their mothers. The researchers run data derived from the children’s voices and language through an algorithm, enabling them to monitor development from the age of 2 until the children are 5 years old.
Postdoc Kristian Tylén
Postdoc Riccardo Fusaroli