AI spots signs of mental health issues in text messages on par with human psychiatrists: UW study

It’s highly unlikely that artificial intelligence will ever become capable enough to replace humans in the workplace entirely, let alone rise up against humanity in a sci-fi-style rebellion. But in the realm of healthcare, at least, the robots are making some progress.

Case in point: a new AI model from the University of Washington’s medical school, trained to sift through everyday text messages, was shown in a recent study to accurately identify potential signs of worsening mental illness.

According to the model’s makers, the natural language processing AI proved just as capable as human psychiatrists at spotting certain “cognitive distortions” that may indicate a decline in mental health, suggesting that it could serve as a useful tool for triaging patients in an increasingly overtaxed healthcare system.

In the study, published in the journal Psychiatric Services, researchers trained the AI to look for and distinguish between several common cognitive distortions, including mental filtering, jumping to conclusions, catastrophizing, making “should” statements and overgeneralizing.
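The article doesn’t describe the model’s architecture, but the underlying task is multi-label text classification: each message can carry zero, one, or several distortion labels. As a purely hypothetical illustration of that setup, not the UW system, a minimal scikit-learn baseline might look like the sketch below (the example messages and annotations are invented):

```python
# Hypothetical baseline for the task described above: multi-label
# classification of text messages into cognitive-distortion categories.
# This is NOT the UW model, just a minimal scikit-learn sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# The five distortion categories named in the study.
LABELS = ["mental_filtering", "jumping_to_conclusions",
          "catastrophizing", "should_statements", "overgeneralizing"]

# Toy examples invented for illustration; the study trained on
# expert-annotated patient messages.
messages = [
    "I always mess everything up.",
    "If I fail this quiz, my whole career is over.",
    "She didn't text back, so she must hate me.",
    "I should be handling this better by now.",
    "The whole day was ruined by that one mistake.",
]
annotations = [
    ["overgeneralizing"],
    ["catastrophizing"],
    ["jumping_to_conclusions"],
    ["should_statements"],
    ["mental_filtering"],
]

binarizer = MultiLabelBinarizer(classes=LABELS)
y = binarizer.fit_transform(annotations)  # messages x labels indicator matrix

# One binary classifier per distortion over TF-IDF features; a message
# can carry any subset of the distortion labels at once.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(messages, y)

predicted = model.predict(["I'll never get anything right."])
print(binarizer.inverse_transform(predicted))
```

A production system would presumably swap the bag-of-words baseline for a fine-tuned language model, but the shape of the task, one binary decision per distortion category per message, stays the same.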

To test its abilities, they then fed the model 12 weeks’ worth of unprompted text messages from 39 patients, more than 7,300 messages in all, each of which had also been annotated by human experts. Ultimately, the AI identified and classified distortions in the texts at a rate nearly identical to that of the human annotators, scoring well above other automated models.
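The article doesn’t say which agreement statistics the study used. Two conventional choices for comparing a model’s labels against expert annotations on a task like this are per-label F1 and Cohen’s kappa; the sketch below computes both on synthetic stand-in data (the arrays are random, not the study’s results):

```python
# Hypothetical scoring of model output against expert annotations.
# The study's actual metrics aren't given in this article; per-label
# F1 and Cohen's kappa are conventional choices for this comparison.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

LABELS = ["mental_filtering", "jumping_to_conclusions",
          "catastrophizing", "should_statements", "overgeneralizing"]

rng = np.random.default_rng(0)
n_messages = 7300  # stand-in for the study's 7,300+ annotated messages

# Binary indicator matrices: rows = messages, columns = distortions.
human = rng.integers(0, 2, size=(n_messages, len(LABELS)))
model = human.copy()
flip = rng.random(human.shape) < 0.10  # model disagrees on ~10% of labels
model[flip] = 1 - model[flip]

for i, label in enumerate(LABELS):
    f1 = f1_score(human[:, i], model[:, i])
    kappa = cohen_kappa_score(human[:, i], model[:, i])
    print(f"{label:24s} F1={f1:.2f}  kappa={kappa:.2f}")
```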

The possible benefits of the technology are manifold, the researchers said in a Tuesday press release from the university. For one, the AI could pick up on potential warning signs of worsening mental health that may go unnoticed by clinicians who are overworked or who haven’t been trained to recognize cognitive distortions in patients’ written words.

“When we’re meeting with people in person, we have all these different contexts,” said Justin Tauscher, Ph.D., lead author of the paper and an acting assistant professor at UW Medicine. “We have visual cues, we have auditory cues, things that don’t come out in a text message. Those are things we’re trained to lean on. The hope here is that technology can provide an extra tool for clinicians to expand the information they lean on to make clinical decisions.”

The AI could also work independently of doctors, helping to speed up the triage process for mental healthcare by identifying potential red flags in patients’ daily lives, perhaps through integration into a wearable health tracker or a smartphone-based monitoring system.
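None of this describes a real UW system, but as a purely illustrative sketch of that idea, a monitoring hook could screen each incoming message with any trained classifier and surface a note when a distortion is flagged (`classify_distortions` and `notify` here are hypothetical stand-ins):

```python
# Purely illustrative: a monitoring hook of the kind described above.
# `classify_distortions` is a stand-in for any trained model (e.g. the
# baseline sketched earlier); `notify` could post to a clinician dashboard.
from typing import Callable, List

def monitor_message(
    text: str,
    classify_distortions: Callable[[str], List[str]],
    notify: Callable[[str], None],
) -> None:
    """Screen one message and surface any flagged distortions."""
    flags = classify_distortions(text)
    if flags:
        notify(f"Possible cognitive distortions: {', '.join(flags)}")

# Example wiring with trivial stand-ins:
monitor_message(
    "If this meeting goes badly, I'm finished.",
    classify_distortions=lambda t: ["catastrophizing"] if "finished" in t else [],
    notify=print,
)
```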

“In the same way that you’re getting a blood-oxygen level and a heart rate and other inputs, we might get a note that indicates the patient is jumping to conclusions and catastrophizing,” said Dror Ben-Zeev, Ph.D., a co-author of the paper and director of UW’s Behavioral Research in Technology and Engineering Center. “Just the capacity to draw awareness to a pattern of thinking is something that we envision in the future. People will have these feedback loops with their technology where they’re gaining insight about themselves.”