Chatbots raise questions about transparency in mental health care

The field of mental health is increasingly turning to chatbots to relieve mounting pressure on a limited pool of licensed therapists. But developers are stepping into uncharted ethical territory as they face questions about how deeply AI should be involved in such sensitive support.

Researchers and developers are in the very early stages of studying how to safely combine AI-based tools like ChatGPT, or even homegrown systems, with the natural empathy offered by humans providing support – especially on peer counseling sites where visitors can ask other internet users for empathetic messages. These studies seek to answer deceptively simple questions about AI’s ability to engender empathy: How do peer counselors feel about AI helping them? How do visitors feel once they discover it? And does knowing change the effectiveness of the support?

They also face, for the first time, a thorny set of ethical questions, including how and when to let users know they’re participating in what is essentially an experiment to test an AI’s ability to generate supportive responses. Because some of these systems are designed to let peers send each other supportive texts using message templates, rather than to provide professional medical care, they may fall into a gray area where the kind of oversight required for clinical trials does not apply.


“The field is sometimes moving faster than the ethical discussion can keep up,” said Ipsit Vahia, head of digital psychiatry translation and the Technology and Aging Lab at McLean Hospital. Vahia said the field will likely see more experimentation in the coming years.

This experimentation carries risks: Experts said they feared such tools could inadvertently encourage self-harm or miss signals that a help-seeker needs more intensive care.


But they also worry about rising rates of mental health problems and the lack of readily available support for the many people struggling with conditions such as anxiety or depression. That is why striking the right balance between safe, effective automation and human intervention is so critical.

“In a world where there aren’t enough mental health professionals, the lack of insurance, the stigma, the lack of access, anything that can help can really play a big role,” said Tim Althoff, assistant professor of computer science at the University of Washington. “It must be evaluated with all [the risks] in mind, which sets a particularly high bar, but the potential is there and that potential is also what drives us.”

Althoff co-authored a study published Monday in Nature Machine Intelligence examining how peer supporters on a site called TalkLife felt about responses to visitors co-written with an AI chat tool called HAILEY. In a controlled trial, the researchers found that almost 70% of supporters felt HAILEY enhanced their own ability to be empathetic – a hint that AI guidance, when used carefully, could increase a supporter’s ability to connect deeply with other humans. Supporters were informed that they might be offered AI-guided suggestions.

Instead of telling a help-seeker “don’t worry,” HAILEY might suggest that the supporter type something like “this must be a real struggle,” or prompt them to ask about a potential solution, for example.

The positive study results are the product of years of incremental academic research dissecting questions such as “what is empathy in clinical psychology or in a peer support setting” and “how do you measure it,” Althoff pointed out. His team did not show the co-written responses to TalkLife visitors at all – the goal was simply to understand how supporters could benefit from AI guidance before sending AI-guided responses to visitors, he said. His team’s previous research suggested that peer supporters reported difficulty writing supportive, empathetic messages on online sites.

In general, developers exploring AI interventions for mental health — even in peer support — would be “well served by being conservative around ethics, rather than bold,” Vahia said.

Other attempts have already drawn ire: Tech entrepreneur Rob Morris drew censure on Twitter after describing an experiment involving Koko, a peer support system he developed that lets visitors request or offer empathetic support anonymously on platforms such as WhatsApp and Discord. Koko offered a few thousand peer supporters AI-guided suggested responses based on the incoming message, which the supporters were free to use, reject, or rewrite.

Site visitors were not explicitly told from the outset that their peer supporters might be guided by AI. Instead, when they received a reply, they were informed that the message may have been written with the help of a bot. AI specialists blasted this approach in response to Morris’ posts. Some said he should have sought approval from an institutional review board — the process university researchers typically follow when studying human subjects — for the experiment.

Morris told STAT he didn’t believe the experiment warranted such approval, in part because it didn’t involve personal health information. He said the team was simply testing a product feature, and that the original Koko system stemmed from previous academic research that had been approved by an IRB.

Morris halted the experiment after he and his team concluded internally that they didn’t want to muddy the natural empathy that comes from pure human-to-human interaction, he told STAT. “The actual writing might be perfect, but if a machine wrote it, it didn’t have you in mind…it’s not drawing on its own experiences,” he said. “We are very attentive to the user experience and we look at platform data, but we also have to rely on our own intuition.”

Despite the fierce online backlash he faced, Morris said he was encouraged by the discussion. “Whether this type of work outside of academia can and should go through the IRB processes is a very big question and I’m really glad to see people getting super excited about it.”
