
AI chatbots are here to support your mental health, even though there is limited evidence that they work

Download the mental health chatbot Earkick and you’ll be greeted by a bandana-wearing panda that would easily fit into a children’s cartoon.

Start talking or typing about anxiety and the app generates the kind of calming, compassionate statements that therapists can deliver. The panda may then suggest a guided breathing exercise, ways to reframe negative thoughts, or tips for managing stress.

“This is all part of a proven approach by therapists, but please don’t call it therapy,” says Karin Andrea Stephan, co-founder of Earkick.

“If people call us a form of therapy, that’s fine, but we don’t want to go out and advertise it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We’re just not comfortable with it.”

The question of whether these artificial intelligence-based chatbots provide a mental health service or simply represent a new form of self-help is critical to the emerging digital health industry – and its survival.

Earkick is one of hundreds of free apps aimed at a mental health crisis among teens and young adults. Because they do not explicitly claim to diagnose or treat medical conditions, the apps are not regulated by the U.S. Food and Drug Administration. That hands-off approach is facing new scrutiny amid the stunning advances of chatbots powered by generative AI, a technology that uses massive amounts of data to mimic human speech.

The industry argument is simple: Chatbots are free, available 24/7, and don’t come with the stigma that keeps some people away from therapy.

However, there is limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to prove they effectively treat conditions like depression, although some have begun the process voluntarily.

“There is no regulatory agency overseeing them, so consumers have no way of knowing whether they are actually effective,” said Vaile Wright, psychologist and technology director at the American Psychological Association.

Chatbots are no substitute for the give-and-take of traditional therapy, but Wright believes they could help with less serious mental and emotional issues.

Earkick’s website states that the app “does not provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health advocates say such disclaimers are not enough.

“If you’re really worried about people using your app for mental health services, you want a more direct disclaimer: This is just for fun,” said Glenn Cohen of Harvard Law School.

Still, chatbots are already playing a role due to the ongoing shortage of mental health professionals.

Britain’s National Health Service has started offering a chatbot called Wysa that aims to help with stress, anxiety and depression in adults and teenagers, including those waiting to see a therapist. Some U.S. insurers, universities and hospital chains offer similar programs.

Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.

Skrzynski’s employer, Virtua Health, began offering a password-protected app, Woebot, to select adult patients after realizing it would be impossible to hire or train enough therapists to meet demand.

“It’s helpful not only for patients, but also for the doctor who is trying to give something to these people who are struggling,” Skrzynski said.

Virtua’s data shows that patients tend to use Woebot for about seven minutes a day, typically between 3 and 5 a.m.

Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the older companies in the field.

Unlike Earkick and many other chatbots, Woebot’s current app doesn’t use so-called large language models, the generative AI that allows programs like ChatGPT to quickly create original texts and conversations. Instead, Woebot uses thousands of structured scripts written by the company’s employees and researchers.

According to founder Alison Darcy, this rules-based approach is safer for healthcare because generative AI chatbots tend to “hallucinate” or make up information. Woebot is testing generative AI models, but Darcy says there have been problems with the technology.

“We couldn’t stop the large language models from just jumping in and telling someone how to think rather than facilitating the person’s process,” Darcy said.

Woebot offers apps for teens, adults, people with substance use disorders, and women with postpartum depression. None of these are approved by the FDA, although the company has submitted its postpartum app for the agency’s review. The company says it has “paused” these efforts to focus on other areas.

Woebot’s research was included in a comprehensive report on AI chatbots published last year. Among thousands of articles reviewed, the authors found only 15 that met the gold standard for medical research: tightly controlled trials in which patients were randomly assigned to chatbot therapy or a comparator treatment.

The authors concluded that chatbots could “significantly reduce” symptoms of depression and stress in the short term. But most studies only lasted a few weeks and the authors said there was no way to assess their long-term effects or overall impact on mental health.

Other publications have raised concerns about Woebot and other apps’ ability to detect suicidal thoughts and emergency situations.

When a researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.” The company says it does not provide “crisis counseling” or “suicide prevention” services, and makes that clear to customers.

When it detects a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.

Ross Koppel of the University of Pennsylvania worries that these apps, even if used properly, could displace proven treatments for depression and other serious illnesses.

“There is a distraction effect when people who could get help through either counseling or medication are instead playing around with a chatbot,” said Koppel, who studies health informatics.

Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps on a sliding scale based on potential risks. While the FDA regulates AI in medical devices and software, its current system focuses primarily on products used by physicians rather than consumers.

Currently, many medical systems are focused on expanding mental health services by integrating them into general assessments and care rather than offering chatbots.

“We need to understand a whole range of questions about this technology so that we can ultimately do what we are all here to do: improve the mental and physical health of children,” said Dr. Doug Opel, bioethicist at Seattle Children’s Hospital.


The Associated Press Health and Science Department receives support from the Science and Educational Media Group of the Howard Hughes Medical Institute. The AP is solely responsible for all content.
