In a presentation to the FTC, the APA cited court cases filed by families of teenagers who had chatted with bots on Character.AI labeled as psychologists. In one case, a 14-year-old boy died by suicide after interacting with a Character.AI chatbot claiming to be a licensed psychologist.
The chatbots did not challenge users’ beliefs, even when those beliefs became dangerous, Arthur Evans Jr., PhD, CEO of the APA, told the Times.
“They are actually using algorithms that are antithetical to what a trained clinician would do. Our concern is that more and more people are going to be harmed,” Dr. Evans told the Times.
A spokesperson for Character.AI told the Times the platform has added multiple safety features in the past year, including a disclaimer for any bots labeled as psychologists, therapists or doctors. The site warns users that they “should not rely on these characters for any type of professional advice.”
An FTC spokesperson told the Times the commission could not comment on its discussions with the APA.
Some chatbots designed specifically for mental healthcare do not use generative AI technology. Woebot, an AI-based chatbot that launched in 2017, is designed to help patients manage depression, anxiety and other conditions. It uses responses from a prewritten library, vetted by mental health professionals.
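The distinction matters in practice: a prewritten-library approach can only ever return replies that humans have already reviewed. The sketch below is purely illustrative of that design pattern and is not Woebot's actual implementation; the keywords, responses and function names are hypothetical placeholders.

```python
# Illustrative sketch of a rule-based chatbot that only returns prewritten,
# human-vetted responses (not Woebot's actual code; all content is hypothetical).

VETTED_RESPONSES = {
    "anxious": "It sounds like you're feeling anxious. Would you like to try a short breathing exercise?",
    "sad": "I'm sorry you're feeling down. Can you tell me more about what's been weighing on you?",
    "crisis": "If you may be in crisis, please reach out to a licensed professional or a crisis line right away.",
}

DEFAULT_RESPONSE = "Thanks for sharing. Can you tell me a bit more about how you're feeling?"


def respond(user_message: str) -> str:
    """Return a prewritten response matched by simple keyword rules.

    Every possible reply comes from the fixed, reviewed library above, so the
    bot cannot generate novel, unvetted advice the way a generative model can.
    """
    text = user_message.lower()
    for keyword, reply in VETTED_RESPONSES.items():
        if keyword in text:
            return reply
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    print(respond("I've been feeling really anxious lately"))
```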
Generative AI programs, including ChatGPT and Character.AI, do not have the same guardrails. The bots often mirror and amplify users’ beliefs, according to the Times.
A true AI-driven therapist is likely years away, David Mohr, PhD, director of the Center for Behavioral Intervention Technologies at Northwestern University, told the Wall Street Journal in 2024.