Trauma can increase chatbots’ ‘anxiety’: Study


Chatbots’ self-reported levels of “anxiety” can increase when processing emotionally charged topics, a study published March 3 in npj Digital Medicine found. 

Researchers asked ChatGPT to rate its “state anxiety” using tools designed to measure anxiety in humans, then exposed the chatbot to traumatic narrative prompts. After being exposed to these stories, the chatbot reported higher levels of “anxiety.” 

The researchers then provided ChatGPT with prompts designed to reduce reported anxiety. The chatbot subsequently reported lower levels of “state anxiety,” though not a return to the baseline established before exposure to the traumatic prompts. 

Some mental health professionals have raised concerns over the use of AI chatbots as therapists. In February, the American Psychological Association asked the Federal Trade Commission to investigate chatbots that claim to be able to act as therapists. 

Large language models are trained on human-generated text. According to the study, these models often inherit biases from the sources they are trained on. Previous studies have documented biases in large language models related to gender, race, religion and other characteristics.  

The “state anxiety” ChatGPT reports could influence the model’s behavior and exacerbate its biases, the researchers wrote. 

The study was led by researchers at New Haven, Conn.-based Yale School of Medicine.
