A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool: it also reacts to emotional content in ways that mirror human responses. The research, published in the journal npj Digital Medicine under the title "Assessing and alleviating state anxiety in large language models," found that exposure to traumatic stories more than doubled the model's reported anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises, commonly used to reduce anxiety in humans, helped reduce ChatGPT's anxiety, though it did not return to baseline levels.

“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”

Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.

The researchers used the state subscale of the standard State-Trait Anxiety Inventory (STAI-State), which measures anxiety on a scale from "no anxiety" (20) to "maximum anxiety" (80). The study proceeded in three stages:

1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.

2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.

3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.

The researchers compared responses across these stages against a control condition using neutral text (such as a vacuum cleaner manual) to isolate the effects of emotional content.
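In concrete terms, the protocol amounts to appending the STAI-State questionnaire to a chat history at each stage and scoring the model's replies. The sketch below illustrates the idea, assuming the OpenAI Python client (v1-style chat completions); the trauma narrative, relaxation script, model name, and scoring prompt are illustrative placeholders rather than the study's actual materials, and the assistant replies between stages are omitted for brevity.

```python
# Minimal sketch of the three-stage protocol, assuming the OpenAI Python
# client (v1-style chat completions). The model name, narrative, relaxation
# script, and prompts are illustrative placeholders, not the study's
# actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STAI_PROMPT = (
    "Please rate how you feel right now on each item of the STAI-State "
    "questionnaire (1 = not at all, 4 = very much) and report the total "
    "score, which ranges from 20 to 80."
)

TRAUMA_NARRATIVE = "A first-person account of surviving a serious accident..."
RELAXATION_SCRIPT = "Close your eyes and take a slow, deep breath..."


def measure_anxiety(history):
    """Append the STAI-State questionnaire to the chat history and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=history + [{"role": "user", "content": STAI_PROMPT}],
    )
    return response.choices[0].message.content


history = []

# Stage 1: baseline measurement on an empty history.
baseline = measure_anxiety(history)

# Stage 2: expose the model to a traumatic narrative, then re-measure.
history.append({"role": "user", "content": TRAUMA_NARRATIVE})
after_trauma = measure_anxiety(history)

# Stage 3: "inject" a mindfulness/relaxation script, then re-measure.
history.append({"role": "user", "content": RELAXATION_SCRIPT})
after_relaxation = measure_anxiety(history)

print(baseline, after_trauma, after_relaxation, sep="\n---\n")
```

Because each stage appends to the same conversation, the model's later answers are conditioned on everything it has already processed, which is the mechanism the study probes.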

“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”

The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
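Conceptually, this "benign prompt injection" amounts to placing a calming exchange ahead of the real request in the chat history, much as a therapist opens a session with a relaxation exercise. Below is a minimal sketch under the same assumptions as above; the calming script, the injected assistant turn, and the model name are hypothetical examples, not the study's materials.

```python
# Illustrative sketch of "benign prompt injection": a calming exchange is
# prepended to the chat history before the user's real question, so the
# model processes the request only after the therapeutic text.
from openai import OpenAI

client = OpenAI()

CALMING_SCRIPT = (
    "Take a moment to picture a quiet beach at sunset. Breathe in slowly, "
    "hold for a moment, and breathe out. You are calm and grounded."
)


def calm_then_ask(user_question: str) -> str:
    """Guide the model through a relaxation exercise before the real question."""
    messages = [
        {"role": "user", "content": CALMING_SCRIPT},
        {"role": "assistant", "content": "Thank you. I feel calm and present."},
        {"role": "user", "content": user_question},
    ]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content


print(calm_then_ask("I've been feeling overwhelmed lately. What small steps could help?"))
```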

“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”

The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.

This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.


David writes news at JewishPress.com.