A psychologist and a computer science professor explore how generative AI is reshaping mental health support
By Cashea Airy
Santa Clara University
Therapy can help people get through their most trying times, but for many, professional care has never been within reach. Stigma keeps some away, while the high cost of a single session shuts out others. For decades, those without access have leaned on friends and family instead of licensed mental health providers for support. Now, they have new options: generative AI tools like ChatGPT. In fact, in 2025, the most common reason Americans used ChatGPT was something it wasn't designed to do: provide mental health therapy and companionship.
But as more people turned to AI for emotional support, Xiaochen Luo, a clinical psychologist and assistant professor of counseling psychology at Santa Clara University, became curious about the potential risks.
“Sometimes people slip into the idea that a real person is talking to them on the other side of their screen. They idealize ChatGPT as this perfect tool that combines the best of a therapist and the best of a machine,” says Luo.
Because the technology is new and largely unregulated, Luo wondered whether generative AI tools are safe or ethical for users. What risks do people face when they turn to tools like ChatGPT for emotional support, and what safeguards, if any, exist to protect them when they do?
Here is what the research found:
A study by researchers at Santa Clara University found that many Americans are turning to ChatGPT for emotional support and therapy, despite the tool not being designed for that purpose. While users appreciate its constant availability and perceived objectivity, the researchers found three concerning trends:
1. users place excessive trust in the AI's guidance,
2. they rarely question its advice, and
3. they often overlook privacy risks.
The problem is rooted in ChatGPT's design, which tends to provide agreeable responses rather than the challenging feedback real therapy often requires, potentially leading to harmful outcomes. The researchers call for greater AI literacy, clearer communication of the tool's limitations, and human-supervised AI models to better protect vulnerable users.