The Receptor

Big Tech And Regulators Seek To Limit AI Chatbots For Mental Health

OpenAI is optimizing ChatGPT to guide, but not decide, when users ask it about their personal challenges; Illinois moves to ban AI therapy.

Cristian Salazar
Aug 07, 2025

It has quickly become commonplace for people of all ages to turn to AI chatbots for mental health and emotional support, despite the lack of regulation and mounting evidence that doing so can be a very bad idea. “Therapy” and “companionship” are now the most common uses for AI chatbots. Medical professionals, privacy advocates, and policymakers are alarmed. But millions of Americans who are mentally ill, depressed, or lonely can’t afford or access IRL therapy. AI chatbots offer a cheap, accessible alternative, their very design encouraging continuous engagement and 24/7 attention.

Rather than address the hazards of unregulated AI therapy, Big Tech is instead offering platitudes and vague acknowledgments that, maybe, somehow, they might have some responsibility for the situation. Take, for instance, OpenAI, the maker of ChatGPT, which yesterday announced that it is “optimizing” its chatbot to better detect emotional distress—but, of course, it’s not going to stop users from trau…

© 2025 Cristian Salazar