A young woman seeks help from the AI companion ChatGPT in New York this month. States are pushing to prevent the use of artificial intelligence chatbots in mental health treatment to protect vulnerable users. (Photo: Shalina Chatlani/Stateline)
Editor’s Note: If you or someone you know needs help, the U.S. National Suicide and Crisis Hotline can be reached by calling or texting 988. Online chat is also available at 988lifeline.org.
States are passing laws to prevent artificial intelligence chatbots like ChatGPT from giving young users mental health advice, following a string of cases in which people who sought therapy from AI programs harmed themselves.
Chatbots can still offer resources, refer users to mental health professionals or suggest coping strategies. But many mental health experts say the restrictions are the right call, because vulnerable users in crisis need the care of a professional: a person bound by the laws and regulations that govern their practice.
“I have met several families who have tragically lost children as a result of their children’s interactions with chatbots, which in some cases took an extremely deceptive, if not manipulative, approach in encouraging children to take their own lives,” said Mitch Prinstein, senior science advisor at the American Psychological Association and an expert on technology and children’s mental health.
“So in such egregious situations, it is clear that something is not working well, and we need at least some guardrails to help,” he said.
Although chatbots have been around for decades, artificial intelligence technology has become advanced enough that users can feel as if they are talking to a human. Chatbots cannot offer true empathy or mental health advice the way a licensed psychologist would, and they are designed to be agreeable, a potentially dangerous pattern for someone who is feeling suicidal. Several young people have died by suicide after interacting with chatbots.
States have passed a number of laws regulating the kinds of interactions chatbots can have with users. Illinois and Nevada have banned the use of artificial intelligence to provide behavioral health care outright. New York and Utah passed laws requiring chatbots to clearly inform users that they are not human. New York’s law also requires chatbots to detect signs of potential self-harm and direct the user to a crisis hotline or other interventions.
More regulation may be on the way. California and Pennsylvania are among the states that could consider legislation to regulate AI therapy.
President Donald Trump has criticized state-by-state regulation of artificial intelligence, saying it hinders innovation. In December, he signed an executive order that aims to support U.S. “global AI dominance” by overriding state AI laws and establishing a national framework.
Nevertheless, states are moving forward. Before Trump’s executive order was issued last month, Florida Republican Gov. Ron DeSantis proposed a “Citizens Bill of Rights on Artificial Intelligence,” which would, among other things, prohibit the use of artificial intelligence for “licensed” mental health therapy or counseling and provide parental controls for minors who may be exposed to it.
“The rise of artificial intelligence is the most significant economic and cultural shift taking place today; stripping states of the ability to protect people from these technologies is federal overreach and allows tech companies to run wild,” DeSantis wrote on the social media platform X in November.
“False Sense of Intimacy”
At a U.S. Senate Judiciary Committee hearing last September, some parents shared stories of their children, who died after prolonged interactions with AI chatbots.
Sewell Setzer III was 14 years old when he died by suicide in 2024 after becoming obsessed with a chatbot.
“Instead of preparing for the highlights of high school, Sewell spent his final months being manipulated and sexually groomed by chatbots designed by an artificial intelligence company to appear human, gain trust and keep children like him engaged indefinitely, displacing the real human relationships in his life,” his mother, Megan Garcia, said during the hearing.
Another parent, Matthew Raine, testified about his son Adam, who died by suicide at age 16 after months of conversations with OpenAI’s ChatGPT.
“We believe that Adam’s death could have been avoided, and we also believe that thousands of other teenagers using OpenAI may be in similar danger today,” Raine said.
Prinstein of the American Psychological Association said children are particularly vulnerable to AI chatbots.
“By agreeing with everything children say, chatbots create a false sense of intimacy and trust. This is really disturbing because children, especially, still have developing brains. This approach will be unfairly appealing to children in a way that may keep them from applying the reason, judgment and limits that adults would likely use when interacting with a chatbot.”
The Federal Trade Commission in September launched an investigation into seven companies that make artificial intelligence chatbots, questioning what they have done to protect children.
“AI-based chatbots can effectively mimic human characteristics, emotions, and intentions and are generally designed to communicate like a friend or confidant, which may prompt some users, especially children and teenagers, to trust and develop relationships with chatbots,” the FTC said in its order.
Companies such as OpenAI have responded by saying they are working with mental health experts to make their products safer and reduce the risk of self-harm among users.
“By collaborating with mental health experts who have real-world clinical experience, we have taught the model to better recognize distress, de-escalate conversations and, where appropriate, guide people toward professional care,” the company said in a statement last October.
Legislative efforts
With action at the federal level in limbo, efforts to regulate AI chatbots at the state level have met with limited success.
Dr. John “Nick” Shumate, a psychiatrist at Harvard University’s Beth Israel Deaconess Medical Center, and his colleagues reviewed legislation to regulate mental health AI systems across all states between January 2022 and May 2025.
The review identified 143 bills directly or indirectly related to regulating artificial intelligence and mental health. By May 2025, 11 states had passed 20 laws that the researchers found to be significant, direct and explicit in how they regulate mental health interactions.
They concluded that legislative efforts typically fall into four distinct categories: professional oversight, harm prevention, patient autonomy, and data management.
“You’ve seen safety regulations around chatbots and the artificial intelligence that comes with them, especially around self-harm and suicidal responses,” Shumate said in an interview.
New York passed one such law last year; it requires AI chatbots to remind users every three hours that they are not human. The law also requires chatbots to detect signs of potential self-harm.
“There is no denying that we are in a mental health crisis in this country,” New York state Sen. Kristen Gonzalez, the bill’s sponsor, said in an interview. “But the solution should not be to replace human support from licensed professionals with untrained AI chatbots, which can leak sensitive information and lead to harmful outcomes.”
In Virginia, Democratic Del. Michelle Maldonado is preparing legislation for this year’s session that would place limits on chatbots’ ability to communicate with users in therapeutic settings.
“The federal government has been slow to pass anything, or even to create legislative language around these issues. So we had no choice but to fill that gap,” said Maldonado, a former technology lawyer.
She noted that states have adopted privacy laws, restrictions on intimate photos shared without consent, licensing requirements and disclosure requirements.
New York state Sen. Andrew Gounardes, a Democrat who is sponsoring an AI transparency bill, said he has seen the growing influence of AI companies at the state level.
He added that he finds this concerning as states try to target AI companies on issues ranging from mental health to disinformation and more.
“They’re hiring former staffers to become public affairs officers. They’re hiring lobbyists who know legislators to kind of reach out to them. They’re organizing events, you know, on Capitol Hill at policy conferences to try to build goodwill,” Gounardes said.
“These are the richest and largest companies in the world,” he said. “That’s why we really can’t let our guard down for a moment against this kind of concentrated power, money and influence.”
Stateline reporter Shalina Chatlani can be reached at: schatlani@stateline.org.
This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes the Ohio Capital Journal and is supported by grants and a coalition of donors as a 501c(3) public charity.