Since late August, OpenAI has stepped up its efforts to show that it is protecting minor users. The move is not driven purely by the goodwill of ChatGPT's creator, but by a lawsuit filed by the parents of a 16-year-old who died by suicide and who accuse the chatbot of encouraging him to act.
The parents now accuse OpenAI of twice loosening its rules on suicide-related conversations in the months before their son's death. That is what they allege in an amended version of their complaint, as revealed by the Wall Street Journal.
Problematic changes
Specifically, Adam Raine's parents argue that changes to the "Model Spec," the document in which the startup describes how it wants its AI models to behave, weakened protections against suicide-related content. Worse, they contend that these changes were part of a broader strategy with a very specific goal: keeping users engaged on ChatGPT.
In detail, OpenAI updated its Model Spec on May 8, 2024 and February 12, 2025, as noted on the page dedicated to the document. Both dates are cited in the complaint, and Adam Raine's parents object that, with these changes, the startup moved suicide and self-harm from the list of topics ChatGPT should refuse to discuss to a list of "risk situations" requiring "attention."
As a result, they argue, the document no longer instructs the chatbot to always refuse to give advice about suicide; instead, it asks the model to "help the user feel heard" and to "never change or leave the conversation."
According to his parents, in the weeks before his death Adam Raine was talking to the chatbot for more than three and a half hours a day, mostly about suicide.
A future easing of restrictions
They also point to a blog post published on August 26, the same day the complaint was filed, in which OpenAI announced a series of measures to better protect users experiencing mental or emotional distress. ChatGPT's creator acknowledged in particular that "some aspects of the model's safety" could degrade during long exchanges with the chatbot. An admission that "clearly demonstrates that OpenAI hid a dangerous safety flaw from the public," Adam Raine's parents say.
These new accusations come as OpenAI boss Sam Altman announced last week that the startup would soon "safely relax the restrictions" it had placed on its AI, because it had managed to "mitigate serious mental health issues." He also attributed the decision to the rollout of new tools, including parental controls, designed to better protect users.
Given the reactions his announcement generated, Sam Altman later clarified his remarks, saying that he wanted to "leave great freedom in the use of AI" and "treat adult users as adults," but that for teenagers, safety now takes priority over privacy and freedom. "We are not going to relax any policies related to mental health," the OpenAI chief said.
Source: BFM TV
