Self-harm, suicide, mental illness: ChatGPT must also confront society's ills, for the worst and the best of its uses. After the death of a teenager allegedly pushed to suicide by generative AI, OpenAI reacted quickly.
The issue "has deeply shaken internal teams," the company acknowledged to several French media outlets, including Tech & Co. "This is an issue that touches anyone who has known someone affected, or who has themselves been affected, by these mental illnesses," said an OpenAI spokesperson. The company has therefore treated the matter with due seriousness and has already announced measures to be rolled out quickly.
"We discovered the more problematic uses a little late"
Since its launch at the end of 2022, ChatGPT has gained skills, performance and "humanity," to the point of sometimes being taken by its users for a genuine interlocutor, even a psychologist. OpenAI admits to having been somewhat caught off guard by the speed at which uses of AI have evolved, conceding between the lines some misjudgments about its most problematic uses. "This is something we perhaps discovered a little late, but that we are now aware of and take very seriously," OpenAI acknowledges; the company quickly published a blog post after the events.
Personal questions, emotional support, life advice: ChatGPT is now turned to by its users during extremely difficult periods of their lives, especially by adolescents. The Californian company says it has a team of engineers specializing in childhood issues and has begun placing safeguards around sensitive topics, which then trigger alerts within conversations. From now on, if a discussion turns to suicide, ChatGPT will display the national helpline number to encourage the user to reach out to people who can help.
Is it enough? Nothing is certain, but OpenAI wants to equip its creation with the tools to provide answers. Should this involve an age restriction or a precautionary principle? "A healthy debate to have," says the company, which is beginning to roll out a parental control system allowing parents to monitor how adolescents use the chatbot (use is prohibited for children under 13). But OpenAI also wants to highlight the positive contributions AI can make.
ChatGPT as moral support
"ChatGPT also helps people think through complicated moments in their lives, at work as well as in their personal life. When you don't know how to express what you want to say, it helps you reformulate your thinking, to take ownership of that moment," an OpenAI spokesperson elaborates.
"Of course, not everything is perfect, but there is also good in these tools. At OpenAI, we want to keep believing that we are working for the good of all," he adds. The company is thus playing the card of a humble and conscientious approach to these new forms of interaction. At no point has OpenAI denied its failure in the case of the deceased teenager, or in earlier cases raised about the drift in relationships between AI and its users.
On the contrary, it is looking for solutions, with the developer community but also with experts in these social issues, particularly emotional attachment to a machine. It all started more than a year ago with an MIT study on the subject, which opened research avenues that have since been pursued further. Input has also been sought from the health field.
OpenAI keeps repeating it: its goal is not to capture all of the user's attention to keep them on the platform as long as possible, but "to offer tangible help." The success of this stance will be measured not by time spent on the platform but by the help provided, according to the company.
Protective measures already exist, such as not giving self-harm instructions and instead using encouraging language to steer the user toward a point of support, or redirecting them when a suicidal intention or mental or emotional distress is suspected. Safety rules have established and reinforced protections for minors, blocking content that could be shocking (self-harm images, for example).
If physical violence towards others is suggested, human teams are alerted and can contact the police if necessary. In response to recent events, OpenAI has implemented fixes and improvements, automatically switching from the standard model to a safer reasoning model as soon as mental health questions arise.
GPT-5, more understanding, less susceptible
GPT-5, launched in August, is less prone to fostering emotional dependence and excessive flattery. It ships with better-integrated safety measures and a better grasp of what kind of usefulness and help to provide. "Compared to GPT-4o, it has reduced inappropriate responses by more than 25% in mental health emergencies," according to OpenAI.
However, by its creators' own admission, ChatGPT retains some "identified" limits, a "gray" area in which its engineers do not always manage to spot a concern. This refers to long interactions, which can reduce the reliability of safety measures as the model's safety training degrades. Inappropriate responses can then slip through; OpenAI has at times observed errors in its content blocking.
In any case, the speed of OpenAI's response rivals the speed of ChatGPT's progress. Everything will be rolled out gradually within GPT-5. The American AI giant's roadmap also includes the creation of a well-being council, a group of external experts in psychiatry, child protection representatives and any other field relevant to the subject, tasked with improving the tools. This follows the deployment of the HealthBench evaluation tool, which can assess different models to understand how they react to this new type of interaction. OpenAI is even working with Anthropic on cross-testing GPT-5 and Claude 4 against safety problems to identify failures.
Securing its models is "continuous work, never finished" for OpenAI, which says it will invest massively in new features to strengthen all of this. These will include trusted contacts whom the user can reach via message, or whom ChatGPT can alert, with the user's consent, in the most serious cases, particularly for the youngest users, who are the subject of specific development to meet their uses and "unique needs."
Source: BFM TV
