
ChatGPT hears the suicidal thoughts of more than a million people a week, but everything is fine: OpenAI says it has “managed to alleviate the serious mental health problems” it faces

According to figures from OpenAI, more than a million users open up to its chatbot about suicidal thoughts each week, and more than 560,000 show “possible signs of psychiatric emergencies.” Nevertheless, Sam Altman’s startup claims to have improved its handling of conversations related to mental health.

OpenAI assures us: conversations in which users discuss suicidal thoughts with ChatGPT remain “extremely rare cases.” Yet even a small percentage can represent a significant mass of people given that the chatbot has reached 800 million active users per month…

On Monday, October 27, Sam Altman’s startup published a blog post promoting its efforts to improve its models’ responses to users with mental disorders. More than 170 experts were consulted for the work.

“Higher levels of emotional attachment”

According to this report, 0.15% of ChatGPT’s active users discuss suicidal thoughts with the tool each week, meaning over a million people a week talk about suicide on ChatGPT. Their conversations reportedly include explicit signs of suicidal planning or intent.

Additionally, the company claims that a similar percentage of users show “higher levels of emotional attachment to ChatGPT.” Welcome to the uncanny valley…

Worse still, 0.07% of monthly active users, or about 560,000 people, show “possible signs of psychiatric emergency” related to mania or psychosis.

In recent months, OpenAI’s chatbot has come under fire from all sides for its handling of mental health-related conversations. The company has been sued by the parents of a 16-year-old boy, who accuse ChatGPT of encouraging their son to take his own life.

For its part, the US Federal Trade Commission (FTC) asked seven companies that offer chatbots, including OpenAI, to provide information on how they “measure, test and monitor the possible negative effects” of these AIs on young people.

GPT-5, more empathetic

As a result, since late August, the company has stepped up its efforts to demonstrate good faith and protect its users. On October 14, Sam Altman assured users that OpenAI had “managed to alleviate the serious mental health problems” associated with ChatGPT.

Parental controls have also been implemented, and the company hopes to introduce an age-detection system to automatically identify children using ChatGPT. Whenever a user’s age is in doubt (for instance, when they are not identified by an account), OpenAI commits to steering them to a child-appropriate version of its chatbot.

The new blog post goes in the same direction. OpenAI claims to have strengthened GPT-5’s training: the newly updated version offers approximately 65% more “desirable” responses to mental health problems than the previous one.

In an evaluation of AI responses to conversations about suicide, OpenAI says its new GPT-5 model complies with the company’s desired behaviors 91% of the time, compared with 77% for the previous model. ChatGPT reportedly now offers empathetic listening, safety reminders, and referrals to specialized helplines more often.

At the same time, Altman indicated that OpenAI would relax certain restrictions, allowing adult users to engage in sexually explicit conversations with the chatbot. A paradoxical decision.

Author: Salome Ferraris
Source: BFM TV
