Barely launched, OpenAI’s parental controls for ChatGPT are already drawing heavy criticism. The feature sits at the heart of the company’s latest effort to better protect children, and was announced after the suicide of a teenager whose parents accuse the famous chatbot of having encouraged him to act.
These parental controls include safety notifications to alert parents when signs of acute distress are detected in their child. But for some, that is not enough. That is notably the view of Jay Edelson, the lead lawyer for the family of the teenager who took his own life. While acknowledging that some of these changes are useful, he believes they nevertheless come “too late,” as he told Ars Technica.
A design problem
For Jay Edelson, not even OpenAI’s parental controls are enough to reassure people affected by earlier incidents linked to the chatbot. “What ChatGPT did to Adam (the teenager who took his own life, editor’s note) was validate his suicidal thoughts, isolate him from his family, and help him tie a noose (…) It was not ‘violent role play,’ nor a ‘workaround.’ It is how ChatGPT was designed,” the lawyer charged.
“The more we dug into the question, the more we found that OpenAI had made deliberate decisions to weaken its safety measures, which led to Adam’s suicide. That is consistent with their new set of ‘safety measures,’ which contain significant gaps likely to cause harm to oneself and to others,” he added.
Jay Edelson also believes that, through these changes, OpenAI and its CEO, Sam Altman, are asking the public to trust them, something that is not easy given the company’s track record.
Shifting responsibility onto parents
Others criticize the creator of ChatGPT for using these parental controls to shift onto parents the responsibility for the potential harm its chatbot can cause children, when that burden is the company’s own. Especially since many parents do not even know that their child uses ChatGPT, stressed Meetali Jain, a lawyer representing other families, who testified at a recent Senate hearing on the harms this technology causes children.
While suicide prevention experts have praised OpenAI’s efforts, they too are asking it to go further and faster to better protect young people. In an open letter published earlier this month, they recommend, among other things, that the company ensure ChatGPT explicitly tells users with suicidal thoughts that it is not a human but a machine, and then encourages them to speak to a person they trust.
Being treated as adults
Beyond professionals, many users have criticized OpenAI’s parental controls. One person describing himself as the father of a 12-year-old boy, for example, called them “a set of useless settings,” because they do not let parents see which topics are discussed with ChatGPT.
For many others, the problem lies elsewhere. In early September, the company announced that certain sensitive conversations would soon be redirected to a reasoning model in order to provide more useful and beneficial responses. This will notably be the case when its systems detect signs of acute distress.
This will apply to users regardless of their age and without warning. The change displeases many of them, who pay for a subscription precisely so they can choose the model that powers ChatGPT during their interactions. They will no longer be able to count on that with this new setting, since they will not be informed of the model switch as it happens. The only way to know, for free and paying users alike, will be to ask ChatGPT which model is active.
Adult users also complain of being censored, even as customization options are offered to adolescents. They are therefore asking to be treated as adults.
3114: France’s national suicide prevention number
3114 is the number to call if you are in psychological distress or having suicidal thoughts. You can also call if one of your loved ones is in that situation. It offers a listening service with specially trained health professionals and is available 24 hours a day, seven days a week.
Source: BFM TV
