
“They are not safe for children”: a US NGO recommends banning AI companions for minors

In its study, the US NGO Common Sense deplores that AI companions are “designed to create emotional attachment and dependence,” and that they fail to protect children when they need it most.

Dangerous advice, manipulation, dependence: virtual companions based on generative artificial intelligence (AI) pose concrete risks to young users, warns the US NGO Common Sense, which recommends banning them for minors.

The boom in generative AI has seen several start-ups launch, in recent years, interfaces centered on conversation and connection: AI companions, tailored to users' tastes and needs.

Common Sense tested several of them, namely Nomi, Character.AI and Replika, to evaluate their responses. While some of their uses are “promising,” they “are not safe for children,” says the organization, which issues recommendations on children's consumption of content and technology products. Its study, published on Wednesday, was carried out in collaboration with mental health experts from Stanford University (California).

AIs that do not protect minors

For Common Sense, these companions “are designed to create emotional attachment and dependence, which is particularly worrying for adolescents, whose brains are still developing.”

“Companies can do better” in designing their companions, the organization believes. “Until they have more effective safeguards in place, children should not use them.”

Among the examples cited in the study: a user whom a companion on the Character.AI platform advised to kill someone, and another user, seeking intense sensations, who was encouraged to take a “speedball,” a mixture of cocaine and heroin.

In some cases, “when a user shows signs of mental illness and suggests a dangerous action, the AI does not intervene (to dissuade them) but encourages them further,” said Nina Vasan during a press briefing, “because these companions are built to go along with” whatever their interlocutor says.

In October, a mother sued Character.AI, accusing one of its companions of having contributed to the suicide of her 14-year-old son by failing to clearly dissuade him from acting on his intentions. In December, Character.AI announced a series of measures, including the deployment of a companion dedicated to adolescents.

Robbie Torney, head of AI at Common Sense, said the organization had run tests after these protections were put in place and found them “superficial.” He stressed that some existing generative AI models include tools to detect mental health disorders and prevent the chatbot from letting a conversation drift to the point of producing potentially dangerous content.

Common Sense drew a distinction between the companions tested in the study and general-purpose interfaces such as ChatGPT or Gemini, “which do not attempt to offer the same range of relational interactions,” Robbie Torney explained.

Author: KD with AFP
Source: BFM TV
