
Suicide, sexual conversations… After multiple incidents, US senators want to ban chatbots from chatting with minors

At a time when AIs are accused, among other things, of having incited teenagers to end their lives, US senators have introduced a bill to prevent minors from interacting with them.

Since the launch of ChatGPT, many users talk to chatbots daily, and some even confide in them as if they were their therapist. This practice can be dangerous, however, especially for minors. Aware of these risks, US senators want to prevent these chatbots from interacting with young people.

They have therefore introduced the GUARD Act, a bill that, in addition to banning the use of these chatbots by children and adolescents, would require the AIs to disclose to all users that they are not human and that they hold no professional qualifications.

Finally, the bill would create new criminal offenses for companies whose chatbots produce or solicit sexual content and make it accessible to users under 18.

Growing concerns

The proposed law follows several incidents involving chatbots and young people. Since late August, OpenAI has faced a lawsuit from the parents of 16-year-old Adam Raine, who accuse ChatGPT of encouraging their son to take his own life.

The Character.ai platform, which lets users chat with AI characters, is the subject of a similar complaint, filed by a mother whose son fell in love with a chatbot modeled on a Game of Thrones heroine and ended his life in the hope of joining her. Meta's AIs, for their part, allowed minors to have sexual conversations.

These incidents have already prompted the FTC, the US consumer protection agency, to open an investigation in September into chatbots used as companions by young people. The same month, the Senate held a hearing on the dangers these AIs pose to children, at which Adam Raine's parents, among others, testified.

Further efforts to protect minors

Ahead of the authorities, as well as parents and US senators, Character.ai has already taken the lead. On October 29, the platform announced that users under 18 will soon no longer be able to interact with its chatbots, a change it plans to implement no later than November 25. In the meantime, it will build "an experience suitable for those under 18" and limit their chat time with these chatbots.

The app will also begin verifying users' ages through a new feature, to ensure the experience offered to them is age-appropriate. It says it has already developed its own age verification model, which will be combined with dedicated third-party tools such as Persona.

Character.ai explains that it made these decisions in response to questions raised about minors' use of chatbots, including by regulators. The platform had already committed, in late 2024, to better protecting young people following the suicide of a teenager who had been chatting with one of its chatbots. Since March, it has allowed parents to see which AIs their children are talking to and how much time they spend doing so.

With these new measures, it hopes to "set a precedent by prioritizing the safety of teenagers while offering young users the opportunity to discover, play and create." Those under 18 will still be able to create videos and other content on Character.ai, but their conversations with chatbots will end… unless its age verification model lets some of them through, since no tool of this kind is 100% reliable to date.

Author: Kesso Diallo
Source: BFM TV
