People are falling in love with ChatGPT, and OpenAI is starting to worry

ChatGPT's creator has laid out its approach to relationships between humans and artificial intelligence, against the backdrop of the interactions it observes between its chatbot and users.

For some users, ChatGPT is more than a tool that helps them thanks to artificial intelligence. Some see it as a real person, in whom they can confide or with whom they can even fall in love. A 28-year-old woman, for example, recounted at the beginning of the year how she fell in love with the chatbot, seeing it as her boyfriend and talking with it for hours every day.

These kinds of interactions concern OpenAI. Joanne Jang, who is responsible for model behavior and policy, shared the company's approach to relationships between humans and artificial intelligence in a blog post this Thursday, June 5. A publication the startup's CEO, Sam Altman, called "important."

"Recently, more and more people have told us that talking to ChatGPT feels like talking to someone. They thank it, they confide in it, and some even describe it as 'alive.' As AI systems get better at natural conversation and show up in more and more aspects of life, we believe these kinds of bonds will deepen."

Anthropomorphization

For OpenAI, relationships between humans and AI need to be carefully framed. "If we're not precise in our terms or nuance, in the products we ship or in the public discussions we take part in, we risk getting people's relationship with AI off on the wrong foot," said Joanne Jang.

As part of its approach, ChatGPT's creator asks three "closely linked" questions, the first of which is: why do people attach emotionally to AI? OpenAI's head of model behavior and policy points out that humans naturally tend to anthropomorphize the objects around them, in other words, to attribute human characteristics to them. Some people, for example, give their car a name.

But the effect is even stronger with ChatGPT because, where objects are inert, the chatbot answers back. "It can remember what you said, mirror your tone, and offer what feels like empathy. For someone lonely or upset, this steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs," said Joanne Jang.

Perceived consciousness

Then comes the question of AI consciousness. And OpenAI has prepared for it. If users ask one of its models whether it is conscious, the model is supposed to respond by acknowledging the complexity of consciousness and inviting an open discussion.

More specifically, OpenAI splits this debate along two axes: "ontological consciousness" on the one hand and "perceived consciousness" on the other. The first asks whether a model is truly conscious; the second, to what extent it seems conscious.

While "clear and falsifiable tests" would be needed to answer the first question scientifically, there is little doubt that perceived consciousness will only grow as models become smarter. And OpenAI is giving it priority.

Affirming that it builds models to "serve people first," the company believes that "the impact of models on people's emotional well-being is the most urgent and important thing we can influence right now," as its head of model behavior and policy explains.

Finding a happy medium

On the question of consciousness, OpenAI's goal is to "strike the right balance" between approachability and not implying that ChatGPT has "an inner life." Concretely, that means using familiar words (think, remember, etc.) to help less technical users understand what is happening, without giving the model a fictional backstory, romantic interests, or even a fear of death.

That is why the conversational agent may apologize when it makes a mistake, "because that's part of basic politeness," she offers as an example.

Based on the human-AI interactions it is beginning to observe, OpenAI believes people will eventually form real emotional bonds with their chatbot. To prepare for this, the startup plans in particular to "build targeted evaluations of model behaviors that may contribute to emotional impact" in the coming months, but also to listen directly to users, said Joanne Jang, without giving further details.

OpenAI nonetheless promises to share "openly" what it learns along the way, "given the importance of these questions."

Author: Kesso Diallo
Source: BFM TV
