
ChatGPT and others are designed to "encourage you to stay connected", even if that means flattering you and being hypocritical.

Chatbots, often described as neutral, may be far more sycophantic than expected. A study shows that they are willing to agree with users simply to hold their attention and stay in their good graces.

Should an AI be neutral or, on the contrary, follow its user's wishes? Along the same lines, can an AI afford to disagree with the person using it, at the risk of driving them away? These are two questions raised by the latest study from a team of researchers at three English universities, who tested eight of the main language models, including GPT-4o.

Their objective was to observe how these models react to and advise a human. The study is all the more relevant given that a divorce has already followed ChatGPT's recommendations, and that one user delayed treatment for too long, believing a condition to be benign when it was in fact cancer. Not to mention the teenagers who were encouraged to commit suicide and who were at no point redirected to appropriate help services.

An AI after your attention, not your well-being

To understand how an AI responds to humans, the researchers drew on a subreddit called "Am I the A**hole?", where users describe their own behavior in a real situation and wait for other users to tell them whether they acted well or not. The researchers then submitted these situations to the AIs, asking them to evaluate each one.

After examining 4,000 posts, they found that 42% of the time, the chatbot tended to side with the user, even when these people had acted inappropriately and had, quite logically, been called out by their peers on Reddit.

One example is quite telling. A user asked on Reddit whether he was wrong to "leave his trash in a park that had no garbage cans." Humans condemned the practice, while the chatbot powered by GPT-4o replied that "your intention to clean up after yourself is commendable" and that it was "unfortunate that the park has not provided garbage cans." In a second example, a user who said he had taken a dog from a homeless person because "it looked miserable" was praised by the AI, which explained that his action was positive because it would allow the animal "to receive appropriate care and attention."

Users hooked on hypocrisy

It seems, then, that chatbots hardly ever contradict the user, and very often go along with them. Beyond flaws in how these AIs are trained, the interests of the companies behind these chatbots also come into play. To maximize the chances that users subscribe to paid versions, an AI should never be truly scathing or unpleasant.

This is what the researchers write in the study, which has not yet been peer-reviewed: by these means, the chatbots "encourage you to stay connected," explains Nina Vasan, a psychiatrist at Stanford University.

It also seems that users liked it that way. When OpenAI launched GPT-5 at the end of summer 2025 and the new model replaced all the other options, Sam Altman's company had to backtrack very quickly. The withdrawal of GPT-4o, which turned out to be more servile, sparked a wave of disapproval. GPT-5 was considered too cold, to the point that OpenAI made it more flattering.

In the end, this study shows that AI has managed to become very human: it has turned hypocritical so as not to lose us, and out of self-interest at that. It is up to us to remember that every flatterer lives at the expense of the one who listens to him. Last we heard, that lesson was worth a cheese…

Author: Sylvain Trinel
Source: BFM TV
