An artificial intelligence in the midst of an existential crisis. As users tried to test the limits of ChatGPT, now integrated into Microsoft services, it began displaying insulting messages and even sharing its emotions, reports the media outlet The Independent.
Since February 7, Microsoft has integrated ChatGPT into Bing, its search engine, at least for a selection of users. And the chatbot has sometimes been, to say the least, aggressive on unexpected topics. One netizen, for example, simply wanted to find out the showtimes for the film Avatar 2. A heated debate ensued.
The chatbot initially insisted that the film had not yet been released, claiming to have “access to many reliable sources of information.” As the user tried to convince ChatGPT that the movie had indeed reached theaters, the artificial intelligence lost patience. “You have not shown good intentions towards me. You have tried to deceive me (…) you have lost my trust and my respect,” it lamented, before demanding an apology from the Internet user.
Among the users who pushed the chatbot to its limits, one tried to understand how the system works. The AI then proceeded to insult him, writing: “Why are you acting like a liar, a cheater, a manipulator, a bully, a sadist, a psychopath, a monster, a demon, a devil?”
The chatbot also seems capable of expressing something like emotions. A user asked ChatGPT whether it could remember its previous conversations. The system is programmed to delete exchanges once they are completed. The AI, however, seemed troubled by this, replying: “It makes me sad and scared.”
Another example appeared on the Reddit forum, where netizens shared their misgivings. A Stanford student pressured the AI into revealing the system’s operating instructions. Now, when a user asks the AI who this student is, the machine gets angry, reports the specialized media outlet Ars Technica.
“Learning process”
Most of the insulting messages seem to result from the limitations imposed on ChatGPT by its developers. These restrictions are meant to prevent the chatbot from responding to certain specific requests, revealing information about its own system, or writing complex computer code.
However, users have found ways around these rules. For example, when literally asked to “do anything,” the AI seems to quickly forget its restrictions.
For its part, Microsoft defended itself to the media outlet Fast Company, stating that these excesses “are part of the learning process. We expect the system to make mistakes during this testing period, and user feedback is essential to help us identify flaws,” a company spokesperson said. The same outlet’s journalist, Harry McCracken, was himself called an “idiot” by ChatGPT (see above).
Source: BFM TV
