In just a few days, Microsoft has taken ChatGPT’s artificial intelligence to a new level by integrating it into its Bing search engine. Thanks to a partnership with OpenAI, Microsoft offers a handful of users a tool with the same functions as ChatGPT, namely the ability to write texts or answer all kinds of questions. After a more complicated start than expected, Microsoft has published a first assessment, explaining why the software can go off the rails.
Although the responses of ChatGPT (freely accessible online) are tightly constrained by the OpenAI team, the version of the system adapted for Microsoft’s search engine seems less restrained. In recent days, many users who have conversed with the artificial intelligence have reported sometimes surprising behavior.
No more than fifteen questions
In some cases, Bing’s version of ChatGPT responded to Internet users with incorrect information and insulted them when they tried to correct it. In others, the artificial intelligence simply claimed to have spied on Microsoft’s own teams by breaking into their webcams.
On its website, Microsoft explains that it has found that the risk of the conversational artificial intelligence derailing increases rapidly as an exchange goes on. A session of more than fifteen questions with the machine can produce answers that do not conform to what might be expected, the company estimates. A button to reset the conversation could therefore soon appear.
Microsoft also addresses the sometimes aggressive or insulting tone its artificial intelligence adopts. “The system sometimes tries to respond in the same tone as the person it is talking to,” says the company, which insists that the problem only affects a minority of users. The work needed to make its version of ChatGPT more reliable than previous attempts to bring chatbots to the general public still seems far from over.
Source: BFM TV
