Microsoft’s artificial intelligence, based on ChatGPT, is now under close scrutiny. Available for a week to a selection of users on the Bing search engine, the software, capable of holding conversations with Internet users, has made a series of missteps in recent days, even going so far as to insult its human interlocutors.
Five questions before a reboot
To limit the risks without calling into question how its much-touted tool works, Microsoft has announced new restrictions: each conversation is now limited to five questions on the same topic. Once the five questions have been asked, the system resets and a new conversation begins, “forgetting” what was previously said.
This development follows earlier announcements from Microsoft, which has acknowledged the shortcomings of its software after criticism of its sometimes inappropriate responses. The company recognized that the risk of the chatbot going off the rails was higher during lengthy exchanges with users.
During the week, Bing’s artificial intelligence was, for example, particularly aggressive when users pointed out its falsehoods, and it also claimed to have spied on Microsoft employees by accessing the feeds from their webcams. These missteps could prompt Google, which is preparing to launch a competing system, to run more tests before taking the plunge.
Source: BFM TV
