Bad faith or an exercise in transparency? Microsoft, which has integrated GPT-4 into the latest version of Bing and is building new tools into its software with Copilot, must also face criticism over the factual errors produced by artificial intelligence (AI). But for Microsoft, these errors are "usefully wrong," reports CNBC.
On March 16, Microsoft presented Copilot, a technology powered by ChatGPT-style AI that can, for example, generate a PowerPoint presentation autonomously in a few seconds from a simple document or email.
During the presentation, Microsoft executives acknowledged a limitation of the software: it produces inaccurate responses. But they chose to present this flaw as something useful.
According to Microsoft, the tool saves users time; they simply have to be careful and check that the generated text does not contain errors. Users can correct inaccuracies themselves, modify paragraphs, or ask the AI to revise certain parts.
Researchers' concerns
This approach worries researchers specializing in new technologies. They fear that users will place too much trust in the AI, taking everything it writes at face value, especially on sensitive and important topics such as health.
Jaime Teevan, director of science and technology at Microsoft, told CNBC that if Copilot "is wrong, biased, or misused," Microsoft has measures in place.
Initially, Microsoft will test the software with only 20 client companies in order to study how it is used in practice. "We will make mistakes, but we will correct them quickly," said the Microsoft scientist.
Source: BFM TV
