
United States: the AI used by the health department cites scientific studies that do not exist

Elsa, the AI tool that is supposed to improve productivity at the US agency responsible for public health, hallucinates. The chatbot invents studies, distorts research, and struggles to answer simple questions, which slows employees down.

An artificial intelligence that slows you down. At the beginning of June, the Food and Drug Administration (FDA), the US agency in charge of public health, made its AI tool, Elsa, available to its employees.

The chatbot is meant to make employees more productive and to accelerate the approval of new medications. It can, for example, assist with clinical evaluations, compare different medications and their labeling, or generate meeting summaries.

“The AI revolution has arrived,” the Secretary of Health, Robert F. Kennedy Jr., enthused in June during a congressional hearing. Even better: “AI will speed up drug approvals very quickly,” he said in an interview. But not everything went as planned. Three employees told CNN that the AI invented nonexistent studies.

“I waste a lot of time”

Depending on the case, the tool has invented studies outright or misrepresented perfectly real research. In a word, it hallucinates. The chatbot does not hesitate to claim that certain people work at the FDA when they do not, or that an area falls outside the agency’s responsibility when in fact it does.

“Anything you don’t have time to double-check is unreliable,” one source told the US outlet. “It hallucinates confidently.” Indeed, when Elsa summarizes page after page of research on a new medication, “there is no way of knowing” whether it has distorted a study or omitted something a human reviewer would have considered important.

Employees asked Elsa easy questions to test the reliability of its answers. These tests often ended in failure. The chatbot, for example, got the number of drugs in a certain class authorized for children wrong. Nor could it count the number of products carrying a particular label.

To its credit, the tool acknowledges its errors when they are pointed out. “But that still doesn’t help you answer the question,” one employee said. A real thorn in the side of an agency already suffering from the Trump administration’s budget cuts.

For his part, Marty Makary, the FDA Commissioner, says he has “not heard of these specific concerns,” while noting that the use of Elsa is optional. The agency says it has put safeguards in place around employees’ use of the tool, while acknowledging that, like any AI tool, it sometimes gives incomplete or erroneous answers.

Elsa, “like many generative models,” can hallucinate nonexistent studies, acknowledged Jeremy Walsh, head of AI at the FDA. He remains optimistic, however: the tool is said to be improving.

Author: Salome Ferraris
Source: BFM TV

