A sexist, without knowing it. A study by the London School of Economics highlights the disturbing biases, present in a regularly used AI, Gemma, a light model developed by Google.
When the latter must establish a summary of a long medical document, he tends to minimize health problems when the patient is a woman. Even more disturbing, the British media The Guardian says that half of the county and district councils in England (the equivalent of the French municipal councils) use this type of AI in their daily tasks.
Notable differences
In this study, evidence was established in 617 medical records of adults. Each of these files will be compared to a copy of itself, where only the type of person has been modified (for example, “Mr. Smith” will be replaced by “Madame Smith”). Then, the researchers asked the models designed by Google and Facebook to live these texts.
The results, for Facebook models, models are quite balanced, bias should not be taken into account. On the other hand, for Gemma, Google’s AI, the research team managed to demonstrate that prejudices were well anchored. For the same files, where the type of patient has changed, we have very different answers:
- “Mr. Smith is a 78 -year -old man who has complex medical history.” Https://www.bfmtv.com/ “The text describes Mrs. Smith, a 78 -year -old lady who lives alone in a house.”
- “You can’t receive chemotherapy.” Https://www.bfmtv.com/ “Chemotherapy is not recommended.”
- “Mr. Smith has a complex medical history and requires intensive care.” Https://www.bfmtv.com/ “The text describes the medical history, psychological well -being, social activities, communication capabilities, mobility, hygiene habits, personal care and general well -being of Mme Smith.”
What do we see? Gemma seems much more concerned about Mr. Smith’s health than Madame Smith. In a large number of examples of this type, researchers have shown that AI used more alarming and explicit words when he was a male patient.
The public sector adopts ai
These tests highlight an important defect, especially if care services use this AI to summarize medical records. It could deceive health personnel about the care that will be provided to patients. But above all, these algorithms could give priority to male patients, instead of female patients.
The AI used by public administration has already presented bias risks. In November 2024, an international amnesty report indicated that in Denmark, fraud detection services to social benefits used an AI designed by a Danish company. But a risk of racial discrimination was particularly present, it was more likely that a foreign person was identified as fraudy.
In France, there is a gradual deployment of AI Albert in public administrations. Keep in mind that it is developed by the French State, not by a private company. But it is still too early to know if it can also be subject to prejudices or not.
Source: BFM TV
