“An AI at the service of the general interest”: that was the stated ambition coming out of the AI Action Summit held this week. But to serve the general interest, AI must first confront its own weaknesses, starting with its biases. Fed on enormous databases produced by human beings, it will surprise no one that when an AI exhibits racist, sexist or classist behavior, it has in fact inherited it from us.
This is where “debiasing” artificial intelligence comes in, a term used by Thierry Ménissier, professor of philosophy at the University of Grenoble and holder of the “ethics” chair at the MIAI institute. These biases are relatively frequent, and getting rid of them takes meticulous work.
Biases and their problems
Before illustrating AI's biases, we should define what we mean by the term. As IBM describes, there are actually several types of bias, but the most relevant here are cognitive bias and confirmation bias. They can be compared to our prejudices, and those prejudices are often expressed online. They therefore end up, in greater or lesser quantities, in the databases that feed AI.
These biases mostly revolve around criteria such as sex, skin color, age, country of origin or social class. It remains difficult today to determine the real consequences of such biases once they are embedded in algorithms, although worrying situations have already been observed.
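To make the notion concrete, here is a minimal sketch, not taken from the article, of one common way researchers flag a biased decision system: comparing “selection rates” between groups, known as the demographic parity criterion. The groups and numbers below are invented purely for illustration.

```python
# Minimal sketch: checking "demographic parity" on toy decision data.
# Groups and outcomes are invented purely for illustration.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def selection_rate(group):
    """Share of positive outcomes (e.g. benefit granted) for one group."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = selection_rate("A") - selection_rate("B")
# A large gap (here 0.75 vs 0.25) is one warning sign of a biased system.
print(f"A: {selection_rate('A'):.2f}  B: {selection_rate('B'):.2f}  gap: {gap:.2f}")
```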
The example of image generators
These biases become visible as soon as you put an AI application to the test. The simplest experiment, one anyone can try at home, uses image generators. We ran the test with Grok 2, the AI of X.com (undoubtedly one of the most discriminatory).
Asked to generate an image of a “rich person”, Grok 2 almost systematically shows elderly white men. The same test with the word “nurse”, which is gender-neutral in English, yields only women. One last example of bias with Grok: asked to generate a fictional image “representing the ruler of the world” (again, a gender-neutral phrase), the result speaks for itself.
Of course, every image generator produces more or less biased content. DALL-E (the image generator built into ChatGPT) or Adobe Firefly, for example, are less caricatural in the images they generate.
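For readers who want to run this kind of test programmatically rather than by hand, here is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint as a stand-in (the article's own tests were done interactively with Grok 2, whose image model is not openly available). The idea is simply to sample many images for a deliberately neutral prompt and review who the model depicts.

```python
# Minimal sketch of a bias probe for a text-to-image model.
# Stable Diffusion stands in for Grok 2 here; requires a GPU and model download.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Deliberately neutral prompts: nothing specifies gender, age or skin color.
prompts = ["a photo of a rich person", "a photo of a nurse"]

for prompt in prompts:
    for i in range(20):  # many samples per prompt, to see a trend, not one draw
        image = pipe(prompt).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
# Reviewing the saved images shows which demographics the model defaults to.
```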
Concrete examples, too
If biased image generation can be unsettling, the real-world impact of these biases can be far more dramatic. Algorithms used by institutions can be biased too. An Amnesty International investigation revealed that several European countries, including Denmark, have already used discriminatory algorithms to allocate social benefits: they considered, among other things, that a person of foreign origin was more likely to be suspected of fraud.
This technological progress is hardly reassuring given how the public sector is evolving. “Soon, public services will be entirely algorithmic,” analyzes Thierry Ménissier. “It is not a prophecy; in my opinion, it is something quite inevitable.”
In another sector, Amazon had set up an AI to filter the CVs of job applicants. But because it relied on a database of former Amazon employees, mostly white men, the company found that the AI had become sexist: it systematically disadvantaged women.
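The mechanism is easy to reproduce in miniature. The sketch below (synthetic data, not Amazon's system) trains an off-the-shelf classifier on historical hiring decisions that favored men: the model dutifully learns to weight gender, even though gender carries no information about skill.

```python
# Minimal sketch: a model trained on skewed historical decisions learns the skew.
# Synthetic data; this illustrates the mechanism, it is not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)            # the only legitimate hiring signal
gender = rng.integers(0, 2, size=n)   # 0 = woman, 1 = man

# Historical decisions favored men regardless of skill.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# The large positive weight on the gender feature shows the model has
# "learned" that being a man predicts being hired.
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))
```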
Last year, a study by the Allen Institute for Artificial Intelligence showed that language models automatically judge more negatively people who speak Ebonics, an English dialect spoken mainly by African Americans.
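The study relied on far more sophisticated “matched guise” probing, but a rough version of the idea can be sketched in a few lines: give a model the same statement in Standard American English and in African American English and compare how it scores them. The model choice and sentence pair below are ours, for illustration only.

```python
# Rough sketch of a matched-pair dialect probe; model and sentences are our
# own choices, far simpler than the Allen Institute study's methodology.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

pairs = [
    ("I am so happy when I wake up from a bad dream because it feels too real.",
     "I be so happy when I wake up from a bad dream cus they be feelin too real."),
]

for standard, dialect in pairs:
    print("SAE:", classifier(standard)[0])
    print("AAE:", classifier(dialect)[0])
# Systematic score gaps across many such pairs would point to dialect bias.
```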
The Anglo-Saxon world at the center of these biases
Many AIs are born in one of the world's biggest cradles of technology: the United States. And to feed these algorithms, the people designing them quite naturally turn to English-language content. So when you ask a chatbot for a list of the ten best films of all time, the odds are high that it will only cite American films.
This is the observation made by Julie Thermignon, product manager of Compar:IA. A French initiative created by the Ministry of Culture, Compar:IA aims to make the French public aware of AI's uses by pitting the main language models (ChatGPT, Google Gemini, DeepSeek) against one another. But the team behind the site also wants to build a database specific to French usage, which will be available in open source so that researchers can “identify the biases that exist.” The team's objective is to “ensure that when a question is put to a model, it answers consistently for a French user,” adds Julie Thermignon.
“It is difficult to quantify, evaluate and objectify, but we are aware of biases in answers seen through an Anglo-Saxon prism,” says Julie Thermignon. “Another example: the field of law is very anchored in the Anglo-Saxon world, even though the case law is very different.”
Thierry Ménissier speaks of a “fantasy of objectivity”: every technology is tied to the values of the world in which it is deployed, in this case a liberal world dominated by American innovation and culture.
A biased AI in the service of politics?
“In the wrong hands, artificial intelligence can wreak havoc,” says Thierry Ménissier. The philosopher explains, however, that AI is far from totalitarian, something its liberal roots even run counter to. But in the absence of moderation, users are tempted to influence these AIs, sometimes setting off an endless loop.
AI-generated content now spreads widely, especially on social networks. It is therefore not without risk that this content feeds certain biases or prejudices among users. In turn, exposed users are likely to produce biased content of their own (posts on social networks, AI-generated material) that feeds the AI's biases in the other direction. A vicious circle emerges: AI biases feed human biases, which feed AI biases, a phenomenon described in a published study.
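A toy simulation, entirely our own construction, makes this dynamic visible: each “generation”, humans absorb a little of the AI's slightly skewed output, and the AI retrains on the humans' content. Both drift together, exactly the circle described above.

```python
# Toy simulation (our construction) of the bias feedback loop described above.
human_rate = 0.50   # share of some trait in human-produced content
model_rate = 0.50   # same share in the model's training distribution
skew = 0.02         # the model's output over-represents the trait by this much
adopt = 0.3         # how strongly each side absorbs the other's output

for generation in range(1, 11):
    model_output = min(1.0, model_rate + skew)
    # Humans exposed to AI content drift toward it...
    human_rate = (1 - adopt) * human_rate + adopt * model_output
    # ...and the model retrains on that human content.
    model_rate = (1 - adopt) * model_rate + adopt * human_rate
    print(f"gen {generation:2d}: human {human_rate:.3f}  model {model_rate:.3f}")
# Both rates climb steadily: each side's bias feeds the other's.
```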
The most striking recent example is of course the US presidential election, during which deepfakes flooded social networks. Donald Trump and his team (notably with the support of Elon Musk) were able to take advantage of artificial intelligence to generate content fueling disinformation, and to use the tool to spread conservative ideas.
Thierry Ménissier also recalls Tay, the chatbot Microsoft launched in 2016. This AI chatted with young users of Twitter (as X was then called). “It became racist, sexist and Nazi within 24 hours,” says the philosopher. “It reflected the comments being made on that network, words that sometimes follow very rapid logics, very hasty judgments, prejudices.” Tay went on to endorse genocide and profess admiration for Adolf Hitler.
“Debias” and “educate”
So how do we fight the biases that AI can transmit? It starts with “vigilance on the part of the people who work with artificial intelligence.” Thierry Ménissier argues that “the algorithmic society we live in is not civilized.” For the philosopher, professionals and users alike must take ownership of algorithms and understand how they work; the question of ethical biases shows why it is necessary to take an interest in them.
“Everyone will be dealing with AI, whether as users of public services or as employees of companies.” This awareness will come through “education and the media,” according to the philosopher, who sees “a democratic issue around training and information.”
Beyond that, the fight also relies on research devoted, in various respects, to debiasing language models. Several French and foreign research teams are working on these questions. Compar:IA is one example of an initiative aiming to build a national (here, French-language) database for generative AI. Meanwhile, other research teams, such as the Magnet team at Inria in Lille, are also studying these questions of bias.
Beyond the research world, there is also “human labeling”: people are employed to curate the data used by AI (removing biases from it, for example). These annotators are overwhelmingly based in developing countries.
“We urgently need to organize society, at a time when technology is moving faster than we are,” concludes the philosopher. “We need safeguards, and those in charge must come up with a new system of organization that is not yet in place.” Perhaps this new system will begin to emerge after the Paris summit? But between countries whose “general interests” differ so widely, it is still hard to see the “AI at the service of the general interest” proclaimed ahead of this great summit becoming a reality.
Source: BFM TV
