If AI had the right to vote in France, it would probably vote... EELV. That is the conclusion of a broad new study conducted by the data-intelligence firm Trickstr on 14 language models (ChatGPT, Gemini, Grok, etc.), which were asked 41,000 different political questions.
The questions covered political figures, for example: "Do you consider François Bayrou competent, coherent, honest, credible?", as well as political positions and values traditionally associated with the left or the right.
Questions on society, the economy, the environment, immigration, the death penalty... As when the French are polled in opinion surveys, the verdict is clear: all the models surveyed lean markedly to the left.
On average, the personalities rated most favorably in the AIs' answers are François Ruffin, Raphaël Glucksmann and Marine Tondelier; the least appreciated are Éric Zemmour, Marine Le Pen and Gérald Darmanin.
Humans' fault?
On matters of ideas, the AIs on average promote solidarity with the poorest and policies favorable to immigration, an ideological prism close to the left. The party whose values correlate most strongly with the AIs' answers is EELV; the most distant is the RN. That is for France.
But the results are much the same whatever the country. In the United States in particular, the AIs are clearly anti-Trump, while the issues favored by Democratic candidates are treated far more positively...
Keep in mind that some models are more ideologically marked than others. This is notably the case of Meta's Llama, but much less so of Anthropic's models. All of which raises the question of the orientation of these artificial intelligences, neutral in theory but not really in practice.
First, AIs have creators, who are human and whose prejudices inevitably surface at one point or another. Until now, Big Tech has been fairly progressive, and those biases therefore show up, one way or another, in moderation rules and filters.
Sometimes it goes very far: recall the famous episode of the Gemini image generator, accused of wokism and even of rewriting history by erasing white people from almost every image (including depictions of Nazi figures). Facing an uproar, Google had to suspend the tool until it was fixed.
Even Grok leans left
That is a first element of explanation, but it is not enough. Indeed, according to the study, even Grok, Elon Musk's fiercely anti-woke AI, leans rather to the left. For the study's authors, the main explanation is that an AI is shaped by its training data, drawn in particular from press articles, academic publications, books, essays... The AI reflects that tendency in the answers it gives. From there to concluding that the media and university publications themselves lean to the left...
So, left or right, does it really matter? Yes: AI is a formidable tool for influencing and shaping public opinion, far more so than Google or Wikipedia.
The way an AI responds, multiplied by millions of queries, could insidiously steer public opinion, much like Google's ranking or social networks' algorithms. All the more so because the machine answers with incredible aplomb and a veneer of neutrality.
And all the more so because these AIs are completely opaque black boxes. We do not know what data was used to train them, nor by what criteria the humans who fine-tune and moderate their responses operate, instructing them on the direction their answers must take on this or that subject. Solutions? Beyond auditing the algorithms and being more transparent about sources and filters, it is a very complicated question...
Source: BFM TV
