“More dangerous than nuclear bombs.” When Elon Musk described artificial intelligence in those terms in 2014, his comments were largely dismissed by industry specialists. Too speculative, too “sci-fi.”
In recent months, however, this concern has gained wide publicity and is being taken much more seriously. From ChatGPT to Midjourney to AI for music and video creation, the explosion in the capabilities of “generative artificial intelligence” programs is fascinating. But it also fuels many fears: a surge in unemployment, a proliferation of fake news, even the creation of a “conscious” AI worthy of science fiction scenarios.
So much so that hundreds of business leaders and specialists in the field are now warning of the risk of an “extinction” of humanity caused by future AI, and that one of the founding fathers of modern AI, Geoffrey Hinton, decided to quit his job at Google to sound the alarm about the potential dangers of these technologies.
So should we be afraid of artificial intelligence? A review of the main concerns.
The risk of a conscious… and “rebellious” AI
This is one of the main fears raised when talking about AI: a conscious AI that breaks its chains and rebels against its creators. A recurring motif in science fiction, from 2001: A Space Odyssey to I, Robot.
Among researchers, this hypothetical “conscious AI” is referred to as “artificial general intelligence” (AGI). And despite sci-fi movie clichés, “sentient” does not necessarily mean “rogue”: companies pursuing this goal, like OpenAI, believe that AI smarter than humans could help solve many problems, from disease to climate change.
But current AIs are still a long way from awakening. “ChatGPT only generates the most probable text in response to the user’s request, based on the billions of texts used to train it,” Alexandre Lebrun, co-founder of several startups built on AI chat systems, explains to Tech&Co.
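To illustrate the principle Alexandre Lebrun describes, here is a minimal sketch, with an invented vocabulary and invented scores, of how a language model turns internal scores into probabilities and then picks a statistically plausible next word rather than a verified one:

```python
import numpy as np

# Toy illustration of next-word prediction. The vocabulary and the
# scores ("logits") below are invented for this example; a real model
# computes them from billions of training texts.
vocab = ["Paris", "Lyon", "pizza", "42"]
logits = np.array([4.0, 2.5, 0.5, 0.1])  # scores for "The capital of France is ..."

# Softmax: turn raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The model samples what is statistically plausible -- it has no notion
# of whether the chosen word is actually true.
next_word = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```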
And nothing guarantees that current techniques can reach AGI. Yann LeCun, one of the world’s leading artificial intelligence researchers, says on Twitter that he has “no doubt that superhuman AIs will eventually exist,” but believes that for the moment “we don’t even have a draft plan” to create them, and that it is therefore too early to think about how to make such an AI harmless to humans.
A fake news explosion?
Some fears, then, remain closer to fantasy than reality. But not all concerns are science fiction: generative AIs already pose very real risks. “We can easily divert these tools to much less noble uses,” Laurence Devillers, professor of computer science at Paris-Sorbonne University and a specialist in human-machine relations, warns Tech&Co.
Some chatbots will readily explain how to make napalm. ChatGPT has already been used to generate spam emails and create viruses. And of course, students quickly used the text generator to cheat on exams, while others used it to write scientific studies later published in journals, or fake Amazon reviews.
But if there is one type of content that especially worries specialists, it is fake news. Fake images of an explosion at the Pentagon or of an injured protester, TikTok videos using the voices of Joe Biden or Donald Trump: it is becoming ever easier to generate ultra-realistic artificial images and voices. And while these creations are not perfect yet, they are improving at breakneck speed, to the point that it may soon be impossible to tell the difference between genuine media and media created from scratch.
This facilitates deliberate disinformation, but it also undermines trust in general: when it is impossible to know whether any image or video is real, don’t we risk no longer believing anything at all? Some Internet users, for example, claimed that a real photo of CRS riot police in front of the Constitutional Council had been generated by AI.
Imprecise detection techniques
Is it possible to combat this risk of lost trust? Many players, including Google, are betting on “watermarking” techniques: embedding in AI-generated media a kind of signature, visible or invisible, that proves its artificial origin beyond doubt. But current methods are far from foolproof: visible signatures like that of the DALL-E image generator can easily be cropped out, while invisible patterns embedded in an image can only be detected by dedicated software, and their resistance to retouching is highly variable, according to studies.
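A deliberately naive sketch can show why resistance to retouching is so variable. The scheme below hides a signature in the least significant bit of each pixel; real systems are far more sophisticated, but the weakness illustrated here, that a light re-encoding erases the mark, is exactly the kind the studies point to:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an AI-generated grayscale image.
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Naive invisible watermark: overwrite the least significant bit (LSB)
# of each pixel with a secret pattern known to the detector.
pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
marked = (image & 0xFE) | pattern

def detect(img: np.ndarray) -> float:
    # Fraction of pixels whose LSB matches the secret pattern:
    # ~1.0 means the watermark is present, ~0.5 is pure chance.
    return float(((img & 1) == pattern).mean())

print("pristine copy:", detect(marked))  # ~1.0

# Light "retouching": coarse re-quantisation, as lossy compression does.
retouched = (marked // 4 * 4).astype(np.uint8)
print("after re-encoding:", detect(retouched))  # ~0.5, no better than chance
```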
And that is without mentioning text generators like ChatGPT: can a clearly recognizable pattern of words be woven into a text without damaging its quality, in a way that survives a simple rewording (by a human or by software)? Research continues, but for now there is no 100% reliable technique, and not everyone in the industry seems concerned about the problem.
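One direction explored in the research literature is statistical rather than literal: the generator slightly favors words from a pseudo-random “green list” derived from the preceding word, and a detector that knows the trick counts how many green words appear. The sketch below shows the general idea, not any vendor’s actual scheme; as noted above, rewording the text re-rolls the dice and erases the signal:

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Pseudo-random 50/50 split keyed on the previous word: invisible to
    # readers, but reproducible by any detector that knows the scheme.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0

def green_fraction(text: str) -> float:
    # Share of consecutive word pairs that land on the "green list".
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

# Ordinary text hovers around 0.5. A watermarking generator that prefers
# green words pushes this well above 0.5 -- until a simple rewording,
# by a human or by software, shuffles the pairs and erases the signal.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```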
Distinguishing true from false while avoiding generalized skepticism will become an essential issue. “The genie is out of the bottle,” believes Alexandre Lebrun, for whom it is above all necessary “to adapt education to this new situation.”
An unemployment explosion or a four-day week?
Generative AIs will not only transform our relationship with reality: they could also hit the job market like a bombshell. According to Goldman Sachs, generative AI could expose 300 million jobs worldwide to automation. Translators, designers, singers, voice actors, authors, screenwriters, even doctors and lawyers… the professions already affected by the development of AI are legion.
Summarizing texts, writing reports, translating, creating images or human voices… at first glance, generative AIs seem capable of doing many things much faster than a human.
A productivity gain that could be positive for global growth: it could raise world GDP by 7% over 10 years, again according to Goldman Sachs; it could be used to generalize the four-day week, according to one economics “Nobel Prize” winner; or it could be accompanied by new jobs specialized in the use of AI, such as “prompt engineer”. Some AI figures, such as Sam Altman, head of OpenAI, also suggest creating a universal income to “smooth the transition to the jobs of the future.”
But the real impact of these systems on employment is impossible to predict. The Goldman Sachs study notes that everything will depend “on their capabilities and adoption timeline” within companies. Companies could also use the opportunity to cut wages rather than headcount. An expert interviewed by the BBC cites what GPS and Uber did to traditional taxis as an example: “Suddenly, knowing the streets of London by heart was worth much less.”
“We are playing with fire”
Finally, the main risk of these systems could be… that humans trust them too much. “AIs like ChatGPT are designed to generate the most probable text, not to tell the truth,” recalls Jean-Gabriel Ganascia. Their designers claim to go to great lengths to make their systems stick to the facts, but they still regularly make mistakes and do not “think” about what they write.
If companies decide to use these AIs to process their data or write their reports, or if chatbots like Bard replace traditional Internet search, AIs could multiply errors wherever they are used, and without careful checking it would be difficult to separate true from false.
For many uses, “it will be necessary to keep a human expert in the loop,” notes Laurence Devillers, which risks reducing the time these systems save. “If the AI makes a mistake and prescribes, for example, unnecessary medical procedures (or worse), it would generate additional costs for society, when that money could have gone to people who really need it,” she adds.
“We have to ask ourselves who creates these AIs”
Finally, by focusing only on distant, apocalyptic risks, we risk overlooking much more immediate consequences, such as the rapid concentration of power and data in the hands of the sector’s main companies, including OpenAI.
The same goes for the growing environmental impact of AIs, the risk of dangerous social biases in their outputs, and the reliance on outsourced human labor in low-income countries. “These responsibilities lie not with the machines but with their creators,” write AI ethics researchers Timnit Gebru and Margaret Mitchell, both ousted from Google, in a letter published in response to the call for a moratorium.
For the specialists interviewed by Tech&Co, however, these concerns should not push us to lock generative AI away. “Electricity also brought benefits and risks, and we created systems to control it, as with any powerful tool,” offers Alexandre Lebrun by way of comparison. “It would be unethical not to explore these avenues given the potential benefits to society, but it must not be done just any which way,” sums up Laurence Devillers. The right method remains to be found.
Source: BFM TV
