When we say “artificial intelligence,” what do you think of? If you follow industry news, maybe of ChatGPT or Midjourney. But in the collective imagination, AI is more often associated with humanoid beings, from the robots of I, Robot to the computer of 2001: A Space Odyssey or the replicants of Blade Runner. Their common ground: conscious AIs, capable of setting their own goals.
This scenario has belonged to science fiction for decades. But today, the AI industry is booming. From ChatGPT to Midjourney to AI tools for music and video creation, “generative artificial intelligence” programs are seeing their capabilities explode.
To the point that many wonder: would a “conscious” AI worthy of science fiction scenarios finally be possible? Is it a serious goal, a threat to be avoided at all costs, or an unrealistic fantasy?
“The holy grail of a certain aspect of research”
In the industry, this idea has a name: “artificial general intelligence” (AGI). It is the stuff of Terminator or RoboCop: an AI capable of doing as well as, or even far better than, a human in many areas, and perhaps of having consciousness.
This blurry lens, almost as old as the term “artificial intelligence” itself, has long been relegated to fantasy. But now it’s openly called for by some of the biggest companies in the industry, including OpenAI, the organization behind ChatGPT.
If created, AGI “could help us elevate humanity by increasing abundance, accelerating the global economy, and aiding in new scientific discoveries,” writes OpenAI chief Sam Altman. In particular, he believes AGI could help fight global warming or colonize space, in addition to raising deep philosophical questions about human nature.
Is this a realistic goal? A quick look might suggest we are getting close. Generative AIs display a creativity that was until now thought to be reserved for humans. Videos of humanoid robots connected to ChatGPT and voice assistants inevitably invite comparisons with I, Robot. And variants like Auto-GPT are touted as being able to discover on their own the intermediate steps needed to reach a final goal, which for the moment is still set by a human.
A sentient AI… and a rebel?
But such a revolution could carry many risks: massive job losses, concentration of power in the hands of its creator… And what if a conscious AI decided to set its own goals? If we create an AI whose goal is to maximize paperclip production, it might very well conclude that the best way to do so is to… plunder all the natural resources on the planet and eliminate the humans who could take it offline.
This deliberately cartoonish example, proposed by transhumanist philosopher Nick Bostrom, justifies research on “alignment”: how to make AIs act in accordance with societal values.
But some believe that the current development of AI is moving too fast for such safeguards to be put in place, and call for a slowdown in research, such as the signatories of the call for a six-month moratorium (including Elon Musk and renowned researchers such as Yoshua Bengio). Others go further still, calling for the bombing of data centers that refuse to obey limits on AI development, such as blogger Eliezer Yudkowsky in Time.
“They Will Not Break Their Chains”
However, nothing says that science is heading in that direction, because behind the term “artificial general intelligence” there is no clear definition accepted by all. “There is no ‘general’ intelligence!” Jean-Gabriel Ganascia, a researcher at the Paris VI computer science laboratory and a specialist in artificial intelligence, tells Tech&Co.
So what about an AGI endowed with consciousness? We are far from it: current generative “artificial intelligences” do not have an iota of consciousness or reflection. “ChatGPT only generates the text that is most likely given the user’s request, based on the billions of texts used to train it,” Alexandre Lebrun, co-founder of several startups using artificial intelligence systems, explains to Tech&Co.
Since ChatGPT is trained on human text, the text it generates may give the impression of human reflection, but that is just an impression. And nothing says this barrier can be overcome in the future. “Just because we know how to jump one meter does not mean we know how to go to the Moon,” Alexandre Lebrun summarizes for Tech&Co.
An AI that rebels against its creators should likewise remain a fantasy. “Today’s AIs are static programs; it is humans who decide when and how they are trained or updated,” recalls Thomas Wolf, co-founder of the AI startup Hugging Face.
“There is no doubt that they will exist one day”
However, just because the method does not exist yet does not mean it will never see the light of day. “I have no doubt that superhuman AIs will exist one day,” says Yann LeCun, one of the world’s most renowned AI researchers, on Twitter. But for the scientist, we do not yet have the right technique, and humanity will have to succeed in “aligning” these AIs in the long run.
For others, scenarios based on “general AI” and “existential risk” are more like red herrings waved by the big companies in the sector to divert attention from much more immediate problems. Among them: “the exploitation of workers,” the “mass theft of data,” the proliferation of fake content and the concentration of power in the hands of a few companies, list the authors of an open letter written by researchers in AI ethics.
Meanwhile, the capabilities of generative AIs continue to advance, but they still obey humans. “Generative AIs are not Terminators; what needs to be regulated is how people use them,” says Thomas Wolf.
Source: BFM TV
