AI systems can already generate realistic text and images, and they continue to improve at a phenomenal rate. Should we worry about this spectacular progress? Yes, according to… the head of OpenAI, the organization behind ChatGPT, GPT-4 and other AI models that are among the most powerful and popular today.
The creator of ChatGPT presents artificial intelligence as “the best technology that humanity has ever designed”. But while AIs will, in his view, end up demonstrating the “collective power and creativity” of humanity, they can also be put to far less noble ends.
Disinformation Explosion
“I am particularly concerned that these models could be used for large-scale disinformation,” Sam Altman stressed to US media. AIs like ChatGPT can be used, for example, to generate highly credible fake articles.
These are dangers OpenAI had already raised in a January report, which estimated that “language models will be useful for propagandists and will probably transform online influence operations.”
“Now [that AIs] are getting better at writing code, [they] could be used to carry out cyberattacks,” the executive added. They can already make phishing text messages and emails more credible and sophisticated.
“Society has a limited time to react”
Sam Altman points out that OpenAI tools are moderated to limit the creation of illegal, dangerous or malicious content. OpenAI, for example, conducted tests to ensure that GPT-4 would not try to escape its creators’ control or take over the world. But Internet users regularly find new ways to circumvent these security measures.
Sam Altman highlights in particular the risk of authoritarian governments developing their own versions of these models. The Chinese giant Baidu, for example, has unveiled a ChatGPT-inspired language model, and Vladimir Putin declared in 2017 that whoever came to dominate the AI sector “would become the master of the world,” according to Russian state media outlet RT.
In that case, why make systems as powerful as ChatGPT available to anyone? “If we secretly develop this in our lab and release GPT-7 into the world all at once… I think it would cause a lot more problems,” explained Sam Altman, for whom “people need time to adjust, to get used to the technology and understand what its defects are and how to limit them.”
Source: BFM TV
