How far will artificial intelligence go? Now that it can be used to perform surgery remotely, reveal whether Santa Claus really exists or draft a bill, its uses and capabilities raise questions.
For OpenAI, the world leader in artificial intelligence and creator of ChatGPT, it is time to tackle the problem of controlling an AI that is smarter than humans.
To that end, the "capped-profit" company launched its "Superalignment" team in July 2023, tasked with steering, regulating and governing its artificial intelligence systems, according to the press release OpenAI published when the team was created. The program has a budget of 10 million dollars.
The team is led by OpenAI co-founder Ilya Sutskever and supported by Jan Leike. Three of its researchers were at the Neural Information Processing Systems (NeurIPS) conference last week in New Orleans (Louisiana, United States) to present the latest work from OpenAI, aimed at guaranteeing the “good behavior” of its AI.
“A real problem”
“Advances in AI have skyrocketed this year and I can assure you that they will not stop there,” Leopold Aschenbrenner, one of the three researchers, told the specialized media TechCrunch.
The researcher’s stance may sound chilling, and yet "superintelligent" AI is edging toward reality.
Sam Altman, co-founder of OpenAI, even invokes the Manhattan Project (the US government research program that produced the atomic bomb during World War II) as a point of comparison for OpenAI’s work, which is meant to "protect against catastrophic risks", in this case AI dominating humans.
According to TechCrunch, the Superalignment team is attempting to create "governance and control frameworks" to "oversee" AI systems. Researchers are working to have GPT-4, the company’s latest AI model, supervised by the much smaller and older GPT-2 acting as its "leader"; a rough sketch of the idea follows.
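To make that "weak supervising strong" idea concrete, here is a minimal, hypothetical sketch, not OpenAI's actual code: a small model stands in for GPT-2 as the weak supervisor, a larger model stands in for GPT-4 as the strong student, and the student is trained only on the supervisor's labels. The dataset, model choices and sizes are illustrative assumptions.

```python
# Toy sketch of weak-to-strong supervision (illustrative assumptions, not OpenAI's setup):
# a small "weak" model plays the GPT-2 role, a larger "strong" model plays the GPT-4 role,
# and the strong model never sees ground truth, only the weak model's labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic task standing in for a real supervision benchmark.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Weak supervisor": a small model trained on only a handful of ground-truth labels.
weak = LogisticRegression(max_iter=200).fit(X_train[:200], y_train[:200])
weak_labels = weak.predict(X_train)  # its imperfect labels for the rest of the data

# "Strong student": a larger model trained exclusively on the weak supervisor's labels.
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
strong.fit(X_train, weak_labels)

print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
# The question studied: can the strong student end up better than its weak supervisor?
```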
Source: BFM TV

