
‘We were wrong’: OpenAI co-founder regrets initial open source approach

OpenAI, which at its inception said it wanted to “collaborate freely” with others, has released almost no information about the data used to train GPT-4, the new version of ChatGPT.

While the startup OpenAI officially unveiled GPT-4 on Tuesday, many researchers and experts lament that the new version of ChatGPT is not an open source AI model. OpenAI has offered almost no information about the data used to train the system, its energy costs, or the specific hardware and methods used to create it, according to The Verge.

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” reads a section of the GPT-4 technical report.

OpenAI co-founder Ilya Sutskever told The Verge that GPT-4 “wasn’t easy to develop.” “Pretty much all of OpenAI had to work together for a long time to produce this. And there are a lot of companies that want to do the same thing, so from a competitive standpoint, you can see it as a maturation of the field,” he explained.

As for the safety aspect, “these models are very powerful and they are getting more and more powerful,” said Ilya Sutskever. “At some point, it will be quite easy, so to speak, to cause a lot of damage with these models. And as the capabilities increase, it makes sense that you don’t want to reveal them,” the OpenAI co-founder argued.

A change of course

This approach marks a change for OpenAI. When it launched in 2015, the AI research lab said it wanted to “create value for everyone rather than shareholders” and “collaborate freely with others” to research and deploy new technologies. Founded as a non-profit organization, OpenAI later adopted a “capped-profit” structure to attract billions in investment.

When asked about this change of approach, Ilya Sutskever admits that OpenAI got it wrong at the start. “We were wrong. If you think, as we do, that at some point AI is going to be extremely, incredibly powerful, then it just doesn’t make sense to open-source. That is a bad idea…”, the OpenAI co-founder told The Verge.

“I hope that in a few years it will be completely obvious to everyone that open source AI just doesn’t make sense,” he concluded.

Another reason put forward by some for this change is legal liability: AI language models are trained in part on information from the web, some of which is copyrighted. When asked whether the GPT-4 training data might include pirated content, Ilya Sutskever did not answer.

Author: Marius Bocquet with AFP
Source: BFM TV
