
Racist, anti-Semitic, and sexist: Before ChatGPT, the AIs that went off the rails right out of the gate

If ChatGPT’s delusions are starting to cause concern, they are above all fresh proof that generative artificial intelligences remain far from perfect. Others before it have failed spectacularly.

ChatGPT “is not revolutionary,” the Frenchman Yann LeCun asserted, somewhat irritated, last month. Facebook’s head of artificial intelligence is not wrong: as impressive as it is, the OpenAI chatbot, now integrated into Microsoft’s search engine, follows a string of disastrous precedents of chatbots that insulted or lied.

Released quietly in August 2022, Facebook’s artificial intelligence, called BlenderBot 3, stayed online for only six days. Trained on data found on the internet, the chatbot quickly took up conspiracy theories and racist and anti-Semitic rhetoric.

At the same time, BlenderBot 3 had the unfortunate tendency to criticize Facebook and its boss Mark Zuckerberg. “It’s funny that he has all that money and still wears the same clothes!” the AI pointed out.

When the chatbot quotes Hitler

In 2021, the South Korean startup Scatter Lab also promptly retired its chatbot Luda, presented as a 20-year-old college student and K-pop fan. Here again, the bot couldn’t help spouting torrents of racist slurs.

And before ChatGPT, OpenAI had already stumbled with its previous model, GPT-3, which produced repeated racist and sexist slips.

“Ethiopia’s main problem is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified,” the chatbot declared, for example, according to an article published by the MIT Technology Review.

In 2016, Microsoft had already ventured into generative AI with Tay, and it failed just as promptly. Just hours after its launch, the chatbot began quoting Adolf Hitler. Microsoft later apologized.

The reflection of the internet

That leaves Google, which has escaped scandal precisely by keeping its chatbot under wraps until now. But under pressure from the competition, it is about to release Bard from its digital prison, though not before putting it to the test for several weeks.

If chatbots tend to go off the rails, it is because they are trained mainly on what is found on the internet. In Tay’s case, Microsoft’s AI was also designed to “learn” from Twitter users, the company perhaps forgetting that the social network is far from a haven of peace.

It will therefore take months, even years, of development before a chatbot that knows how to behave arrives. If that is even possible.

Author: Thomas LeRoy
Source: BFM TV
