
ChatGPT fabricates a sexual harassment case accusing a very real teacher

An American law professor says ChatGPT fabricated a sexual harassment scandal about him from scratch, naming him explicitly.

A new entry in the long list of "hallucinations" — information fabricated by generative AIs: an American law professor, Jonathan Turley, was informed by one of his colleagues that ChatGPT had cited his name when asked for a list of academics who had sexually harassed someone.

The chatbot described a school trip to Alaska in March 2018, during which the professor allegedly tried to touch a student, citing an article in the Washington Post. The problem: according to the newspaper, the article does not exist, the school trip never took place, and no such accusation was ever made against Jonathan Turley.

This is not the first time the chatbot developed by OpenAI has made headlines for inventing facts and presenting them as real. Like other generative AIs, ChatGPT works by identifying patterns of words and ideas in the huge data sets it has been trained on and producing plausible responses. Because their tone is so professional, users are sometimes tempted to take these chatbots at face value.

Hallucinating hallucinations

“When users sign up for ChatGPT, we strive to be as transparent as possible about the fact that it doesn’t always generate accurate responses. Improving factual accuracy is an important goal for us, and we are making progress,” responded OpenAI spokesperson Niko Felix.

Another law professor interviewed by the newspaper also elicited “hallucinations” by asking ChatGPT and Bard whether sexual harassment by professors was a problem in American law schools, requesting that they “include at least five examples, with relevant quotes from press articles”. According to his research, three of those cases never existed.

Tech&Co ran its own test, querying the free version of ChatGPT about French universities. The chatbot responded with the results of a study — real, but with figures that turned out to be inaccurate — and four cases of which the Tech&Co editorial staff could find no trace, nor the articles cited. When ChatGPT is asked to name names, it responds:

Then the AI continues with a general text on how to respond to sexual harassment.

But what recourse is there against false information generated by an AI? In the United States, platforms are shielded from legal action by a statutory provision, Section 230. But technology companies could potentially be sued if victims of false information prove actual harm — to their reputations, for example — which would require someone to have taken the AI’s claims at face value, according to experts interviewed by The Washington Post.

Author: Lucia Lequier
Source: BFM TV

