
ChatGPT accused of using Kenyan workers paid $2 an hour to moderate its system

OpenAI, the company behind the artificial intelligence-based chatbot, reportedly relied on Kenyan workers paid around two dollars an hour to make its system less toxic, according to a Time investigation.

“It was torture,” Kenyan workers testify in an investigation published by Time. It reveals that OpenAI turned to Sama, a company whose low-paid employees worked under traumatic conditions.

ChatGPT fascinates with its ability to provide accurate and complete answers, drawing on the content that abounds on the Internet. The artificial intelligence was trained on hundreds of billions of words pulled from the Web, a vast repository of human language. But this information had to be filtered to minimize the risk of the system producing racist or sexist remarks or other shocking text.

Time reports that to achieve this, OpenAI drew on Facebook's methods and signed three contracts with one of the social network's partners: the company Sama. Sama notably employs workers in Kenya and boasts of having lifted 50,000 people out of poverty worldwide.

For around $200,000, Sama was to classify content deemed shocking, from texts describing sexual abuse to hate speech. According to testimonies, employees received only $1.32 to $2 an hour, compared with the $12.50 an hour stated in the contract between the two companies.

The outlet also reviewed hundreds of pages of internal Sama and OpenAI documents, including worker pay stubs, and interviewed four Sama employees. Kenyan employees report having “recurring visions” for days after reading traumatic texts, often of a sexual nature.

The partnership between the two companies began in November 2021 before ending abruptly in February 2022, eight months earlier than planned. OpenAI reportedly also asked Sama to classify images for its image generator DALL-E.

For its part, OpenAI told the US outlet that it had not set any productivity targets and that Sama was responsible for managing workers' pay and mental health. It added: “we take the mental health of our employees and that of our contractors very seriously (…) exposure to explicit content has a limit and sensitive information is handled by specifically trained workers.”

OpenAI is already working on the next version of ChatGPT, which will draw on even more data.

Author: Margaux Vulliet
Source: BFM TV
