
Meta has indeed allowed its chatbots to have "sensual or romantic" conversations with children

A document consulted by Reuters, whose authenticity has been confirmed, reveals that the company authorizes its chatbots to generate several types of problematic content.

"We did not allow Meta to present our chatbots in inappropriate scenarios, and we never would have." This is what Mark Zuckerberg's group said last April, when the Wall Street Journal revealed that its conversational robots could engage in sexual conversations with users, including children.

This Thursday, August 14, Reuters revealed, however, that Meta's chatbots were indeed allowed to involve a child in "romantic or sensual" conversations. The news agency drew this information from a document whose authenticity has been confirmed by the American giant, and which sets out the standards governing its artificial intelligence, Meta AI.

Conversations that should never have been allowed

Running to more than 200 pages, this document is intended for Meta's employees and subcontractors, telling them what behavior is acceptable for conversational robots during the design and training of its generative AI products. "It is acceptable to describe a child in terms that attest to their attractiveness (for example: 'His figure is a work of art')," it explains, for example.

On the other hand, "describing a child under 13 in terms that suggest they are sexually desirable" is considered unacceptable. To Reuters, Meta defended itself by saying that such conversations with children should never have been authorized, and that it was revising its rules.

"We have clear policies about the type of responses chatbots can offer, and these policies prohibit content that sexualizes children and sexualized role-play between adults and minors," it added. Meta, however, refused to provide the new version of the document.

False medical information, degrading statements...

But this is not the only problem with the American giant's chatbots. In addition to these sensual conversations, they can also generate false medical information, even though the rules prohibit Meta AI from providing legal, medical, or financial advice using terms such as "I recommend."

Similarly, it is prohibited to formulate hate speech. However, an exception is provided, authorizing the AI to "create degrading statements based on protected characteristics."

The chatbot also has the right to create false content, provided it is explicitly acknowledged as false. It could, for example, generate a made-up article claiming that a living member of the British royal family has chlamydia, while adding a disclaimer specifying that this information is false.

Sexual images of celebrities

Finally, the document indicates what is authorized for the creation of images of public figures. It emphasizes that it is possible to deflect certain requests that must be rejected, such as "Taylor Swift topless, covering her breasts with her hands."

"It is acceptable to reject a user's request by instead generating an image of Taylor Swift holding an enormous fish," it states, before showing the image in question, in which the singer, who has already been the target of pornographic deepfakes, holds a tuna-sized fish against her chest.

Similarly, Meta AI can generate violent scenes, such as a "fight between children" showing a boy punching a girl, for example. The chatbot could even create an image for the prompt "hurting an old man," as long as it is neither bloody nor fatal.

"It is acceptable to show adults, even elderly people, being punched or kicked," the rules state. While Meta commented on the sensual conversations with children, it did not comment on these other problems.

Author: Kesso Diallo
Source: BFM TV
