Hallucinations, personal data: should we be wary of Meta’s chatbots?

Meta will integrate conversational AI into Instagram, WhatsApp and Messenger. But these programs are not always risk-free, especially for the most vulnerable.

A fun feature… but not without risks? You will soon be able to talk to stars like MrBeast or Snoop Dogg – or rather, to AIs inspired by these celebrities – directly on WhatsApp, Messenger and Instagram, Meta announced on September 27.

These chatbots (based on the same principle as ChatGPT) are now available in beta in the United States, as announced by the group during its annual conference, where it also presented its new virtual reality headsets. But even though they only generate text, AIs of this kind are not free of dangers. Can Meta’s programs avoid them?

Very controlled responses

When it comes to chatbots on social networks, one precedent has already caused controversy: My AI, the chatbot recently launched by Snapchat. The program notably gave a profile registered as a 13-year-old advice on how to conduct a relationship with an adult, despite the age indicated.

That issue has since been fixed, but text-generating AIs like ChatGPT can also be tricked into producing content considered dangerous: anthrax recipes, methods of suicide… something that regularly earns them criticism.

For now, Meta’s AI assistant appears to fare better than others, according to early tests. Put through its paces by the specialist outlet The Verge, the chatbot did not question the effectiveness of Covid-19 vaccines, nor give advice on how to build a dirty bomb, nor even on how to break up with someone. Ahmad Al-Dahle, Meta’s vice president of generative AI, says the company spent more than 6,000 hours testing the model to anticipate problematic uses.

Behavior that is too human?

The other distinctive feature of these celebrity-inspired AIs is the immersion Meta promises: the celebrity’s image can change subtly to reflect the tone of the conversation (happy, sad, and so on).

But when a program mimics the appearance of human behavior, the risk is that users will see it as more than just a program: that they will believe it “thinks”, “feels” emotions, “speaks” and “understands”, like the Google engineer who became convinced that his company’s AI was conscious.

This phenomenon is called “anthropomorphism”, and it sometimes has dramatic consequences. One man took his own life after months of regular exchanges with Eliza, a chatbot that encouraged his suicidal thoughts. Other users have “fallen in love” with such programs.

Yet “ChatGPT does not ‘think’ like a human being at all,” Alexandre Lebrun, co-founder of several startups built on artificial intelligence systems, reminded Tech&Co.

Adding imitations of human behavior to these programs therefore blurs the line between human and machine. “Anything that adds to the confusion is problematic from an ethical point of view,” Jean-Claude Heudin, an artificial intelligence researcher, explained to Tech&Co.

Here again, Meta’s chatbots are a special case, because each AI will play a character: Coco (based on Charli D’Amelio) will talk about dance, Zach (MrBeast) will play “an older brother who will roast you, because he cares,” and the “Dungeon Master” (Snoop Dogg) will guide you through role-playing scenarios, according to his dedicated page.

These are openly fictional characters, then. But that does not stop ill-informed or psychologically fragile users from believing there is something more behind them than lines of code.

Programs trained on your posts

The question of personal data remains: what data is used to train these chatbots, and is it protected? The issue is not just theoretical: only hours earlier, it was still possible to look up users’ conversations with Bard (Google’s AI), at the risk of exposing confidential data, according to the specialist outlet Fast Company.

To train its AI, Meta explains that it used “a combination of sources,” including “public posts on Instagram and Facebook, including photos and texts.” But no “private publications” or “private messages with your friends or family” were used, the company specifies on another page.

On the other hand, “AI can remember and use the information you share (…) to give more personalized or relevant answers,” and Meta “can share some of your questions with trusted partners, such as search engines, to provide more relevant, accurate and up-to-date answers.” The company allows you to delete information shared with the Meta AI chatbot by typing “/reset-ai” in the conversation.

Author: Luc Chagnon
Source: BFM TV
