
How AIs like ChatGPT can manipulate the way we think

According to a recent study by researchers at Cornell University in the United States, chatbots subtly influence our opinions without us even realizing it.

Contrary to appearances, an algorithm is not neutral, and its interactions with Internet users can produce distortions such as the well-known "filter bubble" and "echo chamber" effects. New generative AIs like ChatGPT and Bard are no exception to the rule.

According to a study published on April 19 by the Cornell researchers, this new type of assistant has the potential to change our views without our realizing it.

“You may not even know that you are influenced”

In this study, the researchers asked participants to write an article, with the help of an AI, about the effects of social media around the world. Two groups were created: one given an assistant with a fairly positive take on the subject, the other a more critical one. Whether in the first group or the second, participants appear to have been influenced by the chatbot assigned to them.

Furthermore, a survey conducted at the end of the study showed that some participants even changed their minds along the way. "You may not even know you're being influenced," says Mor Naaman, a professor in Cornell University's Department of Information Science, who calls this phenomenon "latent persuasion."

The need for greater transparency in algorithms

This form of psychological influence is broadly the same as the one sociologists have analyzed in human interactions, whether through media discourse or on social networks. While the effects of algorithmic confinement are increasingly well documented, this is the first time research has focused specifically on chatbots. The Cornell University researchers point out that the best defense remains awareness, at least until regulators manage to demand more transparency about the algorithms behind this new type of AI.

The OpenAI team, the creators of ChatGPT, recently said that it is "committed to fixing this issue [of bias] and being transparent about both our intentions and our progress." The company also states that its algorithms will not take a position on "culture war" issues. Even so, algorithms can serve bad intentions when it comes to manipulating opinion.

Author: Peter Berthoux
Source: BFM TV
