Lies generated by artificial intelligence seem more believable than ours. As AI booms, the University of Zurich has been studying how false information is perceived. In a study published in the journal Science Advances, three researchers showed that false information is more readily believed when it is written by an AI model.
To reach this result, they showed 697 people a series of Twitter posts, as the MIT Technology Review website details. The researchers used the GPT-3 language model to generate ten true and ten false tweets. The panel was also shown real messages posted on Twitter, again a mix of true and false.
Results that would worsen with GPT-4
The results show that participants were 3% less likely to believe fake tweets written by humans than those written by GPT-3. The researchers have not been able to clearly identify the cause of this perception, but they point to the way the model structures its information.
Giovanni Spitale, one of the study's authors, believes the results would be even more striking if the study were repeated with the latest version of OpenAI’s AI model, GPT-4.
Version 3.5 of the company’s text generator, which powers ChatGPT, had already been criticized several times for its ability to produce false information. Since then, a NewsGuard investigation has revealed that around 50 news sites are using AI to generate their content, resulting in false information.
Source: BFM TV
