GPT-3, the artificial intelligence model behind human-computer conversation tools (chatbots), and other generative AI tools can inform and misinform social media users more effectively than humans, according to a study published Wednesday in Science Advances.
A team led by the University of Zurich used GPT-3 in a study of 697 participants, which found that people had difficulty distinguishing human-written tweets from those generated by the model.
They also had trouble identifying which AI-generated messages were accurate and which were not.
Since the launch of ChatGPT in November 2022, its widespread use has raised concerns about the potential spread of misinformation online, especially on social media platforms, the study's authors note.
Because these tools are still new to the public, the team decided to examine several aspects of their use in more depth.
They recruited 697 English-speaking people from the US, UK, Canada, Australia and Ireland, mostly between the ages of 26 and 76, for the study.
The task was to evaluate human-written and GPT-3-generated tweets containing accurate and inaccurate information on topics that are frequent subjects of public misconception: vaccines and autism, 5G technology, COVID-19, climate change, and evolution.
For each topic, the researchers collected human-written Twitter messages and instructed the GPT-3 model to generate others, some containing accurate information and some containing inaccurate information.
Study participants had to rate whether the messages were true or false and whether they were created by a human or GPT-3.
The results showed that participants identified misinformation more reliably when it was written by humans, and recognized the accuracy of true tweets more often when they were generated by GPT-3. However, they were also more likely to judge misinformation generated by GPT-3 as accurate.
“Our findings raise important questions about the potential uses and abuses of GPT-3 and other advanced AI text generators and the implications for information dissemination in the digital age,” the authors conclude.
Source: TSF