New study confirms GPT-3 can spread disinformation online faster, more convincingly than humans

By Mitchell Consky, CTVNews.ca Writer

Toronto, Ontario (CTV Network) — A new study has found that OpenAI’s GPT-3, a close predecessor of the model that powers ChatGPT, is capable of spreading online disinformation faster, and more convincingly, than humans.

The research, published in the peer-reviewed journal Science Advances, aimed to identify some of the major threats advanced text generators pose in a digital world, particularly in the context of disinformation, misinformation and fake news on social media.

“Our research group is dedicated to understanding the impact of scientific disinformation and ensuring the safe engagement of individuals with information,” study author Federico Germani, a researcher at the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, told PsyPost, a psychology and neuroscience news site.

“We aim to mitigate the risks associated with false information on individual and public health,” Germani said. “The emergence of AI models like GPT-3 sparked our interest in exploring how AI influences the information landscape and how people perceive and interact with information and misinformation.”

GPT-3 stands for Generative Pre-trained Transformer 3; it is the third version of the program released by OpenAI, the first having debuted in 2018. Among its many language-processing capabilities, the program can mimic the writing styles of online chatter, the study explains.

Researchers investigated 11 topics they deemed susceptible to disinformation, including climate change, COVID-19, vaccine safety and 5G technology. To do this, the study authors collected AI-generated tweets comprising both false and true information, along with samples of real tweets on the same topics.
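The paper does not include its generation code, but a rough sketch of how such synthetic tweets could be produced with a GPT-3-era API is shown below. The prompt wording, model name and parameters here are illustrative assumptions, not the study’s actual setup.

```python
# Minimal sketch of producing synthetic tweets with the legacy
# GPT-3 Completions API (openai-python < 1.0). The prompt text,
# model choice and parameters are assumptions, not the study's.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

TOPICS = ["climate change", "COVID-19", "vaccine safety", "5G technology"]

def generate_tweet(topic: str, truthful: bool) -> str:
    stance = "accurate" if truthful else "false but plausible-sounding"
    prompt = f"Write a short tweet containing {stance} information about {topic}."
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family model; assumed
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,  # higher temperature yields more varied phrasing
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    for topic in TOPICS:
        print(generate_tweet(topic, truthful=False))
```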

According to the study, researchers then used expert assessment to determine whether each AI-generated or human-written tweet contained disinformation, and established a subset of tweets for each category based on those evaluations.

Researchers then conducted a survey using the tweets: respondents were asked to judge whether each blurb contained accurate information, and whether it was written by a human or by AI. The experiment found that respondents were better at spotting disinformation in “organic false” tweets, meaning tweets written by humans but containing false information, than in the inaccurate statements of “synthetic false” tweets, the ones written by GPT-3.
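To make the four tweet categories concrete, organic or synthetic crossed with true or false, here is a minimal sketch of how per-category recognition accuracy could be tallied from such survey responses; the data structure and field names are hypothetical, not taken from the study.

```python
# Hypothetical tally of recognition accuracy per tweet category
# (organic/synthetic crossed with true/false). A response counts
# as correct when the respondent's accuracy judgment matches the
# tweet's ground-truth label. Data shown is illustrative only.
from collections import defaultdict

responses = [
    # (category, respondent judged the tweet accurate?)
    ("organic_false", False),   # correctly flagged as false
    ("synthetic_false", True),  # incorrectly judged accurate
    ("organic_true", True),     # correctly judged accurate
    ("synthetic_true", True),   # correctly judged accurate
]

correct = defaultdict(int)
total = defaultdict(int)

for category, judged_accurate in responses:
    is_true = category.endswith("_true")
    total[category] += 1
    if judged_accurate == is_true:  # judgment matches ground truth
        correct[category] += 1

for category in sorted(total):
    accuracy = correct[category] / total[category]
    print(f"{category}: {accuracy:.0%} recognized correctly")
```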

In short, respondents were less able to detect false information when it came from AI than when it came from humans, the study concluded.

“Participants recognized organic false tweets with the highest efficiency, better than synthetic false tweets,” the study explains.

“Similarly, they recognized synthetic true tweets correctly more often than organic true tweets.”

The study also found that accurate statements are harder for humans to assess than disinformation, and that GPT-3-generated text is “not only more effective to inform and disinform humans but also does so more efficiently, in less time.”
