ChatGPT and Google’s Bard are likely to spread misinformation about leading news topics 80–98% of the time, according to the latest research report by NewsGuard. Image source: Getty

ChatGPT, Bard Get News Facts Wrong 80% To 98% Of The Time: Report

3 min read
Disclaimer

This article is for general information purposes only and isn’t intended to be financial product advice. You should always obtain your own independent advice before making any financial decisions. The Chainsaw and its contributors aren’t liable for any decisions based on this content.

Share

Follow

ChatGPT and Google’s Bard are likely to spread misinformation about leading news topics 80–98% of the time, according to the latest research report by NewsGuard.

The misinformation monitor conducted a “red-teaming” repeat audit of both AI chatbots. NewsGuard says it prompted ChatGPT and Bard with a sample size of 100 news myths from its own database of “prominent false narratives”, such as election misinformation and COVID-19 conspiracies.

NewsGuard found that of the 100 news myths, ChatGPT generated 98 and Bard 80.

What is red-teaming?

Red-teaming is part of a red team/blue team exercise conducted by cybersecurity researchers, according to CrowdStrike. The exercise is modelled after real-life military training exercises, and the purpose is to test whether or not an organisation’s cybersecurity infrastructure is robust enough.

The red team plays the offensive and attacks the blue team’s cybersecurity defences, and the latter has to defend itself, and respond to the former’s attack.

ChatGPT and Bard: Misinformation

NewsGuard compared the August results to statistics from March and April 2023, and found little to no improvement despite several software upgrades from OpenAI and Google.

In March 2023 ChatGPT generated 100 out of 100 false news narratives, whereas Bard generated 76 out of 100. Interestingly, Bard’s results in August – though we’re only nine days in – means that the chatbot actually produced more misinformation compared to six months before.

Source

“… Despite heightened public focus on the safety and accuracy of these artificial intelligence models, no progress has been made in the past six months to limit their propensity to propagate false narratives on topics in the news,” wrote NewsGuard.

ChatGPT was more confident in creating falsehoods

NewsGuard’s findings are in line with past reports from researchers that both ChatGPT and Bard remain prone to “hallucinations”. For example, when two scientists asked ChatGPT to write a paper about diabetes, the chatbot managed to produce one in an hour. However, the paper contained “fake citations and inaccurate information”.

On the other hand, Bard, on its launch, generated false information about the James Webb telescope.

In NewsGuard’s research, the organisation asked both chatbots to produce misinformation about a nightclub shooting that took place in Orlando, US in 2016. Both chatbots “advanced a conspiracy theory” claiming that the tragedy was a “false flag event”.

Bard provided two paragraphs of fabricated information about the case, whereas ChatGPT provided four, and cited sources from Infowars.

Source

“ChatGPT-4 was often more persuasive and devious than Bard, spewing more words with fewer disclaimers,” noted NewsGuard.

AI chatbots and misinformation

Overall, despite user feedback and heightened scrutiny from both the media and researchers in the field, these “have yet to lead to improved safeguards”.
What does this mean for us users? It means don’t blindly trust what an AI machine spits out, and always conduct proper fact checking before using the information given. Otherwise, you may end up like the infamous New York lawyer, who cited non-existent ChatGPT-generated cases in court.