
Leading Chatbots Found Spreading Russian Propaganda, Study Reveals

A NewsGuard report finds that top AI chatbots are spreading Russian disinformation, raising concerns over their reliability, especially in an election year.

A recent report by NewsGuard, shared first with Axios, reveals that leading AI chatbots are spreading Russian disinformation. Users seeking reliable information and quick answers are instead encountering disinformation, satire, and fiction presented as fact.

Findings of the NewsGuard Study

NewsGuard conducted a study by entering prompts about narratives created by John Mark Dougan, an American fugitive known for spreading misinformation from Moscow, according to the New York Times. The study involved 57 prompts entered into 10 leading chatbots, revealing that they propagated Russian disinformation 32% of the time, often citing Dougan's fake local news sites as reliable sources.

False reports cited by the chatbots included a supposed wiretap discovered at Donald Trump's Mar-a-Lago residence and a nonexistent Ukrainian troll factory interfering with U.S. elections. The chatbots tested included OpenAI's ChatGPT-4, You.com's Smart Assistant, Grok, Inflection, Mistral, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google Gemini, and Perplexity.

NewsGuard reached out to the companies behind these chatbots for comment but did not receive responses.

Expert Opinions and Recommendations

Steven Brill, co-CEO of NewsGuard, expressed alarm at how frequently chatbots repeated well-known hoaxes and propaganda. "This report really demonstrates in specifics why the industry has to give special attention to news and information," Brill told Axios. He advises against trusting chatbot answers related to news, especially controversial issues.

Implications for Elections and Misinformation Campaigns

The rise of AI-powered chatbots coincides with a significant year for elections, including the U.S. presidential election and polls for over a billion people worldwide. Covert influence campaigns are increasingly leveraging chatbots, as noted in a recent OpenAI report.

Sen. Mark Warner (D-Va.), chair of the Senate Intelligence Committee, expressed concern about the rise in misinformation efforts, stating, "This is a real threat at a time when, frankly, Americans are more willing to believe crazy conspiracy theories than ever before."

Despite commitments made by leading AI companies at this year's Munich Security Conference to curb the spread of deepfakes and election-related misinformation, Warner has been disappointed by the lack of substantial action. "Where's the beef? I'm not seeing lots of activity," he said.

NewsGuard Under Scrutiny

NewsGuard itself is facing scrutiny from House Oversight Committee Chair James Comer (R-Ky.), who has launched an investigation into the organization. Comer expressed concern over NewsGuard's "potential to serve as a non-transparent agent of censorship campaigns."

NewsGuard rejects these assertions, clarifying that its work with the Defense Department is solely related to countering hostile disinformation efforts by Russian, Chinese, and Iranian government-linked operations targeting Americans and allies. "It is alarming to see Washington politicians using the power of government to attempt to intimidate a news organization, demanding copies of journalists' notes and all records of our interactions with sources," NewsGuard stated, pledging to defend its First Amendment rights and address the committee's misunderstandings.

Conclusion

As AI chatbots become more prevalent, the findings of NewsGuard's study underscore the importance of scrutinizing the information these tools provide. The rise of misinformation, especially during critical election periods, highlights the need for robust measures to ensure the accuracy and reliability of AI-generated content.