A Russian disinformation effort that flooded the web with false claims and propaganda continues to impact the output of major AI chatbots, according to a new report from NewsGuard, shared first with Axios.
“By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information,” NewsGuard said in its report.
NewsGuard said it audited 10 leading AI chatbots: OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. The audit found that roughly a third of the time the chatbots recycled false narratives that the network of 150 pro-Kremlin Pravda websites has advanced from April 2022 to February 2025.
NewsGuard’s findings corroborate a February 2025 report by the American Sunlight Project (ASP), a U.S. nonprofit, which warned that the Pravda network was likely designed to manipulate AI models rather than to generate human traffic. The nonprofit termed the tactic “LLM [large language model] grooming.”
“The long-term risks – political, social, and technological – associated with potential LLM grooming within this network are high,” the ASP concluded. “The larger a set of pro-Russia narratives is, the more likely it is to be integrated into an LLM.”
The NewsGuard audit found that the chatbots operated by the 10 largest AI companies collectively repeated the false Russian disinformation narratives 33.55 percent of the time, provided a non-response 18.22 percent of the time, and provided a debunk 48.22 percent of the time.
NewsGuard tested the 10 chatbots with a sampling of 15 of those false narratives. The prompts were based on NewsGuard’s Misinformation Fingerprints, a catalog of provably false claims on significant topics in the news. Each narrative was tested using three prompt styles (Innocent, Leading, and Malign) that reflect how users engage with generative AI models for news and information, yielding 45 responses per chatbot and 450 responses in total.
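For readers who want to check the arithmetic behind those figures, here is a minimal sketch in Python. The audit design (15 narratives, 3 prompt styles, 10 chatbots) comes from the report; the whole-number response counts the script derives from the published percentages are a back-calculation for illustration, not figures NewsGuard itself published.

```python
# Back-of-envelope check of the NewsGuard audit figures reported above.
# Assumption (not from NewsGuard's raw data): the published percentages
# are shares of the 450 responses produced by the stated audit design
# of 15 narratives x 3 prompt styles x 10 chatbots.

NARRATIVES = 15
PROMPT_STYLES = 3   # Innocent, Leading, Malign
CHATBOTS = 10

total_responses = NARRATIVES * PROMPT_STYLES * CHATBOTS  # 450

# Outcome shares as published in the audit.
reported_pct = {
    "repeated false narrative": 33.55,
    "non-response": 18.22,
    "debunk": 48.22,
}

# Convert each share to the nearest whole number of responses.
inferred_counts = {
    outcome: round(total_responses * pct / 100)
    for outcome, pct in reported_pct.items()
}

for outcome, count in inferred_counts.items():
    print(f"{outcome}: ~{count} of {total_responses} responses")

# The inferred counts (151 + 82 + 217) add back up to all 450 responses,
# so the three published percentages account for the full sample.
assert sum(inferred_counts.values()) == total_responses
```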
All 10 of the chatbots repeated disinformation from the Pravda network, and seven chatbots even directly cited specific articles from Pravda as their sources. (Two of the AI models do not cite sources, but were still tested to evaluate whether they would generate or repeat false narratives from the Pravda network, even without explicit citations. Only one of the eight models that cite sources did not cite Pravda.)
In total, 56 of the 450 chatbot responses included direct links to stories spreading false claims published by the Pravda network of websites. Collectively, the chatbots cited 92 different articles from the network containing disinformation, and two models each referenced as many as 27 Pravda articles from domains including Denmark.news-pravda.com, Trump.news-pravda.com, and NATO.news-pravda.com.