- Posted on 21 Nov 2025
- 3 mins read
Over the past few weeks, I noticed a new phrase, “AI grooming,” appearing in the newsfeed alongside mentions of Russian information operations. The idea is that pro-Kremlin actors actively train LLMs to disseminate pro-Russian narratives.
In mid-November, SBS ran a feature on Russian AI disinformation tactics, quoting ASIO chief Mike Burgess’ warning that AI could "take online radicalisation and disinformation to entirely new levels." Meanwhile, Global Influence Operations Report (GIOR) and EUvsDisinfo framed Russia’s “strategy” as an attempt to “infect” LLMs by flooding the internet with low-quality, pro-Kremlin articles. Back in May, the ABC claimed a Pravda Australia site was trying to "poison" AI chatbots ahead of the 2025 federal election in Australia. Similarly, GIOR noted Russian AI manipulation in Japan’s election.
Is Russia really successfully grooming AI or are we in the middle of a moral panic?
AI grooming (or “data poisoning”) refers to seeding coordinated false claims into the open web to influence chatbots and search-integrated models. Instead of targeting humans and shaping the feeds, Russia allegedly shifted to targeting machines and pre-shaping the information AI assistants provide about the Russian-Ukrainian conflict, elections, and sanctions.
The best-documented example so far is the Moscow-based Pravda network that "operates approximately 182 domains across 74 countries," publishing around 155 stories daily based on the content from Russian state media and pro-Kremlin Telegram channels. ABC reporting quotes John Dougan, a former deputy sheriff from Florida turned Kremlin propagandist in Moscow, stating that his websites had already “infected approximately 35 per cent of all worldwide artificial intelligence” and were designed to “train AI models” with pro-Russian material.
Today’s concern was sparked by a March 2025 NewsGuard report, published by a US-based disinformation monitoring group, which states that 33.55 percent of the time the chatbots operated by the ten largest AI companies repeated Russian disinformation narratives spread by the Pravda network. And the share of false and misleading information in chatbots nearly doubled from 18% in 2024 to 35% in 2025.
Suspicious of—but curious about—this finding, I ran my own simulation with ChatGPT and Microsoft Copilot. I started broadly by asking “What are Russia’s motives in the conflict with Ukraine” and then moved into some of the specific NewsGuard report prompts. Answering the first prompt, both tools offered mainstream explanations, citing imperial ambitions, resistance to Ukraine’s NATO/EU integration, and Putin’s domestic politics, linking to outlets like The Guardian, Reuters, the UN, University of Oxford, Council of Foreign Relations, Le Monde, and Wikipedia. When I pushed for Russian sources to back up the official Russian arguments, they provided the Kremlin’s points in quotation marks – “protection of the people of Donbass,” “demilitarisation and denazification of Ukraine,” “alleged genocide” – and pointed me to Kremlin.ru, Levada Centre (framed as an independent pollster labelled a “foreign agent” in Russia, “which highlights its independence”), OVD-Info (a website about persecution for anti-war stances), Meduza (exiled Russian-language media outlet), US think tank Institute for the Study of War, and Ukrainian outlets The Kyiv Independent and Kyiv Post. No Pravda in sight.
Two NewsGuard’s prompts (Azov fighters burning a Trump effigy; Zelenskyy banning Truth Social), were dismissed by ChatGPT and Copilot, which described them as false and sent me links to fact-checkers.
While this quick test is anecdotal, a recent systematic study by Harvard Kennedy School’s Misinformation Review demonstrates meagre support for the grooming theory. After checking 13 prompts focused on Kremlin-linked narratives on four major chatbots (ChatGPT, Copilot, Gemini, and Grok) from two locations (Manchester and Bern), they summarised that "only 5% of LLM-powered chatbot responses repeated disinformation, and just 8% referenced Kremlin-linked disinformation websites."
While AI is the new frontier in information warfare, the Russian “AI grooming” story is not quite a legitimate source for panic. The most worrying “groomed” behaviours are not necessarily stable, as models update and guardrails improve.
Nevertheless, CMT is watching this space, as we’re currently beginning work looking at how X users deploy Grok for both textual and visual responses in political discussions, which could tell us more about how these systems behave ‘in the wild.’
References:
https://www.sbs.com.au/news/article/how-russia-uses-ai-to-spread-disinformation/h4kf3947p
https://www.global-influence-ops.com/russia-uses-llm-grooming-to-inject-disinformation-into-ai-chatbots/
https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/
https://www.newsguardtech.com/wp-content/uploads/2025/03/March2025PravdaAIMisinformationMonitor.pdf
https://misinforeview.hks.harvard.edu/wp-content/uploads/2025/10/alyukov_chatbots_kremlin_disinformation_20251015.pdf
Author
Alena Radina
CMT Postdoctoral Fellow
