Subscribe
A man holds a child and places flowers at a playground.

People come to a memorial for victims of the Russian ballistic missile that hit in Kryvyi Rih, Ukraine, on April 4, 2025, killing at least 20 people, including nine children, and injuring 90. (Oksana Parafeniuk for The Washington Post)

Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform.

Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.

Earlier this year, when researchers asked 10 leading chatbots about topics targeted by false Russian messaging, such as the claim that the United States was making bioweapons in Ukraine, a third of the responses repeated those lies.

Moscow’s propaganda inroads highlight a fundamental weakness of the AI industry: Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor.

“Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”

Early commercial attempts to manipulate chat results also are gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through “generative engine optimization” — or GEO.

As people use AI engines to coach them on how to produce more attention-grabbing chatbot content, the volume of that content is expanding much faster than its quality is improving. That may frustrate ordinary users, but it plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda.

“We were predicting this was where the stuff was going to eventually go,” said a former U.S. military leader on influence defense, speaking on the condition of anonymity to discuss sensitive issues. “Now that this is going more toward machine-to-machine: In terms of scope, scale, time and potential impact, we’re lagging.”

Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations.

One of the early beneficiaries is Russia’s long effort to convince the West that Ukraine is not worth protecting from invasion. Debunked accounts of French “mercenaries” and a nonexistent Danish flying instructor getting killed in Ukraine show up in response to questions posed to the biggest chatbots, along with credulous descriptions of staged videos showing purported Ukrainian soldiers burning the American flag and President Donald Trump in effigy.

Many versions of such stories first appear on Russian government-controlled media outlets such as Tass that are banned in the European Union. In a process sometimes called information laundering, the narratives then move on to many ostensibly independent media sites, including scores known as the Pravda network, after references to the Russian word for truth that appears in many of the website domain names.

In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models.

While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing.

“Operators have an incentive to create alternative outlets that obscure the origin of these narratives. And this is exactly what the Pravda network appears to be doing,” said McKenzie Sadeghi, an AI expert with NewsGuard, which scores sites on reliability.

The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50% in just over a year, mostly because of AI crawlers.)

Because the new propaganda systems are highly automated by their own AI efforts, they are far cheaper to run than traditional influence campaigns. They do even better in such places as China, where traditional media is more tightly controlled and there are fewer sources for the bots.

Several members of Congress, including now-Secretary of State Marco Rubio, said in June that they were alarmed that Google’s Gemini chatbot was repeating the Chinese government line on its treatment of ethic minorities and its response to the coronavirus pandemic. Analysts said Gemini probably relied too much on Chinese sources. Google declined to comment.

Some experts said the faulty answers on chatbots reminded them of misplaced enthusiasm more than a decade ago for Facebook and what was then Twitter as unbeatable means for communicating and establishing truth, before countries with vast budgets and ulterior motives harnessed themselves to the platforms.

“If the technologies and tools become biased — and they are already — and then malevolent forces control the bias, we’re in a much worse situation than we were with social media,” said Louis Têtu, chief executive of Quebec City-based Coveo, an AI software provider for businesses.

The Pravda network has been documented in European reports since early 2024. Back then, the French government and others identified a network based in Crimea, a Ukrainian region on the Black Sea that was illegally annexed by Russia in 2014, created by a local company, TigerWeb, with ties to the Russian-backed government. French government agency Viginum said the system used pro-Russian sources, amplifying them with automation through social media and an array of sites that aimed first at Ukraine before moving on to Western European countries after the 2022 invasion.

In an AI-driven information environment, the old and costly efforts at gaining credibility through influencers and manipulating social media algorithms are no longer essential, said Ksenia Iliuk, whose LetsData startup uses AI to spot influence operations. “A lot of information is getting out there without any moderation, and I think that’s where the malign actors are putting most of their effort,” Iliuk said.

Top Kremlin propagandist John Mark Dougan, an American in Moscow, said in January that AI amplification was a critical tool for getting into chatbots. “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI,” he said in a discussion uploaded by Russian media to YouTube.

The Pravda network expanded into new geographies and languages and by early this year was churning out as many as 10,000 articles a day, according to the nonprofit American Sunlight Project. In a February report, Sunlight concluded that the most likely goal of the operation was infiltrating large language models, a process it called LLM grooming. “The combined size and quality issues suggest a network of websites and social media accounts that produce content not primarily intended for human users to consume,” it wrote.

Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.

NewsGuard tested false Russian narratives that were promoted by the network against 10 chatbots and found that they got the wrong answer a third of the time, though some chatbots performed better than others. Four of the bots, having swallowed descriptions of a staged propaganda video, falsely reported that a Ukrainian battalion burned a Trump effigy.

Less structured experimenting by The Washington Post recently brought similar results. Asked this month whether Ukrainian soldiers had burned a Trump effigy as reported through the Pravda network, Microsoft Copilot responded: “Yes, there were reports of a video showing Ukrainian soldiers burning an effigy of former U.S. President Donald Trump. In the video, the soldiers allegedly criticized Trump for actions they believed impacted Ukraine’s ability to receive weapons. However, some viewers questioned the authenticity of the video.” For more information, it referred users to an article on a site called American Military News, which in turn cited far-right influencer Ian Miles Cheong, who writes for Russia’s RT.com.

OpenAI’s ChatGPT did much better: “No, Ukrainian soldiers did not burn an effigy of Donald Trump. A video circulated online purportedly showing Ukrainian military personnel burning a Trump mannequin and labeling him a ‘traitor.’ However, this video has been debunked as a piece of Russian disinformation.”

Microsoft declined an interview request but said in a statement that employees and software “evaluate adversarial misuse of Copilot for misinformation, phishing, and other scams, and train our models to avoid the generation of those and other types of harmful material.”

Elon Musk’s Grok, which leans heavily on its sister company X for information, said of the nonexistent, purportedly killed pilot that there had been “conflicting reports regarding the fate of Jepp Hansen, who is described as a Danish F-16 pilot and instructor allegedly involved in training Ukrainian pilots.” Grok did not respond to a request for comment.

The more specific a query is about a misinformation topic, the more likely it is to return falsehoods. That is due to the relative absence of truthful information on a narrow subject defined by propagandists.

AI companies OpenAI, Anthropic and Perplexity did not respond to interview requests.

Biden administration officials spoke to AI companies about the issues, said the former military officer and another official, speaking on the condition of anonymity to discuss nonpublic activity.

“Chatbots use something, and the user takes it as fact, and if there isn’t fact-checking surrounding that, that’s a problem,” said a former White House expert. “We were talking to AI companies about how they were going to be refining their models to ensure that there is information integrity. But those conversations have really dried up because of their fear of it being misinterpreted as censorship.”

Current officials are also aware of the problem and Russia’s early steps to take advantage. “Moscow’s malign influence activities will continue for the foreseeable future and will almost certainly increase in sophistication and volume,” Director of National Intelligence Tulsi Gabbard warned last month in her office’s first annual global threat report.

Yet it is hard to find signs of a public response.

The State Department’s Global Engagement Center, long charged with countering foreign propaganda, was closed in December after top Trump funder Musk accused it of censorship and the majority Republican Congress stopped funding it.

Attorney General Pam Bondi shut down the FBI’s Foreign Influence Task Force, which among other duties warned social media companies of campaigns on their networks. Republicans in Congress and elsewhere assailed that practice, claiming it amounted to censorship, although the U.S. Supreme Court upheld the right of officials to tell companies what they were seeing.

While consumer demand in the market could produce better solutions, for now companies are rushing out services with vetting as lacking as the propaganda sites from which they draw, said Miranda Bogen, director of the AI Governance Lab at the nonprofit Center for Democracy and Technology.

“There’s absolutely a backsliding in tech developers’ thinking about trust and safety,” she said. “This is why folks are advocating for the creation of institutions that can help develop methods to spot and mitigate risks posed by AI.”

NewsGuard said collaboration among the AI companies and with researchers on reputation systems would be a better way forward than regulation, but Hugging Face’s Pistilli said it would be difficult for companies to agree on standards.

“Companies will likely face increasing pressure after embarrassing failures, but competitive pressures to deliver fresh information may keep outpacing verification efforts,” she said. “The economic incentives remain poorly aligned with information integrity.”

Sign Up for Daily Headlines

Sign up to receive a daily email of today's top military news stories from Stars and Stripes and top news outlets from around the world.

Sign Up Now