Russia, Iran and China are using artificial intelligence tools as they increase their efforts to sway the American population ahead of the November election, U.S. intelligence officials said Monday, with Moscow especially set on denigrating Vice President Kamala Harris.
Russia, the most aggressive and skilled of the three countries, is emphasizing stories and comments that demean the Democratic presidential candidate’s personal qualities or positions, officials from the Office of the Director of National Intelligence and the FBI said in a briefing for reporters. The ODNI also released a one-page summary of its assessment, the latest in a series on foreign influence during the campaign.
Russia has doctored clips of Harris’s speeches to replace some of her words, an ODNI official told The Washington Post, and has used generative AI to create false text, photos, video and audio.
Officials said they agreed with a determination by Microsoft researchers a week ago that Russia was behind a viral staged video, viewed millions of times, in which an actress falsely claimed that Harris had injured her in a hit-and-run car accident.
The officials, who spoke on the condition of anonymity for security reasons, said they did not study speech by Americans and so could not say which pieces of disinformation gained the most traction or were amplified by high-profile figures.
But they did point to a recent indictment and related documents this month alleging that Russian officials invested $10 million in a Tennessee media company that paid well-known right-wing influencers for videos that promoted Russian interests, such as opposing U.S. aid to Ukraine. The influencers themselves were not charged with any crimes, and most have said they did not know the company was backed by Russia.
Russia is continuing to use witting or unwitting Americans to spread its messages, the officials said, as well as imitating the websites of established media outlets and using human commenters to drive traffic to those sites, which contain articles generated by AI.
The national intelligence officials said generative AI was an accelerant for influence efforts rather than a revolutionary change in them. For it to have a bigger impact, adversaries would need at least one of three things, they said: the ability to circumvent usage restrictions on some large language models, the ability to create their own models, or an effective means of distributing the content in the target country.
The intelligence officials said they had compared notes with U.S. AI companies and social media companies about tactics, while leaving it to the FBI to have any contacts with firms about specific accounts. In all cases, they said, decisions about what to do with the content or the accounts were left strictly to the companies.
Like Russia, Iran and China have promoted content that aims to exacerbate domestic divisions, the group said. Iran has been seeking to build on differences over the war in Gaza and using AI to create fake news articles in English and Spanish. China has focused on drug use, immigration and abortion.
Iran has acted to hurt Republican candidate and former President Donald Trump’s prospects, including by breaching his campaign and sending stolen documents to the media. China is more interested in lower-level campaigns where the candidates might support or oppose its priorities, the officials said.