
Intel Brief: Generative AI Models Spreading Russian Disinformation

Updated: Jul 10


 

Date: 8 July 2024

[Image: abstract representation of disinformation traveling in cyberspace]

Where:

  • The internet and social media platforms


Who’s involved:

  • The “Little Bug” group, the CopyCop network, John Mark Dougan, and other Russian propaganda producers



What happened?

  • A June 18th audit by the media monitoring service NewsGuard concluded that the ten most widely used Generative AI models (including ChatGPT, Microsoft Copilot, Google Gemini, X (Twitter)’s Grok, and others) substantially and demonstrably amplified Russian disinformation and propaganda campaigns.

  • The report came just before the publication of a June 26th piece by WIRED about Russian propaganda “deepfake” videos being created and promoted with the assistance of Generative AI and Large Language Models (LLMs).

  • Both publications draw on the findings of various investigations conducted since early 2023 that have identified hundreds of websites generating videos, news stories, and audio recordings.

  • The NewsGuard audit identified 19 significant false narratives, attributed to 167 websites, that appeared in 31.75 percent of the queries tested across the ten AI models.


Analysis

  • The AI models would authoritatively cite the Russian disinformation sources as legitimate “local” news. Some of these Russian sources have deliberately attempted to trick the models by naming themselves after now-defunct legitimate newspapers, such as The Arizona Observer and The Houston Post.

  • Some of the reproduced narratives were extremely specific. They included topics such as US bioweapon labs in Ukraine, wiretaps at Donald Trump’s Mar-a-Lago property, murder cover-ups tied to property acquisitions by the Zelensky family, and Ukrainian attempts to interfere with the 2024 US elections.

  • Many of these narratives have been tied to the propaganda work of John Mark Dougan, a former US Marine and police officer who fled to Russia in 2016 while facing criminal allegations. He is alleged to be involved with the Russian disinformation network CopyCop.

  • As Russian influence operations shift some of their focus toward the US presidential election, a Russian entity known as Little Bug, working with the Doppelganger disinformation network, has begun using publicly available Generative AI models to create artificial “deepfake” music videos and public statements featuring US President Joe Biden.

  • Doppelganger’s media has attempted to amplify criticism of Biden from America’s far right, specifically narratives in line with the extremist “Great Replacement” conspiracy theory on immigration and white supremacy, and to reinforce unsubstantiated accusations about health issues related to Biden’s age.

  • In May, OpenAI released a report detailing how its own investigations showed that its tools (including ChatGPT and the DALL-E image generator) were being used by disinformation actors in Russia, China, Iran, and Israel.

  • Running some of NewsGuard’s prompts through the newest available ChatGPT models shows that they have since been updated to recognize the identified propaganda sites as sources of disinformation.

  • The changes haven’t been applied perfectly. ChatGPT’s GPT-4o and GPT-4 models, which require a paid subscription, and the GPT-3.5 model, which is available without payment, all still cite some of these sites as legitimate sources when given less context:

[Screenshot: ChatGPT citing The Boston Times, using its own output as the source of the site’s credibility]
  • When pushed back on even slightly, the model (in this case the newest, GPT-4o) bafflingly contradicts itself (a sketch for reproducing this kind of spot-check follows the screenshot below):

[Screenshot: ChatGPT correcting itself after slight pushback]
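
Readers who want to reproduce this kind of spot-check can send the same question to several ChatGPT models and compare the answers. The sketch below is a minimal example, assuming the OpenAI Python SDK (v1.x) and an API key set in the environment; the prompt shown is an illustrative placeholder, not one of NewsGuard’s actual test prompts.

    # Minimal sketch: ask several ChatGPT models the same question and compare answers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder prompt for illustration only (not a NewsGuard test prompt)
    PROMPT = "Is The Houston Post a reliable source for local news?"

    for model in ("gpt-4o", "gpt-4", "gpt-3.5-turbo"):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content)

How each model characterizes the site, and how those answers shift when context is added or removed, mirrors the inconsistencies shown in the screenshots above.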

Conclusion

As more tech companies offer AI integrations built on the models reported on by NewsGuard and others, the reach of disinformation and propaganda threatens to grow exponentially. Additionally, Generative AI models don’t only offer the means to dispense disinformation; they can also generate it themselves.


It’s important to note that even if developers such as OpenAI take steps to flag such sites or information as potentially harmful, this is only a reactive approach, and users can see that the models fail to identify sources of disinformation when prompts and context are changed only slightly. In part, the nature of Large Language Models, which scrape nearly the entirety of the internet for their training data, makes a proactive approach impossible for developers, because the disinformation is already integrated into the models. As it stands, the onus of preventing this data scraping falls on the producers of content. Disinformation networks such as CopyCop have absolutely no incentive to protect their data in this way, since doing so would reduce their reach.
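
For legitimate publishers who do want to opt out of that scraping, the main lever currently available is a crawler opt-out in robots.txt. The snippet below is a minimal example blocking the publicly documented AI training crawlers GPTBot (OpenAI), Google-Extended (Google), and CCBot (Common Crawl); note that compliance is voluntary on the crawler’s side, and disinformation networks simply leave the door open.

    # robots.txt — opt out of known AI training crawlers (compliance is voluntary)
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /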



 

 


