In early March 2024, researchers from Insikt Group discovered an influence network called CopyCop. The network is believed to be operated by the Russian government, and it uses large language models (LLMs) to create and spread fake news stories.
CopyCop has published more than 19,000 fake news articles. It takes content from mainstream media outlets and uses AI to rewrite it with a partisan slant. These altered stories are then spread through networks of fake media outlets in the UK, the US, and France. The network covers a wide range of issues, consistently promoting Russian perspectives and criticizing Western policies.
The network has been particularly active in promoting pro-Russian views on the war between Russia and Ukraine. Ahead of the 2024 US elections, CopyCop supported Republican candidates and criticized President Joe Biden and other Democrats.
CopyCop's infrastructure includes the disinformation website DCWeekly, run by John Mark Dougan, a US citizen who fled to Russia in 2016. The network also receives support from Russian state-sponsored actors and amplifies content from other Russian propaganda sources, including some linked to the GRU's Unit 54777.
Despite the scale of its operations, CopyCop has made several mistakes. Some published articles retained leftover notes showing how the AI had been instructed to adopt a conservative or cynical tone. For example, more than 90 French-language articles were rewritten with instructions to criticize the Macron administration and express support for working-class citizens.
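These leaks were detectable precisely because injected instructions follow recognizable phrasing. As a rough illustration only (this is not Recorded Future's actual methodology, and the patterns below are hypothetical examples rather than signatures from the report), a minimal keyword scan in Python might flag articles containing leftover prompt language:

```python
import re

# Illustrative patterns only: phrases of the kind that appear when an LLM
# prompt or refusal leaks into published copy. Real detection pipelines
# would use larger curated pattern sets plus stylometric signals.
LEAK_PATTERNS = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"rewrite (this|the) article",
    r"(conservative|cynical) tone",
]

def find_prompt_leaks(article_text: str) -> list[str]:
    """Return the leak patterns that match anywhere in the article text."""
    text = article_text.lower()
    return [p for p in LEAK_PATTERNS if re.search(p, text)]

if __name__ == "__main__":
    sample = (
        "This article was rewritten to adopt a conservative tone, "
        "highlighting the struggles of working-class citizens."
    )
    hits = find_prompt_leaks(sample)
    if hits:
        print(f"Possible prompt leak: {hits}")
```

A scan like this would only catch the sloppiest failures, such as the instruction notes left in CopyCop's French articles; it says nothing about AI-generated text that has been cleanly edited.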
These methods resemble disinformation techniques used by the KGB in the 1980s, but modern technology has made the process far faster and cheaper. An investigation published on May 9, 2024 by Recorded Future reveals that CopyCop uses AI, likely OpenAI models, to rewrite legitimate news stories with biased framing.
The rise of AI-generated fake news presents new challenges. As CopyCop continues to produce fabricated content at scale, it threatens the credibility of legitimate media outlets and the integrity of public information. This underscores the need for stronger measures to detect and counter the spread of false narratives.