Here is the report (pdf).
Security researchers at Insikt Group have identified a malign influence network, CopyCop, operating inauthentic media outlets in the US, UK, and France. The network is suspected to be run from Russia and is likely aligned with the Russian government. CopyCop made extensive use of generative AI to plagiarize and modify content from legitimate media sources, tailoring political messages with specific biases: content critical of Western policies and supportive of Russian positions on international issues such as the war in Ukraine and the Israel-Hamas conflict.
CopyCop’s operation centers on large language models (LLMs) used to plagiarize, translate, and edit content from legitimate mainstream media outlets. Through prompt engineering, the network tailors this content to specific audiences, injecting political bias that serves its strategic objectives. In recent weeks, alongside its AI-generated output, CopyCop has begun to gain traction by posting targeted, human-produced content that engages its audience directly.
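To make the mechanics concrete: a pipeline like the one described needs little more than a scraper and a few lines of prompt-engineered rewriting. Below is a minimal sketch against an OpenAI-compatible chat API; the model name, prompt wording, and bias label are illustrative assumptions, not details taken from the report.

```python
# Minimal sketch of the prompt-engineered rewriting described above.
# Illustrative only: the model, prompt, and bias label are assumptions,
# not artifacts recovered from CopyCop's infrastructure.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_with_bias(article_text: str, bias: str) -> str:
    """Ask a chat model to paraphrase an article with a specified slant."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following article in your own words, "
                    f"adopting a {bias} perspective throughout."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```

Run in a loop over scraped articles, a sketch this small is enough to stock an entire network of inauthentic outlets, which is why scale is the recurring theme below.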
The content disseminated by CopyCop spans divisive domestic issues, presents Russia’s military actions in Ukraine in a pro-Russian light, and takes a critical view of Israeli military operations in Gaza. It also includes narratives aimed at shaping the US political landscape: supporting Republican candidates, disparaging House and Senate Democrats, and critiquing the Biden administration’s policies.
The infrastructure supporting CopyCop has strong ties to the disinformation outlet DCWeekly, managed by John Mark Dougan, a US citizen who fled to Russia in 2016. CopyCop’s content is amplified by well-known Russian state-sponsored actors such as Doppelgänger and Portal Kombat, and CopyCop in turn boosts material from other Russian influence operations, including the Foundation to Battle Injustice and InfoRos, suggesting a highly coordinated effort.
The use of generative AI to create and disseminate content at this scale poses significant challenges for those tasked with safeguarding elections. Narratives tailored to stir specific political sentiments spread faster than public officials can effectively counter them.
Public-sector organizations are urged to raise awareness of threat actors like CopyCop and the risks posed by AI-generated disinformation. Legitimate media outlets also face risks: their content can be plagiarized and weaponized to support adversarial state narratives, potentially damaging their credibility.
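One partial countermeasure follows from a known weakness of careless LLM pipelines: leftover prompt text or refusal boilerplate sometimes survives into published copy. Here is a minimal sketch of scanning article bodies for such artifacts; the marker list is an illustrative assumption, not a signature set from the report.

```python
import re

# Strings that sloppy LLM-rewriting pipelines sometimes leave in published
# copy. Illustrative assumption: this is not a signature set from the
# Insikt Group report, and the absence of a match proves nothing.
LLM_ARTIFACT_PATTERNS = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"here is the rewritten article",
    r"please note that this (article|rewrite)",
]
ARTIFACT_RE = re.compile("|".join(LLM_ARTIFACT_PATTERNS), re.IGNORECASE)

def flag_article(text: str) -> list[str]:
    """Return any LLM-pipeline artifacts found in an article body."""
    return [match.group(0) for match in ARTIFACT_RE.finditer(text)]

if __name__ == "__main__":
    sample = "Here is the rewritten article: Critics say the policy failed..."
    print(flag_article(sample))  # ['Here is the rewritten article']
```

Matching this crude catches only the sloppiest operations, so it complements, rather than replaces, provenance and infrastructure analysis.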