A covert, Russian government-operated social media bot farm that used generative AI to spread disinformation to users worldwide has been disrupted in a joint operation by the FBI and international cybersecurity agencies.
Affiliates of the Russian state-sponsored media organization Russia Today (RT) used Meliorator, an AI-enabled bot farm generation and management tool, to set up more than 1,000 bots on X (formerly Twitter) for spreading disinformation in and about many countries, including the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.
“The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives, according to affidavits unsealed today,” the Justice Department said in a press statement.
The operation was conducted by the FBI and the Cyber National Mission Force (CNMF), in partnership with the Netherlands General Intelligence and Security Service (AIVD), the Netherlands Military Intelligence and Security Service (MIVD), the Netherlands Police (DNP), and the Canadian Centre for Cyber Security (CCCS).
Multiple bot personas for disinformation
According to an advisory detailing the operation, the perpetrators used Meliorator to mass-generate authentic-looking personas (identities).
“Affiliates of RT used Meliorator to create fictitious online personas, representing a number of nationalities, to post content on X,” said authorities in the advisory. “Although the tool was only identified on X, the authoring organizations’ analysis of Meliorator indicated the developers intended to expand its functionality to other social media platforms.”
To create these personas, the affiliates used Meliorator’s admin panel, called “Brigadir,” together with a seeding tool named “Taras.” Brigadir offered features such as “souls” and “thoughts,” which represented a bot’s identity and actions respectively, and was used to register bots (X accounts) with specific parameters; the bots could even be grouped by ideological alignment and biographical data.
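Meliorator’s internal data model has not been published, but the advisory’s description of “souls” and “thoughts” suggests a simple persona record grouped by ideology. The Python sketch below is purely illustrative; the class, field names, and sample values are assumptions, not details recovered from the tool.

```python
# Illustrative sketch only: the advisory describes "souls" (identities) and
# "thoughts" (actions), but Meliorator's actual schema is not public.
# The BotPersona class and all field names below are assumptions.
from dataclasses import dataclass, field


@dataclass
class BotPersona:
    handle: str                                    # X account name registered for the bot
    soul: dict = field(default_factory=dict)       # "soul": fabricated identity (name, bio, claimed location)
    thoughts: list = field(default_factory=list)   # "thoughts": scripted actions (posts, likes, reposts)
    ideological_group: str = ""                    # grouping key described in the advisory


# Hypothetical example of a generated persona grouped by ideological alignment
persona = BotPersona(
    handle="example_user_01",
    soul={"claimed_location": "United States", "bio": "auto-generated"},
    thoughts=["post scripted content", "amplify aligned accounts"],
    ideological_group="pro-government narrative",
)
```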
While it isn’t clear when the perpetrators first used Meliorator to create bots, the press statement noted that since at least 2022, RT had sought alternative means of distributing information beyond its standard television news broadcasts.
“The development of the social media bot farm was organized by an individual identified in Russia,” said the Justice Department. “In early 2022, Individual A worked as the deputy editor-in-chief at RT, a state-run Russian news organization based in Moscow.”
Mitigations include reinforcing authentication
To avoid detection, the actors implemented a number of obfuscation techniques, including IP masking, bypassing two-factor authentication, and changing the user agent string. “Operators avoid detection by using a backend code designed to auto-assign a proxy IP address to the AI-generated persona based on their assumed location,” the advisory added.
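The quoted behavior amounts to a lookup from a persona’s claimed location to a proxy pool, combined with user agent rotation. The sketch below illustrates that general pattern under assumed data; the proxy addresses (documentation-reserved ranges), user agent strings, and function name are all hypothetical, not code recovered from Meliorator.

```python
# Minimal sketch of the obfuscation pattern the advisory describes: selecting a
# proxy IP and user agent string consistent with a persona's assumed location.
# All values and names here are placeholders for illustration only.
import random

PROXIES_BY_LOCATION = {          # hypothetical proxy pools keyed by claimed location
    "United States": ["203.0.113.10", "203.0.113.11"],
    "Germany": ["198.51.100.20"],
}

USER_AGENTS = [                  # rotated user agent strings (illustrative values)
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]


def session_settings(assumed_location: str) -> dict:
    """Return a proxy and user agent consistent with the persona's claimed location."""
    return {
        "proxy": random.choice(PROXIES_BY_LOCATION.get(assumed_location, ["0.0.0.0"])),
        "user_agent": random.choice(USER_AGENTS),
    }
```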
In the advisory, authorities recommended that social media organizations implement mitigations to reduce the impact of Russian actors abusing their platforms.
The recommendations included implementing processes to validate that accounts are created and operated by humans, reviewing and upgrading authentication and verification processes, using protocols to identify and subsequently review users with known suspicious user agent strings, and making user accounts secure by default with settings such as MFA.
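As a rough illustration of one of those recommendations, the sketch below flags login events whose user agent string matches a known-suspicious list for human review. The list values, event format, and function name are placeholders, not indicators taken from the advisory.

```python
# Hedged sketch of one recommended control: queueing logins with suspicious
# user agent strings for manual review. Values below are placeholders.
SUSPICIOUS_USER_AGENTS = {
    "python-requests/2.31.0",    # placeholder example of an automation client
    "curl/8.0.1",
}


def flag_for_review(login_events: list[dict]) -> list[dict]:
    """Return login events whose user agent matches the suspicious list."""
    return [
        event for event in login_events
        if event.get("user_agent") in SUSPICIOUS_USER_AGENTS
    ]


# Example usage with a synthetic login event
events = [{"account": "example_user_01", "user_agent": "curl/8.0.1"}]
print(flag_for_review(events))  # -> the curl login is flagged for manual review
```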
Additionally, the advisory shared details of the infrastructure associated with the disinformation campaign, including IP addresses, SSL certificates, and mail server domains used by the actors.
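One straightforward way defenders might apply such indicators is to check outbound connection logs against the published IP addresses. The sketch below assumes a simple log format and uses documentation-reserved placeholder IPs rather than the advisory’s actual indicators.

```python
# Illustrative sketch of using published indicators: matching connection logs
# against advisory-listed IP addresses. The IPs and log format are placeholders;
# substitute the actual IOCs from the advisory in practice.
ADVISORY_IPS = {"192.0.2.50", "192.0.2.51"}   # placeholder IOC values


def match_iocs(connection_log: list[dict]) -> list[dict]:
    """Return log entries whose destination IP appears in the IOC list."""
    return [entry for entry in connection_log if entry.get("dst_ip") in ADVISORY_IPS]


# Example usage with synthetic log entries
log = [
    {"src": "10.0.0.5", "dst_ip": "192.0.2.50"},
    {"src": "10.0.0.6", "dst_ip": "198.51.100.9"},
]
print(match_iocs(log))  # -> only the first entry matches
```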