Why Facebook is more revealing on nudity than on Russian disinformation
Researchers, election observers, and governments alike struggle to measure the scope and impact of Russian disinformation campaigns. So far, sporadic press releases by online platforms constitute the bulk of first-hand evidence of these campaigns, as is the case with Facebook’s latest takedown of Russian operations targeting African countries. More systematic access to this evidence will be key to tackling disinformation and election meddling.
Even though Facebook and the Russian government have their Brussels offices next door to each other, they appear to live worlds apart.
Last week, Russia’s state-operated news agency RIA Novosti proclaimed that “Russia does not interfere in internal affairs of African States”. Just two days later, Facebook announced the takedown of a major disinformation campaign targeting at least eight African countries. On its news blog, the company explicitly attributed these campaigns to “entities associated with Russian financier Yevgeniy Prigozhin”, popularly known as Putin’s chef, who has previously been indicted by the US Justice Department for his involvement in Russia’s interference in the 2016 US election. In a sign of tactical evolution, Russian operatives also worked with locals in the targeted African countries to set up Facebook accounts disguised as authentic in order to avoid detection.
But Facebook’s press release is interesting for another reason: for the first time on its news blog, the company explicitly refers to a Russian campaign as “foreign interference”.
So far, experts are divided over whether Russia’s disinformation campaigns constitute foreign interference. Under international law, this would boil down to classifying the Kremlin’s operations as acts of coercion. Some say this is a stretch, given that Russian social media operations merely “impact people’s opinions, which may or may not have impacted subsequent votes”. Others are more hawkish in their assessment, noting that the Kremlin’s efforts do constitute coercion insofar as they are “purposively designed to exert control over a sovereign matter”.
Scholars of international law are not the only ones who struggle to assess the scope and impact of disinformation campaigns. By refusing to grant systematic access to public interest data, the leading online platforms currently monopolise the ability to assess whether elections may have been compromised by manipulative campaigns.
For the EU, for instance, election observation missions are a key tool for supporting democracy and promoting human rights around the world, including in African countries. Yet with these companies failing to systematically provide evidence of malicious activity on their platforms, it is virtually impossible to assess the degree to which the Kremlin’s operations may have distorted electoral processes in African countries, or violated national or international electoral laws. Sporadic updates on the news blogs of online platforms are an insufficient basis for democratic actors to do their job.
As The Guardian recently noted, less than 10 percent of Facebook’s users live in the US, arguing that in order to protect the remaining 90 percent from harm, the company should tailor its transparency and integrity policies to the respective social and political contexts. In countries targeted by Russian disinformation campaigns, this also implies reporting regularly and comprehensively on the extent of information operations across Facebook’s different services.
Currently, Facebook systematically reports on violations of its adult nudity policy, but nowhere discloses the full volume and extent of foreign interference campaigns on its platform. This means that when platforms take down malicious networks emanating from Russia or Iran, for instance, they don’t do so on grounds of foreign interference, but based on other provisions of their terms of service, such as those relating to fake accounts or so-called “coordinated inauthentic behaviour”.
Besides systematic self-reporting, researchers have also called for access to the accounts that platforms themselves have taken down and attributed to foreign actors. This would help researchers identify behavioural patterns, and thus detect future disinformation campaigns faster.