Since 2016, at least 43 countries have proposed or implemented measures aimed at combating influence campaigns on social media.
This is according to a study by the NATO Strategic Communications Centre of Excellence, whose authors note that new approaches to tackling disinformation are “flourishing” as online efforts to manipulate public opinion become an increasingly “pressing policy concern.”
The study, titled “Government Responses to Malicious Use of Social Media,” breaks down the new regulations into 10 categories: content takedowns by social media platforms, transparency of online ads, data protection, criminalisation of disinformation, expanding the definition of illegal content, media literacy and watchdogs, journalistic controls, parliamentary inquiries, creation of cybersecurity units, and monitoring initiatives.
Ireland, Italy, and Australia, for instance, are among the countries that introduced criminal penalties for producing or sharing disinformation, or for organising a bot campaign targeting a political issue.
Among other measures, Croatia recently funded a new media literacy initiative, the U.S. Congress is investigating Russian interference in the 2016 U.S. presidential election, and G7 countries are developing a Rapid Response Mechanism to fight disinformation and foreign interference in elections.
The authors, however, caution that the countermeasures adopted over the past two years are often “fragmentary, heavy-handed, and ill-equipped” to curb harmful content online.
They point out that most of the government initiatives so far have focused chiefly on regulating free speech on social media rather than on addressing the deeper systemic problems that lie beneath attempts to influence public opinion online.
Some authoritarian governments, they say, have also co-opted the fight against disinformation to introduce legislation aimed at tightening their grip on the digital sphere and legitimising censorship online.
Instead, the report urges policymakers to demand greater accountability and cooperation from social media platforms.
“A core issue is a lack of willingness of the social media platforms to engage in constructive dialogue as technology becomes more complex,” the authors note.
The report encourages governments to shift away from measures aimed at controlling online content and work together to “develop global standards and best practices for data protection, algorithmic transparency, and ethical product design.”
The European Union has stepped up its own efforts to counter disinformation and in December presented an Action Plan aimed at tackling online disinformation in EU countries and beyond.
The Action Plan will also ensure that tech companies comply with the European Commission’s Code of Practice, a document that commits online platforms to increasing transparency around political advertising and reducing the number of fake accounts.
The platforms are required to report to the Commission on a monthly basis ahead of the European elections in May and face regulatory action if they fail to meet their commitments.