Protecting freedom of speech against information manipulation
Kate Jones is a researcher and consultant on human rights and emerging technology. She has written extensively on disinformation and human rights, technology governance and artificial intelligence, and is an Associate Fellow with Chatham House. She previously spent many years as a lawyer and diplomat with the UK Foreign and Commonwealth Office, serving in London, Geneva and Strasbourg with a focus on international human rights law.
In this interview with Kate, we examine aspects of freedom of expression, disinformation and manipulation. What are the “rules of the road” for governments when running information campaigns? We also look at attempts to hide malign information operations behind claims of freedom of expression and what the recent court ruling in the case of RT France v the Council of the European Union illustrates.
How do you see the balance between upholding freedom of expression and media and protecting our societies against information manipulation or other malign activity aimed at undermining societies?
Kate Jones (KJ): Let’s be clear. Freedom of expression does not mean that governments are free to engage in deliberate disinformation and manipulation campaigns, whether directed at their own populations or internationally. Nor does it mean that governments are powerless to protect their own populations against deliberate disinformation campaigns by overseas regimes.
Freedom of expression is vitally important, and, indisputably, doesn’t only protect true information. Freedom of expression protects the expression of all kinds of information and ideas, regardless of whether they are fact or opinion, true or false, sincere or satire. Freedom of expression allows anyone to say what they wish, save for narrow exceptions imposed by criminal law.
But freedom of expression doesn’t protect techniques of manipulation. We’re used to that in other contexts: for example, we accept that misrepresentation nullifies a contract, and that fake product reviews aren’t acceptable marketing tools.
Action can be taken against a manipulation campaign aimed at undermining society if it is disrupting national security or public order. It is the disruptive intention that is key, rather than whether the material used is true or false. Freedom of expression can be restricted for specific purposes, including to protect national security or public order, provided that the restrictions are lawful, necessary and proportionate to the aims being pursued. This is the basis on which the General Court of the European Union has judged the current European sanctions against RT to be consistent with freedom of expression.
The source of disinformation is also significant. Governments have responsibilities in respect of information that ordinary citizens do not, because people rely on what they say. The freedom to seek information that accompanies freedom of expression entails that governments have a responsibility to proactively put information of public interest in the public domain, as well as to promote diverse sources of information. This responsibility is increasingly important today, when it can be so hard for individuals to spot what’s true and what’s false. The converse of this responsibility is that governments, and their agents, should not generate or promote disinformation.
How was the balance between freedom of expression and disinformation addressed prior to the rise of social media?
KJ: Disinformation has been with us for centuries. Propaganda was a major concern in the aftermath of World War II. Indeed, it was seen as second only to the atomic bomb as a threat to global peace and security. At that time attention focused on radio propaganda, as radio sets had recently become widely available, meaning that large numbers of people could receive information directly from overseas in real time.
The drafters of the core human rights instruments perceived a significant risk that foreign government disinformation campaigns could lead to societal unrest and destabilise governments. René Cassin, as French delegate to the negotiations for the International Covenant on Civil and Political Rights, described propaganda as a “vicious phenomenon … [a] mind-conditioning and spiritual rape of the masses.” There was a widespread view that freedom of expression should not entail a risk of exposure to unlimited propaganda and disinformation.
At the same time, there were also strong arguments that propaganda should be defeated by voluntary measures and the “marketplace of ideas”, and so more extreme proposals were not accepted. For example, the USSR’s proposal to amend the draft Universal Declaration of Human Rights provision on freedom of expression by prohibiting “war-mongering and fascist speech” was defeated by 41 votes to 6, with 9 abstentions. Ultimately, the International Covenant on Civil and Political Rights bans “propaganda for war” as well as advocacy of hatred that constitutes incitement to discrimination, hostility or violence. The specific parameters of these bans have long been controversial.
The International Covenant on Civil and Political Rights is the United Nations treaty by which governments agree to protect civil rights and fundamental freedoms ranging from the rights to life and to freedom from torture to the freedoms of expression and association. Many of its provisions are similar to those of the European Convention on Human Rights, but it is open to all UN member states to become parties, rather than only the 46 member states of the Council of Europe.
Nonetheless, propaganda and disinformation became principal weapons of the Cold War. Some of the recent disinformation campaigns we have seen appear to have drawn on tactics developed in the 1950s. Like some 1950s campaigns, the Internet Research Agency’s 2015/16 campaigns targeting the US presidential election relied on flooding the information environment with very large numbers of messages and adverts, obscuring the origins of messages and using local proxies to spread them, and impersonating real publications to give an air of authenticity to their messaging.
What should a human rights based approach to countering disinformation look like?
KJ: A human rights based approach should prioritise both freedom of expression and freedom of thought and opinion: it should allow people to speak freely and to have access to a range of diverse sources of information so that they can make up their own minds free of manipulation. Care must be taken to ensure that any restrictions on disinformation do not give governments or powerful private entities a green light to exercise unlimited control over what people say, or what they are able to hear. This risk may be mitigated by involving independent regulators with a mandate to safeguard human rights.
There are several dimensions to a human rights based approach. The EU is leading the way on many of these. Media and digital literacy for everyone, from children to the elderly, is essential to help people find their way around the information environment. And it’s crucial to support an independent, pluralistic, properly funded media so that everyone has access to reliable independent sources of information. Debunking disinformation, and labelling questionable information, are important pieces of the jigsaw too. There are now many brilliant examples of disinformation reporting tools and fact-checking initiatives all over the world.
But giving individuals the tools and means to access a range of information, while vitally important, isn’t enough to defend against deliberate manipulation campaigns by powers seeking to undermine national security or public order. There is more that governments and social media platforms can do to defend and preserve a free and pluralistic information environment.
First, it is open to governments to ban or sanction deliberate manipulation by foreign powers, provided that the ban is strictly limited to what’s needed to protect national security and public order. The British Government’s proposed offence of foreign interference is an interesting example. This new, carefully drafted offence would criminalise intentional manipulation of how a person exercises their public functions or participates in political processes, where two conditions are met. The first condition is that the manipulation involves either the commission of an offence or an act of coercion or misrepresentation, and the second is that the manipulation is carried out by someone associated with, or intending to benefit, a foreign power. As regards sanctions, the US Treasury operates a regime of sanctions in response to foreign interference in US elections.
Second, governments can ban specific instances of disinformation campaigns that are affecting national security or public order, like the EU’s sanctions against RT, Sputnik, and three other Russian government-controlled outlets, provided that these bans are in accordance with the law, necessary and proportionate in all the circumstances.
Third, governments and social media platforms can tackle how platforms are being used as channels for manipulation, by identifying and disrupting patterns of manipulative behaviour. We have seen that the design of platforms has often lent itself to disinformation campaigns, and that platforms have taken various steps to tackle disinformation since 2017, such as Facebook’s battle against coordinated inauthentic behaviour and Twitter’s against platform manipulation. There is more to be done. New legislation and policies are gradually requiring platforms to strengthen their efforts, including the revised EU Code of Practice on Disinformation and the British Government’s intention to designate the new foreign interference offence as a priority offence in the Online Safety Bill (meaning that platforms will have a duty to use proportionate systems and processes to minimise individuals’ exposure to foreign interference).
What is the dividing line between legitimate communication campaigns and illegitimate disinformation campaigns or operations?
KJ: There is a world of difference between campaigns designed to offer people the facts and a diversity of viewpoints, and campaigns designed to manipulate people into a distorted world view, or towards war or violence. However, the boundaries of legitimate activity in the field of information operations have not yet been clearly expressed or agreed internationally. This ambiguity plays into the hands of those who seek to manipulate public opinion through disinformation: it makes it difficult to respond effectively to illegitimate activity and gives scope for unfounded comparisons and allegations of hypocrisy.
It would be helpful to clarify the boundaries between legitimate information operations on the one hand, and illegitimate disinformation and manipulation campaigns on the other. This is not a straightforward exercise, in part because information operations take a wide range of forms and span times of peace and armed conflict, to which different legal regimes apply. But it’s an important one: setting out agreed parameters for information operations would make it easier to take a human rights-based approach to combating manipulation campaigns.