23/07/2020

Content Regulation in France, the UK and China: Our Work with Sciences Po Students

Authors
Institut Montaigne
Sciences Po School of Public Affairs (Master in Public Affairs)

The issue of who regulates the Internet, how, and why has long been a hot topic for policymakers in the digital sphere. Once seen as an unregulated "Wild West", the boundless arena of cyberspace is one in which traditional state actors increasingly want to perform. But how can states impose their will in an ecosystem without frontiers, replete with Virtual Private Networks (VPNs), end-to-end encryption (E2E), The Onion Router (TOR) and everything they entail? And more importantly, what do these systems of regulation say about states and the values and principles by which they seek to define themselves?

For the last three months, Institut Montaigne has worked closely with a group of students from the Sciences Po Master in Public Affairs: Fiona Olivier, Camila Albornoz, Seung Cheol Ohk and Mei Moriizumi. Under the supervision of Dominique Cardon, Director of the Sciences Po Medialab, and Théophile Lenoir, Head of the Digital Program at Institut Montaigne, they have analyzed and compared some of the emerging systems of content regulation in China, France and the United Kingdom (UK).

The study looked into three specific harms: disinformation, hate speech and terrorist content. Together, these harms span the gamut of problematic online content, from material condemned almost universally across the globe to contested categories yet to be clearly defined by law. China, France and the UK are among the first countries in the world to develop a clear regulatory system for online content (Germany was left aside in this study given its similarity to France on content regulation).

Our findings show the extent to which the design and delivery of online content regulation allow states to impose methods of control and cultural norms, and thus shift the balance of power in their direction. However, this reassertion of state sovereignty makes international regulation more difficult, because states do not agree on what constitutes problematic content.

Freedom of speech vs. national security?

There are several rationales which justify the regulation of the Internet, varying from the protection of national security, safeguarding human dignity and ensuring physical safety of children, to the promotion of economic well-being. Among them, governments are generally willing to implement tougher regulations when national security is at stake.

China, France and the UK differ in which forms of online discourse they treat as national security threats. Terrorist content is the one harm that all three countries consider a national security threat, and it is where they implement the strictest regulations. British society accepts government surveillance online, and curtailments of freedom of speech, when these help prevent terrorism. France likewise considers terrorism one of the most serious threats to international and domestic peace and security. Police in both the UK and France actively monitor social networks for terrorist material, and social media platforms also serve as domains for intelligence-gathering and counter-radicalization. On terrorist content, therefore, the regulatory practices of the UK, France and China are similar.
France, China and the UK take different approaches to disinformation. In China, disinformation is considered, like terrorist content, a national security risk and a crime, justifying strict control over content. In the UK, although disinformation is not a crime, the government sees it as a potential national security threat, with foreign actors seeking to influence its citizens. Disinformation in the UK has accordingly been described as "fourth-generation espionage".

Aside from publishing its Online Harms White Paper, which sets out a new framework for online content regulation, the British government has established new organizations to tackle and monitor online disinformation.

The UK also set up the National Security Communications Team and the Rapid Response Unit within the government in 2018. The National Security Communications Team aims to increase the UK’s resilience to serious national security threats and the government’s capacity to use communications effectively in its response. The Rapid Response Unit monitors news and information online to identify misinformation and disinformation. While the role of the government, the police and the judiciary in policing disinformation is small, the government uses strategic communications to tackle it.

France focuses specifically on disinformation around electoral campaigns. In the wake of the electoral interference of 2017, France targets these threats to democracy by requiring platforms to meet certain standards of conduct during the three months preceding elections, and by giving judges the power to order content removal during election campaigns. Although disinformation is considered a threat by the French Defense Ministry, it is not qualified as such in French legal texts and has not resulted in the creation of ministerial organizations.

Different systems that reflect different national identities

The severity of regulation differs both between and within countries, depending on the type of harm in question, revealing different understandings of sovereignty. For all harms, the toughest regulation is found in China.

Under the Chinese regulatory system, forged by the Counter-Terrorism Law and the Cyber Security Law, companies operate under licenses granted by the government, which mandate the monitoring and strict policing of users; non-compliance is met with tough criminal sanctions and the closure of platforms. For all three types of content observed (disinformation, hate speech and terrorist content), the government determines what is tolerated, with no oversight from citizens or civil society organizations. Legal regulations are also backed by an Internet architecture that allows the government to restrict and monitor almost all content via domestically located servers. While there are some minor differences between the three harms, attitudes towards terrorism, hate speech and disinformation are very similar, for the simple reason that any content questioning the official state narrative is viewed as an attack on the Chinese way of life and therefore as a form of terrorism.

In France and the UK, the seriousness of sanctions varies according to the type of content. Concerning terrorism, whilst it would be wrong to claim the Franco-British approach is identical to the Chinese one, there are many parallels. The government defines what counts as unacceptable content, both private companies and the police monitor and censor it, offenders face criminal sanctions, and ultimately there is little oversight by citizens and civil society. Whilst both regulatory systems nominally involve an independent regulator, the French SDLC (Sous-direction de la Lutte contre la Cybercriminalité) is in practice an administrative authority, whilst for terrorist content the UK’s Ofcom is bound to enforce a code of practice written directly by the Home Office.

In the case of hate speech, less severe regulatory systems in France and the UK allow greater freedom of speech and more room for dissenting and contradictory views. The legal and regulatory definitions of hate speech in France and the UK, as opposed to China, aim to protect citizens from abuse rather than to protect the State, and focus on ensuring that people do not suffer discrimination on the basis of a set of protected characteristics (gender, ethnicity, etc.).

However, there are important differences between France’s and the UK’s methods of regulation. France’s recently passed Loi Avia made headlines for mandating the removal of hateful content within 24 hours, a provision ruled unconstitutional by the French Constitutional Council. Less has been made of the fact that this requirement was created directly by policymakers and not by an independent regulator (as envisaged in the French report, Creating a French Framework to Make Social Media More Accountable).

By contrast, the UK regulator Ofcom drafts "codes of practice" instructing companies on individual harms. This difference is subtle but profound: under the French system, the role of the regulator (in this case an administrative authority, itself under the jurisdiction of the French police) is limited to enforcement, whereas under the British system the regulator defines the rules for companies to follow and is entirely separate from government.

These differences in online regulation reflect existing differences in national values. In France, the Avia Law is the symptom of a centralized and vertical approach, whereby the government clearly sets out the rules it expects platforms and citizens to follow. In the UK, the approach leads to co-creation by the government and an independent regulator, just as the Common Law system and the unwritten Constitution rely heavily on forming consensus between different parties.

Governments manage to work with platforms on terrorism, not so much on other forms of content

Platforms cannot be expected to strike the balance between freedom of expression and protection from harm without help from governments. But governments and platforms often struggle to work together because of the knowledge gap between governments and the technology giants, which frustrates efforts to combine regulation and technological innovation in ways that protect users and citizens. Yet it is not enough for governments to define sanctions without defining detailed rules, because the technicalities of content removal need to be discussed. Even in China, where the government takes a heavy-handed approach across all harms, it is generally up to the platforms to establish a monitoring system.

Interestingly, in the case of terrorism, where everyone agrees on its impact on national security, governments and platforms have shown their ability to work together. International fora such as the G7 and the Global Internet Forum to Counter Terrorism (GIFCT), along with a side event at the UN General Assembly and the Christchurch Call initiative, all focus on technology industry engagement. More broadly, governments and platforms have stepped up collaboration in areas where the situation is acute and relatively clear cut, such as public health and security. For example, they have worked together to address disinformation that threatens public health, such as around vaccines and the Covid-19 crisis.

Can states be sovereign in a context of international regulation?

Content moderation has an international dimension, which creates a tension between the sovereignty of states and that of supranational organizations. Governments have recognized that a supranational effort to counter harmful content is the appropriate way to respond effectively to a phenomenon that takes place in a field without borders. The European Digital Services Act, currently under discussion at the European Commission, is proof of this. It is also an acknowledgement that supranational regulation is a better tool to address the increasing power of tech giants in European economies and democracies, especially in small countries.

In the case of terrorist content, states seem to agree on the degree of intervention. Overall, initiatives aim to highlight the role of platforms in combating terrorist content. One of the most recent is the so-called "Christchurch Call", led by France and New Zealand following the mass shootings in Christchurch mosques. As part of this effort, online service providers agreed to take action to prevent the uploading of terrorist content and to halt its dissemination on social media. The EU Crisis Protocol, built within the framework of the Christchurch Call, outlines the roles and obligations of the relevant actors, as well as the procedures and tools for exchanging information between platforms and law enforcement, in order to provide a rapid response to the spread of terrorist content on the Internet. At its core, the protocol upholds the protection of fundamental rights and European legal frameworks such as the General Data Protection Regulation.

Regarding hate speech and disinformation, initiatives have been more difficult to coordinate. This may be explained by the cultural, political and philosophical differences within Europe that underpin different approaches to illegal content. Overall, the lack of common definitions and of agreement on what is acceptable may weaken and even hamper efforts. That is why international efforts have largely focused on education, prevention, best-practice sharing and reporting. For example, European actors have focused on creating voluntary tools, such as the Code of Practice on Disinformation, deciding not to define illegal content at the level of the European Union (although the Digital Services Act is now moving in that direction).

Conclusion

Our study has highlighted clear variations between the three countries' approaches to regulating speech online. China, the UK and France share similar approaches to dealing with terrorist content. However, whilst both disinformation and hate speech are qualified as national security threats in China, this is not the case in France and the UK, where actions taken to delete such content are moderate.

There are also significant differences between like-minded countries such as France and the UK, which reflect their cultural approaches. In France, the government attempted to set out the rules it expects platforms and citizens to follow under the failed Avia Law. In the UK (and in the French report Creating a French Framework to Make Social Media More Accountable, which has not led to legislation), the approach focuses on cooperation between the government and an independent regulator.

For all harms, the approaches countries take to regulating online speech carry the hallmarks of historic cultural identities, forged over years of debate and resistance between governments, citizens and civil society organizations. Whilst the technologies may be new, many of the questions about what speech should be tolerated and where the line of censorship should be drawn have existed for as long as states themselves. As we move into a new age of controlled digital spaces, and as states seek to reassert their sovereignty through regulation, we are likely to see far greater national variation in the way platforms operate.

Copyright: PHILIPPE HUGUEN / AFP
