14/06/2019

Challenges of Content Moderation

Théophile Lenoir
Author
Fellow - Misinformation and Digital Policy

Content moderation is drawing much attention worldwide. As governments ask private actors to take concrete action to prevent the circulation of certain types of content online, public policy analysts and academics debate the extent to which such initiatives endanger freedom of expression. To address this issue, Institut Montaigne will ask several experts for their opinions in the coming weeks. This is the introductory article of the series.

"There is no race. There is no gender. There is no age. There are no infirmities. There are only minds. Utopia? No. The Internet." This excerpt from American telecommunications company MCI’s 1997 advert crystallizes the hopes and dreams of empowerment and freedom sparked by the rise of cyberspace in the 1990s. We have come a long way since then, and the ideals assuming that equity and pacific communication online was a given have been proved wrong. There has indeed been a proliferation of harmful speech and abuse on social media platforms in recent years, which are leading many countries to take action to regulate what is often referred to as "the global village".

This explains why content moderation is attracting so much attention in France. On March 20th, Laetitia Avia submitted a bill to tackle hateful speech online. On May 10th, Emmanuel Macron welcomed Mark Zuckerberg at the Elysée Palace to discuss platform regulation, and more specifically the issue of hate speech on Facebook. This took place a year after the Tech for Good summit, when Facebook and the French government launched a collaboration to improve content moderation on the social network. This initiative resulted in the publication of a report on May 10th, which offers recommendations to improve the social responsibility of platforms. Five days later, Emmanuel Macron met with Jacinda Ardern to discuss the Christchurch Call, following the Christchurch mosque attacks of March 15th, which were streamed live on Facebook.

Various types of content concerned

There are several types of content that governments want to see banned. These include illegal content, such as expressions of racism, discriminatory speech based on religion, gender or sexual orientation, and arguments denying crimes against humanity such as the Holocaust. Governments are also fighting content that defends, or even serves as propaganda for, terrorist organizations. There is the issue of pornography and sexual content on public social networks, particularly where young users are concerned; that of violent content broadcast live, such as the Christchurch shootings; and, finally, that of disinformation and deliberately fabricated news, designed to destabilize election cycles both within and beyond the countries holding the elections.

In all these situations, the content governments want to erase is often the most extreme: blatantly false, violent, pornographic, racist, discriminatory… (consensus is easily reached on why such content should be deleted). However, there are always grey areas that raise questions which are difficult to answer. For example, in France, the law combatting the manipulation of information, passed in 2018, created a legal injunction to stop the circulation of disinformation during election campaigns.

It falls to a judge, ruling in summary proceedings, to qualify a piece of news as "fake", based on three criteria:

  1. the fake news must be manifest,
  2. it must be disseminated deliberately and on a massive scale,
  3. and it must lead to a disturbance of the peace or compromise the outcome of an election.

Dealing with grey areas: True or false? Legitimate or illegitimate?

A lot of ink was spilled contesting the bill: what does "manifest" mean? Is there really a line between fake and true, and is a judge the right person to draw it? These questions are impossible to ignore, especially given that social scientists from a wide array of disciplines, such as Bruno Latour, have long questioned the very notion of objective facts: in this line of thought, facts are not merely "out there" but constructed by scientists in a human environment shaped by power dynamics. The question of who asserts the objectivity of a fact therefore matters a great deal.

When dealing with content moderation more generally, the point of view from which a piece of content is deemed "appropriate" is essential. Content moderation is about deleting content and is therefore, to some extent, censorship. In the end, the tricky question is: how many people need to agree that a specific piece of content should be censored for that censorship to be legitimate? Take, for example, those who deny the existence of climate change. Few do in France, but in the United States a significant share of the population doubts that the phenomenon is real. Should this argument be banned because a large majority of scientists have proved it wrong? Such debates create important divides.

Technical complications

In addition to these considerations of what is legitimate or not, there is a technical challenge involved in deleting content. Today, platforms such as Facebook are investing heavily in artificial intelligence tools to detect specific types of content (nudity, fake news, hate speech). In the wake of the Christchurch attacks, Facebook announced it had invested $7.5 million to improve video and image analysis technology in order to identify duplicates and delete them more quickly. The technology’s efficiency varies according to the type of content: Nicholas Thompson and Fred Vogelstein report in a Wired article that on Facebook "the success rate for identifying nudity is 96 percent", yet it is only 52 percent for hate speech.
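
Facebook has not published the internals of its duplicate-detection system, but a common technique behind such tools is perceptual hashing, which produces image fingerprints that survive re-encoding, resizing and light edits. Below is a minimal sketch of that general idea, assuming the open-source Python libraries Pillow and imagehash; the threshold value and function names are illustrative, not Facebook’s.

```python
# Minimal sketch of duplicate detection via perceptual hashing.
# This illustrates the general technique, not Facebook's actual pipeline.
# Requires: pip install Pillow imagehash

from PIL import Image
import imagehash

# Hypothetical cutoff: hashes within this Hamming distance count as duplicates.
HAMMING_THRESHOLD = 8

def build_blocklist(banned_image_paths):
    """Hash every known-banned image once, up front."""
    return [imagehash.phash(Image.open(path)) for path in banned_image_paths]

def is_duplicate(upload_path, blocklist):
    """Return True if an upload is a near-duplicate of any banned image.

    Unlike a cryptographic hash, a perceptual hash changes only slightly
    when an image is re-encoded, resized, or lightly edited, so re-uploads
    of banned content can still be caught.
    """
    candidate = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - banned <= HAMMING_THRESHOLD for banned in blocklist)
```

The weakness is visible in the sketch itself: heavier edits such as crops, overlays or re-filming a screen push the distance past any fixed threshold, which is one reason edited copies of the Christchurch video kept reappearing.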
 
Even when the technology is accurate in categorizing a piece of content (as "nude", say), it can still make erroneous choices. Tarleton Gillespie’s book on content moderation, Custodians of the Internet, opens with the example of Nick Ut’s famous 1972 photograph, The Terror of War, in which a little girl runs naked while a village burns in the background. The image contains nudity, but should it be deleted? It is difficult for an algorithm to take the picture’s historical dimension into consideration, so such misclassifications will inevitably happen. If platforms are punished for leaving inappropriate content (in this case, nudity) online, some argue they will filter more content than necessary in order to avoid paying large fines.
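
To make the problem concrete, here is a toy Python sketch (with an illustrative score and threshold, not any platform’s real values) of why a decision taken purely from a classifier’s confidence score is blind to context:

```python
# Toy illustration: a confidence threshold cannot see context.

def moderate(nudity_score: float, threshold: float = 0.9) -> str:
    """Decide from the classifier's score alone."""
    return "remove" if nudity_score >= threshold else "keep"

# Hypothetical score for Nick Ut's photograph: visually it IS nudity,
# so a well-trained classifier may score it high...
print(moderate(0.97))  # -> "remove"

# ...yet the right decision depends on signals the score cannot carry
# (historical significance, news value). Encoding them requires extra
# inputs and, in hard cases, a human in the loop:
def moderate_with_context(nudity_score: float, is_newsworthy: bool) -> str:
    if is_newsworthy:
        return "escalate to human review"
    return moderate(nudity_score)
```

The second function only relocates the difficulty, of course: someone still has to decide what counts as "newsworthy", which is precisely the grey area discussed above.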

Two examples: initiatives in France and in the United Kingdom

As previously mentioned, the French government has begun to take action in order to answer these questions and counter the spread of harmful speech online. The bill presented by Laetitia Avia in March, which focuses on the issue of hateful speech, proposes to oblige platforms to remove any content inciting hatred within 24 hours of its being flagged. It also recommends simplifying the current processes by which platform users can flag hateful speech, and emphasizes the need for platforms to be transparent about the processes they themselves employ. Laetitia Avia has also made clear that the bill, which will be reviewed by the French Parliament in July, will propose to make platforms’ failure to remove hateful content a criminal offense. Conversely, she noted that excessive censorship by platforms should be sanctioned as well. A regulator is also expected to be appointed to oversee the implementation of this new legislation.

The May 10th French report on increasing social media platforms’ responsibility with regard to content moderation adopts a different approach from the sanctions-focused bill just mentioned. It introduces new forms of co-regulation and aims to encourage platforms’ sense of civic responsibility by promoting the internalization of societal goals through processes such as accountability by design. According to the report, the role of French and European public authorities should mainly consist of establishing transparency obligations, in order to ensure that platforms remain accountable and that the self-regulation mechanisms the report advocates are effective and deliver the expected outcomes.

France is not the only country attempting to set up a regulatory framework for online content moderation. In April, the United Kingdom’s Department for Digital, Culture, Media & Sport and the Home Office presented a White Paper on Online Harms to the British Parliament, which should lead to draft legislation in the summer. It is similar to the French report published in May in that it stresses the responsibility of social media platforms and demands more transparency from them. It recommends that public authorities focus on ensuring that platforms comply with the duty of care it defines, and provides for the designation of a regulator empowered to impose fines in cases of non-compliance.

In anticipation of the regulatory decisions expected this summer in both countries, Institut Montaigne has gathered the viewpoints of experts from France, the United Kingdom and the United States, in order to delve into the grey areas identified above and to reflect on the best ways to strike a balance between freedom of speech and respectful, harmless communication online. This series is coordinated with the help of Manon de La Selle, former Project Manager at Institut Montaigne and student in media and communications at the London School of Economics and Political Science (LSE).
