27/06/2019

Challenges of Content Moderation: Define "Harmful Content"

Interview with Claire Wardle

Claire Wardle
Executive Chair at First Draft

France and other countries, among them Germany and the United Kingdom, are currently attempting to regulate the moderation of harmful content on social media platforms. While such initiatives are often supported by public authorities, they are the source of heated debate regarding their potential impact on freedom of expression. Here, Claire Wardle, Executive Chair at First Draft and a member of Institut Montaigne’s previous working group on the French media ecosystem, argues that a multi-national reflection on content moderation would benefit countries that have so far been working in silos.

The French report on the regulation of social networks, published on May 10th, stresses the importance of transparency in the algorithms used for content moderation, and of accountability by design. What do you think of this approach? Do you believe it can be effective in reducing online harm?

My biggest concern about the growing conversations on government regulation of "harmful" content is that we have no clear definition of what we mean by that. We know how to define illegal speech, such as terrorism-related content or child sexual abuse content. Yet we lack clear definitions and boundaries for any speech that lies outside this category. In my work, I regularly witness the weaponization of context: genuine images or posts that could be harmless are framed in such a way as to become potentially damaging. Decisions about what should be deemed "problematic" or "harmful" are very complex, and our societies haven’t yet agreed on the types of content they expect social media platforms to either curate or moderate. This is why I am disappointed by the use of the word "harm" without a clear definition of what it covers or how it should be measured.


Given that definitions aren’t clear, the focus on algorithmic transparency and accountability is definitely justified and worth exploring further. Previous attempts to promote such an approach have led companies to argue that a) their algorithms are commercially sensitive and/or b) these algorithms are by nature so dynamic that the processes they rely on cannot be explained or made more transparent. I believe it is crucial that the results produced by these algorithms start to be audited.

It is important that we understand the kinds of sensitive and potentially harmful content (e.g. vaccination information, election integrity, hate speech, abuse and harassment) people are "seeing" on their social media feeds, that we organize society-wide discussions about the types of content we consider to be harmful, and that we decide on the kinds of policies platforms should adopt with regard to such content.

I am much more interested in governments requiring companies to undergo independent audits than in having them write their own transparency reports. I would also argue that we need to gather much more evidence and data before any serious regulation on content moderation is drafted.

In the US, Mark Zuckerberg’s calls to regulate harmful content online have partly been met with scepticism, in particular due to the country’s long-standing attachment to free speech, defined in the broadest sense. Do you think the regulatory approaches being developed in France and more generally in Europe might nonetheless influence the US?

Conversations about the regulation of content moderation have certainly flourished in the US recently. Europe has taken a tougher stance than the US, partly because its recent history provides the continent with a serious understanding of the severe consequences hate speech can have. The US is more tolerant towards certain types of speech, such as hate speech, which has meant conversations around regulations for moderating content are less common. Yet the US is watching closely what is happening in Europe, and GDPR has certainly sparked significant conversations about the mechanisms that should be deployed to protect personal data. However, I suspect any US regulation related to platforms will focus on anti-trust rather than content moderation.

First Draft News works in many parts of the world, including Brazil, Indonesia and Nigeria. In your opinion, what content moderation initiatives undertaken across the world have been most significant so far?

We have witnessed a number of regulatory proposals from around the world, from Brazil to Australia and Russia. There is now a proposed law in Singapore. All of these have been rushed through, and they are problematic in different ways. The focus is on moderating content, too often putting the burden on the user for sharing misinformation. All of these initiatives are based on poor definitions of what is problematic or harmful. The failure to think collectively is also worrying. The social media companies concerned are global, so national responses are insufficient, if not problematic. A multi-national inquiry bringing together representatives from various countries to engage with both experts and platforms would be a much more interesting way of collecting evidence and thinking about the broader responses these issues call for.
