
Challenges of Content Moderation: Human Rights First!

Interview with Victoria Nash

INTERVIEW - 4 July 2019

France and other countries (including Germany and the United Kingdom) are currently investing efforts in regulating the moderation of harmful content on social media platforms. While such initiatives are often supported by public authorities, they are the source of heated debate over their potential impact on freedom of expression. In this interview, Victoria Nash, Deputy Director and Associate Professor at the Oxford Internet Institute, advocates a human rights approach to content moderation and highlights the need for an ambitious research agenda to define categories of online harm. She also discusses the UK government's efforts on this issue so far, particularly in relation to the benefits and harms that children encounter online.

The debate around content moderation is often framed as an opposition between freedom of speech on the one hand, and the idea that harmful content should be removed on the other. What is your view on this dichotomy?

I think the ideal approach to content moderation for any country would be one that focuses on the protection of human rights online. The development of such an approach by countries like France and the UK would deter more authoritarian regimes from misusing the models these countries conceive and implement. Moreover, such a framework would allow governments to defend a variety of rights, not just freedom of expression. While the latter is very important, there are other crucial values I think we should protect online, such as the right to information and the right to participation, which should all be thought about together.
Indeed, governments should certainly be worried about the over-removal of content and its impact on freedom of expression, but they also need to be concerned about minorities not being able to participate in public debates. A human rights approach provides justification for things like the right to appeal and the focus on transparency, and puts a greater onus on governments to give clear definitions of the types of content they want to see removed. It also requires the setting up of regulatory bodies or empowered courts to deal with the most controversial disagreements in terms of content moderation and removal.

The French report published on May 10th emphasises that regulation should focus on monitoring the efforts of social media to moderate content and stresses the importance of transparency in this regard. What do you think of this approach?

In an ideal world, I would probably want to exhaust all the self-regulatory mechanisms before introducing very tough regulation, and to that extent I think such measures are useful first steps. Yet I am aware that this may not be entirely sufficient. One concern is that we currently have a system where big companies act as "private sheriffs", with no clear framework setting out exactly which types of speech must be removed. This leaves unaccountable companies holding too much power, free to remove either too little or too much content.

The other concern regards the effect these measures will have on different kinds of platforms. Companies like Facebook, YouTube or Twitter have substantial resources to devote to content moderation, and we already know that companies like Facebook are considering avenues for appeal, and have deployed clear reporting processes and frameworks for how moderation decisions should be made. While this is very positive, I am concerned about the extent to which these models can be replicated across smaller and more niche companies, and I wonder whether governments will require the same standards of all of them. For instance, a company like Reddit clearly tolerates more free speech than Facebook does, and it would be important to me that these differences remain, at least as long as they do not extend to illegal content.

In the UK, I believe the question of what type of content should be removed, and why, is actually being muddied by the Online Harms White Paper, because of the wide range of issues it asks companies to act on. The moderation of harmful speech like bullying requires very subjective decisions to be made, and it is unclear what transparency measures could or would look like in those circumstances. This means that we might, ironically, be moving towards a less transparent framework.

As you’ve mentioned, clearly delineating what content is fake or harmful is very complex. Who should be in charge of defining those terms?

At the moment, the government is proposing that definitions of harmful content be produced by the companies and the regulator, which, in my view, is the biggest issue with the Online Harms White Paper. It is mainly problematic because whether a piece of content causes harm depends in part on who is consuming it. For instance, the evidence for harm around eating disorder content is very mixed. This type of content may indeed be harmful for vulnerable populations who already have eating disorder tendencies, but not necessarily for the general population. We are thus confronted with a fundamental conceptual problem: one can attempt to define harmful content by setting out some categories, but it is not clear that one will ever have evidence that the content is actually harmful.

The UK’s current duty of care approach is appealing in some sense, because it asks platforms to consider what types of content are likely to be harmful to their users, which allows for some nuance. On the other hand, we do need legislation that clarifies what types of content should be prioritised for removal, perhaps in relation to particular groups of users. This involves setting up a big research agenda for the next five years.
For this purpose, it is important that the government asks platforms to work with it in order to generate a useful research base. This may mean opening up some of the platforms' data in a selection of areas, such as disinformation, eating disorders, self-harm and misogyny. The research would involve understanding patterns of use and conducting small-scale studies with particular groups. The goal would be to work with specific groups and platforms in order to better understand those groups' social media habits, and the ways in which these feed into their mental wellbeing, so as to determine the effect of the content in question. This research agenda would be a good accompaniment to the monitoring approach just discussed.

Should content on social media platforms be moderated according to users’ age?

It is worth noting that the Online Harms White Paper is the result of very heavy lobbying from children's charities, and of a fierce campaign of attacks on companies by newspapers, based on stories of harm caused to children online. Yet the evidence of harm to children is not nearly as strong as we would expect it to be, and these discussions rarely tackle the many opportunities and benefits these technologies represent for children. Of course, there are many areas where children are underserved by platforms, particularly between the ages of 9 and 15, when they are too old for children-only platforms but perhaps too young for non-gated content. There is evidence showing that most of the harm children report having encountered online is caused by hateful content, including bullying and angry speech.

In this regard, the Online Harms White Paper is a legitimate response to the fact that children will find adult-oriented content on social media upsetting. Yet its proposals go further than this, which leads me to worry that we will end up in a situation where risk-averse platforms will simply block access for users under 18, in order to avoid having to take responsibility for curating content potentially harmful to children. The overly dramatic response advocated by the Online Harms White Paper could therefore shut down the opportunities and benefits that exist online for children.

A less strict approach would involve devoting far more resources to education on a wide variety of issues, for both adults and children. Sex education has just become compulsory in schools, and there are currently consultations on how to discuss online pornography and sexting, which are crucial if all children are to be adequately equipped to deal with these sorts of challenges. Another useful approach would be for governments to understand the business models behind these kinds of content, and to come up with clever ways of disrupting them. In the area of disinformation, for instance, this would involve doing more work to track and trace the originators of the most significant campaigns. Of course, such initiatives require more resources than regulatory measures focused on the companies that host such content.

Institut Montaigne
59, rue la Boétie 75008 Paris

© Institut Montaigne 2017