19/06/2019

Challenges of Content Moderation: On the Importance of Cooperation

Interview with Ben Nimmo

 Ben Nimmo
Senior Fellow at the Atlantic Council Digital Forensic Research Lab (DFRLab)

France and other countries, including Germany and the United Kingdom, are currently working to regulate the moderation of harmful content on social media platforms. While such initiatives are often supported by public authorities, they are the source of heated debates regarding their potential impact on freedom of expression. In this interview, Ben Nimmo, Senior Fellow at the Atlantic Council Digital Forensic Research Lab (DFRLab) and member of Institut Montaigne’s previous working group on the French media ecosystem, highlights the complexity involved in attempting to achieve both clear definitions and transparency in content moderation, and calls for cross-sector collaboration to address these difficulties.

The French framework to make social media platforms more accountable stresses the importance of the transparency of the algorithms used for content moderation, and of accountability by design. What do you think of this approach? Do you believe it can be effective in reducing online harms?

Platforms, and especially Twitter, have traditionally emphasized the importance of free speech, and thus kept moderation to a minimum. This particular stance is one of the reasons for their success. However, they are now under increasing pressure to go beyond this position and to moderate content, yet without being provided clear rules to help them determine what content is acceptable, and what content is not. Even the categories that might seem most obvious and easy to define, such as "hate speech," involve many grey areas. Indeed, there is no universal agreement on where legitimate expression of dislike ends, and where hate speech begins.

We face the same problem when dealing with incitement to harassment. As an example, should the following statements "everyone go and harass person X", "person X deserves to be harassed" and "I wish someone would harass person X" be dealt with differently? Answering such questions requires making complex linguistic and legal decisions, which means that any algorithm attempting to categorize speech will need to be backed up by a human moderator - at least until the algorithms have been adequately trained, which will take a long time. The publication of the guidelines platforms’ moderators currently refer to in order to determine what content is unacceptable could contribute to increasing transparency.

Yet there is a non-negligible risk that malicious actors would find ways to circumvent these guidelines and publish harmful content by sidestepping platforms’ rules.
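The earlier point about grey areas - that an algorithm categorizing speech needs a human backstop - can be illustrated with a minimal, purely hypothetical sketch. Nothing here is drawn from any platform's actual system: the triage function, the thresholds and the scores are assumptions made for illustration only.

```python
# Minimal sketch of human-in-the-loop moderation triage (hypothetical).
# A classifier produces a score; only high-confidence cases are handled
# automatically, and grey-area cases are escalated to a human moderator.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    post: str
    score: float   # assumed model score: probability the post is harassment
    action: str    # "remove", "keep", or "human_review"

def triage(post: str, score: float,
           remove_threshold: float = 0.9,
           keep_threshold: float = 0.2) -> ModerationDecision:
    """Route a post based on an (assumed) harassment score in [0, 1]."""
    if score >= remove_threshold:
        action = "remove"         # high confidence: automated removal
    elif score <= keep_threshold:
        action = "keep"           # high confidence: leave the post up
    else:
        action = "human_review"   # grey area: escalate to a human moderator
    return ModerationDecision(post, score, action)

# Illustrative scores only; the three statements from the interview land in
# the grey zone precisely because their wording differs in legally relevant ways.
examples = [
    ("everyone go and harass person X", 0.85),
    ("person X deserves to be harassed", 0.55),
    ("I wish someone would harass person X", 0.45),
]
for text, score in examples:
    print(triage(text, score))
```

Where those thresholds sit is exactly the kind of linguistic and legal judgment the interview describes, which is why such a pipeline cannot be left entirely to the algorithm.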

Ultimately, it is crucial that legislators provide social media platforms with a detailed list of the kinds of behavior that should be forbidden, along with legal definitions of key terms such as "hate speech," "harassment" and "disinformation". The more specific and precise the information legislators give platforms about what should and should not be allowed, the easier it will be for the latter to set up appropriate moderation processes, and for legislators to establish accountability mechanisms that give them some oversight of platforms’ efforts to moderate content.

How do France’s efforts compare to the United States’ approach to harmful content online?

France is visibly investing significant effort in developing a coherent approach to these issues. US authorities have to make progress in a more complex environment, because the First Amendment sharply limits the possibilities of content moderation. Moreover, some US politicians have accused platforms of being biased against conservative ideas, based on (at best) anecdotal evidence. Platforms’ room for maneuver in the US would thus be very constrained if they attempted to establish more active moderation processes. I expect to see much stronger pressure towards content moderation in Europe than in the US in the coming years.

The French government collaborated with Facebook to issue the report published on May 10th. What is your take on such a collaboration? Should it be encouraged in other parts of the world?

My team at the Atlantic Council Digital Forensic Research Lab has a cooperation agreement with Facebook, so the fact that I think cooperation has a lot of potential, and needs to be increased, shouldn’t come as a surprise! In reality, most platforms - not just Facebook - are so large relative to the number of people they employ that it is impossible for them to control everything from within the company. In fact, it would even be deeply disturbing if they tried to do so. Hence, it is crucial that other players contribute to solving the problems these platforms face, and I believe collaborating with them is a good way to do so.

Challenging policy questions such as content moderation can only be adequately addressed through broad public discussion and precise legal definitions of the relevant terms. It is also worth noting that different types of content moderators will base their analyses on different types of evidence: platforms may look at the data profiles associated with the accounts being examined, while open-source researchers might take a different approach and investigate different kinds of indicators. Such complementarity can be valuable, adding nuance to the selection and moderation process.

Therefore, building a trusting relationship between platforms, their users, researchers, and policymakers is essential. Platforms are the only ones able to curate what is shared: they thus need to be able to trust that the information they give visibility to will not be misused. For their part, users need to trust that the information they share on platforms will not be abused. Finally, governments and platforms both need to trust one another to be honest brokers. Dealing with this issue therefore requires setting up a multi-stakeholder conversation that would gather platforms, legislators, researchers, privacy advocates, and privacy lawyers around the same table at the same time.
