18/09/2020

Digital Services Act: Moderating Content and Protecting Minors

Gilles Babinet, Author - Former Advisor on Digital Issues
Thierry Jadot, Author - Chairman of Dentsu Aegis Network in France, Middle East, North Africa and Turkey
Théophile Lenoir, Author - Fellow, Misinformation and Digital Policy

In the last two years, social networks have been strongly criticized for the way they deal with problematic content: disinformation, terrorist content, online hate and child pornography, all types of content that circulate online and are problematic for adults and minors alike. Platforms are currently exempt from liability for such content. However, the European Commission intends to end this legal regime and encourage platforms to take proactive measures to detect and remove problematic content. But where should the line be drawn between content moderation and freedom of expression? This final article in Institut Montaigne's series on the Digital Services Act looks at the circulation of online content and the thorny issue of content moderation, particularly with regard to the risks for minors. It is based on two Institut Montaigne reports: French Youth: Online and Exposed, published in April 2020, and Tackling Disinformation: Going Beyond Content Moderation, published in November 2019.

The challenges of content online

There is a lot of problematic content circulating online today. Whether it is hateful content or disinformation, society as a whole is concerned. The issue, however, is difficult to grasp. While it is easy enough to treat terrorist content as a risk to national security, doing the same for false news or online insults risks setting up a generalized censorship regime, in which content is removed for political reasons.

The stakes are high and require a reaction from the public authorities. The issue of problematic online content is obvious when we look at minors (although they are not the only ones concerned). Our report French Youth: Online and Exposed found that nearly 56% of young people say they have been victims of cyber violence, and 35% repeatedly so. Among other forms of cyber violence, 13% say they have been victims of rumours, 9% of threats, and 5% say they have had intimate images of themselves posted online without their consent. Let us not forget that young girls were particularly affected during the lockdown, notably by "trolling" accounts on which young boys shared sexual content obtained in past relationships. At the same time, 19% of French people and 17% of French youths report having already been exposed to racist, anti-Semitic or homophobic content.

The problem of misinformation also affects young people, even if they are critical of the content they consult. In fact, 74% of them say they have often or sometimes realized that information they read turned out to be false. Young people and the French as a whole are well aware of the issues raised by misinformation: 83% of 11-20 year-olds and 82% of the French believe that such content should be regulated by law. But what should content regulation look like?

Limitations of the current legal framework

Today, platforms are protected at the European level by the e-Commerce Directive, which grants them exemptions from liability for content circulating on their networks, depending on how their activity is defined. Article 12 protects platforms that act as a mere conduit of information, provided that they do not modify the content, do not select the recipient and do not initiate the transfer of information. Similarly, Article 14 protects platforms that act as content hosts from liability for illegal content of which they are unaware.

These categories no longer seem relevant when social networks play an important editorial role, choosing the content that is offered on news feeds, and when their algorithms actually contribute to making certain problematic content particularly visible (disinformation, conspiracy theories or extremist articles).

Distinguishing illegal content

It should be noted, however, that content whose illegality is beyond doubt, such as child pornography or terrorist content, is relatively well taken into account. This is largely due to the creation of communication channels between public authorities and platforms, with the significant involvement of civil society, to bring such content to the platforms' attention and trigger removal procedures.

But how can we deal with the rest of the problematic content? How can we create categories flexible enough not to remove every insult online, while ensuring that individuals targeted by cyberstalking are properly protected? Harassment or stigmatizing insults can be generated by as few as ten accounts, sporadically, and can even be specific to a certain region. How can we encourage platforms to develop tools precise enough to detect such cases?


In France, the law against hate content, known as the Avia law, required platforms to delete, within 24 hours, any hateful speech they became aware of via a notification system. However, given the lack of a precise definition of what constitutes hateful content, and the risk of over-moderation by platforms seeking to avoid sanctions, the Constitutional Council struck down the article in question, depriving the law of its flagship measure. The conclusion is simple: our knowledge of what can legitimately be deleted online within the limits of freedom of expression is still too weak to focus most of the efforts of legislators and platforms on content deletion.

Voluntary codes of practice are insufficient

But should nothing be done? In order to fight hate speech, the European Commission has set up a code of conduct developed with Facebook, Microsoft, Twitter and YouTube, which actors such as Instagram, Dailymotion and Snapchat have since ratified. By adhering to this code of conduct, these platforms commit to examining requests to remove content within 24 hours. A similar initiative has been undertaken against online disinformation. While this voluntary method is commendable, it has proven insufficient, as the European Commission noted in a press release dated September 10, 2020 on the evaluation of the Code of Practice on Disinformation. Among the issues raised is the lack of common definitions among member states. Above all, public authorities today have too few means to evaluate the extent and effectiveness of the measures taken by platforms to moderate content.

In addition to these content concerns, there is the issue of age verification on the Internet, which is essential to effectively protect minors online. On this point, a 2017 British law mandated the government to set up an age verification system on the Internet; the plan was abandoned in 2019 after coming under fire for infringing on online freedoms. A similar initiative is being considered in France, in the form of an article in the law protecting victims of domestic violence, whose results are still awaited.

 How can we move towards greater accountability?

First of all, Institut Montaigne makes two specific proposals for the protection of minors in its report French Youth: Online and Exposed. In order to effectively supervise minors' use of the Internet, age verification must be made possible. Institut Montaigne therefore suggests studying the feasibility of an age verification system at the time of purchase of a device (phone, laptop, etc.), optionally offered to the parent. This would lock the corresponding configuration into the operating system of the smartphone, tablet or computer, so that it cannot be modified. Next, Institut Montaigne proposes to prepare a set of rules for the protection of young people using digital intermediaries. These rules could include making their online profiles private by default, so that they are not visible on search engines, or strictly limiting their access to content intended for adults, as sketched below. At the same time, the sanctions applied under the GDPR could be strengthened with regard to the protection and use of minors' personal data.
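To make the idea more concrete, here is a minimal illustrative sketch, in Python, of how a device-level age flag set at the time of purchase could drive the protective defaults mentioned above (private profile, no search-engine indexing, no adult content). All class, field and function names are hypothetical assumptions for illustration; they do not describe an existing standard or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: a device-level age flag set once at purchase,
# which online services could read to apply protective defaults.
# Names are illustrative only.

@dataclass(frozen=True)  # frozen mirrors the "unmodifiable configuration" idea
class DeviceProfile:
    owner_is_minor: bool
    declared_age: int

@dataclass
class AccountSettings:
    profile_public: bool = True
    indexable_by_search_engines: bool = True
    adult_content_allowed: bool = True

def apply_minor_defaults(device: DeviceProfile, settings: AccountSettings) -> AccountSettings:
    """Apply the protective defaults suggested in the report when the
    device is flagged as belonging to a minor."""
    if device.owner_is_minor:
        settings.profile_public = False                # private by default
        settings.indexable_by_search_engines = False   # invisible to search engines
        settings.adult_content_allowed = False         # adult content blocked
    return settings

if __name__ == "__main__":
    device = DeviceProfile(owner_is_minor=True, declared_age=14)
    print(apply_minor_defaults(device, AccountSettings()))
```

The design choice illustrated here is that services would only read a binary flag from the device, rather than collect identity documents, which keeps the amount of personal data exchanged to a minimum.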

Concerning content as a whole, the main challenge facing public authorities and civil society today is the lack of information available on two subjects: on the one hand, the measures implemented by platforms to detect and remove content, and on the other, the processes deployed to improve those measures. As demonstrated by the Constitutional Council's decision to strike down the core of the Avia law, the understanding of legislators, civil society and platforms of what constitutes harmful content is still too incomplete to enforce removal measures without hindering freedom of expression. Improving definitions and detection systems is therefore a priority.


To do so, it is essential to evaluate platforms' actions with regard to content in an ongoing, reliable and independent manner, and to measure the consequences of those actions. The aim of this scrutiny should not be to find individual cases of content that the platforms have moderated poorly (disinformation or hateful content), but to uncover possible systemic errors in their content moderation methods.

This system could be based on the principle of stress tests, used in particular in the financial sector. Applied to digital platforms, this would mean requiring them to submit to an independent annual audit of their actions. Among the elements the auditors would be entitled to request are the objectives set in terms of content moderation, the human and technological means deployed to achieve them (including procedures for verifying the quality of the training data of content detection systems), the difficulties encountered and the way in which these difficulties were overcome, as sketched below. The objectives could then be adjusted according to each platform's room for improvement. Sanctions would be considered if the audits did not show satisfactory results in terms of minimizing risks for users.
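As a purely illustrative sketch, assuming hypothetical field names and a simplified pass/fail rule (neither of which is defined by the article or by any existing standard), the information an auditor could collect each year and the check for missed moderation objectives might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the structured information an independent auditor
# could request from a platform each year. Names and thresholds are
# illustrative assumptions, not a defined reporting standard.

@dataclass
class ModerationObjective:
    category: str                  # e.g. "harassment", "disinformation"
    target_removal_rate: float     # share of notified content handled within the agreed deadline
    achieved_removal_rate: float

@dataclass
class AuditReport:
    platform: str
    year: int
    objectives: List[ModerationObjective] = field(default_factory=list)
    moderators_employed: int = 0
    training_data_reviewed: bool = False   # quality checks on detection models' training data
    difficulties_reported: List[str] = field(default_factory=list)

def unmet_objectives(report: AuditReport) -> List[ModerationObjective]:
    """Return the objectives the platform missed, which the auditor could use
    to adjust next year's targets or to recommend sanctions."""
    return [o for o in report.objectives if o.achieved_removal_rate < o.target_removal_rate]

if __name__ == "__main__":
    report = AuditReport(
        platform="ExamplePlatform",
        year=2020,
        objectives=[
            ModerationObjective("harassment", 0.90, 0.85),
            ModerationObjective("disinformation", 0.75, 0.80),
        ],
        moderators_employed=1200,
        training_data_reviewed=True,
    )
    for missed in unmet_objectives(report):
        print(f"Objective missed for {missed.category}: "
              f"{missed.achieved_removal_rate:.0%} vs target {missed.target_removal_rate:.0%}")
```

The point of such a structure is that it documents means (staffing, training-data checks, reported difficulties) rather than individual moderation decisions, in line with the systemic focus described above.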

Some of this information could be made public so that actors such as civil liberties associations can judge the adequacy of definitions and procedures. It is also important that the population as a whole be made aware of what is at stake. In this respect, we recommend the mandatory publication of annual "barometers" aimed at sharing information with the general public on any harassment or misinformation campaigns that may have taken place.

Audits carried out by independent actors have proven their effectiveness in the corporate and financial world. The use of intermediary auditors means that the State does not have to develop expertise it does not have, and that the cost of these audits is not borne by the taxpayer. Moreover, it allows obligations to be defined gradually, in terms of means rather than results, which would probably be more difficult if the State were to become these companies' main interface.

 

 

