In the last two years, social networks have been strongly criticized for the way they deal with problematic content. That includes disinformation, terrorist content, online hate and child pornography: all types of content that circulate online and are harmful to adults and minors alike. Platforms are currently exempt from liability for such content. However, the European Commission intends to end this legal regime in order to encourage platforms to take proactive measures for detecting and removing problematic content. But where should the line be drawn between content moderation and freedom of expression? This final article in Institut Montaigne's series on the Digital Services Act looks at the circulation of online content and the thorny issue of content moderation, particularly with regard to the risks for minors. It is based on the report French Youth: Online and Exposed, published by Institut Montaigne in April 2020, and on Tackling Disinformation: Going Beyond Content Moderation, published in November 2019.
The challenges of online content
There is a lot of problematic content circulating online today. Whether it be hateful content or disinformation, society as a whole is concerned. However, let's face it, the issue is difficult to grasp. While it is easy enough to treat terrorist content as a risk to national security, treating false news or online insults the same way risks establishing a generalized censorship regime, in which content is removed for political reasons.
The stakes are high and require a reaction from the public authorities. The problem of problematic online content is obvious when we look at minors (although they are not the only ones concerned). Our report, French Youth: Online and Exposed, found that nearly 56% of young people say they have been victims of cyber-violence, and 35% repeatedly so. Among other forms of cyber-violence, 13% of them say they have been victims of rumours, 9% of threats, and 5% say they have had intimate images of themselves posted online without their consent. Let's not forget that young girls were particularly affected during the lockdown, especially because of "trolling" accounts in which young boys would share sexual content they had obtained in past relationships. At the same time, 19% of French people and 17% of French youths report having already been exposed to racist, anti-Semitic or homophobic content.
The problem of misinformation also affects young people, even if they are critical of the content they consult. In fact, 74% of them say they have often or sometimes realized that information they read turned out to be false. Young people, like the French population as a whole, are well aware of the issues raised by misinformation: 83% of 11- to 20-year-olds and 82% of the French believe that such content should be regulated by law. But what should content regulation look like?
Limitations of the current legal framework
Today, platforms are protected at the European level by the e-Commerce Directive, which exempts them from liability for content circulating on their networks, depending on how their activity is defined. Article 12 protects platforms that act as a mere conduit of information, provided that they do not modify the content, do not select the recipient and do not initiate the transfer of information. Similarly, Article 14 protects platforms that act as content hosts from any liability for illegal content of which they are unaware.
These categories no longer seem relevant when social networks play an important editorial role, choosing the content that is offered on news feeds, and when their algorithms actually contribute to making certain problematic content particularly visible (disinformation, conspiracy theories or extremist articles).
Distinguishing illegal content
It should be noted, however, that content whose illegality is beyond doubt, such as child pornography or terrorist content, is relatively well addressed. That is largely due to the creation of communication channels between public authorities and platforms, with the significant involvement of civil society, to bring this type of content to the platforms' attention and trigger removal procedures.
But how can we deal with the rest of the problematic content? How can we define categories flexible enough not to sweep up every insult posted online, while ensuring that victims of cyberstalking are properly protected? Harassment or stigmatizing insults can be generated by as few as ten accounts, sporadically, and may even be specific to a particular region. How can we encourage platforms to develop tools precise enough to detect such cases?