What could "legitimate and democratic" content regulation look like?
It is still unclear what "legitimate and democratic" regulation of the online communication space would look like. One of the solutions most often put forward in these discussions is to let a judge, ruling in interim proceedings, decide whether online content should be taken down. This mechanism is at the heart of the French 2018 law against disinformation, known as the "fake news law". Although it lends content regulation a degree of democratic legitimacy, this solution remains problematic in many ways.
First of all, this mechanism only addresses the problem once the harm is already done. The virality of posts makes it hard, even nearly impossible, to identify and take down problematic content quickly enough to contain the damage. This does not make the legal route obsolete, since it does hold the various actors (individuals as well as platforms) more accountable. But although the process may be expedited in exceptional cases, such as the suspension of a president's account, it will inevitably come too late in the majority of cases.
Legal proceedings also primarily target individual pieces of content, whereas the volume of potentially problematic messages calls for the scope to be broadened. For that to happen, one needs to understand how platforms with hundreds of millions of users, such as Facebook or Twitter, can even approach the problem. From a computer science perspective, the challenge is to create categories that are broad enough to cover as many situations as possible (hate speech, incitement to violence, misleading political content), yet specific enough to allow exceptions to be recognized (for instance, high schoolers humorously calling for an insurrection against their teacher). These exceptions are inevitable, but they also have to be discussed. In certain cases, new criteria have to be taken into account, such as a user's number of followers or their industry.
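To make this computer-science framing concrete, here is a minimal, purely illustrative sketch of what rule-based categorization with exceptions and author-level criteria might look like. The category names, the "classroom_joke" context, the FOLLOWER_THRESHOLD constant, and the decision labels are all hypothetical assumptions for illustration, not drawn from any platform's actual policy.

```python
from dataclasses import dataclass

# Hypothetical threshold: accounts above this reach get stricter treatment.
FOLLOWER_THRESHOLD = 100_000

@dataclass
class Post:
    text: str
    categories: set           # e.g. {"hate_speech", "incitement"}
    author_followers: int
    context: str = ""         # e.g. "classroom_joke", "political_campaign"

def moderation_decision(post: Post) -> str:
    """Return a coarse decision for a post, combining broad categories
    with narrower exception rules and author-level criteria."""
    # Exception: humorous in-group content is escalated to a human
    # reviewer instead of being removed automatically.
    if "incitement" in post.categories and post.context == "classroom_joke":
        return "human_review"

    # Content that meets several serious criteria at once is removed outright.
    if {"hate_speech", "incitement"} <= post.categories:
        return "remove"

    # A single serious category: the account's reach changes the outcome.
    if post.categories & {"hate_speech", "incitement", "misleading_political"}:
        if post.author_followers >= FOLLOWER_THRESHOLD:
            return "remove"
        return "reduce_distribution"

    return "keep"

# Example: the same category leads to different decisions
# depending on the author's reach.
small_account = Post("...", {"misleading_political"}, author_followers=500)
large_account = Post("...", {"misleading_political"}, author_followers=2_000_000)
print(moderation_decision(small_account))  # reduce_distribution
print(moderation_decision(large_account))  # remove
```

The point of the sketch is not the specific rules but the tension it exposes: every broad category needs exception clauses and extra criteria, and each of those choices is a policy decision that deserves public scrutiny.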
Are categories the solution?
If public authorities are to find effective solutions, they need to look into how content is categorized. What categories already exist? What criteria are attached to them? How are these criteria likely to evolve? What decisions are taken for each category? How is content handled when it meets several criteria? These questions call for open and democratic discussion.