Moderation decisions must then be communicated to those they concern, and it must be possible to contest them through appeal procedures. The final decision can only rest with the judiciary.
Second, it is important that citizens be educated on these issues so that they can understand the decisions made and take part in designing the principles that govern them. Transparency must be a common thread, running from the design of moderation rules through to the issuing and explanation of decisions.
It is essential that digital means be mobilized to respond to the difficulties posed by content moderation. The moderator itself must operate as a social network, a space that fosters dialogue. In this respect, the problem of moderation by algorithms, which favor some content over others, must be addressed: we must be able to question them and demand greater transparency about how they work. This is obviously complex, since this is where platforms' business model lies. Algorithmic moderation can be democratized, but only if it is done well, which requires extensive public research in the field, as the detection of hate speech is particularly difficult. I stress the importance of the public nature of this research: we should not delegate this type of investigation to the platforms.
Sharing annotated data corpora is also essential to train algorithms and make moderation accessible to smaller companies. Finally, we must not forget to question the model of the big platforms: the problems we face today are also due to the excessively dominant, even monopolistic, positions they occupy.
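To make concrete what "training algorithms on an annotated corpus" involves, here is a minimal sketch using the scikit-learn library. The example texts, labels, and model choice are illustrative assumptions, not a description of any platform's actual system; a real shared corpus would contain thousands of human-annotated examples.

```python
# Minimal sketch: how a shared, annotated corpus could be used to train a
# basic moderation classifier. The corpus below is a toy placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated corpus: texts paired with labels (1 = hateful, 0 = not).
texts = [
    "you people are worthless and should disappear",
    "I strongly disagree with this policy proposal",
    "go back to where you came from, we don't want you here",
    "thanks for sharing, this was a really helpful thread",
]
labels = [1, 0, 1, 0]

# A simple bag-of-words model: TF-IDF features fed to logistic regression.
# Real systems are far more sophisticated, but the principle is the same:
# the quality and coverage of the annotated corpus drives the result.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

# Score a new message: probability that it should be flagged for review.
print(classifier.predict_proba(["we don't want your kind here"])[0][1])
```

Even this toy setup shows why sharing corpora matters: whoever holds the annotated data, and can inspect the resulting model, effectively decides what gets flagged.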