It is important that we understand the kinds of sensitive and potentially harmful content (e.g. vaccination information, election integrity, hate speech, abuse and harassment) people are seeing on their social media feeds, that we organize society-wide discussions about the types of content we consider harmful, and that we decide on the kinds of policies platforms should adopt with regard to such content.
I am much more interested in governments requiring companies to undergo independent audits than in having them write their own transparency reports. I would also argue that we need to gather much more evidence and data before any serious regulation of content moderation is drafted.
In the US, Mark Zuckerberg’s calls to regulate harmful content online have partly been met with scepticism, in particular due to the country’s long-standing attachment to free speech, defined in the broadest sense. Do you think the regulatory approaches being developed in France and more generally in Europe might nonetheless influence the US?
Conversations about the regulation of content moderation have certainly flourished in the US recently. Europe has taken a tougher stance than the US, partly because its recent history provides the continent with a serious understanding of the severe consequences hate speech can have. The US is more tolerant towards certain types of speech, such as hate speech, which has meant conversations around regulations for moderating content are less common. Yet the US is watching closely what is happening in Europe, and GDPR has certainly sparked significant conversations about the mechanisms that should be deployed to protect personal data. However, I suspect any US regulation related to platforms will focus on anti-trust rather than content moderation.
First Draft News works in many parts of the world, including Brazil, Indonesia and Nigeria. In your opinion, what content moderation initiatives undertaken across the world have been most significant so far?
We have witnessed a number of regulatory proposals from around the world, from Brazil to Australia and Russia. There is now a proposed law in Singapore. All of these have been rushed through, and are problematic in different ways. The focus is on moderating content, too often putting the burden on the user for sharing misinformation. All of these initiatives are based on poor definitions of what is problematic or harmful. The failure to think collectively is also worrying. The social media companies concerned are global, and national responses are thus insufficient, if not problematic. A multi-national inquiry, with representatives from various countries discussing the issues with both experts and platforms, would be a much more productive way of collecting evidence and thinking about the broader responses these issues call for.