For this purpose, it is important that the government asks platforms to work with it in order to generate a useful research base. This may mean opening up some platform data in selected areas, such as disinformation, eating disorders, self-harm, and misogyny. The research would involve understanding patterns of use and conducting small-scale studies with particular groups, working with those groups and the platforms to better understand their social media habits, the ways in which these feed into their mental wellbeing, and thus the effect caused by the content in question. This research agenda would be a good accompaniment to the monitoring approach just discussed.
Should content on social media platforms be moderated according to users’ age?
It is worth noting that the Online Harms White Paper is the result of very heavy lobbying from children’s charities, and of a fierce campaign of attacks on companies by newspapers, based on stories of harm caused to children online. Yet the evidence of harm to children is not nearly as strong as we would expect it to be, and these discussions rarely tackle the many opportunities and benefits these technologies represent for children. Of course, there are many areas where children are underserved by platforms, in particular between the ages of 9 and 15, as children in this age range are too old for children-only platforms but perhaps too young for non-gated content. There is evidence showing that most of the harm children report having encountered online is caused by hateful content, including bullying and angry speech.
In this regard, the Online Harms White Paper is a legitimate response to the fact that children will find adult-oriented content on social media upsetting. Yet its proposals go further than this, which leads me to worry that we will end up in a situation where risk-averse platforms simply prevent access to users under 18, in order to avoid the responsibility of curating content that is potentially harmful to children. The overly dramatic response advocated by the Online Harms White Paper could therefore shut down the opportunities and benefits that exist online for children.
A less strict approach would involve devoting far more resources to education on a wide variety of issues, for both adults and children. Sex education has just become compulsory in schools, and there are currently consultations on how to discuss online pornography and sexting, which is crucial if all children are to be adequately equipped to deal with these sorts of challenges. Another useful approach would be for governments to understand the business models behind these kinds of content, and to come up with clever ways of disrupting them. In the area of disinformation, for instance, this would involve doing more work to track and trace the originators of the most significant campaigns. Of course, such initiatives require more resources than regulatory measures focusing on the companies that host such content.