15/01/2021

Twitter V. Trump: Where Do We Go From Here?

 Théophile Lenoir
Author
Fellow - Misinformation and Digital Policy

The permanent suspension of Donald Trump’s Twitter account following the January 6 Capitol riots has been met with unanimous condemnation in French and European political circles. European Commissioner for the Internal Market Thierry Breton, who in December presented a legislative proposal aiming to strengthen the accountability of Internet platforms, stated that "the fact that a CEO can pull the plug on POTUS’s loudspeaker without any checks and balances is perplexing". German Chancellor Angela Merkel and French Economy Minister Bruno Le Maire also condemned the move. These debates illustrate the difficulty of creating legitimate and democratic processes that could help determine what is acceptable online, and what isn’t.

Private actors and freedom of speech

The transfer of control of civil liberties, such as freedom of speech, from democratic institutions to private actors raises a host of questions. Companies like Twitter or Facebook should not be able to decide on their own who is entitled to participate in public debates. However, these platforms are also asked to regulate public debates, for instance by limiting the spread of disinformation around the pandemic.
 
The terms and conditions of the various social networks currently determine what is acceptable or not (Donald Trump’s Twitter account, for instance, was suspended because it was deemed to glorify violence). Public authorities work alongside platforms on a number of topics, such as terrorism or child pornography, in order to identify and take down content and users that engage in illegal activities. However, platforms retain a considerable degree of autonomy when it comes to deciding which content that is problematic but not illegal needs to be deleted. This is the case in several areas where "grey zones" still exist, such as fake news or hate speech.

The transfer of control of civil liberties, such as freedom of speech, from democratic institutions to private actors raises a host of questions.

In the United States, these platforms are exempt from liability for this content. They are regulated by Section 230 of the 1996 Communications Decency Act, which the American president had called for revising after Twitter flagged two of his messages in May (one related to mail-in ballots and the other to the protests in Minneapolis). However, the First Amendment to the US Constitution protects citizens’ free speech and makes it extremely difficult to regulate content posted on these platforms. In Europe, the Digital Services Act will aim to regulate online content, but the application of new rules promises to be complicated given the American context.

What could "legitimate and democratic" content regulation look like?

It is still unclear what "legitimate and democratic" regulation of the online communication space would look like. One of the solutions that most often appears in discussions is to let an interim relief judge rule on whether or not online content should be deleted. This solution is at the heart of the French 2018 law against disinformation, known as the "fake news law". Although it adds a democratic touch to content regulation, this solution remains problematic in many ways.

First of all, this mechanism only addresses the problem once the harm is already done. The virality of posts makes it hard, even nearly impossible, to identify and take down problematic content quickly enough to contain the damage. This does not make the legal route obsolete, as it does hold the various actors (individuals as well as platforms) more accountable. However, although the process may be expedited in cases such as the suspension of a president’s account, it will inevitably come too late in the majority of cases.

Legal proceedings also primarily target individual pieces of content, whereas the volume of potentially problematic messages calls for the scope to be broadened. For that to happen, one needs to understand how platforms with hundreds of millions of users, like Facebook or Twitter, can even approach this problem. From a computer science perspective, the challenge is to create categories that are broad enough to cover as many situations as possible (hate speech, incitement to violence, misleading political content), but also specific enough to allow for exceptions to be acknowledged (for instance, high schoolers humorously calling for an insurrection against their teacher). These exceptions are inevitable, but they also have to be discussed. In certain cases, new criteria have to be taken into account, such as the user’s number of followers or their field of activity.
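To illustrate the trade-off between broad categories and documented exceptions, here is a minimal sketch in Python. The category name, the criterion, the exception rule and the follower threshold are illustrative assumptions for the sake of the example, not any platform’s actual policy or system.

```python
# Hypothetical sketch: encoding a broad moderation category together with
# documented exceptions and context criteria (e.g. follower count).
# All names and thresholds are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    text: str
    author_followers: int
    author_is_public_figure: bool

@dataclass
class Category:
    name: str                                   # e.g. "incitement to violence"
    matches: Callable[[Post], bool]             # broad criterion
    exceptions: List[Callable[[Post], bool]] = field(default_factory=list)

    def applies(self, post: Post) -> bool:
        # A category applies when the broad criterion matches
        # and no documented exception does.
        return self.matches(post) and not any(exc(post) for exc in self.exceptions)

# Broad criterion: any call to "insurrection" is flagged...
incitement = Category(
    name="incitement to violence",
    matches=lambda p: "insurrection" in p.text.lower(),
    # ...except when the author has little reach and is not a public figure
    # (e.g. high schoolers joking about an "insurrection" against a teacher).
    exceptions=[lambda p: p.author_followers < 1_000 and not p.author_is_public_figure],
)

joke = Post("Insurrection against tomorrow's math test!", 80, False)
call = Post("Join the insurrection at the Capitol!", 80_000_000, True)
print(incitement.applies(joke))  # False: covered by the exception
print(incitement.applies(call))  # True: broad criterion, no exception
```

The point of the sketch is not the specific rule but the structure: the broad criterion and each exception are explicit objects that could, in principle, be published, discussed and audited.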

Are categories the solution?

If public authorities are to find effective solutions, they need to look into the categorization of content. What categories already exist? What criteria are attached to them? How are these criteria bound to evolve? What decisions are taken for each category? How is content treated when it meets several criteria? We need to have open and democratic discussions about these questions.

The suspension of Donald Trump’s account, or the removal of his content, has been contested by politicians for its arbitrariness, as if Twitter and the "digital oligarchy" (in the words of French Minister of the Economy Bruno Le Maire) had acted on a whim. Twitter’s justification for the permanent suspension of Trump’s account mentions the context in which the American president’s messages are "interpreted". This wording is awkward, as "interpretation" is always a subjective exercise. Twitter meticulously analyzed the President’s messages in light of the political climate before deciding that they constituted an incitement to violence, thereby violating the platform’s terms and conditions.

The suspension of Donald Trump’s account, or taking down his content, has been contested by politicians for its arbitrariness, as if Twitter and the "digital oligarchy" [...] had acted on a whim.

The question is the following: could the decision to permanently suspend the account be considered legitimate if the method used to interpret the President’s messages, as well as the criteria and categories invoked to justify the decision, had been approved by democratic institutions? We at Institut Montaigne are convinced that therein lies the solution to break free from the bind we find ourselves in: a situation in which each decision is criticized by the same actors for opposite reasons, and in which we are torn between protecting freedom of speech and protecting law and order.

An audit mechanism to make decisions legitimate

Democratically defining the categories of content that pose a problem is a laborious and never-ending process. This is why, in our report French Youth: Online and Exposed, we suggest that national authorities develop their capacity to audit social media platforms’ moderation processes. As we explain in the report, transparency around the decisions made is a sine qua non condition for any regulation aiming to take down problematic content. However, transparency is built on reciprocal trust between platforms and public authorities. This trust can be buttressed by verification mechanisms.

It would be naive for public authorities to rely solely on platforms’ good faith. But demanding legal access to the information posted, while building verification processes (as is the case in the financial sector), will boost confidence in the process. From the platforms’ point of view, there are considerable risks in unveiling the workings of content regulation, the multiplicity of conflicts and the often imperfect decision-making processes. However, it is only by collectively understanding the complexity of these mechanisms that we will be able to move forward. Public authorities should, in the first place, encourage transparency by sanctioning only systematic and repeated violations of their rules (for instance the circulation of terrorist or child pornography-related content).
 
This audit mechanism must be flexible in order to account for new problematic situations. It should be able to create an environment in which public authorities and platforms share information on the methods used to flag and delete content. Finally, it should also help us reach a deeper understanding of the complexity of the situation we find ourselves in.
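What could the information shared under such a mechanism look like? As a purely illustrative sketch (the record fields, category names and consistency check below are assumptions, not a description of any existing regulatory scheme or report recommendation), a platform could share structured moderation records that auditors check against categories approved through a democratic process.

```python
# Hypothetical sketch of a moderation record shared with an auditor.
# Field names, categories and the audit check are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set

@dataclass
class ModerationRecord:
    post_id: str
    decision: str            # e.g. "remove", "label", "suspend_account"
    category: str            # category invoked, e.g. "glorification of violence"
    justification: str       # human-readable explanation of the interpretation
    decided_at: datetime
    reviewed_by_human: bool

def audit(records: List[ModerationRecord], approved_categories: Set[str]) -> List[str]:
    """Return findings: decisions citing no approved category or lacking justification."""
    findings = []
    for r in records:
        if r.category not in approved_categories:
            findings.append(f"{r.post_id}: unapproved category '{r.category}'")
        if not r.justification.strip():
            findings.append(f"{r.post_id}: missing justification")
    return findings

# Example: an auditor checks a batch of shared records against categories
# approved through a democratic process (the set below is made up).
approved = {"glorification of violence", "terrorist content", "child sexual abuse material"}
batch = [
    ModerationRecord("123", "suspend_account", "glorification of violence",
                     "Messages interpreted, in context, as encouraging further violence",
                     datetime(2021, 1, 8), True),
    ModerationRecord("456", "remove", "political satire", "", datetime(2021, 1, 9), False),
]
print(audit(batch, approved))
```

The design choice matters more than the code: if every decision must cite an approved category and a written justification, the "interpretation" step that Twitter invoked becomes something public authorities can examine after the fact rather than a black box.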

Copyright: Olivier DOULIERY / AFP
