16/06/2020

Can the US and Europe Agree on How to Hold Social Media Accountable?

Théophile Lenoir
Author
Fellow - Misinformation and Digital Policy

What does Donald Trump's conflict with social media, especially with Twitter, mean for free speech? The platform's intervention following the President's tweets led him to sign an executive order against social networks, accusing them of bias against Republicans. At the same time, it forced companies like Snapchat and Facebook to take a stance on moderating content from political figures, generating strong criticism from American civil society and academia. Twitter, for its part, has continued its campaign against disinformation by testing a new feature that asks users who are about to retweet an article whether they would like to read it before sharing it.

All these incidents seem to be leading to more accountability from social networks. But the main issue for the platforms is not whether the executive order leads to the creation of a new accountability regime. The risk for them is that the US regime will be different from Europe’s.

What happened?

Donald Trump's tweets drew responses from the platform for two reasons: distortion of the facts on the one hand, and glorification of violence on the other. First, Twitter attached a notice to two of the President's tweets about the decision by California's Democratic Governor Gavin Newsom to allow the state's citizens to vote by mail because of Covid-19. Donald Trump claimed that all California residents, including immigrants, would have access to mail-in ballots and that mail-in voting would be fraudulent. In accordance with its terms and conditions, and with the approval of its CEO Jack Dorsey, Twitter placed a warning over the President's tweets, offering readers the possibility to "get the facts" about mail-in voting.


Secondly, a few days later, following the third night of protests over the death of George Floyd and the violence that took place in Minneapolis, a new tweet from Donald Trump was labeled. This time, Donald Trump announced that the army would support Minnesota Governor Tim Walz, should he want its help to calm the protests.

The conclusion of the tweet, "when the looting starts, the shooting starts", prompted reports from users, following which Twitter hid Donald Trump's tweet and covered it with this message:

"This tweet violated the Twitter rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the tweet to remain accessible"

Users would now have to click on the message to see the President's tweet. They could no longer "like" or reply to the tweet, and could share it only by commenting on it. Twitter took a similar action on Tuesday June 23rd against a tweet in which the President threatened to use "serious force" if protesters set up an autonomous zone in Washington, D.C.

Impact on the platforms

This tug-of-war between the Trump administration and the platforms intensified as soon as Twitter's message regarding mail-in voting was posted, after which Donald Trump issued an executive order against the platforms (see below). Subsequently, the violent protests following the death of George Floyd, the President's online reaction, and Twitter's intervention once again shed light on the issue of content moderation, especially the moderation of political figures. All of this is unfolding in a context where disinformation campaigns and coordinated online actions have very real offline consequences: in Oregon, for example, armed groups mobilized following rumors of an anti-fascist ("antifa") gathering.

The handful of companies running social networks have therefore had to take a position. While Snapchat sided with Twitter by removing the President's account from its "Discover" feature, Facebook preferred not to intervene, creating opposition within the company and in the academic world. Facebook's neutrality in this context of social tensions, both inside the company and around the world, once again demonstrates how hard it is for its business model to accommodate the thorny issue of moderating political content.

It’s hardly in Facebook’s best interest to stand against political leaders: the platform would run the risk of seeing measures taken against it, including being censored, which could deprive the company of important advertising markets. The social network might also simply lose users politically aligned with these leaders. In a highly polarized context, as is the case in the United States, it is this second risk that prevails. Trump has obviously understood this, and has not hesitated to accuse the platforms of biased interference in the political debate.


The challenges of regulation...

While Facebook did not directly oppose the President (Mark Zuckerberg, its CEO, simply announced a review of content moderation policies related to posts concerning the use of force by the state), the social network may not come out unscathed.

Following Twitter's first intervention (urging its users to check the facts on mail-in voting), the President signed an executive order calling for an investigation into the business model of content platforms, an examination of possible biases in social networks, and a review of Section 230 of the Communications Decency Act of 1996. While the executive order will have little immediate effect on federal law, it could lead to a long-term review of Section 230, which states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". As we noted in our article Tackling Disinformation: Going Beyond Content Moderation, this provision effectively shields social networks from liability for the content their users post and has withstood most legal challenges.

On Wednesday June 17, the US Department of Justice released recommendations for a reform of Section 230. One of their aims is to "encourage platforms to be more transparent and accountable to their users" by providing more information on the content they delete in "good faith". The Department accordingly calls for a statutory definition of "good faith" to clarify its scope. The recommendations also include removing the platforms' Section 230 protection from civil lawsuits brought by the federal government and from antitrust liability. While Congress would need to act for any of these recommendations to go further, they reveal the Trump administration's determination to prevent platforms from deleting content without facing consequences.

The question of how to hold platforms accountable for the content they host and promote is not new. The European Commission has already taken the lead on the issue of platforms' responsibility for content, after countries like France and Germany drafted and implemented laws requiring platforms to play a more active role in moderating content such as disinformation or illegal material. For example, the 2000 European e-commerce directive, which protects platforms, is undergoing a major overhaul under the Digital Services Act, and a public consultation on the issue was launched on June 2.

This new text aims to create a legal framework for the liability of digital platforms, covering issues ranging from content dissemination to the employment status of workers like Uber drivers. What these issues have in common is the transparency of socio-technical systems (a mix of technical tools, for example using artificial intelligence, and the human decisions that shape them), and their responsibility for the decisions they lead to.


… in a context of international competition

Publicly, the platforms are not opposed to regulation, quite the opposite. Facebook has openly called for more regulation by the authorities, particularly on sharing responsibility for the dissemination of false or violent content. However, it is in the company's interest to ensure that the decisions taken do not cause it financial difficulty.

This has become particularly clear in the face of strong competition from Asian companies like TikTok and WeChat. The stakes are more complex for Facebook than for Twitter because, alongside the question of responsibility for content, American and European authorities are looking closely at the competitive advantage that access to data gives a platform like Facebook (but also TikTok).

The main challenge for social networks is to ensure that regulations apply equally across international borders. As far as content is concerned, being held accountable is inevitable. The risk is thus not so much the modification of the American Communications Decency Act following Donald Trump's executive order, but rather the possibility that the American text will differ from the European one. The balancing act will therefore continue: calling for more regulation at the international level while simultaneously limiting its effects.

