Free speech and regulating disinformation: the US vs. the EU
Freedom of expression – or freedom of speech in the American context – is a pillar of democracies on both sides of the Atlantic. Yet U.S. and European interpretations of free speech differ considerably. In the United States, the First Amendment codifies citizens’ right to free speech, with few exceptions. Hate speech, for example, is considered protected speech, except in narrow instances involving incitement to violence or a true threat. The American interpretation of free speech therefore follows the Ancient Greek tradition of parrhesia, the ability to speak freely without fear of government censorship or retribution. There is an expectation that the government will not inhibit free expression (except when it is likely to produce "imminent lawless action"), but there is no expectation that the government will protect citizens from, among other things, online mobs that seek to intimidate and silence opposing views. In short, the American view of free speech is grounded in the notion that free expression is a right that must be protected from, rather than provided by, the government.
Europe, meanwhile, borrows from isegoria, a more egalitarian concept that refers to citizens’ equal right to participate in public debate. European governments traditionally play a more active role in protecting individuals’ right to free expression, and European case law has deemed limited restrictions necessary to protect "the rights of others". European restrictions on certain kinds of expression, e.g. Holocaust denial in Germany, are therefore justified by the isegorian interpretation of free speech, in which one’s inalienable right to free expression needs to be protected not only from the government but also from one’s fellow citizens.
These different interpretations of free speech are not merely definitional; they shape public attitudes. A 2015 Pew Research Center poll revealed that while Americans and Europeans alike strongly support the right to criticize governments (95% and 91% respectively), Americans were far more tolerant of offensive speech. For instance, 77% of Americans (compared with 38% of Germans) supported the right of others to make offensive statements about their own religion, and 67% (compared with 27% of Germans) supported the right to make offensive statements about minority groups.
But different tolerance levels in the United States and Europe towards offensive speech do not, alone, explain the contrasting attitudes towards regulation in the transatlantic space. Section 230 of the U.S. Communications Decency Act (CDA) of 1996, which states that no "provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider", effectively shields social media companies from legal liability for user-generated content. The immunity offered by Section 230 has withstood most legal challenges, signaling that any attempt to regulate content on online platforms in the United States would first require an amendment to the CDA.
Given the challenges of content-based regulation, a more likely approach for American legislators is to focus on regulation that demands greater transparency and data protection from the tech sector. This is an area where the United States and Europe could find significant common ground. Moreover, by adopting common standards that address social media practices rather than the speech of social media users, Americans and Europeans can take the lead in crafting industry norms that truly adhere to shared democratic values. These norms could also provide a framework for emerging democracies to follow, lessening the appeal of the more draconian measures being advanced by authoritarian regimes.
Recommendations for the way forward
Recommendation #1: tackle inauthentic behavior rather than content
Legislation that attempts to regulate content will invariably come into conflict with differing free speech principles. In limited cases already defined by preexisting laws or standards, content moderation is necessary. However, disinformation rarely meets those thresholds, and new legislation designed specifically to tackle false or misleading content risks government overreach. A far more effective approach is to address the malign behavior used to disseminate false content rather than the content itself. This includes, but is not limited to, impersonation and coordinated inauthentic behavior deployed to deceive users (e.g. presenting oneself as posting from one country while operating from another, using multiple accounts to simulate a mass movement, or stealing pictures to create fake personas). Mitigating intentionally deceptive behavior addresses systemic vulnerabilities rather than questions of protected speech that tech companies are ill-equipped to adjudicate.
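To make "coordinated inauthentic behavior" concrete: one crude behavioral signal, among the many a platform might evaluate, is near-simultaneous posting of identical text by clusters of accounts. The Python sketch below is purely illustrative; the data format, thresholds, and function name are assumptions for the example, not any platform’s actual detection method.

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_posting(posts, window=timedelta(seconds=30), min_accounts=5):
    """Flag texts posted by many distinct accounts within a short window.

    `posts` is a hypothetical list of (account_id, text, timestamp) tuples;
    dense clusters of identical posts are one weak signal of coordination,
    never proof on their own.
    """
    by_text = defaultdict(list)
    for account_id, text, timestamp in posts:
        by_text[text.strip().lower()].append((timestamp, account_id))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        for i in range(len(events)):
            accounts = set()
            j = i
            while j < len(events) and events[j][0] - events[i][0] <= window:
                accounts.add(events[j][1])
                j += 1
            if len(accounts) >= min_accounts:
                flagged.append((text, sorted(accounts)))
                break  # this text is already flagged; move on
    return flagged
```

The behavioral focus of the recommendation is visible in the sketch: nothing in it asks whether the posted text is true or false, only how it was disseminated.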
Recommendation #2: audit social media companies
One of the complaints currently leveled by governments against the major social media companies is that governments, and indeed individual users, do not possess the information necessary to determine whether tech companies are fairly and effectively enforcing their own terms of service. This creates an inherent power disparity. One possible solution is to mandate independent, third-party audits of the major social media platforms, like those required in the financial sector. This would allow for independent oversight of the tech companies without putting oversight directly in the hands of government regulators. Audits could include examining algorithms to identify the disproportionate impacts of social networks, i.e. phenomena that affect one population significantly more than another.
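As a rough illustration of what an audit metric for disproportionate impact might look like, the sketch below compares how often a recommendation system exposes two groups of users to a given category of content. The data format, group labels, and the ratio itself are assumptions for the example, not an existing regulatory standard.

```python
# Illustrative audit metric, not an actual regulatory standard:
# compare the rate at which two user groups are shown a content
# category, in the spirit of "disparate impact" testing.
# `impressions` is a hypothetical list of (group, saw_category) pairs.
def exposure_rates(impressions):
    totals, hits = {}, {}
    for group, saw_category in impressions:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if saw_category else 0)
    return {g: hits[g] / totals[g] for g in totals}

def disparity_ratio(impressions, group_a, group_b):
    rates = exposure_rates(impressions)
    return rates[group_a] / rates[group_b]

# Example: group "A" is shown the category three times as often as "B".
sample = [("A", True)] * 60 + [("A", False)] * 40 + \
         [("B", True)] * 20 + [("B", False)] * 80
print(disparity_ratio(sample, "A", "B"))  # 0.6 / 0.2 = 3.0
```

An independent auditor with access to impression data could compute such ratios without the platform having to disclose its algorithms publicly, which is part of the appeal of the audit model over direct government oversight.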
Recommendation #3: legislate advertising requirements
In most EU countries and in the United States, online political ads are currently not subject to the same disclosure requirements as ads in offline media. Although many of the major social media platforms have instituted their own requirements for political advertisements, these requirements should be codified through legislation. In addition, a lack of transparency in the online ad sector means that many advertisers may unwittingly be funding sites that peddle disinformation for profit. At a minimum, third-party ad tech companies should be required to disclose the sites where advertisements are placed, providing companies and the public with the information necessary to understand where ad dollars are being spent. In the long run, transparency principles should be extended to all recommended content on platforms: users should be able to see why a piece of content appears on their screen, i.e. how the user was categorized and how that categorization shaped the recommendation. It is also worth debating whether the United States should institute elements of the EU’s General Data Protection Regulation (GDPR) that prohibit advertisers from microtargeting based on data in certain protected categories (e.g. ethnicity, genetics, political opinions).
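To illustrate what a minimal placement-disclosure record could contain, here is a hypothetical data structure; the field names are assumptions for the sake of the example and do not reflect any existing legal standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdDisclosure:
    """Hypothetical per-placement transparency record (illustrative only)."""
    advertiser: str            # who paid for the ad
    placement_site: str        # the site where the ad actually ran
    spend_usd: float           # amount spent on this placement
    targeting_criteria: List[str] = field(default_factory=list)

record = AdDisclosure(
    advertiser="Example PAC",
    placement_site="news.example.com",
    spend_usd=1250.0,
    targeting_criteria=["age 35-54", "interest: politics"],
)
print(record)
```

Even a record this simple would let an advertiser verify where its money ended up and let researchers aggregate spending across sites that monetize disinformation.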
Recommendation #4: do not tackle disinformation in isolation
Actors who resort to disinformation often combine it with other tools of political interference. For instance, the disinformation campaigns that polluted the 2016 American presidential election and its 2017 French counterpart were enabled by cyberattacks that procured politically inflammatory material. In addition, authoritarian states are looking to increase their media presence in democracies on both sides of the Atlantic. Securing the European information space thus requires regulation that goes beyond anti-disinformation laws, such as effective screening processes for foreign investments in the EU’s media sectors and stricter anti-money-laundering procedures that keep the proceeds of authoritarian regimes’ corruption out of Western banking systems.
Recommendation #5: bigger is not always better
Large, all-encompassing pieces of legislation in the vein of the GDPR have agenda-setting qualities but take years to draft and implement. With social media platforms based in authoritarian regimes already gaining a larger presence worldwide, European democracies cannot afford to wait for a one-size-fits-all text to solve all the problems of the 21st century’s information space. Instead, they should think tactically and pass smaller, targeted measures that address the most pressing threats and largest players now, which can later be complemented by more granular laws and regulations.
Recommendation #6: reinforce international cooperation
As mentioned above, the United States and Europe have widely different approaches to free speech, which largely explains the contrast in how their governments address disinformation. Still, to tackle the spread of false or misleading content, governments around the world should become more aligned in how they regulate disinformation. In the short term, the American government could encourage companies to adopt codes of practice complementary to those initiated by the EU; the EU should also partner with countries such as India and Brazil to agree on standards. In the longer term, a platform should be created for political leaders worldwide to share their experiences and progress. Beyond strengthened transatlantic cooperation, real-time disinformation monitoring through Europe’s Rapid Alert System should include as many countries outside the European Union as possible.
______
Authors
Julian Jaursch, Project Director “Strengthening the Digital Public Sphere | Policy”, Stiftung Neue Verantwortung
Théophile Lenoir, Policy Officer, Institut Montaigne
Bret Schafer, Media and Digital Disinformation Fellow, Alliance for Securing Democracy, German Marshall Fund of the United States
Etienne Soula, Research Assistant, Alliance for Securing Democracy, German Marshall Fund of the United States
Signatories
Gilles Babinet, Digital Advisor, Institut Montaigne
Michel Duclos, Special Advisor - Geopolitics, Institut Montaigne and Former Ambassador
Olivier Jay, Partner, Brunswick
Bruno Patino, Editorial Director, Arte and Dean of the Sciences Po School of Journalism
Laetitia Puyfaucher, Chair, Pelham Media Ltd.
Véronique Reille-Soult, CEO, Dentsu Consulting
Ben Scott, Director of Policy & Advocacy, Luminate
Dan Shefet, Lawyer and Chair, Association for Accountability and Internet Democracy
Ethan Zuckerman, Director, MIT Center for Civic Media
Claire Wardle, PhD, Executive Chair, First Draft