15/11/2019

Tackling Disinformation: Going Beyond Content Moderation

In Europe, the variety of political, regulatory and media ecosystems shapes the ways in which disinformation campaigns, whether foreign or domestic, affect digital public spheres. At the same time, common threats affect Western countries' information ecosystems. To contribute to the European agenda for effectively addressing disinformation campaigns, Institut Montaigne gathered an international group of experts for a workshop, organized with the Stiftung Neue Verantwortung and the Alliance for Securing Democracy, to discuss disinformation in the context of different political, regulatory and media ecosystems and to identify potential responses limiting the impact of disinformation campaigns.

The following article is a one-time joint contribution by the three organisations, signed by the participants in the workshop. At a time when various countries are undertaking initiatives to regulate content, its aim is to provide a broad perspective on disinformation that takes into account the specifics of different national political, regulatory and media ecosystems. It does not aim to provide an exhaustive list of solutions, but rather intends to help political leaders gain a clearer understanding of the many facets of disinformation and identify potential actions, in the near future, that go beyond the mere regulation of content.

A changing informational environment

The role of technology and its interaction with the media

Social media giants such as Facebook, Google, and Twitter are often held responsible for many of the challenges affecting democratic societies, such as disinformation, polarization and declining trust in democratic processes and institutions. While these issues clearly predate the birth of social media, technology does raise questions. They concern the fact that digital platforms such as social media networks, video platforms or search engines tend to:

  • reward loud, extremist voices: content that triggers anger and outrage has been proven to keep people engaged in online conversations. With platforms designed to keep users’ attention for the longest time possible, their algorithms end up promoting inflammatory, or even false, content;
  • create segmented, personalized digital information and news spaces determined by algorithms with little to no journalistic gatekeeping;
  • disrupt the traditional media model by usurping a large chunk of online ad revenues, weakening already fragile quality and local journalism.

Social media help spread disinformation, and governments increasingly learn how to make use of these tools, as the Oxford Internet Institute's Computational Propaganda Project has shown in a report on government-led disinformation campaigns worldwide. Still, it should be noted that, while significant efforts are deployed to spread disinformation, there is to date little evidence of its actual impact on voters' attitudes. Moreover, social media platforms are not the only vectors of transmission of disinformation.

Traditional media's fear of being discredited for not covering a piece of news can lead them to accidentally spread disinformation and give high visibility to fabricated pieces of information. In addition, public figures such as comedians, politicians, celebrities, business leaders, influencers or journalists now have the means to interact directly with citizens in order to influence opinion.

In light of these observations, there are a number of dimensions beyond online content moderation that political leaders must take into consideration to minimize the impact of disinformation and political interference. Before delving into these, a look at European media and regulatory environments is necessary.

The French media ecosystem

VERTICAL POLARIZATION AND LOW TRUST IN THE MEDIA

Institut Montaigne's May 2019 report Media Polarization "à la française"? Comparing the French and American ecosystems shows that, unlike in the United States, France's media ecosystem is not divided along a left-right axis. Instead, polarization occurs on a vertical axis, opposing institutionalists to anti-elites. Overall, there is little polarization between French traditional media outlets. Mainstream media such as Le Monde, Le Figaro, Les Échos, Libération, or L'Obs stand in opposition to emerging "new media" such as Russia Today, Fdesouche, or Sputnik.

The latter position themselves on the right of the political spectrum, sit outside the central media space (which includes traditional outlets from the left to the right of the political spectrum) and express anti-elite opinions. In recent years, they have gained both audience and visibility. Moreover, right-wing partisan media such as Français de France and outlets that relay unreliable information, such as La Presse Galactique, sometimes cite each other, thereby increasing the visibility of questionable information.

The rise in influence of new partisan media has to be understood in a context of growing skepticism and distrust of traditional French media. According to 2017 data analyzed by the Pew Research Center for Institut Montaigne, only 28% of the French people surveyed believed that news media were "very important to the functioning of a country's society". In this regard, it is interesting to compare France to Germany, where this figure rises to 61% of the people surveyed.

SHARED PROFESSIONAL PRACTICES IN MAINSTREAM MEDIA

The vertical polarization of the French media space can be explained by the relationship between core media outlets: they regularly cite each other and rarely cite partisan actors outside their group. These outlets share a claim to objectivity. Professionals in the field observe one another and assess the quality of each other's work based on a shared deontology; the CrossCheck fact-checking initiative during the 2017 presidential campaign exemplifies this.

The fact that the French State finances a large part of the media may also reduce tensions between media groups. For example, in the broadcasting industry, the licence fee created in 1933, known since 2009 as the "contribution to public service broadcasting", is the main source of revenue for French public service broadcasting. This contribution finances about 83% of the budget of the five public broadcasting entities (France Télévisions, Arte France, Radio France, Audiovisuel extérieur de la France (AEF) and INA). The rest is mainly financed by advertising and sponsorship.

Finally, the political landscape has to be stressed: the United States relies on a two-party system that contributes to polarization, unlike France, where politics is structured around multiple parties during election cycles.

The German media ecosystem

As in other countries around the world, media formats and consumption are converging in Germany, with the line between online and offline, linear and non-linear broadcasting becoming blurred. The online news environment is a growing contributor to the formation of people's political opinions and thus to the democratic process. However, several characteristics distinguish the German media ecosystem from others in Europe, most notably the relatively high trust in traditional media and the historical dual structure of German broadcasting.
 
HIGH TRUST IN TRADITIONAL NEWS, BUT RISING SIGNIFICANCE OF SOCIAL MEDIA

The media ecosystem shaping political opinions in Germany is dominated by established TV broadcasters. An SNV study ahead of the 2017 German federal election found that people mostly used TV news, as well as newspapers, to inform themselves; social media were not a major source of election news. These findings are confirmed in the latest Reuters Digital News Report. However, more and more Germans get their news from social media, and overall online news consumption is catching up with TV news. While TV has a commanding lead over online outlets for political news, in the online sphere the difference between established and fringe outlets is not as large. That means that even with an audience that is small compared to the main public broadcasters ARD and ZDF, a niche, right-wing online outlet can shape debates in the digital public sphere.
 
GERMANY’S DUAL MEDIA SYSTEM

Two areas, private and public broadcasting, make up Germany's dual media system. Germans' relatively high trust in public broadcasters might be a function of the special position these broadcasters have in Germany's media ecosystem. Public broadcasting is not government-run and is supposed to be independent; it is distinct from privately run TV and radio stations. In this system, as spelled out in the German constitution and multiple court cases, public broadcasters are to keep a distance from the State ("Staatsferne"). They are financed by the public and overseen by state-level, as opposed to federal-level, agencies that are not part of state-level governments. Over the years, many political debates have been held about reforming this system and adapting it to the digital age. Right-wing populists have more recently attempted to paint public broadcasters and established newspapers as state-run propaganda machines, referring to them as the "lying press" ("Lügenpresse"). This is similar to France's vertical polarization between institutionalists and anti-elites. Nevertheless, a recent study of public broadcasters' audience showed that they succeed in reaching a fairly diverse audience across the left-right political spectrum.

Against these backdrops in France and Germany, a number of factors can make political interference more effective and increase the success of disinformation campaigns.


Disinformation campaigns and political interference in the media

Media ownership

In May 2019, a video surfaced showing Heinz-Christian Strache, the leader of the Austrian far-right party that was then part of the governing coalition in Vienna, chatting with the alleged niece of a Russian oligarch. Among the transactions they discussed was the proposed acquisition of a major Austrian newspaper by Russians who would then make its coverage more friendly to Strache's party. While the video was the result of a sting operation, it revealed two points. Firstly, it showed that some European political parties have no qualms about receiving help from foreign authoritarian regimes to get, or stay, in power. Secondly, it highlighted the fact that purchasing media is one of the methods these regimes resort to in order to financially assist their European allies.

Even when they are not aimed at supporting specific political parties, media acquisitions by foreign authoritarian states have strategic objectives. In the Czech Republic, two media groups, Empresa Media and Médea, were acquired by Chinese energy conglomerate CEFC in 2015. Media analysis conducted by the research group ChinfluenCE shows that the takeovers have led the outlets owned by both groups to cover China in an almost exclusively positive light. The Czech example demonstrates how foreign authoritarian states use their financial power to shape public opinion in democracies via media acquisitions.

In France, the acquisition of a large stake in major publication Le Monde by Czech billionaire Daniel Kretinsky has raised some eyebrows. While he is an EU citizen and has publicly insisted that the purchase is motivated by exclusively economic considerations, the Canard Enchaîné newspaper revealed in October 2018 that Kretinsky, whose energy company is a major importer of Russian gas into Europe, was on French intelligence's radar.

Beyond the issue of traditional media ownership, the rise of social media platforms based in authoritarian states should also be a cause of concern. In 2018, TikTok was the 4th most downloaded app in the world. The app is owned by ByteDance, a Beijing-based start-up. Ostensibly, TikTok simply allows users to film and share very short videos, with an emphasis on entertaining and viral content. Yet, under the guise of entertainment, the app's owners enforce strict censorship rules that effectively render videos mentioning subjects such as Taiwan or the ongoing Hong Kong protests invisible on the platform.

Information operations in France and Germany

While neither the Russian nor the Chinese government has been directly tied to media purchases in France or Germany, their state-owned media broadcast in both countries. Moscow has been particularly active: Sputnik has a physical presence in Paris and Berlin, and Russia Today (RT) opened its Paris office in December 2017. Both outlets publish content in German and French and have garnered a significant online following; RT is now the leading French media outlet on YouTube.

In May 2017, newly elected French president Emmanuel Macron declared that "Russia Today and Sputnik (…) have behaved as organs of influence, of propaganda, and of deceitful propaganda." Indeed, starting in January 2017, the Russian state-media apparatus launched a sustained disinformation campaign against then-candidate Macron, with Sputnik portraying him as a "US agent" backed by a "very wealthy gay lobby". The disinformation was accompanied by a hack of the Macron campaign team that closely followed the modus operandi of cyber espionage groups linked to Russian military intelligence. The documents acquired via this hack were then leaked to the public with the intention of damaging Macron's candidacy. In his definitive report on the information operation, French expert Jean-Baptiste Jeangène Vilmer explains that "France has never officially attributed responsibility for the cyberattack (…) [Experts] acknowledge, however, that the available evidence in this case points toward Russia."

Germany had its own high-profile case of Russian-led disinformation in January 2016, when Russian media falsely claimed that a Russian-German girl had been raped by migrants. Although the story quickly turned out to be false, Russian media brought it to German mainstream outlets' attention, and the Kremlin got directly involved, with Russian Foreign Minister Lavrov making public statements about Germany's inability to deal effectively with crime because of political correctness. More recently, Chinese state media have attracted growing attention in Germany. In September 2018, German media reported that Chinese news agency Xinhua had widely circulated a very upbeat newsletter about President Xi's Belt and Road Initiative through the German news agency DPA. This fits into Beijing's broader push for influence in Germany's public debate.

Political advertising

Traditional media only play a role in the final stages of a modern information operation. The creation and build-up of a divisive narrative usually takes place online, via many different platforms. In parallel to inauthentic accounts and bots that amplify inflammatory stories, foreign authoritarian states also resort to online political advertising. In the United States, the Mueller indictments revealed that Russian operatives had purchased ads on major social platforms to influence American voters.

These operatives exploited the advertising services provided by social media platforms in two ways. Firstly, they took advantage of the lack of security checks on most social media platforms to conceal their identity and pass themselves off as Americans. Secondly, they used the sophisticated targeting tools these platforms offer to companies to aim inflammatory ads at citizens particularly susceptible to them.

For instance, the Internet Research Agency, a troll factory based in St Petersburg, took out ads on Facebook centered on black American culture and then targeted black voters with the aim of convincing them to abstain from voting in the 2016 presidential election.

While some social media companies have already set up public databases of the political ads running on their platforms, these databases do not include critical information, such as the criteria advertisers use to target specific segments of the public with these ads. This prevents researchers from fully understanding how online ads fit into larger information operations. One solution to the problem would be for governments to step in and legislate on this issue but, for now, most EU states have not updated their political advertising laws to include online ads.

Regulatory systems

Content moderation in France and Germany

So far, regulations on disinformation mainly concern content itself. The details of content regulations in Germany and France can be found in the overview below. These include the French law to "combat the manipulation of information" (November 2018), the French "online hatred" bill (adopted in May 2020, then largely struck down by the French Constitutional Council), as well as the German "network enforcement act", or NetzDG (June 2017). The NetzDG does not explicitly deal with disinformation, but rather with illegal content. Nevertheless, as it is the first and only German law so far specifically addressing social media companies, a closer look at it is warranted.

OVERVIEW OF FRENCH AND GERMAN CONTENT MODERATION LAWS

France – "Fake news" law (2018)
- Content definition: "fake news" is defined according to three criteria: the news must be manifestly false; it must be disseminated deliberately on a large scale; and it must lead to a disturbance of the peace or compromise the results of an election.
- Regulation procedure: the law is only applicable during the three months preceding elections. During this period, content platforms must ensure transparency concerning advertising of content related to public debates (i.e. reveal the name of the advertiser and the amount paid). A candidate, political party, citizens' association or individual can appeal to a judge to have content removed; the judge then has 48 hours to decide whether or not to take down the alleged fake news from Internet platforms. Platforms must also allow their users to flag information they believe to be fake, alert public authorities, and introduce measures to combat fake news and make these public.
- Penalties: any infringement of these obligations is punishable by one year's imprisonment and a fine of EUR 75,000.
- Regulatory body: the Audiovisual Council (CSA) oversees platforms' compliance and issues recommendations and good practices for the proper implementation of the law. It must suspend TV channels "controlled by a foreign state or under the influence" of that state if they disseminate false or misleading information likely to affect the outcome of elections.

France – "Online hatred" bill (struck down by the Constitutional Council)
- Content definition: hateful content is defined as any content published on the Internet promoting crimes against humanity; inciting or promoting acts of terrorism, hatred, violence or discrimination; or insulting a person or group of people based on origin, alleged race, religion, ethnicity, nationality, sex, sexual orientation, gender identity or disability, real or alleged.
- Regulation procedure: platforms must set up an easily accessible and standardised reporting system allowing Internet users to notify hateful content, and content notified by users as "hateful" must be removed within 24 hours. The bill does not, however, provide a specific mechanism for "grey" content: platforms themselves decide whether flagged content qualifies as "hateful".
- Penalties: in case of non-compliance, platforms could face fines of up to 4% of their global revenue, and company officials could face a year in prison and fines of up to EUR 250,000. Individuals who abuse the reporting mechanisms face fines of EUR 15,000 and up to one year in prison.
- Regulatory body: the CSA has the authority to assess whether an operator's behaviour regarding the withdrawal of content is insufficient or excessive.

Germany – Network Enforcement Act ("NetzDG", 2017)
- Content definition: illegal content as defined by the German criminal code, e.g. use of symbols of unconstitutional organizations, incitement to hatred, treasonous forgery, and defamation of religions and of religious and ideological associations.
- Regulation procedure: platforms must set up an easily accessible and standardised reporting system allowing Internet users to notify illegal content, and must delete "obviously illegal content" within 24 hours of being notified (in some cases within one week); the decision on what constitutes illegal content is largely left to the companies, and there is no mechanism for users to object to content mistakenly deleted. Platforms must also name a contact person for the authorities and publish a transparency report on content deletion and moderation procedures (for which no standards exist).
- Penalties: in cases of non-compliance with the reporting or transparency requirements, platforms face fines of up to EUR 50 million.
- Regulatory body: the Federal Office of Justice oversees the NetzDG; it is not an oversight body for social networks or any other type of media.

Other regulatory approaches in France and Germany

German legislative and regulatory decision makers are currently considering a number of reforms that indirectly touch upon disinformation. For instance, a draft bill to reform the state-level media regulatory system includes requirements for algorithm transparency for social media networks and search engines. These proposals are not as detailed as those in the French "Facebook mission" report, but they do represent an attempt at creating an oversight mechanism for social media. It is noteworthy that the draft legislation partly implements rules set by the EU's Audiovisual Media Services Directive, highlighting the multi-level governance structure of the regulatory ecosystem. Germany is also trying to update its competition law to account for the data-driven economy. Competition law has already been amended to tighten control over mergers and acquisitions, and an advisory body has now recommended improved cooperation between the various oversight bodies and between EU member states in the field of competition law. These measures are not meant to tackle disinformation, but they are an opportunity to strengthen the digital news and information space and thus indirectly help reduce the spread of disinformation.

The EU regulatory framework

While some issues related to the regulation of disinformation are national-level matters, the EU has become active as well.
 
EU ACTION PLAN AND CODE OF PRACTICE AGAINST DISINFORMATION

The EU’s main initiatives against disinformation are an Action Plan and the Code of Practice major social media companies developed under the Commission’s auspices. The EU plan includes:

  • more financial resources and personnel for strategic communications within the European External Action Service, through the strengthening of the East StratCom Task Force, which is dedicated to combating foreign disinformation campaigns across the continent;

  • a common Rapid Alert System to facilitate information-sharing and coordinate responses to disinformation campaigns.

According to the European Commission, while there were instances of disinformation during the 2019 European Parliament elections, these measures have been successful in curbing disinformation. Nevertheless, shortcomings have to be noted, most prominently the self-regulatory approach with no enforcement and sanctioning mechanisms in the Code of Practice. This made for spotty, incoherent tackling of disinformation across the EU. Without meaningful access to data for officials or researchers, it will be hard to accurately gauge the success of the Action Plan and the Code of Practice.
 
DIGITAL SERVICES ACT

The incoming European Commission aims to transform the E-Commerce Directive into a Digital Services Act. The weaknesses of the self-regulatory Code of Practice likely support this effort towards a stronger EU regulatory framework.

This would mainly mean rewriting liability rules, but could also include discussions about political ads, content moderation standards and oversight powers. While the details are still unclear, it shows the EU is willing to address issues underlying disinformation with a stronger European solution and without an over-reliance on self-regulation.

Free speech and regulating disinformation: the US vs. the EU

Freedom of expression – or freedom of speech in the American context – is a pillar of democracies on both sides of the Atlantic. Yet, U.S. and European interpretations of free speech are very different. In the United States, the First Amendment codifies citizens' right to free speech, with few exceptions. Hate speech, for example, is considered protected speech, except in narrow instances when there is an incitement to violence or a true threat. The American interpretation of free speech therefore follows the Ancient Greek tradition of parrhesia, the ability to speak freely without the fear of government censorship or retribution. There is an expectation that the government will not inhibit free expression (except when it is likely to produce "imminent, lawless action"), but there is not an expectation that the government will protect citizens from, among other things, online mobs that seek to intimidate and silence opposing views. In short, the American view of free speech is grounded in the notion that free expression is a right that needs to be protected from rather than provided by the government.

Europe, meanwhile, borrows from the concept of isegoria, a more egalitarian concept that refers to the equal rights of citizens to participate in public debates. European governments traditionally play a more active role in protecting individuals' right to free expression, and European case law has deemed limited restrictions necessary to protect "the rights of others". European restrictions on certain kinds of expression, e.g. Holocaust denial in Germany, are therefore justified by the isegorian interpretation of free speech, in which one's inalienable right to free expression needs to be protected not only from the government but also from one's fellow citizens.

These different interpretations of free speech are not merely definitional; they affect public attitudes. A 2015 Pew Research Center poll revealed that while both Americans and Europeans strongly support the right to criticize governments (95% and 91% respectively), Americans were far more tolerant of offensive speech. For instance, 77% of Americans (compared with 38% of Germans) supported the right of others to make offensive statements about their own religion, and 67% (compared with 27% of Germans) supported the right to make offensive statements about minority groups.

But different tolerance levels in the United States and Europe towards offensive speech do not, alone, explain the contrasting attitudes towards regulation in the transatlantic space. Section 230 of the U.S. Communications Decency Act (CDA) of 1996, which states that no provider of an "interactive computer service" shall be "treated as the publisher or speaker of any information provided by another information content provider", effectively shields social media companies from legal liability. The immunity offered by Section 230 has withstood most legal challenges, signaling that any attempt to regulate content on online platforms in the United States would first require an amendment to the CDA.

Given the challenges of content-based regulation, a more likely approach for American legislators is to focus on regulation that demands greater transparency and data protection from the tech sector. This is an area where the United States and Europe could find significant common ground. Moreover, by adopting common standards that address social media practices rather than the speech of social media users, Americans and Europeans can take the lead in crafting industry norms that truly adhere to shared democratic values. These norms could also provide a possible framework for emerging democracies to follow, thus lessening the appeal of the more draconian measures being advanced by authoritarian regimes.

Suggestions to go forward

Recommendation #1: tackle inauthentic behaviour rather than content

Legislation that attempts to regulate content will invariably come into conflict with different free speech principles. In limited cases already defined by preexisting laws or standards, content moderation is necessary. However, disinformation rarely meets those thresholds, and new legislation designed specifically to tackle false or misleading content risks government overreach. A far more effective approach is to address the malign behavior used to disseminate false content rather than the content itself. This includes, but is not limited to, impersonation and coordinated inauthentic behavior deployed to deceive users (e.g. presenting oneself as coming from one country while operating from another, using multiple accounts to fake a mass movement, stealing pictures to create fake personas). Mitigating intentionally deceptive behavior addresses systemic vulnerabilities rather than questions of protected speech that tech companies are ill-equipped to adjudicate; one such behavioral signal is illustrated in the sketch below.
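To make this concrete, here is a minimal sketch of what detecting one behavioral signal could look like: flagging groups of accounts that post identical messages within a short time window, a crude proxy for "using multiple accounts to fake a mass movement". The data, function names and thresholds are illustrative assumptions, not any platform's actual detection system.

```python
from collections import defaultdict

# Hypothetical post records: (account_id, timestamp in seconds, text).
posts = [
    ("acct_1", 1000, "Candidate X is a foreign agent!"),
    ("acct_2", 1012, "Candidate X is a foreign agent!"),
    ("acct_3", 1020, "Candidate X is a foreign agent!"),
    ("acct_4", 9000, "Lovely weather in Paris today."),
]

def flag_coordinated_posting(posts, window_seconds=60, min_accounts=3):
    """Flag texts posted by many distinct accounts within a short
    window -- one crude signal of coordinated inauthentic behavior."""
    by_text = defaultdict(list)
    for account, timestamp, text in posts:
        by_text[text].append((timestamp, account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()  # order by timestamp
        timestamps = [t for t, _ in entries]
        accounts = {a for _, a in entries}
        if (len(accounts) >= min_accounts
                and timestamps[-1] - timestamps[0] <= window_seconds):
            flagged.append((text, sorted(accounts)))
    return flagged

print(flag_coordinated_posting(posts))
# [('Candidate X is a foreign agent!', ['acct_1', 'acct_2', 'acct_3'])]
```

Real systems rely on many more signals (IP ranges, account creation patterns, near-duplicate text), but the point stands: such checks examine how content spreads, not what it says.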

Recommendation #2: audit social media companies

One of the current complaints leveled by governments against the major social media companies is that governments and, indeed, individual users do not possess the information necessary to determine whether or not tech companies are fairly and effectively enforcing their own terms of service. This creates an inherent power disparity. One possible solution is to mandate independent, third-party audits of the major social media platforms, like those required in the financial sector. This would allow for independent oversight of the tech companies, without putting oversight directly in the hands of government regulators. Audits could include examining algorithms to identify the disproportionate impacts of social networks, i.e. phenomena that affect one population significantly more than another; the sketch below illustrates the kind of metric such an audit might compute.
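As an illustration, here is a minimal sketch, using invented figures, of one metric an auditor might compute: the rate at which different groups of users are exposed to flagged content, and the ratio between the most- and least-exposed groups. The group names, numbers and interpretation threshold are hypothetical.

```python
# Hypothetical audit data: for each user group,
# (users exposed to flagged content, total users in the group).
exposure = {
    "group_a": (1_200, 10_000),
    "group_b": (300, 10_000),
}

def exposure_rates(exposure):
    """Share of each group exposed to flagged content."""
    return {g: exposed / total for g, (exposed, total) in exposure.items()}

def disparity_ratio(exposure):
    """Ratio between the most- and least-exposed groups. A value far
    above 1.0 suggests the platform affects one population much more
    than another and warrants closer scrutiny."""
    rates = exposure_rates(exposure)
    return max(rates.values()) / min(rates.values())

print(exposure_rates(exposure))   # {'group_a': 0.12, 'group_b': 0.03}
print(disparity_ratio(exposure))  # 4.0
```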

Recommendation #3: legislate advertising requirements

In most EU countries and in the United States, online political ads are currently not subject to the same disclosure requirements as those that exist for ads in offline media. Although many of the major social media platforms have instituted their own requirements for political advertisements, these requirements should be codified through legislation. In addition, a lack of transparency in the online ad sector means that many advertisers may unwittingly be supporting sites that peddle disinformation for profit. At a minimum, third-party ad tech companies should be required to disclose the sites where advertisements are placed, providing companies and the public with the information necessary to understand where ad dollars are being spent. In the long run, transparency principles should be extended to all recommended content on platforms: viewers should be given the possibility to know why a piece of content appears on their screen (how the user was categorized and how this categorization impacts the recommendation); a sketch of what such a disclosure record might contain follows below. It is also worth debating whether the United States should institute elements of the EU's General Data Protection Regulation (GDPR) that prohibit advertisers from microtargeting based on data in certain protected categories (e.g. ethnicity, genetics, political opinion).
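As a sketch of the kind of per-item disclosure record this principle implies, with entirely hypothetical field names and values:

```python
from dataclasses import dataclass, field

@dataclass
class PromotedContentDisclosure:
    """Hypothetical record attached to a promoted or recommended item,
    illustrating the transparency principle described above."""
    advertiser: str                 # who paid for, or published, the item
    amount_paid_eur: float          # spend behind the placement, if any
    targeting_criteria: list[str] = field(default_factory=list)  # criteria chosen by the advertiser
    viewer_categories: list[str] = field(default_factory=list)   # how the platform categorized the viewer

record = PromotedContentDisclosure(
    advertiser="Example Advocacy Group",
    amount_paid_eur=5_000.0,
    targeting_criteria=["age 18-34", "interested in politics"],
    viewer_categories=["frequent news reader"],
)
print(record)
```

Which fields would be mandatory, and whether targeting criteria must be disclosed to regulators, researchers or every viewer, would be for legislators to decide.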

Recommendation #4: do not tackle disinformation in isolation

Actors who resort to disinformation often combine it with other tools of political interference. For instance, the disinformation campaigns that polluted the 2016 American presidential election and its 2017 French counterpart were enabled by cyberattacks that procured politically inflammatory material. In addition, authoritarian states are looking to increase their media presence in democracies across the Atlantic. Securing the European information space thus requires regulation that goes beyond anti-disinformation laws, such as effective screening processes for foreign investments in the EU’s media sectors, and stricter anti-money laundering procedures that ensure that the proceeds of authoritarian regimes’ corruption stay out of Western banking systems.

Recommendation #5: bigger is not always better

Large, all-encompassing pieces of legislation in the vein of the GDPR have agenda-setting qualities but take years to draft and implement. With social media platforms based in authoritarian regimes already gaining a larger presence worldwide, European democracies cannot afford to wait for a one-size-fits-all text to solve all the problems of the 21st century's information space. Instead, they should think tactically and pass smaller, targeted measures that address the most pressing threats and largest players now, and that can later be complemented by more granular laws and regulations.

Recommendation #6: reinforce international cooperation

As mentioned above, the United States and Europe have widely different approaches to the notion of free speech, which undoubtedly explains the contrast in how governments address disinformation. Still, to tackle the spread of false or misleading content, governments around the world should become more aligned in the way they regulate disinformation. In the short term, the American government could encourage companies to adopt codes of practice that are complementary to those initiated by the EU; the EU should also partner with countries such as India or Brazil to agree on standards. In the longer term, a platform should be created for international political leaders to share their respective experiences and progress. Going beyond strengthened transatlantic cooperation, real-time monitoring of disinformation through Europe's Rapid Alert System should include as many countries as possible outside the European Union.

______

Authors

Julian Jaursch, Project Director “Strengthening the Digital Public Sphere | Policy”, Stiftung Neue Verantwortung
Théophile Lenoir, Policy Officer, Institut Montaigne
Bret Schafer, Media and Digital Disinformation Fellow, Alliance for Securing Democracy, German Marshall Fund of the United States
Etienne Soula, Research Assistant, Alliance for Securing Democracy, German Marshall Fund of the United States

Signatories

Gilles Babinet, Digital Advisor, Institut Montaigne
Michel Duclos, Special Advisor - Geopolitics, Institut Montaigne and Former Ambassador
Olivier Jay, Partner, Brunswick
Bruno Patino, Editorial Director, Arte and Dean of the Sciences Po School of Journalism
Laetitia Puyfaucher, Chair, Pelham Media Ltd.
Véronique Reille-Soult, CEO, Dentsu Consulting
Ben Scott, Director of Policy & Advocacy, Luminate
Dan Shefet, Lawyer and Chair, Association for Accountability and Internet Democracy
Ethan Zuckerman, Director, MIT Center for Civic Media
Claire Wardle, PhD, Executive Chair, First Draft

 
