Institut Montaigne's Expressions platform is dedicated to debate and current affairs, providing a space for analysis and dialogue to encourage discussion and the emergence of new voices.

Preventing Election Interference Online

Ben Nimmo, Senior Fellow at the Atlantic Council Digital Forensic Research Lab (DFRLab)

Democracies increasingly face the threat of foreign interference over the course of their electoral cycles. Past examples such as the American presidential election, the Brexit vote or the Catalan referendum are informative regarding the recurring methods of influence that are being used. They also demonstrate the need for each relevant stakeholder to implement efficient solutions. In this article, Ben Nimmo, Information Defense Fellow at the Atlantic Council Digital Forensic Research Lab, analyses these phenomena.

Election Interference Online

In 2016, it hit the United States. Last year, it was France and Germany’s turn. This year, elections are scheduled in countries from the Mediterranean to Scandinavia, and their people are afraid that powers beyond their borders will try to interfere via the Internet. 

Cyprus held its election on January 28th. Italy follows suit in March. Swedes go to the polls in September. The atmosphere in these races is feverish, with many observers, internal and external, expressing fears of "fake news" and "foreign interference" online.    

These fears are grounded in the events of 2016, when Russia launched a full-spectrum influence campaign in the US to undermine Democratic candidate Hillary Clinton, and in those of 2017, when Russia and the American far right were both involved in attempts to undermine the campaign of now-President Emmanuel Macron. 

This article identifies the main public (as opposed to covert and intelligence-led) techniques of online election interference, the types of vulnerabilities they exploit, and the ways different foreign actors work together, and reflects on what can be done to increase societies’ resilience to this phenomenon.


Hacking and leaking

The most potentially damaging interference technique is hacking the emails of a political leader or group and leaking them at a sensitive moment. The best example of this is the attacks on Clinton's campaign in 2016.

On two occasions, Russian hackers with murky ties to the government stole emails from the Democratic National Committee and Clinton's campaign manager, John Podesta. These were passed on to WikiLeaks - itself bitterly opposed to Clinton - which leaked excerpts every day for a month before the election. 

The extent of the damage to the campaign cannot be measured, though having to deal with these leaks every day in the final sprint to the election, when the rival campaign did not, obviously disadvantaged Clinton. It also fundamentally shifted the terms of the debate. 

Russia is accused of having tried to do the same during the French election in 2017. This time, emails hacked from Macron's campaign were dumped on the 4chan platform and amplified by far-right activists in the United States. The operation was much clumsier, and much less effective. 

There are two ways to counter this: one private, one public. The private method is for all politicians and campaigns to invest much more in basic account security, teaching their staff to recognise and avoid the techniques hackers use. This can make hacking harder, yet it is unlikely to remove the danger altogether. 

The public method is to report any hacking attacks early on, as was done in France, and to warn possible attackers that they are under observation, as was done in Germany. This can act as an effective deterrent, raising the cost of interference beyond its value. It is noteworthy that Germany did not experience a pre-election leak, although its parliament was hacked in 2015.

Hacking remains the most dangerous technique, and the most difficult to prevent. It also demands the most skill, meaning that relatively few groups have the sophistication to carry it out. 

Online fakes and bots 

A much simpler and wider-ranging technique is using the Internet, especially social media, to spread false stories and hyper-partisan opinions among voters. This is cheap and simple, and can have significant consequences.

In January, a false story announced that South African President Jacob Zuma had resigned, causing short-term chaos on the international money markets. The false claim that one of Clinton's advisers had blamed her for the deaths of US diplomats in Benghazi, partly propagated by automated "bot" accounts on Twitter, spread throughout the US and was quoted by Trump at a campaign rally.  

A photoshopped image spread by the far-right Alternative für Deutschland party, which claimed to show immigrant crimes, was shared by multiple party accounts before it was debunked, just days before the election. 

Most damagingly, we now know that the so-called "troll factory" in St Petersburg - a large-scale operation which pays people to run fake accounts on social media - operated hundreds of Twitter and Facebook accounts posing as radical Americans, spread messages to tens of millions of voters, and had its work amplified by genuine journalists in the target countries. 

These techniques can have a corrosive effect on democracy, especially when they spread messages of hate and division. However, their impact depends on the nature of the society which they are targeting. 

Such accounts work by masquerading as members of the target community, and then using that very community to validate their messages to a wider audience. The usual goal is to radicalise the audience by presenting extremist, divisive and emotional messages, and thus to render rational debate and compromise increasingly difficult. 

This problem is not limited to one country or one source. In Sweden, for example, we have already observed cooperation between local far-right activists and extremists in the United States. International white supremacists attempted to spread their messaging in Germany, though with little apparent impact. 

Foreign trolls posed as both far-right and far-left users during the US election. In Italy, warnings have already arisen over false stories and accounts being used from both inside and outside the country. 

Not all these attempts will have the same results. A number of factors will determine their effects. These include the level of local trust in traditional media and political parties; the effort governments make to raise the alert; the strength of the independent media and fact-checking groups; and the degree of awareness in the target country that such interference is both real and dangerous. 

It is worth remembering that US voters were warned many times, over a period of months, of the Russian interference - most notably in October 2016, a month before the election, when the Office of the Director of National Intelligence and the Department of Homeland Security confirmed that Russia was behind the hackings.

France and Germany were the first to benefit from these warnings, which were not taken seriously in the United States until after the election. 


As the term "social media" implies, such operations can reach the whole of society, and the response must therefore come from the whole of society. It is essential to build resilience to these threats - a kind of societal immune system. Each actor has a role to play in this effort. 

The social media platforms should increase their awareness of the ways in which they are manipulated every day, their ability to shut down those threats, and their willingness to share their information with the public. 

Facebook's decision to confirm that Russia's troll factory created 129 events to which thousands of Americans responded, but not to say what those events were, shows a troubling lack of transparency in this regard - although it has still been more open than most platforms. 

However, placing all the blame on the platforms would be both unjust and counter-productive. Mainstream media repeatedly shared posts from troll-factory accounts, showing a disastrous lack of due diligence. Genuine users also shared posts from hyper-partisan, anonymous accounts, spreading the toxin of division through society. Both require increased awareness. 

For governments, legislation should only be the last resort: giving the law more power over information streams is never something to be done lightly. Far more important is education. There are techniques for spotting bots and exposing false stories, but they are still the preserve of experts. The more universal these skills become, the more the space available to malicious actors will be narrowed. 
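One of the simplest bot-spotting techniques referred to above is checking an account's average posting frequency: an account that posts at machine-like rates, day after day, is unlikely to be run by a human alone. The sketch below illustrates the idea in Python; the thresholds are illustrative benchmarks (the DFRLab has treated sustained averages above roughly 72 posts per day as suspicious, and around 144 as highly suspicious), and the function names are this article's own, not any platform's API.

```python
from datetime import datetime, timezone

# Illustrative thresholds for average posts per day; real analysts
# combine frequency with other signals (anonymity, amplification).
SUSPICIOUS_RATE = 72
HIGHLY_SUSPICIOUS_RATE = 144

def activity_score(total_posts, account_created, now=None):
    """Return (posts_per_day, label) for an account's lifetime average."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - account_created).days, 1)  # avoid division by zero
    rate = total_posts / age_days
    if rate >= HIGHLY_SUSPICIOUS_RATE:
        return rate, "highly suspicious"
    if rate >= SUSPICIOUS_RATE:
        return rate, "suspicious"
    return rate, "unremarkable"

# Example: 90,000 posts from an account created 500 days earlier
created = datetime(2016, 9, 1, tzinfo=timezone.utc)
rate, label = activity_score(
    90_000, created, now=datetime(2018, 1, 14, tzinfo=timezone.utc)
)
```

Frequency alone proves nothing - enthusiastic humans can post heavily too - which is why such heuristics are a starting point for investigation, not a verdict.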


Election interference is a growing threat which uses the openness and anonymity of the Internet to spread deceit, division and stolen information to the greatest number of people. 

It can be tightly targeted, attacking a specific candidate with specific allegations, or diffuse, amplifying hate and promoting radicalisation. It can also be mounted from either inside or outside the country. 

Many responses are needed to fight this relatively novel trend. Social media platforms need to improve their ability to identify and suspend fake accounts in real time. Journalists need to become more wary of sourcing stories to anonymous, hyper-partisan accounts. National and party security services need to become better at warding off, and identifying, hacking attempts. 

Above all, however, politicians and the media need to find a new tone to discuss this problem. Making hysterical or ill-founded accusations will only make things worse. We need to foster a public debate based on information about the genuine threats of interference, the genuine techniques attackers use, and the genuine ways of identifying and defeating them. 

At its core, election interference feeds on emotions: anger, fear and hate. It thus seems sensible to withdraw our own emotions from this issue and focus on the techniques the attackers use, the better to confront them.
