18/06/2018

Thoughts From Boston - on The Relationship Between Algorithms, Humans, and Disinformation

Théophile Lenoir
Fellow - Misinformation and Digital Policy

Our new communication tools and software have brought about serious dysfunctions in our information systems. Disinformation, fragmented audiences, threats to traditional business models, advertising wars, new data infrastructures… In the United States, a group of experts is studying the relationships connecting these different phenomena. I attended the two-day conference they organized at Harvard University on June 7 and 8, 2018, entitled “Information Disorder, New Media Ecosystems, and Democracy: From Dysfunction to Resilience”.

In France, the ongoing debate around the bill against the “manipulation of information” could draw inspiration from the approach adopted during this conference. France needs to build a multidisciplinary vision of these issues, in order to generate a nuanced account of the way information is being transformed, and then respond to the resulting diagnosis. Only then could we define the collective responsibility we all share in dealing with this emerging information disorder.

It all starts with disinformation

The following story is often heard in public debates on disinformation. While it explains the phenomenon well, many speakers at the two-day Harvard conference insisted that it paints an incomplete picture of the problem we face.

It goes like this: we used to live in a world where the production and consumption of news followed simple rules. Newspapers produced content, their audience read it. This generated a pool of common topics to be discussed and thereby established a shared public space. 

Today, this scheme has changed.

  • The number of people able to produce content has increased considerably, and newspapers no longer decide alone what should be discussed.
     
  • The vertical and uniform model of newspaper distribution has given way to a multidirectional one, fueled by affect and microtargeting on social media platforms.
     
  • While tracking a piece of news and measuring its audience used to be easy, amplification phenomena on social media now make this task more complex.

These transformations have weakened traditional newspapers, led to the fragmentation of audiences, made it easier for disinformation to circulate and harder to evaluate its impact. Understandably, this generates all kinds of fears.

Thus, disinformation matters because it is a threat to our democracy. Malicious actors, including foreign states, are able to manipulate audiences, and some suspect this could sway the outcome of an election to the benefit of populist parties.

But there is more

This is a true story. However, speakers at the Harvard conference added more to it.

Let’s start with the conclusion: if disinformation is an issue, it is not so much because so-called “fake news” destabilizes the existing order (after all, one could easily argue that the latter has serious defects and could benefit from change). Indeed, fake news is only the tip of the iceberg, where the iceberg is an information ecosystem that is mostly opaque to citizens. In other words, the content we read isn’t the issue: the mechanisms that put it in front of our eyes are. As such, the emerging information system contributes to undermining trust in our societies.

From disinformation to data

This issue can only be addressed by bringing together many kinds of expertise. The diversity of actors mobilized during the two-day conference was remarkable: anthropologists, data scientists, media scholars, legislators, political activists, members of the tech industry, journalists… This diversity of viewpoints resulted in a very nuanced understanding of the information disorder issue.
 
Here is one simple example. As most of us now know, disinformation is partly a data issue: it circulates thanks to the intense categorization and targeting phenomena that occur on platforms. The anthropologists in the room demonstrated that this happens not because of algorithms acting on their own, but because of humans working together with algorithms.
 
According to this idea, the root of the problem is not the technical artefact that is the algorithm, but the forces that push humans to create and use it in certain ways. Focusing on these behaviours, by studying the interactions and decision-making processes within organizations (involving designers, coders, business strategists and clients working together to optimize a system), may help us understand why categorizing and targeting have become such a big part of our lives (and not only on social media platforms).
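To make this concrete, here is a minimal sketch, in Python, of what a categorization-and-targeting pipeline can look like. Every name, rule and threshold in it is hypothetical (this is not any platform’s actual system); the point is simply that each step encodes a human design choice.

```python
# Hypothetical sketch: bucket users into interest categories from their
# engagement history, then pick whom to show a piece of content.
# None of this mirrors a real platform's system; it only illustrates
# that the categorization rule itself is a human decision.
from collections import Counter

def categorize(clicks: list[str]) -> str:
    """Assign a user to the interest category they engage with most."""
    return Counter(clicks).most_common(1)[0][0]

def target(users: dict[str, list[str]], topic: str) -> list[str]:
    """Return the users whose dominant category matches the content's topic."""
    return [name for name, clicks in users.items() if categorize(clicks) == topic]

users = {
    "alice": ["politics", "politics", "sports"],
    "bob": ["cooking", "politics", "cooking"],
}
print(target(users, "politics"))  # ['alice']
```

Nothing here is “automatic” in any deep sense: a person decided that the dominant engagement category should drive distribution. That decision, not the code executing it, is where the anthropologists located the problem.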

A bridge between disinformation and algorithmic biases

Seeing things from this angle can help us understand different problems linked to the same phenomenon. For example, the logic people use to create algorithmic targeting on Facebook is not so different from the logic others use in banks to, say, determine whether a person will be able to pay back their loan. In this case too, data is used to categorize people: bits of information about a person are collected and assembled to build a profile (which is only a simplified, purposely selected version of reality).

One should ask what exactly an algorithm is saying when it predicts that someone will not be able to pay back their loan. Is a past payment delay a valid criterion? Should being male or female have an impact? At the conference, these questions were asked alongside those related to disinformation. Indeed, in the end, they all come down to trust.
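To illustrate these questions, here is a hypothetical toy repayment predictor. The features, weights and the choice to exclude gender are invented for the example (real credit-scoring models are far more complex), but the underlying point holds: someone chooses which criteria count and how much.

```python
# Hypothetical sketch: a profile is a simplified, purposely selected
# version of a person; the prediction depends entirely on which bits
# of information are kept and how they are weighted.

def build_profile(records: dict) -> dict:
    return {
        "late_payments": records.get("late_payments", 0),
        "income": records.get("income", 0),
        # "gender" is deliberately left out: including it would be a
        # human choice, not a technical necessity.
    }

def will_repay(profile: dict) -> bool:
    # A crude linear rule with invented weights; the questions above
    # (is a payment delay a valid criterion? with what weight?) are
    # answered here, by people.
    score = profile["income"] / 10_000 - 2 * profile["late_payments"]
    return score > 0

applicant = {"late_payments": 1, "income": 35_000, "gender": "F"}
print(will_repay(build_profile(applicant)))  # True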

In the end, it’s all about trust

Trust in institutions has dramatically decreased in the last 40 years. The breadth of the disinformation phenomenon is a clear consequence of this situation. But trust is also at the core of the algorithmic discrimination challenge: in an era where transparency is so important, opaque systems are harder to trust. The onus is on the organizations building these algorithms to prove that they are trustworthy.

Soli Özel, professor of International Relations at Kadir Has University in Istanbul and Visiting Fellow at Institut Montaigne, argues that one of the reasons explaining the surge of populism in the West is the indifference of political and economic elites towards the negative impact globalization has had on more disadvantaged parts of the population. According to this argument, elites have turned their back on their responsibility to serve the common good (a phenomenon illustrated to some extent by the rise of private organizations), and therefore lost the trust of a large segment of the population.
 
The underlying argument is neither that private organizations are evil, nor that our salvation will come from the State alone. It is that we need to realize the extent to which the foundations of the social order have changed. Private organizations (from social media companies to insurance companies working with algorithms) are responsible for the values upon which their relationships with their stakeholders, shareholders, clients and suppliers are built. Forgetting this is dangerous. As we now witness every day, private companies disregarding these values over long periods of time can have serious consequences: hatred towards the elites, distrust towards institutions, violence in the street…

The main challenge in the years to come will be to weave a productive dialogue between public and private organizations, to work together towards social good. Facebook is only one of the actors that emerged out of the data architecture we have been collectively building for some time now. Engaging in a productive dialogue with these actors requires that we too acknowledge our responsibility in the unintended consequences brought about by data. It will take more interdisciplinary initiatives to shape the public debate in this direction. 
