16/07/2019

Algorithms, Data and Bias: Public Policy Needed

 Anne Bouverot
Author
Chairperson of the Board at Technicolor and Chairperson of Fondation Abeona
 Thierry Delaporte
Author
Chief Executive Officer and Managing Director of Wipro Limited

Our decisions have always been biased. We all have prejudices, preconceived ideas that are the fruit of our history, our culture, our experiences, and which unconsciously guide our choices. Why would it be any different for algorithms? In this article, Anne Bouverot, Chairperson of Technicolor and Chair of the Abeona Foundation, and Thierry Delaporte, Chief Operating Officer of Capgemini Group, explain why algorithmic biases can be problematic and announce the launch of the Institut Montaigne taskforce they will co-chair to address this issue.
 
Artificial intelligence has an indisputable capacity to augment human intelligence, but however intelligent they may be, algorithms ultimately only analyze data. They can process data in quantities and at speeds inaccessible to human beings, and draw reusable statistical conclusions from it. But in doing so, they reproduce and even amplify the worst human biases: sexism, racism, social injustice...
 
This becomes all the more serious when one considers the reach of these algorithms. On the strength of their efficiency and usefulness, they permeate our everyday lives, our businesses and our societies. They improve decision-making in companies. They make autonomous cars possible. They improve the detection and treatment of cancers.
 
But efficiency is not everything, and algorithms can also be perceived as black boxes that constitute an uncontrollable threat. For their development to be sustainable and accepted by all, it is essential to make the functioning of algorithms fully transparent and to understand where biases come from, in order to detect and correct them. Let us not forget what is essential: algorithms are great tools, but they must remain at the service of humans.

 

Algorithmic biases


So, what can we do to ensure algorithms are equitable and unbiased?
 
First, we can try to understand them better. Télécom Paris researchers in computer science and economics have written an excellent article on the subject, Algorithms: Bias, Discrimination and Fairness, in partnership with the Abeona Foundation. It explains very clearly the different sources of algorithmic bias.
 
On the one hand, bias may come from the data used as input: a training set made up, for example, only of rich, healthy people on one side and poor, sick people on the other, which the algorithm will generalize without "thinking". On the other hand, algorithms can embed the unconscious biases of those who design or code them.
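To make the first mechanism concrete, here is a minimal sketch in Python. All data, group labels and coefficients are synthetic and purely illustrative: a model trained on skewed historical hiring decisions reproduces the skew, scoring two equally skilled candidates differently depending on their group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: a skill score (same distribution for both groups)
# and a binary group flag (0 = men, 1 = women), purely illustrative.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Biased historical labels: past recruiters hired on skill, but women
# needed a noticeably higher score to get the same decision.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# A careless pipeline trains directly on skill + group.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model "generalizes without thinking": two equally skilled
# candidates get different predicted hiring probabilities.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])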
 

Statistical biases

- Data bias: a bias in the dataset itself. For example, a recruitment algorithm trained on a dataset in which men are over-represented will exclude women.

- Omitted-variable bias: a bias due to the difficulty of encoding a specific dimension. For example, emotional intelligence is hard to measure factually, so this dimension will be absent from the datasets and algorithms used for recruitment.

- Selection bias: a bias due to the sample selected. For example, for credit scoring, banks use internal data on the people who were granted loans, thus excluding those who did not apply, those whose applications were rejected, etc.

- Endogeneity bias: a bias due to the difficulty of anticipating future events. For example, in credit scoring, a person with a poor repayment history may change their behavior the moment they decide to start a family.

Cognitive biases

- Conformity bias: we tend to believe what the people around us believe. For example, supporting a political candidate because friends and family support that candidate.

- Anticipation and confirmation bias: we tend to privilege information that reinforces our point of view. For example, after someone we trust tells us that a person is authoritarian, we tend to notice examples illustrating this.

- Illusory correlation bias: we tend to associate phenomena that are not linked to one another. For example, believing that there is a correlation between oneself and external events (train delays, weather, etc.).

- Stereotype bias: we tend to act according to the social group to which we belong. For example, a study has shown that women tend to click on job offers they think are more accessible to women.

Economic biases

- Economic bias: a bias introduced, voluntarily or involuntarily, because it leads to higher revenues. For example, an advertising algorithm will target specific audiences (men for razors, but also poor people for fast food, etc.).
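The selection-bias entry above is easy to reproduce numerically. Below is a second minimal Python sketch, again with synthetic data and an arbitrary illustrative approval threshold: a bank that measures default rates only on the loans it granted sees a much rosier picture than the full applicant population, because rejected and non-applying customers never enter its internal data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic applicants: a credit score, and a default probability that
# decreases as the score increases (a purely illustrative relationship).
score = rng.uniform(300, 850, size=n)
p_default = np.clip(1.0 - score / 850, 0.02, 0.90)
defaulted = rng.random(n) < p_default

# The bank only observes outcomes for applicants it approved, so its
# internal dataset is a non-random slice of the population.
approved = score >= 650

print(f"Default rate, approved loans only: {defaulted[approved].mean():.1%}")
print(f"Default rate, all applicants:      {defaulted.mean():.1%}")
```

Any scoring model trained or validated on the first number alone will systematically underestimate the risk in the population it is later applied to.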
 


Then, we can think about which actions to implement. At Institut Montaigne, we are delighted to launch a working group bringing together business leaders, from startups to large companies, and civil society. We want to look at potential impacts in areas such as health, recruitment, online advertising and future transportation, among many others. A series of hearings with experts and public figures has also begun.
 
The idea is to produce concrete recommendations for action, both for policymakers and business leaders, so that algorithms remain great tools, at the service of humans.
