So, what can we do to ensure algorithms are equitable and unbiased?
First, we can try to understand them better. Télécom Paris researchers in computer science and economics have written an excellent article on the subject, Algorithms: Bias, Discrimination and Fairness, in partnership with the Abeona Foundation. This article very clearly explains the different reasons behind the bias of algorithms.
On the one hand, bias may come from the data used as input: if, for example, the dataset contains only rich and healthy people on one side and poor and sick people on the other, the algorithm will generalize this split without "thinking". On the other hand, algorithms can embed the unconscious biases of the people who design or code them.
Statistical biases:
- Data bias: a bias in the data set itself. For example, a recruitment algorithm trained on a dataset in which men are over-represented will exclude women.
- Omitted-variable bias: a bias due to the difficulty of encoding a specific dimension. For example, emotional intelligence is hard to measure factually, so this dimension will be absent from the datasets and algorithms used for recruitment.
- Selection bias: a bias due to the sample selected. For example, for credit scoring, banks use internal data on people who were granted loans, thus excluding those who did not apply, those whose applications were rejected, etc.
- Endogeneity bias: a bias due to the difficulty of anticipating future events. For example, in credit scoring, someone with a poor repayment history may change behavior the moment they decide to start a family.

Cognitive biases:
- Conformity bias: we tend to believe what the people around us believe. For example, supporting a political candidate because friends and family support that candidate.
- Anticipation and confirmation bias: we tend to privilege information that reinforces our point of view. For example, after someone we trust tells us that a person is authoritarian, we tend to notice examples illustrating this.
- Illusory correlation bias: we tend to associate phenomena that are not actually linked. For example, believing that there is a correlation between oneself and external events (train delays, weather, etc.).
- Stereotype bias: we tend to act according to the social group to which we belong. For example, a study has shown that women tend to click on job offers they think are more accessible to women.

Economic biases:
- Economic bias: a bias introduced, voluntarily or involuntarily, because it leads to higher revenue. For example, an advertising algorithm will target a specific audience (men for razors, but also poor people for fast food, etc.).
Then, we can think about which actions could be implemented. At Institut Montaigne, we are delighted to launch a working group with business leaders, from startups to large companies, and with civil society. We want to look at potential impacts in areas such as health, recruitment, online advertising, future transportation, and many more. A series of hearings with experts and public figures has also begun.
The idea is to produce concrete recommendations for action, for both politicians and business leaders, so that algorithms remain great tools at the service of humans.