10/09/2020

Digital Services Act: Towards Responsible Algorithms?

Anne Bouverot
Author
Chairperson of the Board at Technicolor and Chairperson of Fondation Abeona
Thierry Delaporte
Author
Chief Executive Officer and Managing Director of Wipro Limited

The Digital Services Act aims to hold digital intermediaries accountable. One problem that platforms in the fields of media, transportation, recruitment or e-commerce have in common is the transparency of the technologies they use to run their services. Most often, these platforms rely on algorithms to make decisions at scale or to recommend content or goods. These algorithms are often misunderstood by public actors, yet they can cause significant harm, including the promotion of problematic content or discrimination. This article focuses on the difficult question of algorithms and offers solutions for controlling their potential biases. It is based on the report Algorithms: Please Mind the Bias!, published by Institut Montaigne in March 2020.

Why algorithmic biases matter

Following George Floyd's death last May in Minneapolis, a series of protests began in the United States, demanding radical reform of police departments. The protests spread to almost every continent, from Rio to Madrid, all the way to Paris. Many business leaders have given their support to the movement. Algorithms and artificial intelligence are a cause of concern. PredPol, which sells predictive policing software - designed to predict where and when the next crimes will occur - was intensely criticized for building unfair systems. The company responded that its software had been designed only with the hope of improving police practices: "reducing bias, increasing transparency, and reinforcing everyone’s responsibility".

To counter the biases of human police officers, PredPol, but also IBM, Microsoft and Palantir, promote the implementation of algorithms that would correct those biases. Many of their opponents, however, fear that algorithms risk reinforcing the biases rather than correcting them. This summer, 1,400 mathematicians signed a letter calling for a boycott of predictive policing software. The Los Angeles Police Department decided to abandon PredPol, judging that the historical data used to train the algorithms reflected too much of the old practices of discrimination against African-Americans to give satisfactory predictions.

Neither private companies nor the security sector has a monopoly on algorithmic bias. Because British high school students were unable to sit their A-levels (the equivalent of the baccalaureate), teachers were asked to give them their final grades. An algorithm was then used to adjust these assessments according to the general level of each school. It was immediately accused of favoring students from private and prestigious establishments, to the disadvantage of students from public schools, with a direct impact on their access to university.

Those two examples support the conclusion of the recent Institut Montaigne report, Algorithms: Please Mind the Bias!: the main source of bias is the data used to train an algorithm. Whether it is the discrimination against the African-American community in the United States, or the academic performance of high school students in underprivileged schools in the United Kingdom, the algorithm replicates historical inequalities.
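To make this diagnosis concrete, here is a minimal, purely illustrative sketch in Python (synthetic data and scikit-learn assumed as tooling, not anything from the report): a model trained on historical hiring decisions that penalized one group reproduces the same gap in its own predictions, even though the protected attribute is never given to it, because an innocuous-looking proxy feature leaks it.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)          # protected attribute, NOT given to the model
skill = rng.normal(size=n)                  # identical skill distribution in both groups
neighborhood = group + rng.normal(scale=0.3, size=n)  # innocuous-looking proxy for group

# Historical labels: equally skilled members of group 1 were hired less often.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, neighborhood])  # the protected attribute itself is excluded
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate = {hired[group == g].mean():.2f}, "
          f"model hire rate = {pred[group == g].mean():.2f}")
```

On this toy data, the model's predicted hiring rates mirror the historical gap between the two groups: the bias comes from the training data, not from any explicit rule.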

Algorithms can also help in the fight against biases

All too often, these algorithms reveal discrimination that is already deeply entrenched in our society. This can also be useful: algorithms make it possible to objectify such discrimination and assess its extent. Algorithms could even help us fight against these very discriminations, by formalizing rules that apply to all and by making it possible to verify their correct application. The fact that this promise is challenged when it comes from PredPol does not prevent it from being taken seriously. For every example of résumé-sorting software at Amazon that keeps women out of positions of responsibility, there is an example of profile recommendation software at a temporary work agency that forces companies to evaluate and recognize their recruitment biases.

A police algorithm trained on the historical data of a racist police team may propagate those biases to other teams that use the software. In addition, algorithms can obscure individual responsibility and make it more difficult to challenge biased decisions.

Nevertheless, algorithmic biases deserve special attention from legislators, and in particular from the European Commission, which is currently drafting its Digital Services Act.

Fighting algorithmic biases is difficult

These biases pose a number of difficulties. Equitable treatment can take the form of equity between individuals (everyone is treated the same way) or between groups (each group, man/woman, Catholic/Muslim, etc., is treated, on average, the same way). The two approaches are not always compatible, and it is likely that conceptions of equity vary around the globe, for example between Europe and the United States.
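A toy calculation, with entirely made-up numbers, illustrates why the two notions can pull in opposite directions: applying one identical rule to every individual can still produce different average outcomes per group, and equalizing those averages would mean treating otherwise identical individuals from different groups differently.

```python
# Illustrative sketch: one identical rule for all individuals,
# yet unequal outcomes at the group level (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(1)

# Two groups whose (historically shaped) score distributions differ slightly.
scores_a = rng.normal(loc=0.55, scale=0.15, size=5_000)
scores_b = rng.normal(loc=0.45, scale=0.15, size=5_000)

threshold = 0.5  # equity between individuals: the same rule for everyone

rate_a = (scores_a > threshold).mean()
rate_b = (scores_b > threshold).mean()

print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")
# Equalizing the two rates (equity between groups) would require
# group-specific thresholds, i.e. treating identical individuals differently.
```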

Moreover, modifying an algorithm to make it fully fair necessarily involves reducing its technical performance. While this may be acceptable in the vast majority of cases, performance degradation for sensitive algorithms can be difficult to accept, for example, in the medical field. Should we prioritize the fairness of an X-ray cancer detection algorithm to the detriment of its performance for certain types of patients?
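Continuing the toy example above (still with made-up numbers), one way to see this trade-off is to equalize the two selection rates by lowering the threshold for the disadvantaged group: the rates become equal, but overall accuracy against the underlying outcome drops, because the decision rule is no longer the one that best predicts that outcome.

```python
# Illustrative sketch of the fairness/performance trade-off (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Scores for the two groups; the true outcome depends only on the score.
score_a = rng.normal(0.55, 0.15, n)
score_b = rng.normal(0.45, 0.15, n)
truth_a = score_a + rng.normal(0, 0.1, n) > 0.5
truth_b = score_b + rng.normal(0, 0.1, n) > 0.5

def accuracy(decision, truth):
    return (decision == truth).mean()

# One threshold for everyone: best raw performance, unequal selection rates.
acc_single = (accuracy(score_a > 0.5, truth_a) + accuracy(score_b > 0.5, truth_b)) / 2

# Group-specific thresholds chosen so that the selection rates match.
acc_equalized = (accuracy(score_a > 0.5, truth_a) + accuracy(score_b > 0.4, truth_b)) / 2

print(f"accuracy, single threshold:  {acc_single:.3f}")
print(f"accuracy, equalized rates:   {acc_equalized:.3f}")
```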

Finally, strictly regulating firms to ensure the absence of bias can end up strengthening the dominant position of the GAFA, which have the resources to comply with any regulation. Europe's digital ecosystem is still struggling, and the innovative capacity of its companies and start-ups must be preserved.

Still, aiming to eliminate algorithmic bias is more essential than ever if we want to maintain user confidence in digital platforms and services.

What is the current legal framework?

The year 2020 may be the year when tech really becomes a regulated industry, like so many others before it, from banking to telecommunications. In fact, the movement started several years ago, and digital intermediaries must already follow a number of regulations.

In Europe, they comply with a number of laws that apply to the physical world as much as to the digital one: consumer protection, taxation, disinformation and hate speech… In addition, specific texts have tightened the framework for their activity in several areas, such as the protection of personal data with the GDPR, or competition in e-commerce with the e-Commerce Directive.

When it comes to algorithmic bias, a substantial legal framework already exists as well. No fewer than five European directives fight against discrimination, in both the physical and the digital world. The GDPR imposes transparency obligations and a right of appeal against decisions resulting from algorithms, with fines of up to 4% of a company's worldwide turnover in case of infringement. In this way, civil society is able to track down possible discrimination.

With regard to national provisions, the French law of May 27, 2008 defines 25 criteria protected against discrimination (gender, religion, political opinion, etc.) in a certain number of situations, including access to employment or public services, with or without the presence of algorithms. There are thus already many safeguards in place to limit the risk of algorithmic bias.

Nevertheless, loopholes remain: the regulations in force are mostly ex post, i.e. based on a logic of sanctioning bad behavior, and therefore give little weight to measures that prevent and correct algorithmic biases, slowing down the adoption of best practices in the digital ecosystem. Moreover, French and European privacy rules prevent the evaluation of algorithmic biases on sensitive criteria such as ethnicity or sexual orientation, as the collection of such information is prohibited. This is not the case in the United States, where such collection is authorized and where the draft Algorithmic Accountability Act, under consideration in Congress, would require platforms to carry out impact studies on the possible discrimination of algorithms affecting essential services.

But all rules have costs. The GDPR is thus criticized for having absorbed European companies' digital investment into compliance for several years, to the detriment of investment in innovation. It is also criticized for adding new technical and regulatory barriers to entry for new players, thereby reinforcing the monopolies of the American and Chinese digital giants.

Legislators face a dilemma between contradictory objectives: digital innovation and accountability. If they seek to make the digital industry more competitive with little regulation, they expose it to repeated scandals of algorithmic bias. Such was the case in the United States in the fields of recruitment and justice, where an algorithm used by judges to assess the risk of recidivism, denounced by the NGO ProPublica, made far more errors in its assessment of African-American individuals than of white ones.

What is needed for better accountability?

In our opinion, some important principles should guide the European Commission, in order to efficiently fight against algorithmic biases.

First of all, it is essential to avoid over-regulation and its harmful effects. Laws fighting against discrimination, including in the digital world, are numerous and sufficient in the vast majority of cases. Rather than wondering what new constraints should be imposed on platforms, it is essential to enforce the existing texts. It is also illusory to think that the authorities will be able to carry out bias controls, via audits for example, on every algorithm in the world: the resources available to the CNIL (the French regulator) would be quite insufficient!

Europe's lag in terms of innovation also has a direct impact on our digital sovereignty. European digital giants must be allowed to emerge, through regulations favorable to new entrants and small players and with the help of an integrated European digital market, i.e. by avoiding the superimposition of national digital laws as much as possible.

We also think it is necessary to encourage platforms to test their algorithms for biases, just as drugs are tested for side effects. Refraining from collecting or using protected data (such as gender) is not enough to guarantee the absence of bias. A logic of "active equity" must be implemented to combat algorithmic bias. In concrete terms, this means authorizing platforms to collect, on an ad hoc basis and with the user's consent, certain sensitive and protected data, provided that the data are used only to assess whether the algorithms are biased with respect to these criteria (ethnic origin, religion, sexual orientation, and others).

This change in logic should necessarily be accompanied by strict control by the competent authorities, as well as the establishment of safeguards on the nature and quantity of data collected. The collection of sensitive data for the purpose of testing algorithms could, for example, be allowed only on a sample.
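As an illustration of what such a sample-based test might look like in practice, here is a short sketch (field names, data and thresholds are hypothetical): sensitive attributes are assumed to be collected, with consent, for an audit sample only, and used solely to compare outcome rates across groups.

```python
# Sketch of a sample-based bias test; field names and data are hypothetical.
from collections import defaultdict

def audit_outcome_rates(audit_sample, sensitive_key="religion", outcome_key="accepted"):
    """Positive-outcome rate for each value of a sensitive attribute."""
    counts, positives = defaultdict(int), defaultdict(int)
    for record in audit_sample:
        g = record[sensitive_key]
        counts[g] += 1
        positives[g] += int(record[outcome_key])
    return {g: positives[g] / counts[g] for g in counts}

# Audit sample: sensitive attribute collected with consent, for testing only.
sample = [
    {"religion": "A", "accepted": True},
    {"religion": "A", "accepted": True},
    {"religion": "B", "accepted": True},
    {"religion": "B", "accepted": False},
]

rates = audit_outcome_rates(sample)
print(rates)
print(f"ratio of least- to most-favored group: {min(rates.values()) / max(rates.values()):.2f}")
# A ratio well below 1 would flag the algorithm for closer review;
# the exact alert threshold is a policy choice, not fixed by this sketch.
```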

In addition, more transparency must be demanded from the platforms, especially for those implementing the most sensitive algorithms, i.e. those that affect access to essential services, the physical and mental integrity of the person, or fundamental rights. These algorithms should be part of an ad hoc framework to prevent bias in which transparency obligations and rights of recourse would be reinforced. This framework should be a mix of best practices promoted by the platforms and sector-specific regulations, where necessary.

Finally, the Digital Services Act should, as far as possible, favor ex ante regulations, i.e. requiring platforms to apply clear processes and rules, whose implementation can be monitored by audits and whose breaches can be sanctioned by fines, rather than ultimately sanctioning the presence of bias.

This European text alone will not overcome the biases in the algorithms. Many solutions fall into the responsibility of companies (training engineers in bias), third parties (creating labels for algorithms according to their auditability, the quality of their data and the evaluation of bias) or national governments (building within administrations a technical capacity to audit algorithms).

 

This article was written with the help of Arno Amabile and Basile Thodoroff, engineers of the French Corps des mines and rapporteurs of Institut Montaigne’s report, Algorithms: Please Mind the Bias!
