24/10/2018

Thoughts from New York: Questioning AI, Now!

Théophile Lenoir
Fellow - Misinformation and Digital Policy

Last week, the AI Now Institute held its third Symposium in New York (the recording is available on AI Now’s website). Centered around ethics, organizing and accountability, speakers addressed some of the important questions that arise as the US turns to artificial intelligence (AI) systems to manage financial transactions, automate the allocation of public resources or monitor public spaces. These debates took place in the liberal democracy leading the development of AI, and they are particularly enlightening at a time when France (and many other countries across the globe) is defining and establishing its own AI strategy. After looking at the French plan for building a strong and ethical AI industry, our Policy Officer Théophile Lenoir outlines the key takeaways from the conference.

What is the situation in France?

On 29 March 2018, Emmanuel Macron presented France’s artificial intelligence strategy, "AI for Humanity". After receiving Cédric Villani’s report For a Meaningful Artificial Intelligence: Towards a French and European Strategy, Emmanuel Macron made several announcements to ensure that France becomes a leader in the field. These include proposals to build a strong AI industry whilst making sure AI systems respect ethical principles.

Overall, €1.5 billion in public funds will be invested during Emmanuel Macron’s presidency (of which €700 million is specifically dedicated to research). This investment is meant to foster the emergence of startups using AI-related technologies, and to encourage large industrial players to pursue investments in the field. Two sectors are prioritized: health and transport. For the first, Emmanuel Macron announced the creation of a platform centralizing health data to make it accessible for research projects deemed justified. On 12 October, the Ministry for Solidarity and Health published a report showing what the platform would look like.

According to the strategy’s underlying argument, the development of AI depends heavily on France’s ability to attract the best talent. The country’s artificial intelligence program aims to do just that by bringing together four or five research institutes across France. It will be managed by the French National Institute for Computer Science and Applied Mathematics (INRIA).

Ethics in the French strategy

In parallel, ethics remains an integral part of the government’s strategy. Inspired by the Intergovernmental Panel on Climate Change (Groupe d’experts intergouvernemental sur l’évolution du climat), the government announced it would gather a group of experts whose goal will be to ensure that the players building the AI sector in France respect the principles of loyalty and transparency.

A few first steps towards transparency have already been taken. Since 2016, the Digital Republic law (Loi pour une République numérique) has compelled the French administration to provide information about the functioning of the algorithms it uses when individuals request these details (article 4). This is consistent with one of the AI Now Institute’s recommendations, emphasized in its AI Now 2017 Report and its letter to the State of New York: public agencies’ algorithmic systems "should be available for public auditing, testing, and review, and subject to accountability standards" (2017 report).

The debate around AI

France and other countries (South Korea, India, the UK, Mexico, Finland, Sweden and many others - see this overview from June 2018) have designed ambitious strategies to develop AI. Most acknowledge the risks associated with AI and have included an ethical dimension in their plans. They indeed have good reasons to worry about implementing AI solutions too hastily, without carefully evaluating the risks these represent.

[Graph: AI-related developments over the past year, presented at the 2018 AI Now Symposium]

At the opening of the 2018 Symposium, AI Now Institute co-founders Meredith Whittaker and Kate Crawford presented the changes that occurred in the field over the last year (see graph above: in yellow, legal projects; in green, industrial projects; in blue, oppositions to AI-related projects…). Things are clearly moving fast. With each piece of breaking news, public reactions help us understand the challenges AI represents: mass surveillance and policing, disruption of electoral processes, lethal accidents… There is a lot to be concerned about. According to the argument prevalent throughout the Symposium, building codes of ethics (as Google did) is a good start for addressing these challenges. However, such codes fail to establish "sustainable forms of accountability", which are essential to significantly impact the organizations and institutions governing AI.

Making AI accountable

With AI systems, accountability is challenging. Often (as is the case in France), transparency is portrayed as the solution: many believe that opening a program’s "black box" by publishing its code will help us see through it and understand how it functions. Eventually, this should allow public actors to hold programs and their designers accountable for any harmful consequences they cause. However, as Kate Crawford and Mike Ananny, Assistant Professor at the University of Southern California, wrote in their article "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability", opening an algorithm’s black box fails to bring to light the connections it has with other systems - be they technological or human - which may be equally opaque. Which trade secret law is an algorithm dependent on? Which informal discussion between high-level decision makers is it the result of? Algorithms do not come out of a vacuum, and the decisions they automate are first and foremost human decisions. The real challenge, therefore, is to decipher the chains of decisions that build algorithms in order to determine which institutions and people are responsible for them. For example, we tend to forget the human labor AI relies on (which Amazon’s Mechanical Turk makes accessible). How should a sustainable framework for algorithmic accountability implemented in developed countries take this into account?

The values AI automates

These are large-scale questions one must keep in mind when encouraging the development of AI. Even more so because there are no right or wrong answers, only solutions embedding different value systems. Sherrilyn Ifill, President and Director-Counsel at the NAACP Legal Defense Fund, reminds us of the importance of the context in which AI systems are implemented (in the US, for example, prejudices against African Americans are still regularly perpetuated by police officers). The large-scale use of facial recognition may lead to building surveillance tools used to control the African American population, which could have drastic consequences on the lives of individuals. This is especially true as facial recognition systems still have high error rates: Amazon’s Rekognition software, for example, falsely matched 28 members of Congress to publicly available mugshots. Someone less famous than a member of Congress may never get the opportunity to prove the system wrong.
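To see why error rates matter so much at scale, consider a minimal back-of-the-envelope sketch in Python. Every number below is an illustrative assumption, not Amazon Rekognition’s actual performance: the point is that even a seemingly small false-match rate means most people flagged by a mass surveillance system are innocent.

```python
# Illustrative sketch (assumed numbers, not real Rekognition figures):
# why a small false-match rate undermines face matching at scale.
false_match_rate = 0.01      # assumed: 1% of innocent faces wrongly flagged
true_match_rate = 0.99       # assumed: 99% of genuine suspects are flagged
prevalence = 1 / 10_000      # assumed: one genuine suspect per 10,000 scans

# Bayes' rule: probability that a flagged person is actually a suspect.
p_suspect_given_flag = (true_match_rate * prevalence) / (
    true_match_rate * prevalence + false_match_rate * (1 - prevalence)
)
print(f"P(genuine suspect | flagged) = {p_suspect_given_flag:.1%}")
# Prints roughly 1%: about 99 out of 100 flagged people would be innocent.
```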

France has its own values embedded in its own institutions. As it develops a strong AI sector, public decision makers need to be aware of the risks associated with the French context. At this stage, I believe what the country lacks is the research needed to bring these risks to light.

Awareness is spreading fast

Amongst all the worry, there are also good reasons to be optimistic. First, our collective awareness of these issues has risen. As Meredith Whittaker and Kate Crawford noted in their opening remarks, three years ago the debate around bias was niche. Today, it has become mainstream, partly thanks to researchers like Timnit Gebru who keep exposing cases where biases have had concrete consequences, such as facial recognition software struggling to correctly classify the gender of people of color. This case, like the one concerning members of Congress, shows how much the datasets used to train AI systems impact their accuracy.
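As an illustration of that last point, here is a minimal sketch in Python (using scikit-learn on synthetic data, so every distribution and sample size is an assumption chosen for demonstration) of how a training set dominated by one group can yield a model that performs noticeably worse on an underrepresented group:

```python
# Minimal sketch on synthetic data: a model trained mostly on one group
# performs worse on an underrepresented group whose data looks different.
# Everything here (distributions, sizes) is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Toy two-class data; `shift` moves the group's feature distribution."""
    X = np.vstack([rng.normal(shift, 1.0, (n_per_class, 2)),
                   rng.normal(shift + 1.5, 1.0, (n_per_class, 2))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Group A dominates the training set; group B is scarce and distributed
# differently (the analogue of an unrepresentative face dataset).
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on balanced held-out samples from each group.
X_a_test, y_a_test = make_group(500, shift=0.0)
X_b_test, y_b_test = make_group(500, shift=2.0)
print("accuracy on group A:", model.score(X_a_test, y_a_test))
print("accuracy on group B:", model.score(X_b_test, y_b_test))
```

The exact figures vary from run to run, but the gap between the two groups illustrates the mechanism: a model optimizes for the data it sees most, and groups that are scarce in the training set bear the cost in accuracy.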

The second gleam of optimism comes from the spread of this awareness inside the companies building these tools. Take Google’s Maven project, Microsoft’s contract with Immigration and Customs Enforcement (ICE) or the use of Amazon’s Rekognition software by US law enforcement: all sparked protests led by employees of the companies involved. This is all the more interesting as we are constantly reminded that we should worry about our algorithm-driven information infrastructures (e.g. social media that recommend content using algorithms believed to contribute to the polarization of society or the viral spread of disinformation). In these cases, the complex interaction between social media algorithms, employees and traditional media led to a greater good: employees mobilized from within their companies and managed to catch the media’s attention, eventually creating a topic of public discussion that was amplified by social media algorithms. They ultimately succeeded in preventing their companies from carrying out their initial plans. Not only do these examples show that some employees feel civically engaged through the software they build and within the companies they work for, they also remind us that new communication technologies can enable effective mobilization for laudable causes.

People everywhere are mobilizing to build AI systems that defend human values, and the field is transforming quickly. The question the AI Now Symposium asked is whether awareness is spreading fast enough. It is important that we keep questioning AI before the values it automates become deeply embedded in opaque systems. There is hope that we will manage to better understand the social implications of these systems before they become impossible to challenge. We’ll see where the conversation goes from there.

Copyright: NICOLAS ASFOURI / AFP
