At the opening of the 2018 Symposium, AI Now Institute co-founders Meredith Whittaker and Kate Crawford presented the changes that occurred in the field over the last year (see graph above: in yellow, legal projects; in green, industrial projects; in blue, oppositions to AI-related projects…). Things are clearly moving fast. With each piece of breaking news, public reactions help us understand the challenges AI represents. Mass surveillance and policing, disruption of electoral processes, lethal accidents… There is a lot to be concerned about. According to the argument prevalent throughout the Symposium, building codes of ethics (as Google did) is a good start for addressing these challenges. However, such codes fail to establish "sustainable forms of accountability", which are essential to meaningfully influence the organizations and institutions governing AI.
Making AI accountable
With AI systems, accountability is challenging. Often (as is the case in France), transparency is portrayed as the solution: many believe that opening a program’s "black box" by publishing its code will help us see through it and understand how it works. Eventually, this should allow public actors to hold programs and their designers accountable for any harmful consequences they cause. However, as Kate Crawford and Mike Ananny, Assistant Professor at the University of Southern California, wrote in their article "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability", opening an algorithm’s black box fails to bring to light its connections to other systems - technological or human - which may be equally opaque. Which trade secret law does an algorithm depend on? Which informal discussion between high-level decision makers is it the result of? Algorithms do not emerge from a vacuum, and the decisions they automate are first and foremost human decisions. The real challenge, therefore, is to decipher the chains of decisions that shape algorithms in order to determine which institutions and people are responsible for them. For example, we tend to forget the human labor AI relies on (such as the crowdwork made accessible through Amazon’s Mechanical Turk). How should a sustainable framework for algorithmic accountability, implemented in developed countries, take this into account?
The values AI automates
These are large-scale questions one must have in mind when encouraging the development of AI. Even more so because there are no right or wrong answers, only solutions embodying different value systems. Sherrilyn Ifill, President and Director-Counsel at the NAACP Legal Defense Fund, reminds us of the importance of the context in which AI systems are implemented (in the US, for example, prejudices against African Americans are still regularly perpetuated by police officers). The large-scale use of facial recognition may lead to surveillance tools used to control the African American population, with potentially drastic consequences for individuals’ lives. This is especially true given the high error rates that facial recognition systems still exhibit. For example, Amazon’s Rekognition software falsely matched 28 members of Congress with publicly available mugshots. Someone less famous than a member of Congress may have no opportunity to defend themselves and prove the system wrong.
France has its own values embedded in its own institutions. As it develops a strong AI sector, public decision makers need to be aware of the risks associated with the French context. At this stage, I believe the country lacks the research needed to bring these risks to light.
Awareness is spreading fast
Amongst all the worry, there are also good reasons to be optimistic. First, our collective awareness of these issues has risen. As Meredith Whittaker and Kate Crawford noted in their opening remarks, three years ago the debate around bias was niche. Today, it has become mainstream, partly thanks to researchers like Timnit Gebru who keep exposing cases where biases have had concrete consequences, such as facial recognition software struggling to classify the gender of people of color. This case, like the one concerning members of Congress, shows how much the datasets used to train AI systems impact their accuracy.
The second gleam of optimism comes from the spread of this awareness within the companies building these tools. Take Google’s Project Maven, Microsoft’s contract with Immigration and Customs Enforcement (ICE), or the use of Amazon’s Rekognition software by US law enforcement: all sparked protests led by employees of the companies involved. This is all the more interesting as we are constantly reminded that we should worry about our algorithm-driven information infrastructures (e.g. social media that recommend content using algorithms believed to contribute to the polarization of society or the viral spread of disinformation). In these cases, the complex interaction between social media algorithms, employees and traditional media served a greater good: employees mobilized from within their companies and managed to catch the media’s attention, eventually creating a topic of public discussion that was amplified by social media algorithms. They ultimately succeeded in preventing their companies from carrying out their initial plans. Not only do these examples show that some employees feel civically engaged through the software they build and within the companies they work for, but they also remind us that new communication technologies can enable effective mobilization for laudable causes.
Everywhere, people are mobilizing to build AI systems that defend human values, and the field is transforming quickly. The question the AI Now Symposium asked is whether awareness is spreading fast enough. It is important that we keep questioning AI before the values it automates become deeply embedded in opaque systems. There is hope that we will come to understand the social implications of these systems before they become too entrenched to challenge. We’ll see where the conversation goes from there.
Copyright: NICOLAS ASFOURI / AFP