Institut Montaigne features a platform of Expressions dedicated to debate and current affairs. The platform provides a space for decryption and dialogue to encourage discussion and the emergence of new voices.
12/04/2019

Great National Debate: Artificial Intelligence at the Service of Collective Intelligence?

Interview with Anne Bouverot

 Anne Bouverot
Chairperson of the Board at Technicolor and the Chairperson of Fondation Abeona

The French Prime Minister Edouard Philippe presented, on Monday 8 April, an overview of the great national debate. Overall, 1.5 million French people participated in this exercise, producing a huge amount of unstructured data to be analyzed. In order to process these data, the government worked with artificial intelligence companies specializing in natural language processing. In this post, Anne Bouverot, Senior Advisor at TowerBrook Capital Partners, explains how these systems work.

How did the artificial intelligence in charge of processing the results of the great national debate work?

Almost two million contributions were shared on the great national debate’s website, and more than 16,000 municipalities set up registers of complaints. The result is large volumes of unstructured data in natural language. This is the kind of situation in which artificial intelligence (AI) is very effective - virtual assistants or chatbots allow us to get information when booking a trip or making a purchase online, and Google Translate or other tools allow us to translate texts easily. Moreover, AI can process these data in a short amount of time - in this case, it took about two weeks to process the data resulting from the great national debate.


The process is as follows. Concerning handwritten contributions, the registers of complaints are first digitized by the Bibliothèque nationale de France and its partner, Numen (specialized in the digitization of documents). To do this, the company uses handwriting recognition software. These texts are then analyzed and classified by Cognito and Bluenote.

Contributions made on the dedicated website are already digital. They are entrusted to the survey company OpinionWay and its partner in artificial intelligence, Qwam. Digital contributions include, on the one hand, answers to multiple-choice questions, which are fairly easy to analyze, and, on the other hand, texts that are processed by extraction engines, which extract words and concepts based on a predefined term repository. These are then classified into several categories, such as "taxes", "hospitals", "blank vote", etc.
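The extraction-and-classification step described above can be sketched very simply. The categories and keywords below are illustrative only, not the actual term repository used by Qwam:

```python
# Minimal sketch of keyword-based categorization of free-text
# contributions against a predefined term repository.
# Categories and terms are invented for illustration.

TERM_REPOSITORY = {
    "taxes": {"impôt", "taxe", "fiscalité"},
    "hospitals": {"hôpital", "urgences", "soins"},
    "blank vote": {"vote blanc"},
}

def categorize(text: str) -> set[str]:
    """Return every category whose terms appear in the contribution."""
    lowered = text.lower()
    return {
        category
        for category, terms in TERM_REPOSITORY.items()
        if any(term in lowered for term in terms)
    }

print(sorted(categorize("Il faut reconnaître le vote blanc et baisser la taxe.")))
```

A real pipeline would of course use far richer repositories, lemmatization, and disambiguation, but the principle is the same: map each text onto one or more predefined categories.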

What risks can such processes involve?

Programs that recognize and analyze text are not new and generally work quite well. Nevertheless, they do involve a risk of confusion and misunderstanding. For example, when two words look very similar, such as the French words "fracture" (i.e. "divide") and "facture" (i.e. "invoice"), which one should be recognized on the register of complaints? Should we classify nuanced text such as "I rather agree, but actually" as being "for" or "against" the carbon tax? And how can we ensure that the AI system takes emotions, anger or irony into account?
 
Moreover, any classification involves choices: how many categories should there be, what should count as keywords? For example, should the terms "family allowance" and "family quotient" (the number of family units for the calculation of the income tax) be in the same category? Is the "public service" different from "public services"? Should a category be created for the tax credit for home support services, or should it be included in the "tax exemption" category?

Are there any issues of representativeness, and if so, how should we deal with them?

There are indeed several issues at stake. First, as in any such public and open process, some may try to push for an idea by leading very active communication campaigns through social networks, WhatsApp channels or group emails, so that many people publish the same text defending a specific idea. For example, it is quite obvious that a group has mobilized to abolish the 80 km/h speed limit on departmental roads: the very same text focusing on this topic has been shared by thousands of contributors. Fortunately, artificial intelligence can identify identical texts very easily (although it is unable to identify different texts defending the same idea), and can thus shed light on such strategies of influence. Yet we still need to decide what should be done when confronted with such situations.
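Flagging identical campaign texts is indeed straightforward. One common approach, sketched here with invented contributions, is to hash a normalized form of each text and count collisions:

```python
# Sketch of duplicate-text detection: hash a normalized form of each
# contribution and count how often each fingerprint occurs.
# The contributions below are invented examples.
import hashlib
from collections import Counter

def fingerprint(text: str) -> str:
    """Hash of the text with case and extra whitespace normalized away."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

contributions = [
    "Abolish the 80 km/h limit on departmental roads!",
    "abolish the  80 km/h limit on departmental roads!",
    "Recognize the blank vote.",
]

counts = Counter(fingerprint(c) for c in contributions)
duplicates = [h for h, n in counts.items() if n > 1]
print(len(duplicates))
```

Note the limitation mentioned above: this only catches texts that are effectively identical. Two differently worded contributions defending the same idea produce different fingerprints and are not grouped together.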

Then there is a second, deeper issue of representativeness. These contributions have been made by about 1.5 million people. That is a lot, of course, but it is only a fraction of the overall French population. Concerning the contributions made on the website, we do not have data to classify their authors by age, gender, level of resources... The only information requested to make a contribution is the postal code. So one of the things we can observe is that Paris and the big cities are somewhat overrepresented compared to the rest of the country.

Moreover, for the public meetings, Cevipof conducted surveys among participants in 240 debates: the majority of them are men, aged over 50, who either work or are retired, who completed higher education studies and who own a home. There thus seems to be a risk of bias, given that the youth, the unemployed, women and people with lower levels of education are under-represented. Would these people have the same concerns and propose the same solutions? For example, would the demand for tax cuts have been so visible had their voices been heard?


When conducting a survey, all these elements are carefully considered and the weight of each criterion is rebalanced accordingly. This is called statistical adjustment, and aims to ensure that the results obtained adequately represent the population. This was not done in the case of the great national debate, because the government's goal in organizing this debate was to allow those who so wished to express themselves and to produce a written record. The aim was not to conduct a survey or to collect votes. The summary report of the great debate thus highlights issues that are important to a large number of citizens, and the government will have to acknowledge this. It however remains free to choose the directions in which it wishes to progress.
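Statistical adjustment can be sketched with a toy example. All the proportions below are invented for illustration: each respondent group is reweighted so that its share matches its share of the overall population, and any aggregate (here, support for some proposal) is recomputed with those weights:

```python
# Minimal sketch of statistical adjustment (post-stratification).
# All shares and support figures are invented for illustration.

sample_share = {"under_50": 0.30, "over_50": 0.70}      # among respondents
population_share = {"under_50": 0.55, "over_50": 0.45}  # in the population

# Weight for each group: population share divided by sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical support for a proposal within each group.
support = {"under_50": 0.40, "over_50": 0.80}

raw = sum(sample_share[g] * support[g] for g in sample_share)
adjusted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(round(raw, 2), round(adjusted, 2))
```

With these invented figures, the over-50s are overrepresented among respondents, so the unadjusted level of support overstates what the full population would express; the adjusted figure corrects for that. This is exactly the kind of correction that was deliberately not applied to the great debate.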

What is the final result, and what remains to be done for conclusions to be drawn?

Artificial intelligence software is not in charge of summarizing the great debate! The companies mentioned above - all of which are French, which shows the dynamism of the national ecosystem in AI - make it possible to transform handwriting into digital text, and then to categorize these texts as appropriately as possible, with an idea of the number of contributions per category. Then, several firms (Roland Berger, Res Publica, Missions Publiques) analyze the results and produce an overall summary. Even with the help of artificial intelligence, this exercise is complicated: many interesting ideas will emerge, in the areas covered by the consultation as in others. However, these will also include contradictory proposals. Finally, as we have seen, it is difficult to determine the extent to which these proposals are representative.


Contributions are available online, but, in order for the whole process to be credible, it is important that transparency does not stop there: handwritten versions must be accessible online (this is planned), and the results of the analyses and of the categorization of texts need to be made public. This means that the public must be able to access all the texts contained in each category, such as the "tax loopholes" category, for example.

The various companies involved in this classification and summarizing process will also have to be very open, and be able to answer questions about their working methods, their tools, the difficulties they have encountered, etc. The idea is that third parties - associations and independent researchers - should be able to access these data, as well as the intermediate and final results, in order to confirm - or not! - the quality and neutrality of the processing.
