
The EU Must Do More to Foster Trust in AI

BLOG - 22 April 2021

Artificial intelligence (AI) has become a strategic battleground. In 2020, public and private corporate investment in AI increased by 40% to reach a total of $68 billion worldwide. Companies are looking at how AI can give them a competitive edge and a technological lead. States are investing in public and private AI ventures and trying to woo international talent. There is more funding available for university AI programs than ever before, and the number of investment-screening mechanisms has increased dramatically.

Since Canada published the world’s first national AI strategy in 2017, over 30 other countries and regions have published similar documents. In the race to develop ambitious AI strategies, the EU this week published its own regulatory proposal so that "European AI can be globally competitive while also respecting European values." As with the Digital Services Act and the Digital Markets Act, the EU hopes these new rules will make it easier for businesses to innovate, give consumers more choice and, crucially, protect them from technologies that could harm or manipulate them.

While the EU Commission is right to recognize AI as a vital component of future industry, its proposal has shortcomings. Tight rules to protect EU citizens could make the EU a less attractive market for companies looking to invest in, and develop, AI. More importantly, a regulatory proposal alone will not suffice to ensure what the Commission calls an "excellent and trustworthy European AI". More needs to be done to inform EU citizens of the value and risks of AI - a critical step both to accelerate its adoption and to increase scrutiny of its use in our daily lives, for instance in our newsfeeds.

AI at the heart of government policy

AI is increasingly seen as a vital tool for competing, and remaining competitive, on the global stage. It is rapidly becoming a catalyst for other scientific advances. AlphaFold, an artificial intelligence program developed by Google's DeepMind that predicts protein structures, is one example.

Many governments are exploring the opportunities of AI for military, geopolitical and even ideological reasons. AI has been used to develop target recognition, optimize warfare systems and collaborative attacks, develop autonomous and semiautonomous weaponry, and build new cybersecurity capabilities. But in some parts of the world, it is also being used for mass surveillance, and there is growing evidence that AI-enhanced disinformation is being used to influence online discourse. Many citizens are worried about the repercussions of AI on their lives.

The EU’s regulatory proposal differs from the AI strategies of China and the US

Attention is therefore shifting to how governments balance the risks and opportunities of AI. China and the United States in particular both have strong ambitions to be the leading powers in AI in the decades to come, with national strategies squarely focused on dominance in the field. In July 2017, China’s State Council issued the New Generation Artificial Intelligence Development Plan (AIDP), which outlined China’s strategy to become the world leader in AI by 2030, with a trillion-yuan ($150 billion) AI industry. The plan, championed by Xi Jinping himself, aims to "seize the major strategic opportunity for the development of AI, to build China’s first-mover advantage in the development of AI, to accelerate the construction of an innovative nation and global power in science and technology". The Chinese Ministry of Science and Technology (MoST) has set up an AI Plan Implementation Office to engage with the fifteen ministries and departments involved in the plan’s implementation.


In February 2019, President Donald Trump signed an Executive Order establishing the American AI Initiative. The initiative, since codified into law through the National AI Initiative Act of 2020, aims to ensure that the US remains the leader of the AI ecosystem.

Its priorities include driving technological breakthroughs; driving the development of technical standards; training workers to develop and apply AI technologies; promoting an international environment that supports American AI research and innovation (while of course protecting the US’s technological advantage); and, importantly, promoting trust and protecting American values including civil liberties and privacy. A National Artificial Intelligence Initiative Office was established in early 2021 to coordinate and implement the Federal Government’s AI strategy.

While the Chinese and American strategies focus on technological leadership, the EU Commission’s proposal wants to turn Europe into "the global hub for trustworthy AI". For this, it wants to introduce new rules to protect EU citizens from AI systems with varying levels of risk: 

  • Systems that manipulate human behaviour and that allow "social scoring" by governments will be banned outright. This also applies to the use of "real-time" facial recognition software, with a few exceptions to fight serious crime.

  • High-risk systems, for example in transport, recruitment, exam and credit scoring and biometric identification, will be subject to strict assessments and transparency requirements before they can be put on the market. 

  • Other more benign systems will in many cases also face certain transparency obligations, such as making sure that users are aware that they are interacting with a machine. 

Any companies that fail to meet these rules for "risky" AI systems could be fined up to 4% of global annual turnover (or €20 million, whichever is greater). A European Artificial Intelligence Board, comprising representatives of the 27 member states and of the EU Commission, will be set up to review these rules and share best practices between member states.

More needs to be done to educate Europeans on AI

There are both opportunities and limits to this strategy. The fact that the EU recognizes AI as a vital component of future industry and competition is important - and providing investment and resources for research and production is a step in the right direction. But equally, AI developers may find these new rules too constraining and, as a consequence, could choose to launch their initiatives elsewhere, for example in the US, where rules are more flexible.

Fundamentally however, these rules alone will not be sufficient to secure society’s trust in AI. Citizens need to understand what AI is, how it works, what it can and cannot do. Educating people about AI is innovation-enhancing: the more students discover AI-related careers and the more decision-makers understand how to leverage AI in their respective industries, the easier it will be for Europe to develop and deploy cutting-edge technology. 


In May 2018, Finland led the way by launching an online course which has now trained well over 1% of the Finnish population. Other Nordic countries such as Sweden have followed suit and in late 2019 French President Emmanuel Macron made it a national objective to teach over 1% of the French population the basics of this technology. Former Finnish Prime Minister Alexander Stubb and the former MEP Marietje Schaake have also launched a joint initiative aiming to educate policymakers and regulators on AI. 

Institut Montaigne has also been actively contributing to the goal of training European citizens to recognize and understand the main applications of AI, including its impact on society. In April 2020, with OpenClassrooms, Fondation Abeona and over 80 public and private organizations, it launched Destination AI, a free online course that gives anyone a basic understanding of artificial intelligence in four to five hours. Over 150,000 people have since taken the course, and the effort continues through a wide range of initiatives with policymakers.

Member states and the European Parliament are expected to debate the Commission’s proposal over the coming months. The hope is that it can be adopted sometime next year, during the French presidency of the Council of the EU, the grouping of the 27 governments. Transparency and legal oversight are necessary and important elements in promoting what the Commission calls "Trustworthy AI", and it will be key to make sure that the EU’s strategy strikes a fair balance between protecting EU citizens and remaining competitive on the global stage. However, making sure citizens understand the power of AI, and all its components, will be just as crucial to secure EU citizens’ trust.
