Artificial intelligence (AI) has become a strategic battleground. In 2020, global corporate investment in AI, both public and private, grew by 40% to reach $68 billion. Companies are looking at how AI can give them a competitive edge and a technological lead. States are investing in public and private AI ventures and trying to woo international talent. Funding for university AI programs is at an all-time high, and the number of investment-screening mechanisms has increased dramatically.
Since Canada published the world’s first national AI strategy in 2017, over 30 other countries and regions have published similar documents. In the race to develop ambitious AI strategies, the EU this week published its own regulatory proposal so that "European AI can be globally competitive while also respecting European values." As with the EU’s Digital Services Act and Digital Markets Act, the hope is that these new rules will make it easier for businesses to innovate, give consumers more choice and, crucially, protect them from technologies that could harm or manipulate them.
While the EU Commission is right to recognize AI as a vital component of future industry, the proposal has shortcomings. Tight rules to protect EU citizens could make the EU a less attractive market for companies looking to invest in, and develop, AI. More importantly, a regulatory proposal alone will not suffice to ensure what the Commission calls an "excellent and trustworthy European AI". More needs to be done to inform EU citizens of the value and risks of AI: a critical step not only to accelerate its adoption but also to increase scrutiny of its use in our daily lives, for instance in our newsfeeds.
AI at the heart of government policy
AI is increasingly being considered a vital tool to compete, and remain competitive, on the global stage. It is rapidly becoming a catalyst for other scientific advances. AlphaFold, an artificial intelligence program developed by Google's DeepMind that predicts protein structures, is one example.
Many governments are exploring the opportunities of AI for military, geopolitical and even ideological reasons. AI has been used to develop target recognition, optimize warfare systems and collaborative attacks, build autonomous and semi-autonomous weaponry, and create new cybersecurity capabilities. But in some parts of the world, it is also being used for mass surveillance, and there is growing evidence that AI-enhanced disinformation is being used to influence online discourse. Many citizens are worried about the repercussions of AI on their lives.
The EU’s regulatory proposal is different from the AI strategies of China and the US
Attention is therefore shifting to how governments balance the risks and opportunities of AI. China and the United States in particular both have strong ambitions, backed by national AI strategies, to be the dominant AI powers in the decades to come. In July 2017, China’s State Council issued the New Generation Artificial Intelligence Development Plan (AIDP), which outlined China’s strategy to become the world leader in AI by 2030, with a trillion-yuan ($150 billion) AI industry. The plan, championed by Xi Jinping himself, aims to "seize the major strategic opportunity for the development of AI, to build China’s first-mover advantage in the development of AI, to accelerate the construction of an innovative nation and global power in science and technology". The Chinese Ministry of Science and Technology (MoST) has set up an AI Plan Implementation Office to engage with the fifteen ministries and departments involved in the plan’s implementation.