European Union’s Proposal to Regulate AI




Introduction

On April 14, 2021, POLITICO reported on a leaked draft of the European Commission’s regulations on artificial intelligence (“AI”), rattling policy and business communities around the world. A week later, the Commission officially published the draft Artificial Intelligence Act, which has been touted as the “world’s first regulation on AI.” Much like with the General Data Protection Regulation (“GDPR”), the European Union has taken a bold step to translate its values into concrete regulations on emerging technologies. The Act is expected to profoundly affect the regulation of AI around the world, including in Canada.


Executive Summary

The European Commission (“the Commission”) published the draft of the Artificial Intelligence Act, which seeks to establish concrete regulation of AI. The proposal operates around a functional, as opposed to a technique-based, definition of AI, covering not just individuals and companies within the EU but also those outside it whose AI systems could affect users within the Union.


Taking a risk-based approach, the proposed regulations categorize the risks posed by AI systems into three types: unacceptable, high-risk, and low or minimal risk. The new EU regulations will likely inform and influence Canada’s AI regulations. Canadian companies need to proactively shift from a data- or privacy-centred governance model to an AI-centred one.


What are the major points proposed under the Artificial Intelligence Act?

The Commission published the proposal for the Artificial Intelligence Act on April 21, 2021. The proposal will now go through the EU’s legislative process before it comes into effect – a process that will take several years.


In its current iteration, the proposal takes a risk-based approach to AI regulation, categorizing the risks posed by AI systems into three types: unacceptable (Article 5), high-risk (Article 6), and low or minimal risk. The proposal bans AI systems posing unacceptable risks, such as social scoring systems. It sets requirements for the management of high-risk AI systems, building on the principle of transparency.


The proposal establishes specific requirements for the deployment of high-risk AI systems, including, for example, requirements on:


a) AI Risk Management (Article 9)

b) Data Governance Standards (Article 10)

c) Transparency (Article 13)

d) Monitoring and Reporting (Articles 60, 61, 62, 63, and 64)

e) Enforcement (Article 71)


What does this mean for your business?

As we have seen with the launch of the GDPR, the EU’s AI regulation will significantly inform and influence policymakers in Canada. Further, the global trend is headed towards introducing AI-specific regulations that translate ethical principles into concrete action by authorities.


In this context, Canadian businesses need to prepare proactively and think about how to incorporate AI governance practices within their organizations. In particular, this means starting to develop AI policies, for example on explainability and transparency; developing risk-based assessment criteria and control measures; and investing in continuous training and education to increase organizational awareness of AI-related risks and best practices.


INQ Law supports clients with a future-looking approach to AI risk management programs that integrate the latest trends and good practices. Contact us at cpiovesan@inq.law or ncorriveau@inq.law for more information.



1. Background: EU’s Leadership in AI

Since 2017, the EU has acted with a sense of urgency to address its concerns about the increased application of AI, and the proposed regulations are the culmination of extensive consultation. The European Commission set up the High-Level Expert Group on Artificial Intelligence (AI HLEG) in June 2018, which published the Ethics Guidelines for Trustworthy AI on April 8, 2019. In February 2020, the EU released a White Paper on Artificial Intelligence. Various EU bodies have continued to release regulations and resolutions on digital rights, liability, and copyright as they relate to AI, positioning the EU not just as a thought leader but also as a regulatory leader, aiming to achieve the following:


1. Ensuring that AI respects law and fundamental rights of EU citizens;

2. Creating legal certainty to facilitate innovation and investment;

3. Introducing enhanced governance, with effective enforcement; and

4. Developing a single European market for lawful, safe, and trustworthy AI.


In this context, the Commission’s proposal of the “world’s first regulation on AI” is a natural next step for the EU. Much like the GDPR, which has greatly impacted the norms and regulation of data worldwide, the Artificial Intelligence Act is poised to set the tone of AI governance globally – including in Canada.


2. Scope: Definition

In policy and regulation, defining AI has been a major challenge (for instance, New York City’s Automated Decision Systems Task Force failed to agree on a definition of AI for over a year). AI encompasses a variety of techniques, and the field continues to evolve quickly, which could undermine regulatory measures that lack a comprehensive and accurate definition. In this context, the Commission has introduced a definition of AI based on the functional characteristics of the software to underpin its proposed regulations, which are meant to be tech-neutral and future-proof, providing legal certainty.


The proposal provides a horizontal definition of AI that enables horizontal regulation – one that applies across different technologies and varying interpretations of AI among EU member states by focusing on the key functional characteristics of the software. This definition generally aligns with the one presented in Canada’s Bill C-11.


Artificial Intelligence Act: Article 3(1) (EU)

Bill C-11 (Canada)