Updated: Jul 2, 2021
In April 2021, the European Commission introduced its proposal for the Artificial Intelligence Act, called the “world’s first regulation on AI.” While there have been numerous AI ethics declarations over the past few years, the novelty and complexity of AI, combined with concerns about hampering innovation, have limited the translation of these ethical principles into concrete regulatory measures. In this context, the European Commission’s proposal is an opening salvo, and similar initiatives are expected to follow shortly.
In Canada, there is no AI-specific regulation yet. However, several federal and provincial initiatives go beyond guidelines and principles to introduce or recommend more concrete regulation of AI, signalling where regulation is headed in Canada. For stakeholders in the AI sector, these insights provide an opportunity to prepare proactively for upcoming regulations.
This article reviews Quebec’s Bill 64; the Office of the Privacy Commissioner of Canada’s (OPC) Recommendations for PIPEDA Reform; Ontario’s consultation framework on Trustworthy AI; and B.C. and Yukon’s study on the use of AI in the public sector, identifying points of convergence among them.
Notification and Explanation
The first point of convergence in these four initiatives is the emphasis on the right to notification in cases where personal information is used by an automated decision system (ADS), and on explanation of the decision-making process. Under sections 20 (public bodies) and 102 (private enterprises) of Quebec’s Bill 64, an organization that uses personal information to render a decision based exclusively on automated processing of that information must, at or before the time of the decision, inform the person concerned accordingly. In addition, at the request of the individual concerned, the following must be provided:
the personal information used to render the decision;
the reasons, factors, and parameters that led to the decision; and
the right to have the personal information used to render the decision corrected.
Similarly, the B.C./Yukon study recommends that authorities “notify and describe how the system operates … in a way that is understandable” when an ADS is used to make a decision. The Ontario consultation framework also highlights the importance of establishing “rules to require the public be informed if they are interacting with a machine or have decisions made about them by an algorithm.” The OPC likewise supports the rights to notification and explanation, building on PIPEDA’s principles of accuracy, openness, and individual access, but recommends wording that avoids “solely” or “exclusively” so as to cover a wider range of AI uses (for instance, where humans are in the loop).
Objection and Contestation
Another point of convergence is the right of individuals to respond to a decision made by an ADS. The B.C./Yukon study emphasizes that an individual should have “the ability to object to the use of ADS.” The OPC, building on PIPEDA’s rights to withdraw consent and to object to the use of personal data, recommends including a right to “contest” in addition to the right to object, since contesting allows individuals to redefine the terms of their engagement with the ADS rather than opting out of it entirely. The Ontario recommendations support both rights, noting a “right to contest, and right to opt-out.” Bill 64 provides grounds for contesting decisions but, as the OPC study points out, does not guarantee the right to opt out (object).
Safety and Accountability
These initiatives recommend a risk-based approach to promoting safety, through privacy impact assessment tools (Bill 64, B.C./Yukon, and OPC) and/or algorithmic impact assessment tools (Ontario). The use of de-identified data and balancing tests are also mentioned as potential ways of enhancing safety. There is also an emphasis on regulation across the whole life cycle of the ADS, from design (e.g., designing for privacy and human rights) through proactive inspection and continued monitoring. Finally, all four initiatives support creating accountability measures, such as requiring organizations to appoint someone responsible for data privacy (Bill 64) and/or for the ADS itself (B.C./Yukon).
Bill 64, in its current iteration, would allow administrative monetary penalties to be imposed on those who deny the right to notification and explanation. The B.C./Yukon study recommends allowing joint investigations between regulators, and the OPC recommends giving the authorities powers of proactive inspection, order making, and penalties. The Ontario framework mentions creating an AI governance framework but does not address specific enforcement issues.
While none of these initiatives have been translated into concrete regulations (although Bill 64 is close to it!), these proposals and recommendations from the federal government and three different Canadian provinces indicate a coherent trend for AI regulation in Canada.
First, regulations will give individuals the right to be notified and to receive explanations about how their personal information is used by an ADS, and to contest or object to the outcome.
Second, organizations – public and private – will need to develop the capacity to respond to these demands: design for privacy and human rights, designate privacy officers, implement safeguard measures (e.g., privacy/algorithmic impact assessments, data de-identification), ensure that their ADS can be audited if necessary, and continue to monitor the ADS throughout its life cycle.
Finally, if the recommendations are accepted, public bodies will be further empowered, with the authority to make orders and impose fines, and the ability to work jointly with other government agencies or departments.
The broader trend suggests that organizations using AI should proactively introduce change management to normalize and integrate risk/privacy assessments, along with continuous monitoring and adjustment of their ADS, into their operations – not only to adapt to the new regulations but also to leverage them to lead through good practices.
Special thanks to Dongwoo Kim for his contributions to this article.