Regulating automated decision systems in Canada: What it means for your business

INQ Law
6 min read · Dec 17, 2020

The regulation of Artificial Intelligence (“AI”) has been a hot topic for years. This discussion has evolved from whether to regulate AI, to when regulation should be introduced, to how and what aspects of AI should be regulated. This evolution reflects the complex relationship between humans, society and technology. Of particular interest is the growing trend over the last two years to regulate the use of automated decision-making systems in Canada.

In 2019, the federal government adopted the Directive on Automated Decision-Making (“DADM”) and an accompanying algorithmic impact assessment (“AIA”) tool to guide the use of automated decision-making at the federal level. More recently, the federal government introduced a major bill to reform Canada’s private sector privacy law, Bill C-11, which would enact the Consumer Privacy Protection Act (“CPPA”). If passed, the CPPA would specifically regulate automated decision-making systems. The tabling of the CPPA came on the heels of a recent report by the Privacy Commissioner of Canada with recommendations on regulating AI (you can read our commentary on the Privacy Commissioner’s report here).

Scope: What constitutes an automated decision system in Canada?

Both the DADM and CPPA share the same definition of an automated decision-making system: “any technology that assists or replaces the judgement of a human decision-maker using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning, and neural nets.” This definition is broad and includes a wide range of possible computer systems.

While the DADM and CPPA share a definition, the two differ in important ways in terms of scope. Specifically, the DADM includes a number of exemptions that limit its application:

1) the DADM only applies to systems that “provide external services.” Any system used for internal purposes, such as talent analytics, fraud detection or predictive auditing, therefore does not trigger any requirements under the Directive;

2) existing systems are effectively grandfathered in: the DADM excludes systems already adopted by the federal government, including automated decision systems “operating in test environment;” and

3) the DADM does not apply to national security systems, for example, algorithms that may be used by federal law enforcement agencies.

There are no such exemptions in the proposed CPPA. As drafted, the CPPA does not provide any limiting parameters on applicability. Rather, under the proposed CPPA, the use of an automated decision system “to assist or replace human judgment” would trigger a right to an explanation whenever the system is used “to make a prediction, recommendation or decision about the individual” [s. 63(3) of the CPPA]. For any automated decision system that “could have a significant impact” on an individual, the organization would be required to keep a general account of all such systems [s. 62(2)(c) of the CPPA].

Compare this, for example, with Article 22 of Europe’s General Data Protection Regulation (“GDPR”), which is widely cited as the leading regulation of automated decision systems. Article 22 applies only to decisions made “solely” by an automated system, and only where the decision “produces legal effects concerning him or her or similarly significantly affects him or her.” The CPPA therefore goes beyond the GDPR, capturing a much broader range of automated decision-making systems.

Obligations: What requirements are associated with the use of automated decision systems?

When regulating the use of automated decision-making systems for government departments, Canada’s DADM adopts a risk-based approach, consistent with proposed regulatory schemes being developed in the US and the EU. The DADM specifically requires that departments complete an AIA, defined in the DADM as “a framework to help institutions better understand and reduce the risks associated with Automated Decision-Making Systems and to provide the appropriate governance, oversight and reporting/auditing requirements that best match the type of application being designed.” (See Appendix A).

The CPPA, on the other hand, does not adopt a risk-based approach. The applicability of the provision is not scaled to risk level. In fact, whether the organization is deploying an automated call-routing chatbot or a biometric targeted-advertising platform, the CPPA proposes the same one-size-fits-all requirements for automated decision-making systems (sketched in the example following the list below):

• upon request, organizations must provide an explanation of the prediction, recommendation or decision; [s. 63(3) of the CPPA]

• upon request, organizations must provide an explanation of how the personal information that was used to make the prediction, recommendation or decision was obtained; [s. 63(3) of the CPPA]; and

• organizations must make available a general account of their use of any automated decision-making systems. [s. 62(2)(c) of the CPPA].
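To make these obligations more concrete, here is a minimal, illustrative sketch of how an organization might keep an inventory of its automated decision systems and draw on it both for the general account and for responses to explanation requests. The CPPA does not prescribe any format or tooling; every class, field and function name below is an assumption made purely for illustration.

```python
# Illustrative sketch only: the CPPA does not prescribe any record format or
# tooling. All names below (AutomatedDecisionSystem, general_account, etc.)
# are assumptions made for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class AutomatedDecisionSystem:
    """One entry in an organization's general account of ADS use [s. 62(2)(c)]."""
    name: str
    purpose: str                      # e.g. "routing customer support calls"
    personal_info_sources: List[str]  # how the personal information was obtained
    could_have_significant_impact: bool = False


def general_account(systems: List[AutomatedDecisionSystem]) -> str:
    """Produce a plain-language summary of the organization's use of ADS."""
    return "\n".join(
        f"- {s.name}: used for {s.purpose}; personal information obtained from "
        f"{', '.join(s.personal_info_sources)}."
        for s in systems
    )


def explain_decision(system: AutomatedDecisionSystem, decision_summary: str) -> str:
    """Assemble the two elements an explanation must cover on request under
    s. 63(3): the prediction/recommendation/decision itself, and how the
    personal information used to make it was obtained."""
    return (
        f"{decision_summary} This result was produced by '{system.name}' "
        f"({system.purpose}). The personal information used was obtained from: "
        f"{', '.join(system.personal_info_sources)}."
    )
```

The point of the sketch is simply that the s. 62 general account and the s. 63 explanation response can draw on the same underlying inventory of systems; the substance of a meaningful explanation would of course need to be far richer, as discussed below.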

The CPPA’s focus is clearly on algorithmic transparency and explainability, which also aligns with recommendations made by the Privacy Commissioner of Canada. However, the CPPA does not provide any specific guidance as to what constitutes an “explanation” or how an organization should go about discharging this obligation. This is somewhat surprising, since the DADM does give federal government departments some direction on explainability.

According to the AIA, systems at the lowest level of impact require only that “a meaningful explanation be provided for common decision results.” It further specifies that “this can include providing the explanation via a frequently asked question section on a website.” A system at the second impact level must provide a “meaningful explanation on request,” but only for “decisions that resulted in the denial of a benefit, a service, or other regulatory action.” The final and highest impact level requires that a “meaningful explanation be provided with any decision that resulted in the denial of a benefit, a service, or other regulatory action.”
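To illustrate how these scaled duties fit together, the following rough sketch maps the impact levels described above to the explanation each requires. The labels and wording paraphrase the Directive as summarized in this article rather than quoting its official text, and the function name is purely hypothetical.

```python
# Hypothetical mapping of AIA impact levels to explanation duties. The tiers
# below paraphrase the DADM requirements described in this article; confirm
# the exact obligations against the Directive and its appendices.
EXPLANATION_DUTIES = {
    "lowest": (
        "Meaningful explanation for common decision results; may be provided "
        "via a frequently-asked-questions section on a website."
    ),
    "second": (
        "Meaningful explanation on request, but only for decisions that "
        "resulted in the denial of a benefit, a service, or other regulatory "
        "action."
    ),
    "highest": (
        "Meaningful explanation provided with any decision that resulted in "
        "the denial of a benefit, a service, or other regulatory action."
    ),
}


def explanation_duty(impact_level: str) -> str:
    """Return the explanation obligation for a given AIA impact level."""
    return EXPLANATION_DUTIES[impact_level]
```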

While “meaningful explanation” is not defined in the DADM, the government’s approach to explainability is clearly grounded in requirements that scale with risk and focused on assessing adverse algorithmic impacts. We will be following whether the CPPA ultimately adopts or endorses a risk-based approach to automated decision-making systems similar to the DADM’s.

Getting ready: What does this mean for your business?

If the CPPA is passed, regulations or guidance from the Privacy Commissioner will be needed to clarify what constitutes an “explanation” of an automated decision system that satisfies the CPPA’s obligations. Recent guidance on explainable AI, including NIST’s principles of explainable AI (XAI), the ICO’s work with the Alan Turing Institute on explaining decisions made with AI, and IEEE’s paper on explainable artificial intelligence, among others, identifies some notable substantive features of a meaningful explanation:

• Technical explanation: How does the system work? What evidence or reasons support its outputs, and what data were used?

• Meaningful explanation: Can the explanation be delivered in an accessible and non-technical way? Does it take into consideration both computational and human factors?

• Fairness or impact explanation: Can the organization demonstrate that the system was designed with consideration of its potential social impacts on the individual or wider society?

• Robustness and accuracy explanation: Can the organization demonstrate that the system’s output correctly reflects its process for generating that output (i.e., accuracy, reliability, and robustness)?

Procedurally, in response to growing regulation, organizations will require a suite of governance policies and procedures to effectively demonstrate accountability for their automated decision systems. Such governance measures include:

1. developing (or augmenting) policies on algorithmic transparency and explainability;

2. preparing a general account of how the organization uses automated decision-making systems;

3. documenting criteria for a meaningful explanation in different applications and contexts;

4. developing guidelines for explainability according to a risk continuum, with justification for the risk classification, and actions taken to mitigate risk;

5. appointing an appropriate steward to provide a meaningful explanation when required (noting the s. 66 requirement for plain language); and

6. developing complementary AI governance policies and procedures to support the broader AI strategy.

Organizations should be proactive in developing (or augmenting existing) codes or schemes, particularly with regard to algorithmic risk assessment frameworks. By establishing these processes now, your business will be better placed to promote consistency in practice across the jurisdictions in which you operate, to build and maintain public trust, and to mitigate foreseeable risks associated with automated decision-making systems.

Conclusion

The CPPA is the next step in regulating AI. Organizations need to be ready for these changes to the law in Canada and, increasingly, around the world. As we move towards the adoption of the CPPA, it will be important for Canada to continue aligning industry and government requirements more closely.

— —

By Noel Corriveau with special thanks to Carole Piovesan for her contributions.

