Aaron Dishy is an IPilogue Writer and a 3L JD Candidate at Osgoode Hall Law School.
The proposed Artificial Intelligence and Data Act (AIDA) would introduce greater regulation of the use and development of artificial intelligence (AI) in Canada’s private sector. On June 15th, 2022, the Minister of Innovation, Science and Industry, François-Philippe Champagne, introduced Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 reiterates much of Bill C-11, tabled in 2020, reintroducing a modified Consumer Privacy Protection Act (CPPA) and Personal Information and Data Protection Tribunal Act (PIDPTA). However, Bill C-27 also introduces newly proposed legislation, AIDA, which, if enacted, would make long-advocated changes to Canada’s AI regulatory landscape.
AIDA would create new assessment and risk-mitigation tools governing the use and transparency of high-impact AI systems. It would establish persons responsible for monitoring AI systems, such as the Artificial Intelligence and Data Commissioner, whose role would be to assist the Minister in the administration and enforcement of AIDA. Monetary penalties for contraventions of AIDA are also set out to build trust and deter reckless and fraudulent uses of AI. In this way, Bill C-27 and AIDA would direct Canada towards harmonization with international regulatory frameworks, such as that of the EU.
That said, AIDA would be more limited in scope than its EU counterpart. For example, unlike the EU legislation, AIDA would not apply to both the public and private sectors, and all federal government institutions would be exempt.[1] Further, the EU legislation sets out specific prohibited AI practices, alongside criteria for determining the degree of risk presented by any AI system. AIDA establishes no specific prohibited AI practices and distinguishes only between high-impact AI systems and all other systems; these complex and salient matters are left to forthcoming regulations.
Beyond its limited scope, AIDA may be uncertain in its delineation of provincial and federal responsibilities. For example, AIDA’s definition of “regulated activity” would capture many elements of AI development and use, including “designing, developing or making available for use an artificial intelligence system or managing its operations.”[2] This language indicates that the legislation is enacted pursuant to Parliament’s trade and commerce power under section 91(2) of the Constitution Act, 1867. However, the federal government may also intend for provinces to legislate on intraprovincial uses of AI, notwithstanding the rarity of circumstances under which such AI systems would be developed.
Lastly, attention must be paid to the breadth of persons AIDA considers “responsible” for an AI system in the course of trade.[3] The Act would subject designers, developers, and managers of AI systems to its administrative and operational requirements. If those parties are expected to monitor or audit consumers’ deployment of AI systems, risk and mitigation assessments must be made from both the developer’s and the user’s perspectives. Additional regulations may be required to give those perspectives full consideration.
AIDA remains proposed legislation and may be modified prior to implementation. However, it reflects a much broader movement by international legal bodies to regulate the development and use of AI. Businesses should prepare for greater AI regulation in Canada. Thankfully, informative and responsive policy guidance on AI systems is also being developed, such as a recent publication by the Law Commission of Ontario. If correctly applied, AIDA should empower more Canadians to engage with trustworthy and transparent AI systems.
[1] This exemption may be extended by regulation to provincial departments or agencies, as set out in s.3 of AIDA.
[2] See s.5(1) of AIDA.
[3] Ibid at s.5(2).