
Recent Privacy Regulations Concerning Automated Decision-Making Systems: Implications for AI Commercialization


Luna Li is an IP Innovation Clinic Fellow and a 3L JD Candidate at Osgoode Hall Law School.


Prior to the Covid-19 pandemic, academic discussions suggested that artificial intelligence (AI) would drive the fourth industrial revolution, bringing tangible economic benefits alongside potential privacy concerns. With remote work becoming the new norm, there is a growing reliance on digital technologies that lack sufficient transparency and legal oversight. Today, privacy concerns extend beyond the protection of personal information.

With more businesses developing and using AI-based automated decision-making (ADM) systems, algorithmic discrimination (in the workplace, on social media, and in public services) has drawn scrutiny from both the public and governments. For instance, ADM systems trained on historically discriminatory data may produce biased outcomes for marginalized groups. Algorithms can also adversely affect vulnerable populations through proxies, even when protected characteristics are excluded from the data: an AI system could treat certain career breaks as a proxy for identifying women, or use postcodes and names to identify visible minorities, as the illustrative sketch below suggests.
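For technically minded readers, the short Python sketch below illustrates, with entirely synthetic data, how such proxy bias can arise: the protected attribute (gender) is withheld from the model, yet a correlated feature (a hypothetical career_break flag) lets the model reproduce the historical disparity. The feature names and numbers are illustrative assumptions, not drawn from any regulation or real dataset.

```python
# Hypothetical sketch of proxy discrimination: the protected attribute
# (gender) is never given to the model, yet a correlated feature
# (career_break) lets the model reproduce the historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: gender is the protected attribute (1 = woman).
gender = rng.integers(0, 2, n)

# Proxy feature: in this synthetic data, career breaks are far more
# common for women, so the feature encodes gender indirectly.
career_break = rng.binomial(1, np.where(gender == 1, 0.6, 0.1))
experience = rng.normal(5, 2, n)

# Historically biased hiring labels: women were hired less often.
hired = rng.binomial(1, np.clip(0.5 + 0.05 * experience - 0.3 * gender, 0, 1))

# Train WITHOUT the protected attribute -- only the proxy and experience.
X = np.column_stack([career_break, experience])
model = LogisticRegression().fit(X, hired)

# Predicted hiring rates still differ by gender, because career_break
# acts as a proxy for it.
pred = model.predict(X)
print("Predicted hiring rate (men):  ", pred[gender == 0].mean())
print("Predicted hiring rate (women):", pred[gender == 1].mean())
```

Running the sketch prints noticeably different predicted hiring rates for men and women, even though gender was never an input feature.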

This article summarizes recent global privacy regulations focusing on algorithmic management. AI developers, licensors, and licensees may benefit from reviewing these regulations, summarized below by jurisdiction, and the business implications that follow.

European Union

Proposal for Artificial Intelligence Act (2021)

Prohibitions on:

  • AI systems that are likely to cause “physical or psychological harm” by deploying “subliminal techniques” or by exploiting groups that are vulnerable due to “age, physical or mental disability”.
  • AI-based social scoring for general purposes by public authorities.

Proposal for a Directive on Improving Working Conditions in Platform Work (2021)

  • Enables platform workers to access relevant information about algorithmic decisions.
  • Ensures human monitoring of those decisions.
  • Gives workers the right to contest automated decisions.

The U.S.

California: Assembly Bill No. 701 (Warehouse Quota Law, 2022)

Limitations on the use of quotas:

  • Quotas cannot prevent compliance with meal and rest periods or with occupational health and safety standards.

Canada

Ontario: Bill 88 (Working for Workers Act, 2022) (first reading on February 28, 2022)

Federal: Bill C-11 (Digital Charter Implementation Act, 2020) (second reading on April 19, 2021)

Organizations must provide:

  • a general account of ADM applications (including automated decisions, predictions, and recommendations) that could significantly impact individuals; and
  • an explanation of how the personal information used was obtained.

Business Implications

Ethical AI Assurance

AI providers and developers increasingly face requests to represent or warrant that their ADM systems were developed ethically. Companies deploying AI systems may, in turn, need to maintain and adhere to ethical AI policies and data-control procedures that comply with the applicable law of each relevant jurisdiction.

Risk Allocation

Algorithmic bias may arise from discriminatory historical data and from proxies learned through machine learning. It is therefore crucial to allocate risk clearly, so that the right party is responsible for monitoring and addressing algorithmic issues as they appear. Because the Canadian legal regime concerning AI remains largely untested, parties may also wish to set out their expectations on AI ownership and data-use procedures in order to engage the relevant protections of contract law.
