Who is responsible for discriminatory AI systems in healthcare?


Mac Mok is a 3L JD candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Intensive Program.


Artificial intelligence (AI) systems have entered every facet of our daily lives. By leveraging massive data sets, AI systems analyze, predict, and influence our decision making. The healthcare field has been no exception: Humber River Hospital now houses “the Command Centre”, an AI system that tracks the flow of patients from intake to discharge and helps healthcare providers make more informed decisions to improve overall efficiency and deliver better care.

“Biased” AI systems, however, can produce discriminatory consequences. Examples include the Amazon recruiting algorithm that penalized resumes containing the word “women” and the COMPAS algorithm used in criminal sentencing, which was more likely to penalize African-American defendants. During AI system development, biases can appear in the training data when historical human biases contribute to the generation of that data, or when the data is imbalanced, that is, when some groups are overrepresented and others are underrepresented.
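To make the notion of imbalance concrete, the minimal sketch below (not drawn from any system discussed in this article) shows one simple way a development team might compare group representation in training data against the intended patient population. The records, group labels, and population shares are hypothetical placeholders.

```python
# Minimal sketch: flag demographic groups that are underrepresented in
# training data relative to the intended patient population.
# All data below is hypothetical and for illustration only.
from collections import Counter

# Hypothetical training records, each with a demographic attribute.
training_records = [
    {"patient_id": 1, "group": "A"},
    {"patient_id": 2, "group": "A"},
    {"patient_id": 3, "group": "A"},
    {"patient_id": 4, "group": "B"},
]

# Assumed share of each group in the intended patient population.
population_shares = {"A": 0.5, "B": 0.5}

def representation_gaps(records, expected_shares):
    """Compare each group's share of the training data to its expected share."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        gaps[group] = observed - expected  # negative => underrepresented
    return gaps

if __name__ == "__main__":
    for group, gap in representation_gaps(training_records, population_shares).items():
        status = "underrepresented" if gap < 0 else "over- or fairly represented"
        print(f"Group {group}: gap {gap:+.2f} ({status})")
```

A check like this only surfaces imbalance in whatever attributes the data happens to record; deciding which attributes matter, and what the reference population should be, is exactly where clinical and regulatory input becomes necessary.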

Racial, gender, socioeconomic, and linguistic biases can also affect healthcare AI systems. Recent regulatory efforts have begun addressing this issue. Health Canada, in a joint effort with the US Food and Drug Administration and the U.K.’s Medicines and Healthcare products Regulatory Agency, has identified ten guiding principles to inform the development of Good Machine Learning Practice (machine learning being the dominant method used to train AI systems). In particular, the third guiding principle, which requires that clinical study participants and data sets be representative of the intended patient population, underscores the importance of managing bias. Another regulatory effort is the Artificial Intelligence and Data Act (AIDA), tabled as part of Bill C-27 to regulate the use of AI systems in Canada for the purposes of trade and commerce, as well as harm prevention. Clause 8 of the AIDA puts the onus on the person(s) responsible for an AI system to “establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system”. Healthcare AI systems could also fall under such regulation.

The guidelines and regulations above place much of the responsibility for mitigating bias on the shoulders of AI developers. Logically, those responsible for developing an AI system would be in the best position to recognize and address biases in the system, as they have direct access to the training data and can correct important deficiencies. Arguably, however, those for whom the AI system was designed, such as the doctors seeking AI to solve a problem, and the end users of the system, such as patients, should also provide critical feedback to AI developers. Particularly in the healthcare field, where understanding training data and interpreting system outputs may require years of medical training, medical practitioners may play a key part in spotting biased inputs and outputs, allowing AI system deficiencies to be corrected. Thus, guidelines and regulations on preventing biased AI systems should consider what role doctors and patients have in developing responsible healthcare AI tools.
