
Bracing for Impact 2022: AI for the Future of Health – Panel Discussion


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


Photo by Buda Photography

On November 9, IP Osgoode, Reichman University, and Microsoft hosted the first in-person Bracing for Impact Conference since 2019. The conference focused on “The Future of AI for Society.” While AI is full of exciting possibilities, real-world application and integration are relatively nascent. Implementing AI technology in society requires complex interdisciplinary engagement among engineers, social scientists, application area experts, policymakers, users, and impacted communities. At the conference, an esteemed lineup of speakers across disciplines discussed the forms that interdisciplinary collaboration could take and how AI can help shape a more just, equitable, healthy, and sustainable future.

The conference’s third panel discussion examined AI’s anticipated impact on healthcare. In his first appearance in his new role, Mr. Konstantinos Georgaras (CEO, Commissioner of Patents & Registrar of Trademarks, Canadian Intellectual Property Office) chaired the session and opened by asking the panelists two questions:

  1. What has been the biggest success in healthcare AI over the past few years?
  2. What is the biggest barrier we still need to overcome if we are to see real, successful progress in the space?

Dr. Devin Singh – Staff Physician & Lead, Clinical AI & Machine Learning, Pediatric Emergency Medicine, Hospital for Sick Children

Dr. Singh identified an obvious use for AI – leveraging predictive analytics to order common patient tests during waiting times, driving efficiency. He is optimistic about the future of this type of innovation as electronic health records become widespread, creating the data that fuels AI. However, he pointed out a few issues surrounding AI. First, the regulatory landscape is confusing: in his experience implementing adaptive models, Health Canada does not yet regulate them. Second, equitability requires tackling bias in the data that feeds these models; in particular, there is a lack of data to ensure that a model performs equitably regardless of ethnicity, race, religion, or culture. Finally, the advent of AI-powered innovations may widen healthcare gaps caused by inconsistent internet access across Canada.

Dr. Aviv Gaon – Senior Lecturer, Harry Radzyner Law School, Reichman University

Dr. Gaon focused his discussion on the legal difficulties of accessing high-quality data, primarily concerning copyright and privacy. The copyright issue stems from the premise that we all own our data; Dr. Gaon therefore proposed that data be governed through a different mechanism, arguing that lowering copyright standards is key to AI development. He also argued that privacy, like any other right, has a price which must be acknowledged, and he advocated for these costs to be considered in policy decisions, as they may stifle AI development.

Ms. Mary Jane Dykeman – Co-Founder & Managing Partner, INQ Law

Ms. Dykeman’s discussion centred on two main questions: what data do we have, and what can be done to ensure that data privacy is not a barrier to AI?

Ms. Dykeman wondered whether we really know what data we hold and what we can do to clean up that data and make it useful. She encouraged AI developers to consider the specific use case, as unfocused data problems are difficult to manage. She also advocated for a legal framework that acts as guardrails for AI, keeping it on track rather than hindering it.

Ms. Naseem Bawa – Counsel, Norton Rose Fulbright LLP

Ms. Bawa drew on her experience developing AI for the mental health space before her current role as counsel at Norton Rose Fulbright. She is primarily concerned with bias, both in the implementation of AI models and in the data that feeds them. Using data from wearable devices as an example, she asked pertinent questions: does the data collected from a specific wearable device carry bias? Now that wearable devices are widespread, can we rely on the data? Ms. Bawa concluded by advocating for multidisciplinary teams, coupled with appropriate regulation and training on bias, in the development of AI.

Ms. Laura Pio – Azure Data & AI Solution Specialist, Public Sector & Healthcare, Microsoft

Ms. Pio raised an important concern regarding data standardization: our healthcare system is highly fragmented by jurisdiction and by specialty (e.g., CAMH, hospitals, and long-term care operate as separate systems). However, she is optimistic that these institutions will collaborate to improve healthcare through AI. Based on her experience implementing AI solutions, Ms. Pio also asked some important and necessary questions regarding AI in healthcare:

  • Is it ethical to collect health data?
  • Do you need consent for the use of lifesaving AI?
  • Who can use the data, and how can they use it?
  • Is de-identifying health records sufficient?
  • Should researchers be able to use the data?

Overall, the panel demonstrated a need for regulatory clarity and for improvements in data quality and availability. The panelists echoed concerns about the effect of data biases on equitability and about issues surrounding privacy. However, they shared optimism about realizing AI’s potential to radically improve healthcare in the near future.
