Bracing for Impact 2022: AI for the Future of Health – Q&A


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


Photo by Buda Photography

On November 9, IP Osgoode, Reichman University and Microsoft hosted the first in-person Bracing for Impact Conference since 2019. The conference focused on “The Future of AI for Society.” While AI is full of exciting possibilities, real-world application and integration are relatively nascent. Implementing AI technology in society requires complex interdisciplinary engagement between engineers, social scientists, application area experts, policymakers, users, and impacted communities. At the conference, an esteemed lineup of speakers across disciplines discussed the forms that interdisciplinary collaboration could take and how AI can help shape a more just, equitable, healthy, and sustainable future.

Following the panel discussion, the AI for the Future of Health session closed with a spirited question-and-answer period. Attendees and panelists discussed several interesting ideas about advancing AI in healthcare.

Government’s role in providing access to high-quality data

Dr. Gaon argued that government involvement is key to creating the infrastructure necessary to facilitate data access, particularly in convening the groups that will implement solutions. Ms. Bawa responded with concern that state-managed data collection would introduce bias, since the segment of the population that interacts with government poorly represents the population as a whole. Ms. Dykeman added that building a robust regulatory framework may be too slow, and that interim guidance is needed so that AI development is not impeded.

Synthetic data as a workaround for privacy issues

As a potential means for government to provide easy access to data, Dr. Gaon proposed using synthetic data to sidestep privacy problems, citing a government-led project in Israel. Dr. Singh pointed out that synthetic data is typically used where real data is scarce, such as for rare diseases. He argued that relying on synthetic data when real data is available is a crutch that avoids solving the problems discussed throughout the panel.

Will AI make healthcare less expensive?

Dr. Singh expressed optimism that AI can simultaneously reduce costs, save time, and improve care. Because software is cheap relative to human labor, AI could deliver all three improvements at once. He cautioned, however, that costs still arise from developing and maintaining models and, speaking from his experience allocating salaried time to test AI solutions at SickKids, noted that those costs are difficult to account for.

Handling IP to appropriately incentivize collaboration

Dr. Singh, wearing both his clinician and developer hats, expressed concern that IP in patient data could be transferred inappropriately. He identified a viable solution to this problem: ensure that hospitals own the IP while the developer owns only the AI delivery mechanism. This would specifically prevent the IP from being exported to third parties through investors. Ms. Pio, a platform advisor from Microsoft, endorsed this solution because ownership of the IP also comes with difficult questions about transparency, bias, and use. She also reminded the audience that many of these AI solutions could be used by a multitude of institutions, so it is prudent to keep the data from becoming part of the product being sold.

Transition from research to clinic

Dr. Singh pointed out three issues with translating work from Toronto research hospitals to smaller local hospitals: how to access the data without violating privacy, how to fund the work, and how to regulate it. Ms. Dykeman expressed concern about how much of the development of AI for health takes place under a research designation, and the challenges that may introduce down the road.

From the audience, Prof. David Vaver raised concerns about IP ownership. He proposed a model in which patients license their data to the hospital rather than assigning ownership to the hospital directly. Dr. Singh acknowledged that this model is philosophically correct but pointed out the difficulties of implementation: besides technical limitations, he also worried about the bias introduced when patients remove their data from the pool.

Conclusion

The panelists shared optimism about the future of AI in healthcare, and Ms. Pio indicated that many institutions share that optimism as well. Many questions still need answering and frameworks still need building, both in terms of appropriate regulation and the necessary multidisciplinary culture, but the process has started and will only continue forward.
