AI for Social Good: Becoming Aware of Different Interests

On February 2, 2018, IP Osgoode, along with its partners, the York Centre for Public Policy & Law and the Zvi Meitar Institute for Legal Implications of Emerging Technologies, hosted a conference entitled “Bracing for Impact – The Artificial Intelligence Challenge (A Road Map for AI Governance in Canada)”.

The conference brought together experts from a broad range of disciplines to discuss artificial intelligence (AI) innovation and the impact machine learning will have on our social, moral, and legal norms. Throughout the day, panelists tackled tough questions and critical issues about commercialization, cybersecurity, and the application of AI for social good. In this post, I will share a piece of that journey and focus on the final panel, “AI for Social Good.”

AI in the Public Sector & Biases

Our journey into AI started with Dr. Brandie M. Nonnecke’s presentation on the uses of AI in the public sector, the power of various AI applications to promote equity, and the biases we need to be aware of in designing algorithms. Dr. Nonnecke brought the audience’s attention to the rapid growth of metropolitan centers: according to estimates, 66% of the world’s population will live in cities by 2050, up from 54% in 2014. This influx will disrupt the status quo and dramatically change how our cities function, explained Dr. Nonnecke. In anticipation of this rapid growth, the public sector is already looking at cognitive technologies that could eventually revolutionize every facet of public services and government operations, including oversight, law enforcement, labour, and human rights.

Dr. Nonnecke acknowledged AI’s promise to promote efficiency, effectiveness, and equity. AI can be used, for example, to locate human trafficking hotspots, mitigate biases in job application processes, and detect discrimination in law enforcement. Although AI has the power to promote equity, this power is not an inherent one. AI is as prone to bias as the humans who design its algorithms. Given that algorithms and machine learning (ML) are increasingly used to make decisions, developers need to be aware of the human biases that can easily make their way into ML in the form of biased data and biased predictions.
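To make “biased data, biased predictions” concrete, here is a minimal, hypothetical sketch (my own illustration, not an example given at the conference) of how a model trained naively on skewed historical records reproduces that skew in its decisions:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# Group "B" was hired less often in the past for reasons unrelated to merit.
history = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

# A naive "model" that simply learns each group's historical hiring rate
# and recommends candidates whose group's rate exceeds 50%.
rates = defaultdict(list)
for group, hired in history:
    rates[group].append(hired)

def recommend(group):
    rate = sum(rates[group]) / len(rates[group])
    return rate > 0.5  # yesterday's bias becomes tomorrow's decision

for g in ("A", "B"):
    print(g, recommend(g))  # A True, B False: the bias is reproduced
```

Nothing in this code is overtly discriminatory, yet the historical pattern fully determines the outcome, which is precisely why the design stage matters.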

Dr. Nonnecke also stressed the importance of ensuring inclusiveness and equity at all stages of AI development. She cautioned that, if we want good design and unbiased outcomes, we need heterogeneous groups, not only in terms of technical ability but in every interdisciplinary team involved in the development of AI, from engineers to legal scholars.

Designing for the Average

Big Data inherits methods from quantitative research, where outliers (or “noise”) in the data are eliminated to find dominant patterns and generalizable findings. In effect, this method “normalizes” the data that is used to recognize speech, faces, and illnesses, or to predict loan and credit worthiness, academic potential, and future employment performance.
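As a minimal sketch of how this plays out in practice (a hypothetical example of my own, assuming a common two-standard-deviation filter), consider what a routine data-cleaning step does to a small group whose measurements differ from the majority:

```python
import statistics

# Imagine a measurement where most people cluster near 100,
# but a small group (e.g., people with a disability) sits near 140.
data = [98, 101, 99, 102, 100, 97, 103, 99, 101, 100, 140, 142]

mean = statistics.mean(data)
stdev = statistics.stdev(data)

# A routine cleaning step: drop points more than 2 standard deviations out.
cleaned = [x for x in data if abs(x - mean) <= 2 * stdev]

print(sorted(set(data) - set(cleaned)))  # [140, 142]: the minority is gone
# Any model trained on `cleaned` now knows nothing about that group.
```

The filter is mathematically unremarkable, yet the minority group vanishes from the training data entirely, which is exactly the marginalization the panelists went on to describe.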

As Prof. David Lepofsky pointed out, just like the designs of our buildings, AI designs can easily fail to consider people who do not fit the “norm.” When AI applications are offered to everyone but designed with the average person in mind, this “normalization” of data becomes a serious problem. Prof. Jutta Treviranus added that we cannot rely on predictive models, or be overconfident in statistical tests, when the minority can be discarded as “noise in the data.” Rather, we need to recognize diversity and rethink our methodologies with regard to the individuals at the margins. Although the audience was left with open questions about how to tackle AI design’s bias towards the “average person,” the panelists drew everyone’s attention to the scary fact that, as AI permeates our daily lives, serving the “average person” will lead to further marginalization and a widening disparity between those who fit the norm and those who, in one way or another, do not.

Autonomous Cars for the Unreasonable Person

Traffic, congestion, and parking: situations that make any driver not want to drive. But what if you could sit back and read a newspaper on your way to work in the comfort of your own car, and not have to deal with any of that? Prof. Guy Seidman, a proponent of autonomous cars, argued that we need to get rid of regular cars. Despite our (over)confidence in our own driving ability, he argued, it is difficult to find a “reasonable person” on the road. The effect of “deindividuation” allows drivers to feel anonymous and less accountable for their risky behaviours behind the wheel. Citing the high number of fatalities from car accidents every day around the world, the economic cost of keeping a car that we use for only 10% of the day, and the amount of space wasted on parking (if the US got rid of cars, for example, it would free up an area the size of Sri Lanka currently devoted to parking), Prof. Seidman argued that keeping regular cars makes little economic sense, and that if autonomous cars can alleviate even some of these burdens, we will see a huge economic improvement.

However, the promise of autonomous cars is tempered by caution about some of the technology’s current flaws. For example, Prof. Treviranus explained that in simulations involving autonomous cars, a pedestrian propelling herself backwards because of her disability was hit by the autonomous car because the technology failed to recognize her “out of the norm” movement.

Overcoming Algorithm Aversion

While some of the earlier panelists voiced concern about over-reliance on algorithms, Prof. Maura R. Grossman argued that the real problem is under-reliance. One of our fallacies, she noted, is our tendency to remember the one error, dwell on the “bad” about algorithms, and hold algorithms to a much higher standard than we hold humans. She worried that we will not reap the tremendous benefits of AI innovation because it is hard to get people to rely on algorithms, even though one of AI’s key attributes is its ability to learn. Given that we entrust our lives to lawyers, doctors, and pilots, how can we justify our skepticism towards algorithms that can be more accurate than humans? If there is even a chance to reduce hefty legal costs and improve access to justice, why are we not relying on algorithms more often in the legal system? Prof. Grossman argued that in low-risk situations where an algorithm is the better and more logical alternative, we should be using it.

So how do we alleviate this aversion to algorithms? Research shows that to get people over this hump, we may need to sacrifice some of an algorithm’s efficacy and give people back some level of control. It is also critically important to have peer-reviewed research and scholarship on algorithms in order to give them credibility in the long run. In conclusion, Prof. Grossman suggested that we need to look at the psychological, social, and economic incentives, move away from the zero-sum game, and find ways to make AI a win-win proposition for everyone in order to reap its benefits.

After the closing remarks of the conference were delivered, attendees and panelists engaged in further discussions at a cocktail reception. By the end of the day-long conference, I believe we were all in agreement that algorithms make mistakes, just like humans. More importantly, the conference was a call for our nation to invest in AI research, uncover the key elements to sparking the next AI innovation wave, and better understand the impact of human cognitive bias on AI.

Ekin Ober is an IPilogue Editor and a JD/MBA candidate at Osgoode Hall Law School and the Schulich School of Business.