IP Osgoode

More research, regulation needed to handle artificial intelligence, academics say

This article was originally published by The Lawyer’s Daily (www.thelawyersdaily.ca), part of LexisNexis Canada Inc.

Artificial intelligence (AI) promises benefits across all sectors, but governance of the technology is lagging, forcing industry experts and academics to confront the legal and ethical issues it raises.

At Bracing for Impact: The Artificial Intelligence Challenge, a one-day conference hosted by IP Osgoode on Feb. 2, international speakers gathered to encourage the creation of a road map for the legal treatment of AI issues. The day touched on cybersecurity, intellectual property and privacy, but its point was clear: AI is here and society must catch up.

Ryan Calo, an associate professor at the University of Washington School of Law, joined the conference by teleconference from the United States and addressed policy issues that can arise when AI is used. He highlighted implications for justice and equity as AI may not benefit all levels of society.

“We’re experimenting with artificial intelligence on populations that are the most vulnerable and have the fewest resources to seek redress,” he said, explaining that in some instances AI may not work well for people based on their race.

“There’s anecdotal evidence that people of colour, their hands are not always recognized by automated dryers or automated faucets because those faucets have been calibrated to white hands. Similarly, there’s evidence that, for example, people of Taiwanese ancestry are having an issue where facial recognition software meant to optimize cameras is not taking pictures or is warning that the subject of the picture actually has their eyes closed or is squinting. Because, again, the database upon which the software’s been trained contained few Taiwanese faces,” he added.

Calo noted that algorithms can reinforce bias, which can be particularly concerning if an AI is being used to determine the length of a prison sentence or being used by police in enforcement. He said this also leads to challenging questions about when AI is safe and can be certified.

“For example, if you’re going to be operated on by a surgeon, she’s got to go to medical school and she’s got to pass her board. But under development today in a number of labs are autonomous surgical units, which are amazing on one level because they allow for standardized and presumably, one day, safer surgery, or even surgery in places where a surgeon isn’t available to do the surgery. But at the same time, how do you go about establishing that the surgeon is adequate given that they’re not going to go to medical school and they’re not going to pass a board?” he said, asking what tests and standards are going to be used to vet AI systems when they replace humans in their work.

With AI able to replace an increasing number of human roles in the workforce, Calo noted that governments need to be alive to the issues of taxation and displacement of labour. The impact on income tax could be huge, he said, if a large number of jobs are handed to robots in a short period of time.

Calo said governments need to accrue expertise in technology in order to understand the impacts AI is having and create policy that will keep people safe.

“It’s very unlikely that we will come up with the wisest laws possible for the infrastructure of AI in the absence of lawmakers and regulators and judges that have an adequate mental model of the technology,” he said, adding that investing in research is key.

Society must make sure “to invest in basic, interdisciplinary research,” he said, “to not only further the state of AI, but also the state of social impact research about AI.” He also pointed to thoughtful procurement: “one of the issues we’re having today is people are buying AI-enabled systems for use in places, like courts, without really understanding what they’re buying or understanding the consequences,” he explained.

Calo noted that regulation can go a long way in mediating issues surrounding AI, but that an unusual paradox is acting as a barrier: people want the change AI brings while insisting that society remain the same.

“Artificial intelligence is going to remake every aspect of society, but there shouldn’t be any change somehow to law and legal institutions. That strikes me as deeply implausible. Either artificial intelligence is all hype or we’re going to need laws to address it. I think regulation at some level is inevitable. I think it’s premature today to top down regulate everything, but I think we should be watching for opportunities where there’s a gap between what the law assumes and what is happening on the ground in practice,” he said.

Maura Grossman, a research professor at the David R. Cheriton School of Computer Science at the University of Waterloo, echoed Calo in her address to the conference, noting that society needs to examine who benefits from the results of AI.

“I think we have to move away from the zero sum game and find ways to make this a win-win proposition for everybody,” she said.

Grossman was working at a large law firm in New York when she was faced with the challenge of going through millions of documents with only five lawyers to do document review.

“It occurred to me that technology is the problem; technology needs to be the answer. So I started to go to computer science programs and started to learn about machine learning,” she said.

Grossman was at one of these conferences when she met Gordon Cormack, a researcher in information retrieval, and together they began a research study pitting lawyers against algorithms.

“Gordon and I took 896,000 documents that were part of the Enron dataset that was released during the course of that litigation and we took third year law students who had volunteered their time pro bono, and we took contract attorneys who volunteered their time, and then we took some of these supervised machine learning algorithms and we put them back to back on five requests for production. We said ‘find all of the documents that relate to these topics.’ And then we looked at who did better,” she explained.
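The article does not specify which supervised machine learning algorithms were tested, but the general workflow Grossman describes is to train a text classifier on a small set of human-labelled documents and then use it to score the rest for responsiveness. Below is a minimal, hypothetical sketch of that idea in Python, using TF-IDF features and logistic regression with made-up documents and labels; it is an illustration only, not Grossman and Cormack’s actual system.

```python
# Illustrative sketch of supervised document classification for review.
# NOT Grossman and Cormack's actual system; documents and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: documents a lawyer has already reviewed and labelled
# (1 = responsive to the request for production, 0 = not responsive).
seed_docs = [
    "forward the gas trading contracts to counsel by friday",
    "lunch order for the third-floor meeting",
    "revised terms for the energy swap agreement attached",
    "reminder: parking garage closes early today",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed documents to be ranked for human attention.
unreviewed = [
    "please countersign the attached trading agreement",
    "holiday party photos from last week",
]

# Convert text to TF-IDF features and fit a simple linear classifier.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(seed_docs)
clf = LogisticRegression()
clf.fit(X_train, seed_labels)

# Score the unreviewed documents; higher scores get reviewed first.
scores = clf.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In a comparison like the one Grossman describes, the classifier’s rankings would then be measured, by metrics such as recall and precision, against the decisions the human reviewers made on the same requests for production.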

The results fell staggeringly in favour of the algorithms, which made fewer mistakes and were highly efficient. Grossman thought these results would lead the legal profession to use algorithms across the board, but she didn’t take into account the large number of people who make money doing document review.

“We don’t forgive errors in algorithms and we don’t believe they can learn for some reason, even though we talked today about how one of the scarier things about algorithms is that they can learn,” she said, noting that the legal profession is slow to adopt technology it doesn’t trust.

“Obviously we don’t want people to overuse algorithms and over-trust them when they shouldn’t, but they should use them when it’s logical,” she added.

Grossman pointed to air travel as an example of people relying on AI even while doubting its safety.

“When people say to me, ‘I could never use one of these algorithms to do document review. It’s too risky,’ I’ll say, ‘Do you get on an airplane?’ And they say, ‘Of course I do.’ Are you aware that there are four minutes of that flight that are flown by a human? The rest is flown by an algorithm. And almost all the accidents that occur, occur in those four minutes or in the transition from the algorithm to the human,” she explained.

Grossman said that for people to trust AI they have to feel some sense of control, and that more peer-reviewed research is needed to keep the technology moving forward.

“We really need people to take the time to do the careful research. We need the funding of the research. I think that convinces people in the long run,” she said.


Amanda Jerome is a Digital Reporter for The Lawyer’s Daily
