AI for Lawyers Conference Highlights: Exciting and challenging AI technology developments in litigation, immigration, and transactional law

The Law Commission of Ontario, in collaboration with Element AI and Osgoode Hall Law School, recently hosted an AI for Lawyers conference. The conference featured a panel of legal practitioners who shared how their practices interact with AI, the benefits and drawbacks they have seen so far, and the challenges and exciting opportunities ahead.

Augmenting the Legal Profession
While AI can produce comprehensive models that no human has come up with, and it can continue to learn from new input, it does have its limitations. A great example of one such limitation, offered by Richard Zuroff, Senior Manager of Industry Solutions at Element AI, is deciphering the meaning of seemingly simple sentences. In the sentence “The trophy does not fit in the suitcase because it is too big”, any person would naturally conclude that the trophy is the object being described as “too big”. But a simple switch to “The trophy does not fit in the suitcase because it is too small” triggers a different understanding: the human mind recognizes that the suitcase is the object being described as “too small”, because the mind can draw on experience and context. This simple switch still stumps AI, despite how advanced the technology has become in many areas. Zuroff attributes this to AI’s lack of real-world understanding: pattern matching, which is what AI does, does not equate to logical reasoning.
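
Zuroff’s trophy-and-suitcase sentences are a classic example of a “Winograd schema”, a test built precisely to expose this gap. A minimal Python sketch (hypothetical, purely for illustration) shows why surface-level pattern matching falls short: a naive heuristic that resolves “it” to the nearest preceding noun gives the same answer for both sentences, even though the correct referent flips with a one-word change.

```python
# Hypothetical sketch: a purely surface-level heuristic for resolving "it".
# It picks whichever candidate noun appears closest before the pronoun,
# ignoring what "big" or "small" implies about trophies and suitcases.

SENTENCES = [
    ("The trophy does not fit in the suitcase because it is too big", "trophy"),
    ("The trophy does not fit in the suitcase because it is too small", "suitcase"),
]
NOUNS = ["trophy", "suitcase"]

def resolve_by_proximity(sentence: str) -> str:
    """Resolve the pronoun to the candidate noun nearest before it."""
    pronoun_pos = sentence.index(" it ")
    return max(NOUNS, key=lambda noun: sentence.rindex(noun, 0, pronoun_pos))

for sentence, human_answer in SENTENCES:
    guess = resolve_by_proximity(sentence)
    print(f"pattern matching says: {guess:8s}  a human says: {human_answer}")
```

The heuristic answers “suitcase” both times; nothing in the surface pattern signals that swapping “big” for “small” should flip the referent, which is exactly the real-world understanding Zuroff says AI lacks.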

So AI is not perfect, and neither is the human mind. Zuroff believes those gaps may lead to a perfect partnership. He supports an augmentation approach to the legal profession, rather than an automation approach. AI can currently transcribe recorded interviews or testimony, summarize case histories, and even suggest the best sources to use for research. Lawyers can then apply this efficiently accumulated knowledge, using context and life experience to come up with prudent legal advice or to help craft an argument. With this augmentation approach, Zuroff believes large firms can become more efficient, while small firms can gain the capabilities of a much larger firm.

The quality of any given AI technology depends heavily on good design. Zuroff emphasized that, however it is designed, AI technology needs to be tailored to a specific industry and intended use.

As we move into an age where AI will almost certainly play a bigger role than it already does, Zuroff had a few suggestions. He said that investing in complements to the new technology is critical: firms need infrastructure to support new AI strategies, and a willingness to depart from the typical workflow. We also need governance and oversight of AI solutions. Without oversight, the risks of implementing AI include crashing major currencies and creating systemic risk, collusion between AI programs, and AI absorbing ethnic and racial biases.

Litigating AI
Criminal defence lawyer Jill Presser stressed the need for good regulation of AI, which could reduce the need for litigation in this area. To Presser, good regulation means that AI outputs should be reviewable for reliability, and that their often proprietary code must be subject to disclosure orders in court. She also flagged the need to determine the admissibility of expert evidence from those who understand the decision-making processes of AI.

Another issue that triggers the need for strict regulation is automation bias, as demonstrated in Ewert v Canada (2018), a case in which an actuarial tool was used to assess a Métis man’s psychopathy and risk of recidivism, and its results contributed to placing him in a maximum-security prison. The tool had been developed and tested on non-Indigenous populations, and no research confirmed the results would be valid when applied to Indigenous persons. Presser says that with adequate regulation, the burden of proving that these AI tools are well tested and reliable would fall on the party seeking to use them. Regardless of regulation, however, Presser believes AI tools should not be determinative, and should serve only as helpful aids in sentencing and other decisions. Read more about issues with bias and transparency in AI policing tools here, and more about the issues with governments using and regulating AI here.
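
The statistical core of that concern can be shown with a toy example. The sketch below is entirely synthetic and has nothing to do with the actual instrument at issue in Ewert; it simply shows how a cut-off score calibrated on one population can flag a very different share of another population whose score distribution differs for reasons unrelated to actual risk.

```python
# A toy, fully synthetic illustration (not the actual instrument from Ewert)
# of why validation on the target population matters: a cut-off calibrated
# on one group flags a very different share of a group whose scores run
# higher for reasons unrelated to actual risk.
import random

random.seed(0)

# Calibrate the "high risk" threshold so ~10% of the reference population
# (the population the tool was developed and tested on) exceeds it.
reference_scores = [random.gauss(mu=50, sigma=10) for _ in range(10_000)]
threshold = sorted(reference_scores)[int(0.9 * len(reference_scores))]

# A second population the tool was never validated on, whose raw scores run
# higher (e.g., because test items are culturally loaded).
other_scores = [random.gauss(mu=58, sigma=10) for _ in range(10_000)]

for name, scores in [("reference", reference_scores), ("other", other_scores)]:
    share_flagged = sum(s > threshold for s in scores) / len(scores)
    print(f"{name:>9} population: {share_flagged:.0%} flagged as high risk")
# The reference population comes out around 10% flagged; the other population
# around 30%, by the very same threshold.
```

Without validation research on the second population, there is no way to know whether the extra flags reflect genuine risk or merely an unvalidated instrument.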

Immigration and AI
Patrick McEvenue, Director of Digital Policy at Immigration, Refugees and Citizenship Canada (IRCC), explained that IRCC is currently determining how AI can be used in the application review process. Right now, the department is combing its back catalogue of applications to mark straightforward applications that were approved. Those are then fed to the AI program, which learns and makes decisions about new applications. Applications that the program does not approve are brought to an officer for review, and the officer’s ultimate decision feeds back into the system to help it learn. McEvenue stressed that the process is very human-centred, and the AI component follows the Treasury Board Directive on Automated Decision-Making to ensure transparency and procedural fairness. So far, IRCC is using AI only to approve applications, not to deny them. McEvenue anticipates that a lot of work will need to be done, and a lot of questions answered, before IRCC can start using AI to deny applications. You can read more about the concerns of using AI in immigration here.
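
A rough sketch of that triage loop, as McEvenue described it, might look like the following. The names and the stand-in confidence score are hypothetical; the point is the two design rules from the talk: the automated path can only approve, and every human decision flows back into the training data.

```python
# A hedged sketch of the human-in-the-loop triage flow described above.
# A real system would use a classifier trained on IRCC's back catalogue
# of approved applications; score() below is a hypothetical stand-in.
from dataclasses import dataclass, field

@dataclass
class TriageSystem:
    approve_threshold: float = 0.95   # automation acts only when very confident
    training_data: list = field(default_factory=list)

    def score(self, application: dict) -> float:
        """Stand-in for a model trained on past straightforward approvals."""
        return 0.99 if application.get("straightforward") else 0.40

    def process(self, application: dict, officer_review) -> str:
        if self.score(application) >= self.approve_threshold:
            return "approved"                    # the automated path can only approve
        decision = officer_review(application)   # a human decides everything else
        self.training_data.append((application, decision))  # feedback loop
        return decision

system = TriageSystem()
print(system.process({"straightforward": True}, officer_review=lambda app: "denied"))
print(system.process({"straightforward": False}, officer_review=lambda app: "approved"))
```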

AI and ‘Smart Contracts’
The final panelist was Amy ter Haar, a lawyer and doctoral candidate in law. Her area of interest is smart contracts, which may have multiple uses in the near future, including banks issuing loans or facilitating automatic payments, insurance companies processing claims, and even postal services enabling payment upon delivery. This video explains smart contracts pretty well, but if you don’t have time, here’s the basic rundown: a smart contract is a piece of code stored on a blockchain. In the video, the example is Kickstarter, the popular crowdfunding website. If a company chooses to find supporters through Kickstarter, both the supporters and the company must put their trust in Kickstarter as the intermediary who holds their money and either grants it to the company once the fundraising goal is reached, or refunds it to the supporters if the goal is not reached. In contrast, if a smart contract is used, the contract holds the supporters’ money in escrow until the deadline for reaching the fundraising goal arrives. At that point, it automatically allocates the funds to the appropriate party. The smart contract removes the third party (Kickstarter) from the process, and with it any trust concerns or delays; no single party ever has total control over the money. Smart contracts are also immutable: once made, they can never be changed. Some say this is a benefit, because contracts cannot be tampered with, while others are concerned about the inability to correct or edit a contract. See this article for one critic’s concerns.
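
To make the escrow mechanics concrete, here is a plain-Python sketch of the logic such a contract would enforce. Real smart contracts run as code on a blockchain (commonly written in languages such as Solidity) rather than as ordinary Python, and all names and numbers here are illustrative; the sketch only models the rules described above: funds stay locked until the deadline, then go to exactly one party, with no intermediary ever in control.

```python
# Illustrative Python model of the Kickstarter escrow example; not a real
# smart contract, just the rules one would enforce on-chain.
from datetime import datetime

class CrowdfundingEscrow:
    def __init__(self, goal: float, deadline: datetime):
        self.goal = goal
        self.deadline = deadline
        self.pledges: dict[str, float] = {}
        self.settled = False

    def pledge(self, supporter: str, amount: float, now: datetime) -> None:
        """Supporters can pledge only while the campaign is open."""
        if now >= self.deadline or self.settled:
            raise RuntimeError("campaign is closed")
        self.pledges[supporter] = self.pledges.get(supporter, 0.0) + amount

    def settle(self, now: datetime) -> dict[str, float]:
        """After the deadline, pay the company if the goal was met;
        otherwise refund every supporter. No other outcome is possible."""
        if now < self.deadline:
            raise RuntimeError("funds stay locked until the deadline")
        if self.settled:
            raise RuntimeError("the contract settles exactly once")
        self.settled = True
        total = sum(self.pledges.values())
        if total >= self.goal:
            return {"company": total}   # goal met: release funds to the company
        return dict(self.pledges)       # goal missed: refund every supporter

escrow = CrowdfundingEscrow(goal=1000.0, deadline=datetime(2030, 1, 1))
escrow.pledge("alice", 600.0, now=datetime(2029, 6, 1))
escrow.pledge("bob", 300.0, now=datetime(2029, 7, 1))
print(escrow.settle(now=datetime(2030, 1, 2)))  # goal missed: supporters refunded
```

Note that settle can run only once and only after the deadline; the immutability described above means that, once deployed, even these rules could not be edited.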

Another issue smart contracts may raise is that parties may enter into them under a pseudonym, which may pose problems regarding the legal capacity to enter into a contract. In addition, there is a question as to where and how such contracts could be enforced if neither party discloses their location. See this article for a more comprehensive understanding of the concerns.

Regulation, regulation, regulation
Amongst all the interesting information conveyed at the conference, the one sentiment repeated throughout the day was that there is a real need to regulate AI, and soon. Tough questions posed by Carole Piovesan, a co-founder and partner at INQ Data Law, underscore this need: Who is liable for the outcomes of AI decisions? What happens when there is a Tesla self-driving car accident, for example, or when AI software loses a person’s fortune through bad investing? While these tough questions need answers, they also create an exciting and interesting space in the legal profession. Those who take up the challenge will likely discover that regulation has to be industry-specific, and that it must take into account potential biases, tendencies towards collusion, privacy concerns, enforceability, and transparency, amongst other factors.

Written by Rachel Marcus, IPilogue Editor and JD Candidate at Osgoode Hall Law School.
