
Synapses & Silicon: The Search for the Ideal Adjudicator

A competent adjudicator, according to an Ontario Public Service job posting, will have familiarity with relevant legal concepts, solid analytical skills, an aptitude for impartial adjudication, and well-developed communication and interpersonal skills. Would an artificial intelligence (AI) program make a good candidate, or at least a capable assistant? This is the question I explore in my journal article “AI-Supported Adjudicators: Should Artificial Intelligence Have a Role in Tribunal Adjudication?”[1]

Artificial intelligence, although lacking a universal definition, can be understood as a set of related technologies running on algorithms that replace or augment human cognition. With the growing capacity of AI, we find ourselves in Klaus Schwab’s “fourth industrial revolution”, where human brains remain important but are not the only source of intelligence. My article examines the risks and benefits of giving AI programs a role within rights-determining administrative bodies (i.e., tribunals, also called commissions or boards). Specifically, I propose statutory authorization for what I am calling “AI-Supported Adjudication” (ASA), an example of the “centaur model”, in which the aptitudes of AI and humans are combined. My basic contention is that while neither humans nor AIs are perfect, pairing them is the most desirable future model for tribunal adjudication. The two other options are (1) maintaining the status quo, and (2) full AI automation. With new technologies come novel risks, but also new affordances. The growing capabilities of AI should inspire us to think beyond the status quo and consider alternative models of adjudication and decision-making. At the same time, the importance of uniquely human qualities and the hazards of unsupervised AI lead me to reject the radical notion of full-fledged “adjudication by algorithm”.

AI tools could help ensure individual adjudicators consider all relevant materials. They would aid with research, for example by flagging pertinent sources, highlighting specific excerpts, and assigning relevance ratings. On an institutional level, they could track and evaluate decision-making patterns across a tribunal. Tribunal AI is hypothetical, but a realistic prospect given that AI has already been deployed in similar contexts, such as criminal courts in the U.S. and government agencies of Commonwealth allies. These experiments in AI-assisted justice and governance have drawn controversy (see the Harvard Law Review’s piece on State v Loomis and The Guardian’s coverage of illegal debt enforcement in Australia). Is it possible to leverage AI’s benefits while avoiding the missteps made elsewhere? The Canadian government thinks so, as evidenced by the Treasury Board’s “invitation to bid” from May 2018, which solicits private sector proposals for AI solutions to be deployed across several government entities, including the Immigration Ministry and the Department of Justice (for a human rights report on using AI in Canadian immigration, see “Bots at the Gate”).

While in theory ASA would be greater than the sum of its parts, there are challenges to address, including issues of judicial review and lawful delegation that I explore in my article.

There is also a risk of overreliance when humans make decisions with AI tools. Even if the human is “in charge”, psychological tendencies such as “automation bias” and “anchoring” might reverse this relationship in practice. In a tribunal context, this could lead to “adjudication by algorithm”, illegal fettering of the adjudicator’s discretion, and de-skilling of adjudicators.

Procedural fairness (what Americans call “due process”) is another issue. Decisions of a tribunal must be reasonably transparent and unbiased. Some say AI is not sufficiently transparent, as we cannot know how it reaches its conclusions (the so-called “black box” problem). Others believe that through technical solutions (i.e., “explainer tech”) we will eventually “open up the hood” of AI to see the basis of its decisions. It is worth noting that the problem of opacity also exists with human thinking. Administrative law cautions, for example, that a requirement to provide written “reasons” does not mean those reasons will be the true antecedents of the judgment.

A second aspect of procedural fairness, an unbiased hearing, may pose greater difficulties. There are many ways bias can become encoded in algorithms, and once there, it may be difficult to detect or trace to its source. However, again we must evaluate the human comparator. Human thinking is subject to numerous biases and shortcomings. In the context of adjudication, it has been said that “[e]very judge…unavoidably has many idiosyncratic leanings of the mind…which may interfere with his fairness at trial.” Contemporary research shows how extra-legal factors[2] like whether an adjudicator has eaten lunch can influence the harshness or leniency of a decision.

AI is not perfect, but neither (if we’re being honest) are human adjudicators. As Shakespeare poetically notes, we are all “in our own natures frail, and capable of our flesh; few are angels.” I suspect one of the greatest challenges of our era will be wading through the #legaltech hype and the doomsday prophesying, and rationally taking honest stock of AI’s strengths and weaknesses, as well as our own. As noted by Professor Alarie of U of T Law, advances in computational power and sophistication mean that “the set of tasks and activities in which humans are strictly superior to computers is becoming vanishingly small”. This does not mean that computers or AI are superior, only that humanity would be remiss not to utilize the power it has in these tools. The search for the ideal adjudicator, spurred on by AI’s advances, will likely settle on an approach that engages with humanness, in all our virtues and shortcomings. To err is human, so why not augment with AI?


Jesse Beatson is a JD Candidate at Osgoode Hall Law School. 

[1] Jesse Beatson, “AI-Supported Adjudicators: Should Artificial Intelligence Have a Role in Tribunal Adjudication?” (2018) 31:3 Canadian Journal of Administrative Law & Practice 307.

[2] Craig E. Jones, “The Troubling New Science of Legal Persuasion: Heuristics and Biases in Judicial Decision-Making” (2013) 41 Advocates’ Quarterly 49.
