Moral Ethics of Artificial Intelligence Decision-Making – Who Should Be Harmed and Who Is Held Responsible?

As autonomous vehicles begin their test runs and potential commercial debuts, new liability and ethical questions arise. Unlike other computer algorithms already available to the public, a fully automated car divorces decision-making from the driver, vesting all authority in the car and its software. Accidents may become less accidental and more preordained in their execution. Vehicular accidents caused by human negligence kill thousands of people yearly in Canada alone, and automated cars are expected to reduce that toll substantially; even so, the approach to delineating liability fundamentally changes. If the projected drop in vehicular mortality is to be believed, the question is no longer ‘if’ these technological advancements should proceed, but ‘how’ they should be integrated.

While theoretically superior in terms of public safety, autonomous cars and advanced algorithms raise a kind of mechanical morality: decisions between the life and death of pedestrians and occupants, or damage to property. Because these ethical ‘what-ifs’ are determined by humans, they translate into tangible scenarios; the hypotheticals and their implications are not a question of existence, but of approach. Noah J Goodall, a University of Virginia research scientist in transportation, notes that coded ethics will face “situations that the programmers often will not have considered [and] the public doesn’t expect superhuman wisdom but rather rational justification” (Goodall); yet what counts as ‘ethical’ is subjective. The relevant factors go beyond the death toll. Janet Fleetwood, of the Department of Community Health and Prevention at Drexel University’s Dornsife School of Public Health, notes how age, assigned societal value, injuries versus fatalities, future consequences for quality of life, personal risk, and the moral weight of killing as against a ‘passive’ death all contribute to the problem.

A legal framework for autonomous vehicles does not yet exist, but the laws and policies that govern automated cars must not derive solely from their creators. Doing so would place considerable responsibility on the programmers of automated vehicles to ensure that their control algorithms collectively produce actions that are legally and ethically acceptable to humans. The ethical decisions must be uniform, both to simplify liability and to establish an optimal process for future programming. Negligence, the current “governing liability for car accidents,” will expand into the field of programming; consequently, it is important to delineate what counts as negligent behaviour, as opposed to unavoidable harm, when accidents occur.

Ethics prescribed solely by manufacturers may result in an arms race to best serve the consumer’s interests. Marketing reveals itself in the ethical ambiguity of preferential treatment of the consumer: instead of being ethically neutral, an autonomous car may be marketed as showing strict deference to its owner, diluting ethics into a protectionist game. Conversely, programming an automated car to slavishly follow the law might also produce dangerous and unforeseen consequences. Extensive research and decision-making experiments conducted under a non-profit model may simplify the scope of the problem; freed from the question of how to market the product, discussions over autonomous cars can focus on the ethics themselves. Input from multiple stakeholders, including consumers, corporations, government institutions, and other organizations, may be necessary to construct the clear and stringent ethical guidelines that the integration of automated vehicles into society requires, and reaching a uniform conclusion will demand extensive effort. However, the control algorithms that determine the actions of automated vehicles will inevitably run into philosophical puzzles such as “the trolley problem”: a no-win hypothetical in which a bystander watching a runaway trolley can either do nothing and allow it to hit several people or pull a lever to divert it, killing only one person. In such circumstances there is simply no right answer, and this makes ethical guidelines difficult to construct.

One may propose that a utilitarian approach be adopted for the sake of simplicity, but questions would undoubtedly arise: would humanity be comfortable having a computer decide the fate of a life, and what if the machine’s philosophical understanding extends beyond dire incidents? Professor Azim Shariff of the University of California found that respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest people possible, regardless of whether they were passengers or people outside the car. This, however, raises a further question: would a customer buy a car that would sacrifice them and their family members for the benefit of the public?
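
As a concrete, and purely hypothetical, illustration of that casualty-minimizing rule, the sketch below scores each available manoeuvre by its total expected casualties and picks the minimum, weighting occupants and pedestrians identically. The Manoeuvre class, the scenario, and the numbers are invented for illustration and are not any manufacturer’s actual logic.

```python
# Minimal sketch of the casualty-minimizing ("utilitarian") rule described
# above. All names and casualty figures are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_casualties: int    # expected deaths inside the vehicle
    pedestrian_casualties: int  # expected deaths outside the vehicle

def utilitarian_choice(options):
    """Return the manoeuvre with the fewest total expected casualties,
    with no preference between occupants and pedestrians."""
    return min(options, key=lambda m: m.occupant_casualties + m.pedestrian_casualties)

options = [
    Manoeuvre("stay in lane", occupant_casualties=0, pedestrian_casualties=3),
    Manoeuvre("swerve into barrier", occupant_casualties=1, pedestrian_casualties=0),
]
print(utilitarian_choice(options).name)  # -> swerve into barrier
```

The example makes the consumer’s dilemma explicit: a rule this simple will sometimes select the manoeuvre that sacrifices the vehicle’s own occupant.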

To delineate the complexity of the situation, Fleetwood posed multiple hypothetical situations regarding moral preferences to participants. The study concluded that 76% of participants favored a utilitarian approach in which the maximum number of lives was saved, yet participants were reluctant to purchase a vehicle that would sacrifice its own passengers. Understandably, consumers maintain a bias towards their own lives; few would desire a product that chooses to sacrifice its owner.

Perhaps the ethics of automated vehicles should rest not on human-prescribed rules but on a machine’s own ability to learn. Machine learning gives systems the ability to learn and improve from experience automatically: such programs access data and use it to learn for themselves, without human intervention. Static programming arrives at pre-determined ethical conclusions, while machine learning generates its own decisions, distinct from purely human-determined ethics. While introducing an objective or impartial arbiter to complex situations would be desirable, questions arise about the accuracy of its judgments. Scholars accordingly propose modelling human behaviour, so that cars behave not better than us but exactly like us: impulsively rather than rationally. One proposal by Leon R. Sütfeld et al. states that “simple models based on one-dimensional value-of-life scales are suited to describe human ethical behaviour” in these circumstances, and as such would be preferable to pre-programmed decision-making criteria, which might ultimately prove too complex, insufficiently transparent, and difficult to predict.
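
To picture what such a one-dimensional value-of-life scale might look like in practice, consider the minimal sketch below. Everything in it is an assumption for illustration: the road-user categories, the simulated trial data, and the use of scikit-learn’s logistic regression as the fitting procedure. It is not Sütfeld et al.’s code or experiment, only the general idea that a single scalar per category of road user, fitted from recorded human choices, can predict which group a human driver would spare.

```python
# Illustrative sketch of a one-dimensional value-of-life model: each category
# of road user gets one scalar "value of life", fitted from recorded human
# choices in simulated dilemmas. Categories, data, and the choice of
# scikit-learn's LogisticRegression are hypothetical assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["adult", "child", "dog"]  # hypothetical labels

def featurize(left, right):
    """Difference in category counts between the left and right lanes."""
    count = lambda side: np.array([side.count(c) for c in CATEGORIES])
    return count(left) - count(right)

# Hypothetical trials: who stood in each lane, and whether the human driver
# spared the left lane (1) or the right lane (0).
trials = [
    (["child"], ["adult"], 1),   # spared the child, hit the adult
    (["adult"], ["dog"], 1),
    (["child"], ["dog"], 1),
    (["adult"], ["child"], 0),
    (["dog"], ["adult"], 0),
    (["dog"], ["child"], 0),
]
X = np.array([featurize(left, right) for left, right, _ in trials])
y = np.array([spared_left for _, _, spared_left in trials])

model = LogisticRegression().fit(X, y)

# Each coefficient acts as the learned relative value of life for a category:
# the higher the value, the more likely that group is spared.
for category, value in zip(CATEGORIES, model.coef_[0]):
    print(f"{category}: {value:+.2f}")

# Predict a new dilemma: two adults in the left lane versus one child on the right.
p_spare_left = model.predict_proba(featurize(["adult", "adult"], ["child"]).reshape(1, -1))[0, 1]
print(f"probability of sparing the two adults: {p_spare_left:.2f}")
```

On this view, the vehicle’s ‘ethics’ are simply whatever scale best reproduces observed human choices, which is what makes the approach comparatively transparent and predictable next to hand-coded rules.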

Using machine learning to mimic human decision-making also speaks to an essential aspect of social acceptance: the conformity of robots to human behaviour. Instead of being regulated through ambiguous ethical guidelines, automated vehicles would base their decisions on human-like thinking while still lowering the accident rate relative to human drivers. Other issues nevertheless remain, among them whether this technology is achievable and who should be held responsible, in the context of machine learning, for incidents caused by a vehicle that taught itself. Regardless of whether the guidelines for automated vehicles arise from policy regulators or from machine learning, society needs to accept that autonomous cars will debut on the market in the coming years, and work towards addressing the flood of concerns raised by wider applications, including life-or-death accidents.

Rui Shen is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.
