
AI or Human Doctor?

This post is a response to the following question, initially posed to students in the Legal Values: Artificial Intelligence seminar:

“You are not feeling well and need to see a doctor. You have two options: (i) you can be treated by a human doctor, or (ii) you can be treated by an AI doctor, but not both. Which do you choose and why?”

I. Introduction

This question is so open-ended that some scope-narrowing discussion is needed. As such, I make three assumptions, all flowing from one overarching premise: that there exists a trade-off of capabilities between AI and humans, such that a weighing of their relative benefits is required to render a decision. From this premise follow the three assumptions, namely that AI doctors are technically superior, that humans possess some intangible positive factor, and that I have zero information regarding my affliction.

II. Setting the Stage with Necessary Assumptions

#1 – AI doctors have technical superiority

Current research suggests AI doctors are equal to or slightly better than their human counterparts in diagnostic accuracy. For the purposes of this discussion, I will assume that AI doctors possess, across the board, technical superiority. Otherwise, what are they providing us that humans do not?

#2 – Humans possess some intangible factor of positive, albeit uncertain, value

For the question, ‘AI or human?’ to be worth asking, there must be a trade-off. What advantage does the human possess? The ability to empathize and connect on a ‘human level’, something that is undoubtedly beneficial in a patient-doctor relationship.

A primary source of resistance to the use of AI in medical settings is “the belief that AI does not take into account one’s idiosyncratic characteristics and circumstances”. The more ‘unique’ one perceives herself to be, the stronger the preference will be for a human doctor. However, this reliance on ‘uniqueness’ is, at least in part, unjustified because there is a limit to how ‘unique’ medical conditions can be. To some extent (greater or lesser, depending on the specific ailment), sickness X afflicting person A is equivalent to sickness X in person B for diagnostic and treatment purposes. For example, a doctor does not need to know about a diabetic’s personality and emotions in order to create a treatment plan.

This trade-off then raises the question: what is the relative value of the ‘human factor’ versus the superior diagnostic ability of an AI in a doctor-patient setting? It depends on the patient’s specific needs.

#3 – I have zero information regarding my affliction (i.e., it can be benign or fatal)

The third assumption I must make involves the nature of the malady requiring medical intervention. In a scenario in which I have knowledge of my condition, my decision would be predicated on the following considerations. If there is a chance I could be dealing with a hard-to-detect, serious disease, the presence of the ‘human factor’ would be of limited value. In this case, what I need is the superior technical capabilities of the AI doctor.

On the other hand, I may be dealing with a condition that is mostly, or even exclusively, psychological in nature. I imagine that a human doctor would be desirable in dealing with a patient’s depression, anxiety, or any other condition in which the patient may derive some benefit from ‘being heard’. In particular, the need for human intervention (as opposed to a robot therapist) can be heightened in times of crisis, such as flashbacks and nightmares from PTSD. This ties in with the above discussion on ‘uniqueness’ as a source of resistance to AI doctors. Psychological disorders, I believe, possess a greater degree of uniqueness than ones of a primarily physiological nature, which supports the assertion that human doctors are better equipped to deal with them.

The only context provided by the question is that ‘you are not feeling well’. I can only assume, then, that the mystery condition is equally likely to be physical or psychological in nature, and equally likely to be serious or non-serious. In simple language, ‘it could be anything’.

III. Assessing the Trade-off

I don’t feel well, and that’s as specific as I can be

The above considerations are summarized as follows: AI doctors provide superior diagnostic service that is preferred in situations involving hard-to-detect, potentially life-threatening conditions. Human doctors, while not as technically savvy, possess an intangible ‘human factor’ that is of increased utility in a psychological/psychiatric setting.

With reference to my contextualization above of the words ‘you are not feeling well’, I must assume the worst-case scenario. For me, that consists of a hard-to-detect, life-threatening condition. This is based on an implicit premise that a life with psychological unrest is preferable to no life at all. Therefore, in a situation of zero information, I would opt for the AI doctor.

Additional considerations

Pragmatically speaking (discounting for a moment the ambiguity of the statement ‘you are not feeling well’), it would not be difficult to distinguish a psychological condition from a physiological one. Stomach pain is likely to be primarily physiological in origin, just as auditory hallucinations are probably psychological. Given the trade-offs between diagnostic accuracy and the ‘human factor’ discussed above, even a small amount of information about my condition would assist in making the optimal choice between an AI and a human doctor.

Another consideration I set aside until the end is the coincidence of physiological and psychological trauma. I deliberately separated the two in the above discussion to give due consideration to each factor, but it is true that physical maladies are often accompanied by psychological distress. However, does this change the above calculus? In my mind, it does not. The mental toll exacted on a cancer patient notwithstanding, her primary focus remains detecting and combating the cancer itself. Benefits accrued from the ‘human factor’ are secondary to the AI doctor’s superior ability to protect the patient from the actual disease.

In summary, I prefer an AI to a human in the vast majority of medical scenarios.

Daniel Joseph is a second-year JD student at Osgoode Hall Law School and a Fellow at the IP Osgoode Innovation Clinic.


One Response

  1. This is one of my favorite articles on innovation in AI. Interestingly, I came across research conducted by my colleague Andrea Bonezzi of New York University that explored patients’ receptivity to medical AI in a series of experiments. The results, reported in a paper forthcoming in the Journal of Consumer Research, showed a strong reluctance across procedures ranging from a skin cancer screening to pacemaker implant surgery. Further, they found that when health care was provided by AI rather than by a human care provider, patients were less likely to utilize the service and wanted to pay less for it. They also preferred having a human provider perform the service even if that meant a greater risk of an inaccurate diagnosis or a surgical complication.

