Case Study II

Please be sure you've reviewed the instructions and understand how to exchange with your partner and submit a complete case study. According to our schedule, Case Study II is due Thursday, April 1st, by midnight at the latest. Please be reliable, not for my sake, but for the sake of your partner. Well, and honestly, for your own sake as well.

Notice that everyone applied Classical Utilitarianism (= Eudaimonic Act Utilitarianism) in Case Study I, so no one may use that theory on this case. Further, we found decisive arguments in Case Study I that Cultural Ethical Relativism, Simple Ethical Subjectivism, Divine Command Theory, and Natural Law Theory cannot yield defensible analyses, so no one may use any of those theories on the case below, either.

That still leaves a rich collection of theories for application to the case below: Any of the rule-utilitarian theories, most of the act-utilitarian theories, and Kantian Ethical Theory.

For this case study, get together with your partner and decide which of you will apply Kantian Ethical Theory and which of you will select and apply one of the available utilitarian theories to answer the question following the case below. In thinking about which theory to apply, note that Kantian Ethical Theory and whichever utilitarian theory you choose may entail the same conclusion regarding the case, or they may not. What they clearly will not do is provide the same grounds (reasons) for the conclusion drawn.

The Death Algorithm*

In May 2018, Google's Medical Brain team published a paper in Nature announcing a new health care initiative: an artificial intelligence algorithm designed to predict patient outcomes, duration of hospitalization, and even the likelihood of death during hospitalization. A great deal of attention has been paid to the mortality predictions, the "death algorithm," which has been tested in two instances. In the first case, at Hospital A, the algorithm was 95 percent accurate in predicting death; in the second, at Hospital B, it was 93 percent accurate. In both cases, the AI algorithm performed significantly better than traditional models and techniques for predicting patient outcomes.

Google researchers believe the algorithm will reduce health care costs, increase patient-physician face time, and reduce the burden of current data systems, which rely heavily on cumbersome and labor-intensive data mining techniques. The AI algorithm is based on very large amounts of anonymized patient data (one previous algorithm used forty-six billion pieces of data), the use of which patients and hospitals had consented to and approved. Proper safeguards for data security, privacy, and various other HIPAA concerns are a major issue, especially in light of past data privacy scandals at companies such as Facebook.

This technology may also be exciting to health insurance companies. Insurers love data because it allows them to better estimate the cost of covering an individual. The AI algorithm is the first of its kind due to the large amount of data it uses, and it promises to become one of the most effective tools for predicting health care costs and outcomes.

There are, however, many unknowns. How will this new AI affect health insurance and patient treatment? Will health insurance companies have access to the data? How will accessibility and affordability of health insurance change if there is reason to believe an individual has increased risk factors for disease progression, hospitalization, or death? Will physicians still use due diligence for medical diagnoses or will they simply rely on the AI outcomes? What will happen when the algorithm and a physician disagree?

*From the 2019 National Ethics Bowl

Suppose the administrator of a chronically cash-strapped, under-resourced rural hospital wants to adopt the AI algorithm to help hospital administrators and medical staff make better decisions about (scarce) resource allocations. The AI would analyze patient records and diagnoses to 'grade' patients according to their likely mortality outcomes, as follows:

Grade   AI Prognosis
A       Excellent. Patient very likely (90% or better chance) to thrive with minimal medical intervention.
B       Good. Patient likely (80% or better chance) to thrive given short-term, industry-standard care.
C       Acceptable. Patient has a 70% chance of surviving given sufficient in-patient and out-patient resources.
D       Poor. Patient has at most a 60% chance of surviving and is likely to require long-term medical intervention and readmission.
F       Very poor. Patient has less than a 50% chance of surviving regardless of medical intervention.
F-      Extremely poor. Patient has less than a 40% chance of surviving and constitutes an exceptional and ongoing drain on hospital resources with almost no likelihood of success.


According to the administrator's plan, treatment plans for patients the AI algorithm grades 'A', 'B', or 'C' are largely left to the discretion of medical staff (physicians, nurses, physical therapists, counselors, etc.) and insurance companies.

However, the administrator's plan requires that treatment plans for patients the AI has graded 'D', 'F', or 'F-' be automatically referred for review by a panel composed of hospital administration staff, insurance representatives, and a physician-representative. The panel may modify treatment plans, or even veto them entirely and reject the patient as unsalvageable. The patient's AI grade would be made available to the panel to help guide its deliberations.

Question: Is it morally right for the administrator to implement this plan?