Case Study I

Please be sure you've reviewed the instructions and understand how to exchange work with your partner and submit a complete case study. According to our schedule, Case Study I is due by midnight on Thursday, October 14th. Please be reliable, not just for my sake, but for the sake of your partner--and, honestly, for your own sake as well.

Consider the following puzzle posed by a case in the 2019 National Ethics Bowl:

The Death Algorithm

In May 2018, Google's Medical Brain team published a paper in Nature announcing a new health care initiative: an artificial intelligence algorithm designed to predict patient outcomes, duration of hospitalization, and even the likelihood of death during hospitalization. A great deal of attention has been paid to the mortality predictions--the so-called death algorithm--which has been used in two instances. In the first case, at Hospital A, the algorithm was 95 percent accurate in predicting death; in the second case, at Hospital B, it was 93 percent accurate. In both cases, the AI algorithm performed significantly better than traditional models and techniques for predicting patient outcomes.

Google researchers believe the algorithm will reduce health care costs, increase patient-physician face time, and lighten the burden of current data systems, which rely heavily on cumbersome and labor-intensive data mining techniques. The AI algorithm is based on very large amounts of anonymized patient data (one previous algorithm used forty-six billion pieces of data), whose use patients and hospitals had consented to and approved. Proper safeguards for data security, privacy, and various other HIPAA concerns are a major issue, especially in light of past data privacy scandals at companies such as Facebook.

This technology may also be exciting for health insurance companies. Insurance companies love data because it allows them to better estimate the cost of covering an individual. The AI algorithm is the first of its kind due to the large amount of data it uses, and it promises to become one of the most effective tools for predicting health care costs and outcomes.

There are, however, many unknowns. How will this new AI affect health insurance and patient treatment? Will health insurance companies have access to the data? How will accessibility and affordability of health insurance change if there is reason to believe an individual has increased risk factors for disease progression, hospitalization, or death? Will physicians still use due diligence for medical diagnoses or will they simply rely on the AI outcomes? What will happen when the algorithm and a physician disagree?

The rapid spread of the delta variant of COVID-19 has led hospitals in Idaho and Alaska to adopt Crisis Standards of Care for allocating scarce hospital beds and resources to those patients for whom a triage team of health care providers determines doing so will do the most good. For example, having more patients who need ventilators than ventilators available forces the hospital to make difficult choices about which patients get intubated and placed on a ventilator. Under Crisis Standards of Care, a young, otherwise healthy patient who is more likely to improve on a ventilator receives the care over someone who is older and suffers from other disease processes--so-called co-morbidities--that make it less likely they will survive COVID-19.

The problem, of course, is that one of the scarcest resources available is precisely the members of the triage team themselves, who would be making these determinations. Surely their efforts are better spent working directly with patients than laboriously reviewing cases to determine who shall receive full care and who at most palliative care. Moreover, it is unclear how health care providers can avoid biases against, say, anti-maskers and anti-vaxxers, whose very intransigence has been responsible for overwhelming hospital resources and so forcing the adoption of Crisis Standards of Care in the first place.

Hospital administrators tasked with implementing Crisis Standards of Care thus face a complex and morally fraught choice between at least three alternatives:

  1. Deploy Google's 'Death Algorithm' to sort patients on intake between those suited to receiving full care (thumbs up!) and those destined for at most palliative care and, likely, hospice (thumbs down!).
  2. Deploy a standard triage team of physicians and nurses to sort patients on intake between thumbs up and thumbs down without relying on judgments from the Death Algorithm.
  3. Deploy a special triage team that in turn uses the Death Algorithm to inform the team's deliberations over how to sort the patients.

Question: Putting yourself in the position of a hospital administrator under these cruel circumstances and facing these three choices, which of the three is the morally right choice?

Answer this question by assuming one, and only one, of the Utilitarian theories for your Case Study. That is, assume just one of EAU (=CU), HAU, QHAU, IAU, PAU, ERU, HRU, QHRU, IRU, or PRU upon which to construct your argument.