Case V
For this final Case Study, assume the most defensible theory you have encountered this session (presumably one of the utilitarian theories, Kantian ethical theory, or social contract theory) and meticulously construct an argument showing how that theory answers the question posed below by the following case, which concerns a specific application of predictive analytics, a relatively new development in Artificial Intelligence.

Death Algorithm

In May 2018, Google's Medical Brain team published a paper in Nature announcing a new health care initiative: an Artificial Intelligence algorithm designed to predict patient outcomes, duration of hospitalization, and even the likelihood of death during hospitalization. A great deal of attention has been paid to the mortality predictions, dubbed the "death algorithm," which has been used in two instances. In the first case, at Hospital A, the algorithm was 95 percent accurate in predicting death; in the second case, at Hospital B, it was 93 percent accurate. In both cases, the AI algorithm performed significantly better than traditional models and techniques for predicting patient outcomes.

Google researchers believe the algorithm will reduce health care costs, increase patient-physician face time, and reduce the burden of current data systems, which rely heavily on cumbersome and labor-intensive data mining techniques. The AI algorithm is based on very large amounts of anonymized patient data (one previous algorithm used forty-six billion pieces of data), the use of which patients and hospitals had consented to and approved. Proper safeguards for data security, privacy, and various other HIPAA concerns remain a major issue, especially in light of past data privacy scandals at companies such as Facebook.

This technology may also be exciting for health insurance companies. Insurers love data because it allows them to better estimate the cost of covering an individual. The AI algorithm is the first of its kind due to the large amount of data it uses, and it promises to become one of the most effective tools for predicting health care costs and outcomes.

There are, however, many unknowns. How will this new AI affect health insurance and patient treatment? Will health insurance companies have access to the data? How will the accessibility and affordability of health insurance change if there is reason to believe an individual has increased risk factors for disease progression, hospitalization, or death? Will physicians still exercise due diligence in medical diagnoses, or will they simply rely on the AI's outcomes? What will happen when the algorithm and a physician disagree?

Question: Given the current overrunning of hospital resources (beds, personnel, PPE, medicines, ventilators, etc.) in the Valley, Houston, and many hospitals in Florida as a result of the COVID-19 pandemic (such that at least one hospital in the Valley made national news for refusing patients deemed unlikely to survive), would it be morally permissible for hospitals to substitute Google's AI predictive outcomes for (human) health care professionals' doubtless exhausted, biased, and stressed judgment in making pandemic triage decisions?

From the 2019 National Ethics Bowl.