Excerpted From: Khiara M. Bridges, Race in the Machine: Racial Disparities in Health and Medical AI, 110 Virginia Law Review 243 (April 2024)

As artificial intelligence (“AI”) technologies proliferate across sundry sectors of society--from mortgage lending and marketing to policing and public health--it has become apparent to many observers that these technologies will need to be regulated to ensure both that their social benefits outweigh their social costs and that these costs and benefits are distributed fairly across society. In October 2022, the Biden Administration announced its awareness of the dangers that “technology, data, and automated systems” pose to individual rights. Through its Office of Science and Technology Policy, the Administration declared the need for a coordinated approach to address the problems that AI technologies have generated--problems that include “[a]lgorithms used in hiring and credit decisions [that] have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination,” “[u]nchecked social media data collection [that] has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity,” and, most germane to the concerns of this Article, “systems [that are] supposed to help with patient care [but that] have proven unsafe, ineffective, or biased.”

As an initial measure in the effort to eliminate--or, at least, contain--the harms that automation poses, the Administration offers a Blueprint for an AI Bill of Rights, which consists of “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” Crucially, the Blueprint identifies “notice and explanation” as a central element in a program that protects the rights of individuals in an increasingly automated society. That is, the Biden Administration proposes that in order to ensure that AI does not threaten “civil rights or democratic values,” individuals should be informed when “an automated system is being used,” and they should “understand how and why it contributes to outcomes that impact” them. Applied to the context to which this Article is most attuned, this principle means that if a hospital system or healthcare provider relies upon an AI technology when making decisions about a patient's care, then the patient whose health is being managed by the technology ought to know about the technology's usage.

Although the Biden Administration appears committed to the idea that an individual's rights are violated when they are unaware that an AI technology has had some impact on the healthcare that they have received, many actors on the ground, including physicians and other healthcare providers, do not share this commitment. As one journalist reports:

[T]ens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients [have] any idea about the AI involved in their care. That's because frontline clinicians ... generally don't mention the AI whirring behind the scenes in their conversations with patients.

This health system is hardly unique in its practice of keeping this information from patients. “The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects who see little value ... in raising the subject.” Moreover, while these actors see few advantages associated with informing patients that AI has informed a healthcare decision or recommendation, they see lots of disadvantages, with the disclosure operating as a “distraction” and “undermin[ing] trust.”

We exist in a historical moment in which the norms around notice and consent in the context of AI in healthcare have not yet emerged--with some powerful actors in the federal government proposing that patients are harmed when they are not notified that AI has impacted their healthcare, and other influential actors on the ground proposing that patients are harmed when they are notified that AI has impacted their healthcare. As we think about the shape that these norms ought to take, this Article implores us to keep in mind the fact of racial inequality and the likelihood that AI will have emerged from, and thereby reflect, that racial inequality. Indeed, this Article's central claim is that the well-documented racial disparities in health that have existed in the United States since the dawn of the nation demand that providers inform all patients--but especially patients of color--that they have relied on or consulted with an AI technology when providing healthcare to them.

Although much has been written about AI in healthcare, or medical AI, very little has been written about the effects that medical AI can and should have on the informed consent process. Moreover, no article to date has interrogated what the reality of racial disparities in health should mean with respect to obtaining a patient's informed consent to a medical intervention (or nonintervention) that an AI system has recommended. This Article offers itself as the beginning of that conversation. It makes the case that we ought to reform the informed consent process to ensure that patients of color are aware that their health is being managed by a technology that likely encodes the centuries of inequitable medical care that people of color have received in this country and around the world.

The Article proceeds in four Parts. Part I canvasses the deep and wide literature documenting that people of color suffer higher rates of illness than their white counterparts while also suffering poorer health outcomes than their white counterparts when treated for these illnesses. These racial disparities in health are also present in the context of pregnancy, a fact that is illustrated most spectacularly by the often-quoted statistic describing black women's three- to four-fold increased risk of dying from a pregnancy-related cause as compared to white women. Part II then provides an introduction to AI and explains the uses that scholars and developers predict medical AI technologies will have in healthcare and, specifically, the management of pregnancy. Part III subsequently serves as a primer on algorithmic bias--that is, systematic errors in the operation of an algorithm that result in a group being unfairly advantaged or disadvantaged. This Part explains the many causes of algorithmic bias and gives examples of algorithmic bias in medicine and healthcare. This Part argues that we should expect algorithmic bias from medical AI that results in people of color receiving inferior healthcare. This is because medical AI technologies will be developed, trained, and deployed in a country with striking and unforgivable racial disparities in health.

Part IV forms the heart of the Article. It begins by asking a question: Will patients of color even want medical AI? There is reason to suspect that significant numbers of them do not. Media attention to the skepticism with which many black people initially viewed COVID-19 vaccines has made the public newly aware of the higher levels of mistrust that black people, as a racial group, have toward healthcare institutions and their agents. That is, the banality of racial injustice has made black people more suspicious of medical technologies. This fact suggests that ethics--and justice--require providers to inform their patients of the use of a medical technology that likely embeds racial injustice within it.

The Part continues by making the claim that healthcare providers should disclose during the informed consent process their reliance on medical AI. To be precise, providers should have to tell their patients that an algorithm has affected the providers' decision-making around the patients' healthcare; moreover, providers should inform their patients how racial disparities in health may have impacted the algorithm's predictive accuracy. This Part argues that requiring these disclosures as part of the informed consent process revives the antiracist, anti-white supremacist origins of the informed consent process. To be sure, the practice of informed consent originated in the Nuremberg Trials' rebuke of Nazi medicine. These defiant, revolutionary origins have been expunged from the perfunctory form that the informed consent process has taken at present. Resuscitating the rebelliousness that is latent within informed consent will not only help to protect patient autonomy in the context of medical AI but may also be the condition of possibility for transforming the social conditions that produce racial disparities in health and healthcare. That is, the instant proposal seeks to call upon the rebellious roots of the doctrine of informed consent and use it as a technique of political mobilization. A short conclusion follows.

Two notes before beginning: First, although this Article focuses on medical AI in pregnancy and prenatal care, its argument is applicable to informed consent in all contexts--from anesthesiology to x-rays--in which a provider might utilize a medical AI device. Concentrating on pregnancy and prenatal care allows the Article to offer concrete examples of the phenomena under discussion and, in so doing, make crystal clear the exceedingly high stakes of our societal and legal decisions in this area.

Second, the moment that a provider consults a medical AI device when delivering healthcare to a patient of color certainly is not the first occasion in that patient's life in which racial disenfranchisement may come to impact the healthcare that they receive. That is, we can locate racial bias and exclusion at myriad sites within healthcare, medicine, and the construction of medical knowledge well before a clinical encounter in which medical AI is used. For example: people of color are underrepresented within clinical trials that test the safety and efficacy of drugs--a fact that might impact our ability to know whether a drug actually is safe and effective for people of color. For example: the National Institutes of Health (“NIH”) and the National Science Foundation (“NSF”) fund medical research conducted by investigators of color at lower rates than that conducted by white investigators--a fact that might contribute to the underfunding of medical conditions that disproportionately impact people of color. For example: most medical schools still approach race as a genetic fact instead of a social construction, with the result being that most physicians in the United States have not been disabused of the notion that people of color--black people, specifically--possess genes and genetic variations that make them get sicker and die earlier than their white counterparts. For example: pulse oximeters, which use infrared light to measure an individual's blood saturation levels, are so common as to be called ubiquitous, even though it is well-known that the devices do not work as well on more pigmented skin. For example: most clinical studies that are used to establish evidence-based practices are conducted in well-resourced facilities, making their generalizability to more contingently equipped and more unreliably funded facilities uncertain. For example: many research studies do not report their findings by race, thereby impeding our ability to know whether the studies' results are equally true for all racial groups. And so on. If providers ought to notify their patients (especially their patients of color) that the provider has relied upon medical AI when caring for the patient, then it is likely true that providers similarly ought to notify their patients about racial inequity in other contexts as well. That is, there is a compelling argument that when a provider prescribes a medication to a patient, they might need to notify the patient that vanishingly small numbers of people who were not white cisgender men participated in the clinical trial of the medication.

There is a compelling argument that when a provider tells a black patient that the results of her pulmonary function test were “normal,” they might also need to inform that patient that if she were white, her results would be considered “abnormal,” as the idea that the races are biologically distinct has long informed notions of whether a set of lungs is healthy or not. There is a compelling argument that when a provider affixes a pulse oximeter to the finger of a patient of color, they might also need to inform that patient that the oximeter's readings may be inaccurate--and the care that she receives based on those readings may be inferior--given the widely known and undisputed fact that such devices do not work as well on darker skin. There is a compelling argument that when a physician tells a pregnant patient laboring in a safety net hospital that the evidence-based practice for patients presenting in the way that she presents is an artificial rupture of membranes (“AROM”) to facilitate the progression of the labor, they might also need to inform the patient that the studies that established AROM as an evidence-based practice were conducted in well-funded research hospitals that were affiliated with universities. There is a compelling argument that when a physician tells a forty-year-old black patient that he does not need to do a screening for colorectal cancer until age forty-five, they might also need to inform the patient that the studies that established forty-five as the age when such screenings should commence did not report their findings by race. And so on.

It does not defeat this Article's claim to observe that racial bias and exclusion are pervasive throughout medicine and healthcare and that providers in many contexts outside of the use of medical AI ought to notify patients how this bias and exclusion may affect the healthcare that they are receiving. Indeed, it is seductive to claim in those other contexts that it is better to fix the inequities in the healthcare than to tell patients of color about them--a fact that is also true in the context of medical AI. However, fixing the inequities in healthcare in those other contexts and telling patients about them are not mutually exclusive--a fact that is also true in the context of medical AI. And as Part IV argues, telling patients about the inequities in those other contexts might be the condition of possibility of fixing the inequities--a fact that is also true in the context of medical AI.

Essentially, this Article's claim may be applied in a range of circumstances. In this way, this Article's investigation into how algorithmic bias in medical AI should affect the informed consent process is simply a case study of a broader phenomenon. This Article's insights vis-à-vis medical AI are generalizable to all medical interventions and noninterventions.

[. . .]

Philosopher Camisha Russell explains that the discipline of bioethics--the field that has theorized the import of informed consent most extensively--historically concerned itself with concepts like autonomy, freedom, and self-determination. These are all concepts that center the individual as divorced from social context. However, she writes about the possibility of the irruption of “the political” into bioethics, an irruption that would widen the scope of the field's interests. She offers:

The ethical, where it is centered on autonomy conceived in terms of personal freedom, comes to be concerned only with what is or is not permissible in biomedical practice in terms of individually conceived ethical rights, duties, obligations, or prohibitions. With ethical rules in place, much of patient and physician decision-making is taken to be a private matter, with little relevance to politics or social justice. By contrast, the view from the margins suggests that bioethics ought to be at least as concerned with what we might label the political--that is, social responsibility, collective life, the power dynamics and inequalities of social orders, and the role that concepts like race have played in creating and maintaining such inequalities.

This Article observes the introduction into healthcare of AI technologies that threaten to encode the indefensible racial disparities in health and healthcare in the United States that are so well-documented. It concludes that this is an appropriate occasion by which to invite the political to irrupt into the informed consent process--a space that, despite its origins in the Nuremberg Trials' rebuke of racism, antisemitism, and white supremacy, many imagine to be unconcerned with social life.

Professor of Law, UC Berkeley School of Law.