Thursday, February 09, 2023



Excerpted From: Sahar Takshi, Unexpected Inequality: Disparate-impact from Artificial Intelligence in Healthcare Decisions, 34 Journal of Law and Health 215 (April 28, 2021) (177 Footnotes) (Full Document)


A 2019 study revealed that an algorithm used by UnitedHealth, one of the nation's largest managed care organizations, might be violating state and federal law: the algorithm had a racially discriminatory impact. The algorithm (called “Impact Pro”) makes eligibility determinations for “high risk care management” services by identifying patients with complex health needs. The researchers found that it deemed black patients' health needs “less than” white patients', and as a result, black patients were not targeted to benefit from specialized care management programs. Such discriminatory effects from artificial intelligence and augmented intelligence (AI) are well documented; however, the study was the first to expose these effects from automation in the healthcare industry. One can imagine an AI system that relies on a patient's oral description of their symptoms to design a treatment plan, or automated imaging technology that diagnoses skin conditions--both systems have the potential to discriminate against patients, because AI systems have been shown to have greater difficulty understanding African American vernacular and analyzing images of people of color.

Discrimination in the healthcare industry is not a novel concept. Thirty-five years ago, then-Secretary Margaret Heckler issued a report and recommendations based on the findings of the Task Force on Black and Minority Health; the report's focal point was that minority groups experience far more “excess deaths” than their non-minority counterparts. Despite the Heckler Report's call to action--increased education and information, professional development, and research and data gathering--health disparities have persisted. Racial and ethnic minorities continue to experience higher rates of premature death and chronic disease. Native Americans and Alaska Natives have higher rates of infant mortality, and black patients are more likely to be inaccurately deemed as having a high pain tolerance. From a healthcare entity's perspective, healthcare AI presents a significant compliance challenge because of the risk of discrimination and the relevant regulations (or lack thereof).

The introduction of AI-informed decision-making into the healthcare sphere will continue to exacerbate many of these inequities, and may introduce new ones (e.g., in diagnosis and treatment decisions). The promise of AI as a more consistent, and even more accurate, decisionmaker means automation is likely to become the standard in healthcare--but should these benefits outweigh its discriminatory impact? This Article proceeds in five parts. Part I outlines the current and prospective uses of AI in healthcare and provides examples of potential discriminatory effects. Part II discusses the Food & Drug Administration's current efforts to regulate AI used in medical settings, particularly as clinical decision supports; it also highlights the gaps in those regulations and makes recommendations to bolster the agency's role in fighting healthcare discrimination. Part III introduces Section 1557 of the Affordable Care Act and argues that this nondiscrimination provision alone is inadequate to prevent or remedy disparate impact from AI-informed decisions by providers and insurers. It begins by describing the Department of Health and Human Services' enforcement of Section 1557, draws on those enforcement actions to make recommendations for covered entities as they develop compliance programs that address AI, and then discusses the limited possibility of private rights of action for plaintiffs who are disparately impacted by healthcare AI. Part IV describes the novel compliance challenges posed by licensing laws and malpractice liability doctrines in relation to healthcare AI. Finally, Part V introduces recommendations for the healthcare industry to develop internal compliance standards and for regulators to promulgate policies that address biases in healthcare AI.

[. . .]

“Against this historical backdrop [of racial imbalance], it is imperative that pretrial risk assessment instruments, if used at all, be designed to help meet the goal of reducing racial disparities .... If a tool cannot help achieve that goal, then it is not a tool that the justice system needs.”

The quote above is from a statement released by over 100 advocacy organizations in response to California Senate Bill 10, legislation that essentially approved the use of AI in the criminal justice context. As emphasized throughout this Article, AI-informed decision-making is emerging in nearly every industry. From determinations made by the government (e.g., public benefits) to those made by private actors (e.g., employers), researchers and advocates are concerned about the discriminatory effects inherent in AI. AI used in the healthcare context, however, presents a unique challenge: these biases can permeate the intimate setting of the examination room or operating room.

This Article discussed the effects of AI bias from a legal and compliance standpoint. The existing nondiscrimination responsibilities imposed on healthcare entities by Section 1557 of the ACA are likely to extend to healthcare AI. Similarly, licensing requirements and the threat of malpractice liability will continue to hold physicians to a standard of care when using AI, one that does not condone discrimination. However, those responsibilities do not govern the design and development of AI--where bias is most likely to be embedded. As discussed in this Article, the FDA's enforcement authority is currently too limited to adequately oversee all types of healthcare AI and is unlikely to address discrimination.

The promise of AI is attractive: faster, less costly, and more accurate decision-making. Advocates of healthcare AI technology argue that it has the potential to reduce errors resulting from variation among physicians. In fact, these advocates argue that AI has the potential to eliminate, rather than introduce or perpetuate, biases in healthcare because it can be programmed to ignore extraneous information about patients or their finances. AI even presents a novel opportunity to remedy many health disparities. For example, AI could be used as a tool in psychiatry to diagnose or treat individuals with severe mental illness.

But the “promise” of AI is misleading. Without a comprehensive framework--legislative, regulatory, or industry standard--that addresses biases in AI, patients who have historically not benefited from the healthcare industry will continue to face discrimination: engrained systemic biases will simply become solidified, automated ones. Patients belonging to suspect classes or low-socioeconomic communities have historically been excluded from the benefits of the American healthcare system, and these patients have little reason to trust automation. As AI explodes as a clinical and administrative tool, nondiscrimination principles should be at the center of regulators' and the industry's plans.
