Excerpted From: Patrick Ross, Racial Equity Implications of Artificial Intelligence in Health Care, 77 Food & Drug Law Journal 265 (2022)


Artificial intelligence (AI) and machine learning are transitioning from a mainstay of science fiction to a workaday tool. Advances in computational power, joined with a deep well of data, have resulted in a flourishing of uses for AI. However, regulation and oversight of these algorithmic tools have not kept up with the rate of technological innovation. What sets these tools apart is their capacity for continuous learning--given additional data, the mathematical model can be refined and adapted, giving the appearance of “intelligence.” This unique ability of AI tools presents a challenge to current regulation and oversight structures. Without proper guidance, these futuristic tools pose a risk of further embedding the biases and health disparities captured in today's data.

When thinking of the future of AI in medicine, many point to its potential to reduce human error and improve the quality of care. AI and machine learning tools have been used to reduce adverse events such as pressure ulcers, surgery complications, and diagnostic errors. However, from image detection tools to algorithms used to guide population health decisions, these powerful technologies are susceptible to replicating or entrenching human racial biases through the same pattern recognition that gives them such potential.

Across a wide variety of disciplines and use cases, evidence is already showing that AI tools can produce results that discriminate by race, sex, or socioeconomic status, whether by incorporating human biases or by being used in the wrong setting or with the wrong population. While clinical care and public health are not the only sectors grappling with these challenges, the risks posed to users there are greater, as deploying AI solutions without careful development and review may further encode biased care into the health system.

[. . .]

When developed thoughtfully and used correctly, AI has shown real promise to advance the delivery of health care. Ensuring these tools are safe is critical, especially in the health care context. This requires a clear structure for oversight from development to deployment. However, the unique ability to learn and adapt that fuels AI's promise also presents a challenge to current regulatory frameworks. Additionally, the pattern-recognition capabilities key to AI's success also pose a risk of encoding racial bias from historical practice into these new tools. Current regulation of AI tools is limited to a relatively small portion of software and remains largely undefined. As regulatory bodies and stakeholders come together to flesh out guidelines for the safe development and use of AI tools, regulators must place a strong emphasis on ensuring that AI tools do not further entrench racial bias.

Patrick Ross is the Associate Director of Federal Affairs at The Joint Commission.