Abstract

Excerpted From: Gabrielle M. Haddad, Confronting the Biased Algorithm: the Danger of Admitting Facial Recognition Technology Results in the Courtroom, 23 Vanderbilt Journal of Entertainment and Technology Law 891 (Summer, 2021) (Note) (204 Footnotes) (Full Document)

Robert Julian-Borchak Williams, a Black man from Michigan, was wrongfully arrested in January 2020 based on a flawed match from facial recognition technology. Williams was at work, minding his own business, when he received a call from law enforcement asking him to come to the police station to be arrested. At first, he thought the call was a prank. Shortly afterward, however, Williams was arrested on his lawn in front of his wife and two daughters. The police would not explain why he was being arrested; they merely showed him a piece of paper reading "felony warrant" and "larceny" alongside his driver's license photo. When his wife asked where he was being taken, an officer simply responded, "Google it."

According to technology and legal experts, this may be the first known account of an American being wrongfully arrested based on a facial recognition algorithm. Williams was arrested after a surveillance camera image of a man robbing a retail store was uploaded to a facial recognition system, which returned multiple matches that included Williams's driver's license photo. The results were shown to an eyewitness who had observed the crime five months earlier, and she selected Williams as the "correct" match. Since Williams's arrest, U.S. authorities have identified two other men wrongfully arrested based on facial recognition technology results; in each of these cases, the man mistakenly identified was Black. These recent examples of police use of facial recognition technology raise questions about the technology's development and deployment.

The facial recognition technology that police departments employ to identify suspects predominantly originates from private companies. In 2019, the National Institute of Standards and Technology (NIST) conducted a study evaluating 189 algorithms from 99 developers, representing the majority of the industry. The study found that the algorithms generated higher rates of false positives for Black faces than for white faces--sometimes up to one hundred times more false identifications. This study, among various others, reveals the widespread bias embedded in facial recognition technologies.

In the context of the Black Lives Matter movement and calls for police reform in the United States, it is important to consider the consequences of using biased facial recognition technology in law enforcement. The inaccuracy of facial recognition technology raises concerns about its potential disparate impact in policing and the justice system. With the technology's increasing use, prosecutors will likely soon seek to introduce its results into evidence at criminal trials to establish probable cause or as evidence of an identification. Because the technology's embedded bias currently places minorities at a disadvantage in the criminal justice system, courts should carefully examine the state of the technology and its regulation, and consider what rights criminal defendants should have when incriminating facial recognition evidence is introduced against them.

This Note addresses whether results from facial recognition technology should be admitted into evidence at trial and, if the results are admitted, what rights defendants should have to challenge this evidence. Part I gives background information on facial recognition technology, its use by law enforcement, and the lack of regulation. Part II examines whether results from facial recognition technology are admissible as reliable scientific evidence under the Daubert factors and analyzes the scope of defendants' right to challenge the evidence if admitted. Part III suggests that results from facial recognition technology should not be admitted into evidence at trial based on the Daubert factors and further recommends legislation that would grant defendants access to the software used in their trials, with possible protections for the software developer's trade secrets.

[. . .]

This Note cautions against admitting results from facial recognition technology into evidence at trial, given the current infancy and bias of the technology. Further, if the evidence is admitted, defendants should have access to the software's source code so that they can meaningfully challenge the evidence presented against them under the Confrontation Clause. This Note recognizes developers' interest in protecting trade secrets and argues that judges should make case-by-case determinations about how to protect developers' information without blocking defendants' access to the software.

Because of the current bias of facial recognition software and its disparate accuracy across demographic groups, it is important to critically analyze the state of the technology and the industry before allowing its results to be admitted into evidence at trial. This Note presents a call to action to examine law enforcement's use of facial recognition technology and to prevent unreliable uses from incriminating defendants without an opportunity for those defendants to exercise their constitutional right to confront the algorithm that accuses them.


J.D. Candidate, Vanderbilt University Law School, 2022; B.A., Furman University, 2019.
