
Abstract

Excerpted from: Renata M. O'Donnell, Challenging Racist Predictive Policing Algorithms under the Equal Protection Clause, 94 New York University Law Review 544 (June 2019) (Student Note) (194 Footnotes) (Full Document)

The Chicago Police commander and two officers knock on the door of a Black man's home on the West Side of Chicago. Robert McDaniel is not under arrest. He has not committed any crime, with the exception of a single misdemeanor he pled to years ago. The officers tell Mr. McDaniel that they have a file on him back at the precinct indicating he is very likely to commit a violent offense in the near future. Dumbfounded, Mr. McDaniel wonders how they can predict such a thing. The answer: an algorithm known as the Strategic Subject List (“SSL”). Mr. McDaniel is shocked because he has not done anything egregious that would flag him, personally, as a risk. So why is Mr. McDaniel on the SSL and being monitored closely by police? The commander tells him that it could be because of the death of his best friend a year ago due to gun violence. Ultimately, it was an algorithm, not a human police officer, that generated the output causing Mr. McDaniel's name to appear on the SSL.

Officers are beginning to delegate decisions about policing to the minds of machines. Programmers endow predictive policing algorithms with a machine learning form of artificial intelligence, which allows the algorithms to pinpoint factors that distinguish the people or places allegedly more likely to perpetrate or experience future crime. With each use, the algorithms automatically adapt, incorporating newly perceived patterns into their source code via machine learning and becoming better at discerning the patterns that exist in the additional swaths of data to which they are exposed. In this way, machine learning creates a “black-box” conundrum: the algorithm learns and incorporates new patterns into its code with each decision it makes, such that the humans relying on the algorithm do not know what criteria it might have relied on in generating a certain decision.
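To make that mechanism concrete, the following is a minimal, hypothetical sketch, not any vendor's or police department's actual system. It shows how a simple risk-scoring model's "reasons" live in numeric weights that shift every time the model is refit on new records, which is why the people relying on its outputs cannot readily say what criteria produced a given score. The feature meanings and all data here are invented for illustration.

```python
# Hypothetical illustration only: a toy risk-scoring model trained by
# gradient descent. Its decision criteria are the learned weights below,
# and those weights change whenever the model is retrained on new data.

import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a simple logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step
    return w

rng = np.random.default_rng(0)

# Toy feature matrix: columns might stand in for prior arrests, age,
# address, etc. The outcome labels are synthetic.
X = rng.normal(size=(200, 3))
y = (rng.random(200) < 0.3).astype(float)

weights = train_logistic(X, y)
print("weights after initial training:", weights)

# As new records arrive, the model is refit; the basis for any individual
# score is now a different set of learned numbers, not a rule a human wrote.
X_new = rng.normal(size=(50, 3))
y_new = (rng.random(50) < 0.3).astype(float)
weights = train_logistic(np.vstack([X, X_new]), np.concatenate([y, y_new]))
print("weights after retraining on new data:", weights)
```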

Machine learning-based predictive policing algorithms can learn to discriminate facially on the basis of race because they are exposed to and learn from data derived from the racist realities of the United States criminal justice system--a world in which Black Americans are incarcerated in state prisons at a rate 5.1 times that of whites, and one of every three Black men born today can expect to go to prison in his lifetime if current trends continue. Machine learning-based policing algorithms learn to replicate and exacerbate these patterns by associating race and criminality. Because these algorithms have the power to discriminate facially by engaging in race-based classifications, they can be challenged under the Equal Protection Clause. This Note is the first piece to argue that machine learning-based predictive policing algorithms present a viable equal protection claim.

Litigants can challenge a state actor's policy under the Equal Protection Clause when that policy intentionally and facially discriminates on the basis of a “suspect classification”--such as a classification on the basis of race. If the litigant can demonstrate that the policy facially discriminates based on the suspect classification, the court reviews the policy under strict scrutiny and will deem it constitutional only if the government can demonstrate that the policy is narrowly tailored to serve a compelling government interest.

This Note specifically examines machine learning-based predictive policing algorithms that programmers feed and train on data sets from which race is not completely removed. Parsing the differences between specific algorithms, however, is beyond the scope of this Note. This Note does not claim that any predictive policing algorithms are intentionally programmed by developers to target people or places on the basis of race. On the contrary, programmers expose the algorithms to large swaths of data with the benign intention of creating an algorithm that can objectively predict crime. However, when input data--like historical crime data and dragnet data searches--contains information about race, a machine learning algorithm becomes biased by parsing the patterns that exist between race and criminality, regardless of whether the developer explicitly wrote that its source code ought to find such a pattern.
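A short, purely synthetic sketch of that mechanism follows; the data, feature names, and numbers are all invented for illustration and do not describe any real system. If historical arrest records reflect heavier past enforcement against one group, a model trained on those records will assign that group higher risk scores even though no line of its code names race as a criterion.

```python
# Purely synthetic illustration of proxy bias. No explicit race-based rule
# appears anywhere, yet the trained model scores the two groups differently
# because the historical labels it learns from were produced by skewed
# enforcement rather than different underlying behavior.

import numpy as np

rng = np.random.default_rng(1)
n = 2000

race = rng.integers(0, 2, n)                      # two synthetic groups, 0 and 1
# A facially neutral feature (e.g., home census tract) that correlates with race.
neighborhood = (race + (rng.random(n) < 0.2)) % 2
# "Historical arrest" labels reflecting heavier past enforcement in one area.
arrested = (rng.random(n) < np.where(neighborhood == 1, 0.4, 0.1)).astype(float)

# Train a one-feature logistic model on the neutral feature only.
X = neighborhood.astype(float)
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X * w + b)))
    w -= 0.5 * np.mean((p - arrested) * X)
    b -= 0.5 * np.mean(p - arrested)

scores = 1.0 / (1.0 + np.exp(-(X * w + b)))
print("mean risk score, group 0:", scores[race == 0].mean())
print("mean risk score, group 1:", scores[race == 1].mean())
# Group 1's average score is markedly higher, even though race was never an
# explicit input to the model's decision rule.
```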

Part I gives an overview of the state of predictive policing. Section I.A defines machine learning. Section I.B explains how machine learning is used in predictive policing. Section I.C explains how and why these algorithms can develop racial biases by delving into the types of data that the algorithms train on and rely on, the ways that this data can lead to bias, and the ways in which that bias exacerbates the human biases that already exist in policing.

Part II argues that because these algorithms facially discriminate on the basis of race, a group of plaintiffs could bring a viable facial challenge to police precincts' reliance on them. Section II.A gives an overview of the modern equal protection framework. Section II.B applies this framework specifically to machine learning-based predictive policing algorithms. Section II.C discusses the hurdles that litigants will face in bringing an equal protection challenge to machine learning-based predictive policing algorithms.

[. . .]

The purpose of this Note has been to highlight the facially discriminatory nature of machine learning-based predictive policing algorithms and the potential for equal protection claims against the government for relying on such algorithms. As the preceding analysis suggests, machine learning endows an algorithm with the ability to learn, mimic, and refine patterns that exist in the real world. In the context of policing, machine learning allows an algorithm to associate race and criminality, and thereby discriminate via race-based facial classifications. The Equal Protection Clause is the obvious remedy for facial discrimination. However, claimants will face significant barriers to success because of the difficulties of attributing private action to state actors and the difficulties of gathering proof of the algorithms' classifications on the basis of race.

If courts view these barriers as insurmountable, they will render algorithms immune from equal protection review and will (yet again) fail to deliver on the promise of the Equal Protection Clause. The slow deterioration of the Equal Protection Clause has occurred because “the presumption--despite staggering evidence--seems now to be that nondiscrimination is the norm ....” However, if the courts permit this presumption to eviscerate equal protection challenges to algorithms, a burgeoning number of state policies will be deemed unreviewable, given that government reliance on algorithms is becoming more pervasive across the board, including in decisions of who gets access to healthcare, who teaches American children, and who receives loans.

As municipalities begin to rely on black-box artificial intelligence to determine where to dispatch police officers, American jurisprudence must navigate ways to hold those machines accountable. Trust in police is already at a low ebb in the United States. If systemic racism is not just perpetuated but exacerbated by the facially discriminatory nature of machine learning, trust in police might be eroded entirely, particularly for communities of color. Further, this generation will be yet another in the history of the United States that has maintained racial discrimination in the criminal justice system. If police departments employ these algorithms for a sustained period of time, the algorithms' feedback loops could exacerbate disparate policing practices in the United States. If the Equal Protection Clause is truly meant to ensure that no state denies to “any person within its jurisdiction the equal protection of the laws,” then the Equal Protection Clause should protect against the facial discrimination of machine learning-based predictive policing algorithms.


Renata M. O'Donnell. J.D. Candidate, 2019, New York University School of Law; B.A., 2016, University of Pennsylvania.