Part One: Implicit Bias Research

A. Identifying Bias

Bias is the pre-judging of a person based on his or her perceived or actual membership in a particular group, without regard to that person's actual conduct or performance. Biases can be explicit or implicit.

Explicit biases are easier to identify and explain. People who have explicit biases will express those biases verbally or in writing. They are aware of their biases and will admit to them if asked or challenged. They are deliberate in relying upon those biases and have an animus--a mental state similar to purpose or knowledge. For instance, a person may say, “African American men are more prone to violence than white or Asian men.” When faced with a situation requiring him to assess the potential violence of an approaching man, he will readily admit that this bias is part of his thought process in deciding whether self-defense strategies are needed.

By contrast, implicit bias is unintentional from a mental state perspective: people do not know they are speaking or acting in a particular way because of the influence of a bias. For instance, when participants were primed with either pop or rap music and asked to evaluate a Black person's behavior, those primed with rap music rated the behavior as more aggressive and the individual as less intelligent. Similarly, a recent study asking participants to estimate height and weight from photos of faces found that people estimated that Black male faces belonged to taller and heavier bodies than white male faces, when in reality the white male faces were from bodies that were taller and heavier. These subjects did not intend to discriminate and were not conscious of the discrepancies between their estimates and reality. Still, those estimates can have an impact.

The implicit bias field has been developing over the past several decades. The research focuses on how human brains work to process information and make decisions. When people have time to evaluate information and make a decision, that decision can be more deliberate. Deliberate decisions are more thoughtful and purposeful, relying upon our analytical skills. By contrast, when making decisions quickly, people rely on intuition and on somewhat automatic processing by their brains. For instance, people know that a flame is hot and infer that a pot sitting on the stove over an open flame is hot. So, there is no need to think about whether or not to use an oven mitt when lifting the pot from the stove: people will reach for the oven mitt.

Similarly, lawyers may know the courthouse contains many white judges. So, when a lawyer arrives late at an unfamiliar courtroom, feeling time pressure and stress, and sees a Latina step into the room, the lawyer may ask her when the judge is expected to take the bench. This quick reaction may be based on an implicit bias: the lawyer may presume that the Latina is not the judge, or (more pejoratively) that she must be the clerk. If the attorney took the time to think about it, the attorney might not have made that assumption; but when faced with a quick decision, the brain defers to its well-worn paths to help process information quickly.

One well-worn path is that most judges are white men. Another well-worn path is that most people of color in the courtrooms are either parties or court employees. These are both true statements in the experience of many. And they both reveal potential implicit biases. Implicit bias is not easy to uncover, explain, or analyze. It is one's intuitive reaction in situations that generally are immediate or require fast thinking and quick judgment calls. When confronted or simply asked, people are likely to reject the notion that they are biased and will deny that they behaved in a biased way.

The Harvard Implicit Association Test (IAT) is the “most well-known and highly regarded measure of implicit bias.” According to data from the Race IAT, eighty-eight percent of white Americans have an implicit bias against African Americans. Further, forty-eight percent of African Americans have an implicit bias that favors white Americans. There is significant support for the conclusion that, “in the aggregate, implicit bias can have a substantial impact on perception, judgment, decision making, and behavior.”

While having bias is not necessarily the same as acting upon it, some people may call upon these biases when making decisions. For instance, these biases could influence someone's decision on whether a defendant is guilty or innocent. The next sub-section describes a selection of these studies.

B. The Stereotypic Association Between African Americans and Violence

Words can activate implicit biases. For instance, researchers often use a technique called “priming,” which involves exposing a test subject to some focused stimulus before the actual test occurs. The control group may have no priming or be primed with something intended to be neutral. In some IATs, when primed with “words associated with Blacks, such as slavery,” subjects were more likely to rate the ambiguous behaviors of a male as hostile even when the race of the male was not specifically identified. This particular study involved mock jurors, who, the researchers concluded, were impacted in two ways: (1) they were more likely to consider ambiguous evidence as supporting guilt, and (2) they were more likely to believe that the defendant actually was guilty.
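
To make the design concrete, here is a minimal Python sketch of a priming experiment's skeleton. The rating scale, group sizes, and simulated effect are all invented for illustration and are not drawn from the studies discussed here.

```python
import random
import statistics

def hostility_rating(primed):
    """Simulated 1-7 rating of an ambiguous behavior's hostility."""
    mean = 4.6 if primed else 4.0   # assumed priming effect, invented
    return min(7.0, max(1.0, random.gauss(mean, 1.0)))

# Random assignment to a primed or neutral (control) condition:
primed_group = [hostility_rating(True) for _ in range(100)]
control_group = [hostility_rating(False) for _ in range(100)]

print(f"primed mean rating:  {statistics.mean(primed_group):.2f}")
print(f"control mean rating: {statistics.mean(control_group):.2f}")
```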

In addition, test subjects were more likely to interpret ambiguous behaviors as aggressive when Black actors rather than white actors performed the action. Several studies have evaluated the association between African Americans and violence by analyzing whether a suspect's race influenced the participant's decision to fire a weapon at the suspect, as well as whether race influenced the time spent deliberating before deciding whether or not to shoot.

In one study, the researchers developed a simple video game with twenty different backgrounds and eighty different target images. Ten African American men and ten white men posed as models for the target images. The models appeared in the video game multiple times in different scenarios and positions. Sometimes the targets were armed with guns and other times they were unarmed, holding “no-gun” objects. When participants played the game, they encountered a slideshow of different backgrounds. An image of a man appeared at random, and the participants had to decide whether to shoot this target, after having been told they needed to react quickly to shoot armed suspects. The results of the study show that potentially hostile targets were identified more quickly if they were African American, and participants were also more likely to miss an armed target if he was a white man.
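
As a rough illustration of that design, the following Python sketch simulates the trial structure of such a shoot/don't-shoot task. Every number in it (reaction-time parameters, miss rates, trial counts) is an invented assumption chosen only to mirror the reported pattern; none of it is the study's actual data.

```python
import random
import statistics

RACES = ["Black", "white"]
CONDITIONS = ["armed", "unarmed"]
TRIALS_PER_CELL = 250

def simulate_trial(race, condition):
    """Return (reaction_time_ms, correct) for one simulated trial."""
    reaction_time = random.gauss(500, 50)   # baseline latency, invented
    if race == "Black" and condition == "armed":
        reaction_time -= 20                 # armed Black targets identified faster
    # armed white targets missed more often, per the reported pattern
    miss_rate = 0.10 if (race == "white" and condition == "armed") else 0.05
    return reaction_time, random.random() > miss_rate

for race in RACES:
    for condition in CONDITIONS:
        trials = [simulate_trial(race, condition) for _ in range(TRIALS_PER_CELL)]
        mean_rt = statistics.mean(rt for rt, _ in trials)
        accuracy = sum(ok for _, ok in trials) / TRIALS_PER_CELL
        print(f"{race:5s} {condition:7s}: {mean_rt:.0f} ms, {accuracy:.0%} correct")
```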

A second study repeated the first study with shorter time frames in which to decide, thus further activating the brain's automatic shortcut processes. Researchers found that participants “set a lower threshold” for shooting African American targets, which can be interpreted to mean that they were more willing to shoot “less threatening” African American targets.
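
The “lower threshold” finding can be read in signal detection terms: a decision criterion computed from hit and false-alarm rates, where a lower criterion means a greater willingness to shoot. A minimal Python sketch, with hypothetical rates (shooting an armed target counts as a hit; shooting an unarmed target counts as a false alarm):

```python
from statistics import NormalDist

# Signal detection criterion: c = -(z(hit rate) + z(false-alarm rate)) / 2.
# A lower (more negative) c means a more liberal threshold for shooting.
z = NormalDist().inv_cdf

def criterion(hit_rate, false_alarm_rate):
    return -(z(hit_rate) + z(false_alarm_rate)) / 2

# Hypothetical rates, invented for illustration only:
print(f"Black targets: c = {criterion(0.90, 0.20):+.2f}")  # lower threshold
print(f"white targets: c = {criterion(0.85, 0.10):+.2f}")  # higher threshold
```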

A third study tested whether participants used stereotypic associations between African Americans and violence to help them decide whether to shoot. The subjects were forty-eight undergraduates (twenty-six female and twenty-two male) playing the same video game. They also completed a questionnaire to examine whether they endorsed a negative stereotype of African Americans as dangerous or aggressive. The results suggested it was knowledge of the cultural stereotype, rather than personal prejudice, that influenced the decision to shoot. Knowing about stereotypes is more pervasive than subscribing to them, but if conduct is partially determined by this knowledge rather than by acknowledged prejudice, acting in response to it would not constitute purposeful discrimination. Because intent or purpose is required for actionable discrimination by state actors, the lack of intent undermines the ability to effectively combat stereotypes in the justice system.

A fourth study in this group used the same video game parameters; the participants were fifty-two adults (including twenty-five African Americans and twenty-one white Americans) recruited from bus stations, malls, and food courts. The results showed that the tendency to shoot African American targets more quickly did not differ between white participants and African American participants.

The researchers' analysis of these studies led to four findings: (1) white participants made the correct decision to shoot an armed target more quickly if the target was African American; (2) white participants decided not to shoot an unarmed target more quickly if he was white; (3) the magnitude of bias varied with perceptions of the cultural stereotype and with levels of contact, but not with racial prejudice; and (4) a follow-up study showed that the levels of bias were the same among African American and white participants in a community sample. The researchers concluded:

In four studies, participants showed a bias to shoot African American targets more rapidly and more frequently than White [sic] targets. The implications of this bias are clear and disturbing. Even more worrisome is the suggestion that mere knowledge of the cultural stereotype, which depicts African Americans as violent, may produce Shooter Bias, and that even African Americans demonstrate the bias.

This evidence shows the stereotype has an impact on the behavior of people deciding whether or not deadly force is warranted. If students and community members demonstrate this bias, the next question concerns those who are charged with enforcing the law and who have more opportunities to decide whether deadly force is justified. The next section evaluates some of those studies.

C. The Stereotypic Association Between African Americans and Crime

The media plays a large role in molding our stereotypic associations. For example, “regularly seeing images of Black but not white criminals in the media may lead even people with egalitarian values to treat an individual Black as if he has a criminal background or assume that a racially unidentified gang member is Black.” The association between Black people and crime, and the corresponding dissociation between white people and crime, influences society at large and police officers in particular.

Another group of studies included police officers as subjects to analyze the influence of stereotypical associations on visual processing. These five studies aimed to identify whether a subject's preconceived notions about a person or group of people influenced what that subject perceived when viewing certain images or objects.

Specifically, the first study “investigated (a) whether the association between Blacks and crime can shift the perceptual threshold for recognizing crime-relevant objects in an impoverished context and (b) whether these perceptual threshold shifts occur despite individual differences in explicit racial attitudes.” The participants were primed with Black male faces, white male faces, or no faces. In an unrelated task, they were shown images of objects with incomplete pixels, such that it was difficult to identify the object initially; as more pixels were added, the resolution gradually improved. The subjects were asked to push a button at the point when they thought they could identify the object and write down their guess. The images included objects that were both “crime-relevant (e.g., a gun or a knife) and crime-irrelevant (e.g., a camera or a book).”
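
The procedure's logic can be sketched briefly; in the following Python fragment, the grid size, reveal rate, and the thresholds standing in for priming effects are all invented assumptions, not parameters from the study.

```python
# The dependent measure in the degraded-image task: how many frames of
# progressive pixel reveal a subject needs before identifying an object.
TOTAL_PIXELS = 10_000      # e.g., a 100 x 100 image (invented)
PIXELS_PER_FRAME = 250     # pixels added each step (invented)

def frames_to_identify(recognition_threshold):
    """Frame number at which a simulated subject identifies the object,
    given the fraction of pixels that subject needs to see."""
    revealed, frame = 0, 0
    while revealed / TOTAL_PIXELS < recognition_threshold:
        revealed += PIXELS_PER_FRAME
        frame += 1
    return frame

# Facilitated processing is modeled as a lower recognition threshold:
print(frames_to_identify(0.35))  # e.g., crime object after Black-face prime
print(frames_to_identify(0.50))  # e.g., same object with no prime
```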

The authors concluded that “black [sic] faces triggered a form of racialized seeing that facilitated the processing of crime-relevant objects.” Further, in comparing the participants who were primed with white faces and those who were not primed with any faces, the authors found that the mere priming with white faces actually inhibited the detection of crime-relevant objects. People are thrown off guard and have a more difficult time connecting crime objects with white faces, which may lead to both a lower expectation of danger from white actors and a lower percentage of deadly force engagements.

Participants in another study were all police officers, seventy-six percent of whom were white. First, they were primed with “crime words,” such as “violent, crime, stop, investigate, arrest, report, shoot, capture, chase, and apprehend.” Then, the participants were asked to look at photos of sixty Black male faces (with features stereotypically associated with Blacks) and sixty white male faces. Next, they were asked to participate in a surprise face-recognition task where they viewed images of Black “lineups” and white “lineups” and were asked to identify any faces they were shown during the previous task.

This study found the police officers were more likely to identify a face that was more “stereotypically Black” than the target they actually were shown when they were primed with crime words. “Priming police officers with [words associated with] crime caused them to remember Black faces in a manner that more strongly supports the association between Blacks and criminality.” The authors determined:

Researchers have highlighted the robustness and frequency of this stereotypic association by demonstrating its effects on numerous outcome variables, including people's memory for who was holding a deadly razor in a subway scene .... [¶] The mere presence of a Black man, for instance, can trigger thoughts that he is violent and criminal. Simply thinking about a Black person renders these concepts more accessible and can lead people to misremember the Black person as the one holding the razor.

From these studies, the authors drew five conclusions: (1) Black faces influence a participant's ability to detect degraded images of crime-relevant objects such as guns and knives; (2) showing crime-relevant objects to participants prompts them to visualize Black male faces, suggesting that the association between Blackness and criminality is bidirectional: when participants saw Black faces, they visualized violent objects, and when they saw violent objects, they visualized Black faces; (3) these associations arise from positive as well as negative imagery, because when the participants were exposed to positive stereotypical images involving Black people (basketball and athletics), the results were similar; (4) police officers associate the concept of crime with Black male faces, and priming police officers with crime words or concepts “increases the likelihood that they will misremember a face as more stereotypically black [sic] than it actually was;” and (5) the more stereotypically Black a face appears, the more likely police officers are to report that the face looks criminal. This suggests a strong bias among police officers themselves that Black people are more likely than white people to be engaged in criminal activity, which affects who is stopped, frisked, questioned, and detained in their community encounters.

Another IAT experiment identified a potential implicit racial bias in favor of guilt, despite the presumption of innocence. Professor Demetria Frank discusses the implications of cross-racial identifications and their unreliability, as well as the overrepresentation of Black people in the criminal justice system. From her evaluation of several studies, she concluded, “Whites are more likely to exhibit racial neutrality in decisions where race is a salient feature in the trial or when normative cues to avoid bias are strong.” When attention is called to bias, people are on guard and make the effort to be race-neutral.

It is imperative that lawyers and judges understand the implications of these and other research studies. Lawyers and judges form their own opinions and make decisions based on the evidence they are given, which includes eyewitness testimony. If implicit bias is strong enough to cause actual eyewitnesses to misremember who the perpetrator of a crime was, then implicit bias affects lawyers and judges not only directly but also indirectly, as they develop strategies and arguments in reliance upon that testimony.

D. Critiquing Implicit Bias

There are some notable challenges made against the implicit bias testing regime within the scientific community. Those challenges include: how to define what is and is not implicit bias; whether the physical processing of information fairly measures implicit bias; whether the samples are sufficiently generalizable; what the level of correlation says about causation; how predictive the measures can be; and how to interpret changing levels of implicit bias. On the first issue, there are questions about whether the distinction between explicit and implicit biases is a spectrum rather than a bright line. In other words, are we really measuring what we think we are measuring?

On the physical processing issue, one instrumental critique is that younger people generally have quicker reflexes than older adults (they are better at video games and at sending text messages with one finger per hand), which could lead the tests to reveal greater levels of bias in older people. Any critique about varying reflex times necessarily undermines the perceived validity of the test results, because part of the IAT relies upon measuring the difference in the time it takes the subject to react when processing “two stimuli that are strongly associated (e.g., elderly and frail)” versus “two stimuli that are less strongly associated (e.g. elderly and robust)” in order to test the existence and clarity of the pathways.
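
To make that measurement concrete, here is a simplified Python sketch of the IAT's core computation: the gap in mean response latency between the strongly associated and less strongly associated pairings, scaled by a pooled standard deviation. This is a simplification of Greenwald and colleagues' D measure, and the latencies are invented for illustration.

```python
import statistics

# Simplified IAT scoring: compare mean response latencies (ms) between
# congruent pairings (stimuli the subject strongly associates) and
# incongruent pairings (stimuli less strongly associated), scaled by
# the pooled standard deviation. All latencies here are invented.

congruent_ms = [520, 480, 510, 495, 530, 505]    # e.g., elderly + frail
incongruent_ms = [610, 640, 590, 625, 600, 615]  # e.g., elderly + robust

pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
d_score = (statistics.mean(incongruent_ms)
           - statistics.mean(congruent_ms)) / pooled_sd

# A larger D indicates a stronger implicit association. Note the
# critique in the text: anything that slows raw latencies generally
# (age, unfamiliarity with the task) can distort the raw gap, which
# is why the score is scaled by variability rather than reported raw.
print(f"IAT D score: {d_score:.2f}")
```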

In terms of subject sampling, college students are overrepresented in the data, and the studies that have used other subjects report disparate outcomes for those groups. While participation in the IAT is open to all through its website, people self-select; extrapolations from their results to the general population may therefore be misleading.

On the issue of correlation and causation, critics question whether implicit bias governs people's actions. Even if the IAT accurately detects the pathways that evidence implicit bias, it does not demonstrate that people act consistently with the implicit biases it measures. Some studies show that only about four percent of the variance in discrimination-relevant criterion measures is predicted by Black-white race IAT scores; if implicit bias accounts for only about four percent of behavior, over ninety-six percent remains unexplained, and that is no greater an impact on behavior than what some have found for explicit bias. Others found an effect size for discrimination closer to zero.
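
In conventional statistical terms, “percent of variance explained” is the square of the correlation coefficient, so (assuming the four-percent figure refers to r-squared, as is standard in this literature) it corresponds to a correlation of roughly r = 0.2:

```latex
r \approx 0.20 \;\Longrightarrow\; r^{2} = (0.20)^{2} = 0.04,
\qquad 1 - r^{2} = 0.96 \text{ of the variance left unexplained.}
```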

Researchers measure what people do (such as whom employers hire, or whom juries find guilty), but cannot measure what people are thinking when they actually engage in that conduct. Nor do researchers know whether subjects are more likely to engage in biased conduct based on their implicit bias scores. Studies show that many people register a high level of favorability on the IAT for white males and words associated with leadership roles, but the IAT does not tell us whether these subjects will actually give a hiring preference to white males when they have the opportunity to do so. We may know that certain individuals promoted white men in the past, but the IAT does not explain whether their preference for associating white males with leadership caused, or played a role in, their decisions to promote white men. The issue is that “behavior toward black [sic] people, or white people in isolation, cannot be operationalized as discrimination ... since they fail to capture differential treatment. Hence, treating a black [sic] person badly is not discrimination per se; it only becomes discrimination if the treatment is worse than the treatment of an equivalent white individual.” Thus, the research does not explain whether people will act in accordance with their implicit preferences, nor whether their past acts were because of those implicit preferences.

On the issue of predictive validity, others also criticize the value of the IAT, noting “severe validity and reliability issues,” and stating that:

[T]he most important finding of the present study is that the current literature is uninformative as to whether the IAT can predict discrimination or not, as it turned out that too many studies failed to measure or provide evidence of discrimination actually occurring in the first place. Hence additional empirical work is needed.

They “strongly caution against” applying the IAT based on any assumption that it can or does predict discrimination. For instance, someone could show an amygdala reaction that equates with prejudice against Black people when tested, but actually treat Black and white people the same way in the real world. That same person could be explicitly biased against Black people, and still refrain from treating them differently in a particular situation.

In a study that evaluated changes in levels of implicit bias, researchers found that while some procedures changed levels of implicit bias, their impact was very small. For instance, “[p]rocedures that associate sets of concepts, invoked goals or motivations, or tax people's mental resources produce the largest changes in implicit bias, whereas procedures that induced threat, affirmation, or specific moods/emotions produce the smallest changes.” Appeals to fear, pride, and other emotions had the least impact on implicit bias measures. Conversely, “big picture” strategies addressing associations and goals are less tangible and had a larger impact in changing implicit bias levels. This study further notes that “even the procedures that produced robust effects on implicit bias had effect sizes that are ‘small,’ both by conventional standards and as compared to typical effect sizes in social psychology.”

Yet even where these procedures changed implicit biases, they had no significant impact on explicit biases and behaviors. The researchers were surprised to find “little to no evidence that the changes caused by procedures on explicit bias and behavior are mediated by changes in implicit bias.” Recognizing a limitation of their analysis, that most of the studies rely upon university student samples, which can differ significantly from the larger world, they noted the need for further research to better understand “changes in implicit biases and their role in explicit bias and behavior.” They concluded that it would be “more effective to rid the social environment of the features that cause biases on both behavioral and cognitive tasks, or equip people with strategies to resist the environment's biasing influence.”

Instead of recognizing and measuring implicit biases, these researchers suggest eliminating environmental biases altogether (“biases in the air,” so to speak) or training people to make bias-free decisions. But in order to make an implicit bias-free decision, one has to notice a bias, identify it, and then act consciously; recognizing and measuring biases are important steps on that path. Other researchers, like Professors Greenwald and Banaji, note that even taking these and other meta-analyses into account, “[t]his level of correlational predictive validity of IAT measures represents potential for discriminatory impacts with very substantial societal significance.” These apparently small impacts in individual cases can add up to a significant impact on how litigants are treated in the court system in general and by individual attorneys and judges in particular.

In the same journal, others reply that despite the additional research and analysis, “by current scientific standards, IATs possess only limited ability to predict ethnic and racial discrimination and, by implication, to explain discrimination by attributing it to unconscious biases.” The authors criticize Professors Greenwald and Banaji because they “focus on a set of implicit bias effects without considering the vast array of other realistic effects that could be competing with implicit bias in any setting.” The authors conclude with a word of caution noting:

[I]f one allows anything to grow unimpeded--be it money in the bank or an epidemic or the ripple effects of unconscious bias in a population--that phenomenon will eventually, with enough time, grow to gargantuan proportions. That is mathematically uncontestable. Whether the small effects of unconscious bias that are suggested as at least possible from these meta-analysis will in reality grow, be contained or disappear in complex, real-world social systems is a question that should be resolved through vigorous empirical testing, not computer simulations and thought experiments that, by their nature, must rely on strong yet untested assumptions.
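
The “money in the bank” point is ordinary compound growth: a small per-decision effect, repeated, can become large. As a purely illustrative calculation with invented numbers:

```latex
(1+\epsilon)^{n}, \qquad \text{e.g., } (1.01)^{100} \approx 2.70
```

That is, a one-percent advantage compounded over one hundred rounds nearly triples the baseline; the authors' caution is that whether real-world social systems actually compound this way is the untested empirical question.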

So, does the IAT really help?

E. The Next Generation of Implicit Bias Research

In responding to some of these critiques, others note that too little attention has been focused on the way decision makers justify their decisions after the fact. It is not clear whether people make unbiased decisions and then accurately explain why bias played no role, or instead make biased decisions and then construct after-the-fact explanations showing that bias played no role, even though it did. Professor Kang and others note:

[B]roadly speaking, this research demonstrates that people frequently engage in motivated reasoning in selection decisions that we justify by changing merit criteria on the fly, often without conscious awareness. In other words, as between two plausible candidates that have different strengths and weaknesses, we first choose the candidate we like--a decision that may well be influenced by implicit factors--and then justify that choice by molding our merit standards accordingly.

The authors describe an experiment in which subjects evaluated two finalists for a job as police chief, one of each gender, with profiles suggesting either “book-smart” or “street-wise.” The subjects were asked to rank the candidates and then identify the factors that contributed to their ranking. Depending on which candidate they selected, the subjects ranked factors such as education and experience differently, leading the authors to conclude that “what counted as merit was redefined, in real time, to justify hiring the man.” When the man had more experience and less education, the subjects ranked experience more highly after the fact. When the man had more education and less experience, they ranked education more highly after the fact.

The next question was whether this post-hoc valuation of factors was done consciously, to provide a cover story, or whether merit was “re-factored in a more automatic, unconscious, dissonance-reducing rationalization, which would be more consistent with an implicit bias story?” Further research tested this question. Participants evaluated college admissions decisions for African American and white candidates, with variations in their GPAs and in the number of Advanced Placement courses taken. When asked to identify which criterion was most important, the rankings of GPA's importance changed depending on whether the white or the African American candidate had the higher GPA.

Even where the participants were not selecting who would be admitted, because admission decisions had already been made, and thought they were simply identifying the most important criteria, their assessments of value varied such that the white applicant satisfied the higher-valued criterion. This process of “reasoning from behavior to motives, as opposed to the folk-psychology assumption that the arrow of direction is from motives to behavior, is, in fact, consistent with a large body of contemporary psychological research.”

This outcome suggests the subjects were not consciously trying to justify what they knew to be biased decisions, but rather that the bias is truly unconscious and the brain engages in a dissonance-reducing rationalization. These authors also studied how jurors evaluate attorneys and what implications those evaluations have for the client, which will be addressed in Part Two below.