Excerpted From: Moon Duchin and Douglas M. Spencer, Models, Race, and the Law, 130 Yale Law Journal Forum 744 (March 8, 2021) (143 Footnotes)
The Voting Rights Act of 1965 (VRA) guarantees that all American citizens, regardless of race or ethnicity, should have an equal opportunity to participate in the political process and to elect representatives of their choice. The VRA frequently interacts with single-member districts, which serve as the electoral system for congressional and nearly all state legislative races and are the go-to remedy in local VRA enforcement. It has long been known in the redistricting literature that random boundary placement puts minorities at a major structural disadvantage. Single-member districts can secure electoral opportunity for minorities, but only if the minority population is sufficiently concentrated and the boundaries are favorably aligned. The ability of the VRA to remediate historical discrimination and underrepresentation thus depends on proactive redistricting. As a matter of practice, when a set of districts empowers minority communities to elect representatives in rough proportion to their population, courts have held the promise of political equality to have been fulfilled. However, proportionality has functionally operated as a ceiling even when viewed as normatively desirable: White voters will never be represented by less than their share of the population while minority communities nearly invariably will.
In The Race-Blind Future of Voting Rights (henceforth, the Article), Jowei Chen and Nicholas Stephanopoulos sketch out a less proactive future of districting, including a mechanism that stands to needlessly sabotage minority political power and undermine the signal remedial goal of the VRA. The authors devote their Article to delineating a new baseline of opportunity provided by a randomized redistricting protocol that operates with no regard to race. Their project is strategic and pragmatic, motivated by the prediction that an increasingly conservative Supreme Court is likely to effect “avulsive change” for the VRA in the near term, quite possibly by dropping any role for rough proportionality and elevating race-blind mapping as a new ideal. Their Article thus seeks to provide a roadmap for voting-rights advocates to navigate a new nominally race-blind landscape.
To present their approach as a manageable standard, Chen and Stephanopoulos go big--modeling voter preferences in 1,903 districts and evaluating 38,000 districting plans spanning 19 states--and describe their outputs as the race-blind baseline, full stop. Their particular setup is said to be capable of capturing the full dynamics of non-racial redistricting.
We find that most--though not all--enacted state-house plans overrepresent minority voters relative to the race-blind baseline. For example, numerous plans in the Deep South include substantially more African American opportunity districts than would typically emerge from a nonracial redistricting process, while a few plans in the Border South include fewer such districts. Similarly, several western states feature extra Hispanic opportunity districts compared to the race-blind baseline, while only one western state underrepresents Hispanic voters.
As we show below, the authors' methodology does not warrant these kinds of conclusive statements, much less the slippage into the unmistakably normative language of over- and underrepresentation.
We certainly share the authors' enthusiasm about the burgeoning ensemble method. The central counterfactual problem in vote dilution law for many decades has been that of conceptualizing the undiluted baseline, or understanding how districts might convert votes into seats in a state of nature, absent manipulation. In recent years, algorithms that generate large samples of “ensembles” of plausible districting plans have been increasingly used to approach that question. Using ensembles made to conform to legal rules, but without regard to race or partisan data, can provide a non-gerrymandered baseline. Unfortunately, the approach taken by Chen and Stephanopoulos does not conform to best practices in mathematical modeling.
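To make the ensemble idea concrete, the following toy Python sketch (invented data; not the authors' code) draws random district assignments for a handful of precincts and tabulates how many majority-minority districts each plan produces. Real samplers--such as the recombination Markov chains used in the ensemble literature--also enforce contiguity and population balance, which this sketch omits for brevity.

```python
import random
from collections import Counter

# Toy data: 12 precincts, each a (total population, minority population)
# pair. All numbers are invented for illustration.
precincts = [(100, 80), (100, 75), (100, 60), (100, 55),
             (100, 40), (100, 35), (100, 30), (100, 25),
             (100, 20), (100, 15), (100, 10), (100, 5)]

def random_plan(n_precincts, n_districts, rng):
    """Assign each precinct to a district uniformly at random.
    (Real samplers also enforce contiguity and population balance.)"""
    while True:
        plan = [rng.randrange(n_districts) for _ in range(n_precincts)]
        if len(set(plan)) == n_districts:   # no empty districts
            return plan

def opportunity_count(plan, precincts, n_districts, threshold=0.5):
    """Count districts whose minority share exceeds the threshold."""
    total = [0] * n_districts
    minority = [0] * n_districts
    for district, (pop, mino) in zip(plan, precincts):
        total[district] += pop
        minority[district] += mino
    return sum(1 for t, m in zip(total, minority) if t and m / t > threshold)

rng = random.Random(0)
ensemble = [random_plan(len(precincts), 4, rng) for _ in range(2000)]
counts = [opportunity_count(plan, precincts, 4) for plan in ensemble]
dist = Counter(counts)

# The baseline is this whole distribution of outcomes across the
# ensemble, not any single summary number drawn from it.
print(sorted(dist.items()))
```

The point of the sketch is the last line: an ensemble yields a distribution of opportunity-district counts, and how one summarizes that distribution is itself a modeling choice.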
First, the authors' ambitious scope leads them to take many methodological shortcuts as they build their ensembles and assign opportunity labels. They borrow tools from mathematical and statistical modeling (notably the randomized districting algorithm developed in the research group that one of us runs), but they do not provide a detailed description of their design choices; do not report any convergence metrics to confirm that their ensembles of districting plans are representative of any particular weighting of plans; and do not provide any control of errors that propagate through their workflow, especially through their idiosyncratic use of ecological inference.
There are quite a few junctures where their modeling decisions should be flagged. For example, the nineteen states under consideration all have different statutory and constitutional rules for redistricting, so a one-size-fits-all modeling approach cannot come close to capturing the legal nuance. This is not simply a question of whether to take each rule or principle into account, but of how to operationalize that priority. For example, the legal language around county preservation is markedly different across these states: Texas mentions county preservation, North Carolina and Ohio have extremely specific language about how to measure it, and Delaware and Illinois have no county preservation rules at all. Nevertheless, the same kind of (very strong) county filter is applied by Chen and Stephanopoulos in generating districts in all states--the details, impacts, and alternatives are left completely undiscussed, even though the particular filter they use sacrifices the properties needed for representative sampling. Perhaps more fundamentally, the authors rely on a single presidential election--Obama versus Romney 2012--to infer voter preferences, immediately decoupling their findings from VRA practice, where attorneys would never claim to identify minority opportunity based on Obama's reelection numbers alone. Beyond this, the authors consider only a single plausible definition of opportunity district; they do not compare their “opportunity” label against the ground truth of recent district performance; and they provide no significant robustness checks at any step in their modeling. Because the authors package their series of complex and computationally intensive functions into a single statistic (the median number of opportunity districts) with very little discussion of their modeling choices, readers may not appreciate the extent to which many of the ingredients are arbitrary, approximate, or numerically unstable.
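A toy sketch (invented geography; not the authors' pipeline) illustrates why a hard rejection filter of this kind is consequential: plans that split too many counties are simply discarded, which can throw away the overwhelming majority of the sample and leaves no guarantee that the survivors represent any well-defined distribution.

```python
import random
from collections import defaultdict

# Toy geography: 12 precincts in 4 counties of 3 precincts each (invented).
counties = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]

def county_splits(plan):
    """Number of counties whose precincts land in more than one district."""
    seen = defaultdict(set)
    for precinct, district in enumerate(plan):
        seen[counties[precinct]].add(district)
    return sum(1 for districts in seen.values() if len(districts) > 1)

rng = random.Random(1)
plans = [[rng.randrange(4) for _ in range(12)] for _ in range(5000)]

# Hard rejection filter: discard any plan that splits more than two counties.
# Only a small fraction of random plans survive this cut.
kept = [plan for plan in plans if county_splits(plan) <= 2]
print(f"acceptance rate: {len(kept) / len(plans):.1%}")
```

Because the filter interacts with every other feature of a plan, the plans that survive it can differ systematically from the unfiltered sample in ways that are hard to characterize--which is precisely the representativeness problem noted above.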
We unpack some of the workflow complexity in Table 2. Do these many choices have effects that cancel out in the end somehow, leaving the finding of over- or underrepresentation intact even if the numbers shift? Do their design choices systematically bias estimates upwards or downwards relative to what would be possible if more elections were taken into account or state laws were handled differently? Chen and Stephanopoulos, when they do address these questions, do so glibly.
Second, the authors misuse the ensembles that they do generate. Ensembles are not suited to identifying a single ideal value of a score, as Chen and Stephanopoulos implicitly do by assigning a designation of under- or overrepresentation based on the median value alone. Rather, ensembles are a powerful tool for understanding baseline ranges for valid districting plans and are useful for clarifying decisionmaking tradeoffs. As the Supreme Court held in 1994, “no single statistic provides courts with a shortcut to determine whether a set of single-member districts unlawfully dilutes minority strength.” The single statistic presented by Chen and Stephanopoulos is no exception.
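A small numerical sketch (with invented ensemble counts) shows the difference between labeling a plan by the median alone and reading it against the ensemble's full range:

```python
from bisect import bisect_left, bisect_right
from statistics import median

# Hypothetical ensemble: opportunity-district counts across 1,000 plans
# (all numbers invented for illustration).
ensemble_counts = [2] * 120 + [3] * 540 + [4] * 290 + [5] * 50
enacted = 4

med = median(ensemble_counts)
# Median-only reading: enacted (4) > median (3), so the plan would be
# labeled as "overrepresenting" the minority group.

# Range-based reading: where does the enacted value fall in the
# distribution of ensemble outcomes?
s = sorted(ensemble_counts)
below = bisect_left(s, enacted) / len(s)      # share of plans strictly below
at_or_below = bisect_right(s, enacted) / len(s)

# Here the enacted count exceeds the median yet is matched or exceeded
# by roughly a third of the ensemble--hardly an outlier.
print(med, below, at_or_below)
```

On these invented numbers, the median-only label and the range-based reading point in different directions, which is the sense in which a single statistic can mislead.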
One of the challenges of introducing novel technical methods in a law review is that the blueprints that are especially important for validation--the details of algorithm design, the magnitude of uncertainty, convergence metrics, alternative specifications, and other robustness checks--are not likely to draw needed scrutiny from law review editors or indeed to hold the attention of most readers. The temptation is thus to gloss over or omit these technical details altogether, even in an eighty-six-page article and its fifty-three-page appendix. But transparency is all the more important for a project that has not been subject to rigorous peer review. This worry about law review publication is not new. Nearly twenty years ago, Lee Epstein and Gary King wrote an important piece in which they reviewed the legal literature and sounded the alarm that “the current state of empirical legal scholarship is deeply flawed.” The lack of attention to sound methodology, they warned, would lead readers to “learn considerably less accurate information about the empirical world than the studies' stridently stated, but overly confident, conclusions suggest.”
This is exactly what generates our grave concerns about the current Article and its placement in a flagship law review. Chen and Stephanopoulos's style of leveraging technical tools while ignoring the scientific standards surrounding their development and deployment risks creating an unnecessarily muddy legal terrain. And the stakes are high: they have provided a recipe that may well devastate electoral opportunity for minority groups just as public opinion and voting behavior are pushing the other way.
In sum, we find that The Race-Blind Future of Voting Rights is a provocative proof of concept that stands on a shaky empirical foundation. The Article uses the promising ensemble method of random district generation to deliver a baseline for minority electoral opportunity; this Response both flags technical issues and questions the conceptual alignment of the methods with their application to voting rights law.
[. . .]
A municipality preservation rule is also imposed in the Article, again with a hard threshold. This does not match up with the ex ante rules for redistricting in Texas or in most other states in the authors' sample. Of the nineteen states in the study, only three (AZ, CA, SC) mention cities as such in their redistricting rules, and four (DE, IL, NV, VA) have no rule at all regarding counties, municipalities, or any political boundaries.
The authors' style of operationalizing municipality preservation is interesting enough to merit discussion. In many states, there is no authoritative source to find boundaries for relevant municipal geographies. In order to build an approach across states, the authors turn to a Census data product called Census Places. These include not only “Incorporated Places” like cities and towns, but also “Census-Designated Places” like Native American reservations and various land use areas that are chosen by the Census Bureau, not the state, as being appropriate for statistical tabulation.
Figure A6 shows Census Places statewide and in a Fort Worth inset; the Places can include strands, spurs, and empty loops. The authors make their technique municipality-conscious in two ways, both extremely strong. One is to impose another rejection filter that requires accepted plans to have at least as many intact Places as the enacted plan. The second is a fundamental shift whose impacts are hard to understand completely. They do not build their plans out of whole precincts, as we do in our replication runs. Instead, they create novel geographic units that they call “base polygons,” defined as intersections of block groups and Places.
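The “base polygon” construction can be pictured as the common refinement of two overlapping partitions of the same census blocks. The toy sketch below uses invented block IDs and set intersections; real base polygons are geometric intersections of block-group and Place boundaries.

```python
from itertools import product

# Toy geography: eight census blocks, partitioned two ways (all invented).
block_groups = {"BG1": {1, 2, 3, 4}, "BG2": {5, 6, 7, 8}}
places = {"CityA": {1, 2, 5}, "CityB": {4, 8}}  # blocks 3, 6, 7 lie in no Place

# Blocks outside every Place form their own "remainder" piece, so that
# the Places (plus the remainder) cover the whole state.
all_blocks = set().union(*block_groups.values())
in_a_place = set().union(*places.values())
places = dict(places, **{"(none)": all_blocks - in_a_place})

# A base polygon is a nonempty intersection of one block group and one Place.
base_polygons = {
    (bg, pl): bg_blocks & pl_blocks
    for (bg, bg_blocks), (pl, pl_blocks)
    in product(block_groups.items(), places.items())
    if bg_blocks & pl_blocks
}
print(base_polygons)
```

The refinement can multiply the number of building blocks and change their shapes in ways that depend on Census Bureau tabulation choices rather than state law--one reason the impacts of this design decision are hard to assess.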
These choices--new building blocks, yet another rejection filter--certainly could have a major impact on the findings, and they are not justified in the Article or well-tailored to state law.
We stand to learn a great deal from continued investigations that meet the highest standards of data science while staying grounded in the details and the meaning of the law.