Excerpted From: Anita L. Allen, Dismantling the “Black Opticon”: Privacy, Race Equity, and Online Data-Protection Reform, 131 Yale Law Journal Forum 907 (February 20, 2022) (258 Footnotes) (Full Document)
In the opening decades of the twenty-first century, popular online platforms rapidly transformed the world. Digital modalities emerged for communication, business, and research, along with shopping, entertainment, politics, and philanthropy. Online platforms such as Facebook, Twitter, Google, Airbnb, Uber, Amazon, Apple, and Microsoft created attractive opportunities and efficiencies. Today, life without those platforms is nearly unimaginable. But they come at a heavy price. Familiar platforms based in the United States collect, use, analyze, and share massive amounts of data about individuals--motivated by profit and with limited transparency or accountability. The social costs of diminished information privacy include racial discrimination, misinformation, and political manipulation. This Essay focuses on one set of social costs, one set of institutional failures, and one demographic group: diminished information privacy, inadequate data protection, and African Americans.
African Americans could greatly benefit from well-designed, race-conscious efforts to shape a more equitable digital public sphere through improved laws and legal institutions. With African Americans in mind, there are several reasons for advocating for improved laws and legal institutions. Existing civil-rights laws and doctrines are not yet applied on a consistent basis to combat the serious discrimination and inequality compounded by the digital economy. Existing common law, constitutional law, and state and federal regulations protecting privacy--much of which predates the internet--are of limited value. Current federal privacy and data-protection law--patchy, sectoral, and largely designed to implement 1970s-era visions of fair information practices--is inadequate for digital-privacy protection and equitable platform governance. Although the Federal Trade Commission (FTC) exhibits concern about the special vulnerabilities of African Americans and other communities of color, it has lacked the resources to address many of the privacy-related problems created by internet platforms. And the platforms themselves have failed to self-regulate in a way that meaningfully responds to race- and equity-related privacy problems. The self-governance efforts and policies adopted by these companies have not silenced criticism that platform firms prioritize free speech, interconnectivity, and interoperability at the expense of equitable privacy protections and antiracist measures.
In the United States, discussions of privacy and data-protection law ground the case for reform in values of individual autonomy, limited government, fairness, and trust--values that are, in theory, appealing to all people. Yet, until recently, the material conditions and interests of African Americans, particularly from their own perspectives, have received limited attention in such discussions. As civil rights advocates observe, although “[p]rivacy should mean personal autonomy and agency ... commercial data practices increasingly impede the autonomy and agency of individuals who belong to marginalized communities.” In pursuit of equitable data privacy, American lawmakers should focus on the experiences of marginalized populations no less than privileged populations. For Black Americans, those experiences feature three compounding vulnerabilities: (1) multiple forms of excessive and discriminatory surveillance; (2) targeted exclusion through differential access to online opportunities; and (3) exploitative online financial fraud and deception. Digital-privacy and data-protection law proposals fashioned to promote equitable governance online must be responsive to calls for improved online governance made by and on behalf of African Americans relating to these forms of pervasive and persistent disadvantage.
Although a great deal of state and federal privacy and data-protection law is already on the books, additional rules, statutes, and authorities are needed to empower the public sector to regulate how companies handle personal information. A new generation of privacy and data-protection laws is evolving in the United States. But promising state and federal initiatives require a hard look to determine whether they go far enough toward addressing the digital-era vulnerabilities of African Americans. The new generation of laws would ideally include provisions specifically geared toward combatting privacy- and data-protection-related racial inequalities enabled by online platforms. A stronger FTC, a freestanding federal data-protection agency, and updated state and federal privacy and data-protection legislation can all potentially help meet contemporary demands for more equitable online-platform governance. How much they can help depends in large part upon whether these measures are pursued boldly and specifically to address the racial-equity challenge.
In Part I of this Essay, I describe the “Black Opticon,” a term I coin to denote the complex predicament of African Americans' vulnerabilities to varied forms of discriminatory oversurveillance, exclusion, and fraud--aspects of which are shared by other historically enslaved and subordinated groups in the United States and worldwide. Echoing extant critical and antiracist assessments of digital society, I reference the pervasive calls for improved data-privacy governance, using the lens of race to magnify the consequences for African Americans of what scholars label “surveillance capitalism,” “the darker narrative of platform capitalism” and “racial capitalism.” Privacy advocates repeatedly call for reform to improve online data protections for platform users and the general public who are affected by businesses' data-processing practices. Such reforms also benefit African Americans, of course, to the extent that the interests of African Americans converge with those of the general public. I maintain, however, that generic calls on behalf of all population groups are insufficient to shield the African American community from the Black Opticon. To move from generic to race-conscious reform, I advance a specific set of policy-making imperatives--an African American Online Equity Agenda--to inform legal and institutional initiatives toward ending African Americans' heightened vulnerability to a discriminatory digital society violative of privacy, social equality, and civil rights.
In Part II, I consider whether new and pending U.S. data-privacy initiatives meet the reform imperatives of my African American Online Equity Agenda. I argue that while the Virginia Consumer Data Protection Act is flawed and too new for its full impact to be evaluated, several provisions that could over time reduce race discrimination by private businesses are on the right track. Because the FTC is evidently committed to using its authority to advance the interests of people of color, the U.S. House Energy and Commerce Committee recommendation to allocate a billion dollars to create a new privacy and data-protection bureau within the Commission is on point for reducing online fraud and deception targeting African Americans. Finally, though unlikely to be passed by Congress in the very near term, privacy legislation introduced by Senator Kirsten Gillibrand in 2021 is remarkably equity conscious, setting the bar high for future federal legislation.
I conclude that although we must welcome these major reforms and proposals for advancing online equity, privacy, and consumer-data protection, grounds for concern remain when reforms are assessed against imperatives for specifically combatting African American disadvantage. Whether and to what extent contemplated legal and institutional reforms would free African Americans from the Black Opticon remains an open question. However, the current era of privacy and data-protection reform presents a paramount opportunity to shape law and legal institutions that will better serve African Americans' platform-governance-related interests no less than, and along with, the interests of others.
I. The Black Opticon: African Americans' Disparate Online Vulnerability
African Americans dwell under the attentive eye of a Black Opticon, a threefold system of societal disadvantage composed of discriminatory oversurveillance (the panopticon), exclusion (the ban-opticon), and predation (the con-opticon). This disadvantage--propelled by algorithms and machine-learning technologies that are potentially unfair and perpetuate group bias--is inimical to data privacy and an ideal of data processing that respects the data subject's claim to human dignity and equality. Structural racism renders African Americans especially vulnerable to disparities and disadvantages online. Highlighting the problem of algorithmic bias, Dominique Harrison asserted that “Black and Brown people are stripped of equitable opportunities in housing, schools, loans, and employment because of biased data.” As Harrison's observations attest, my Black Opticon metaphor--denoting the ways Black people and their data can be visually observed and otherwise paid attention to online--encapsulates literal aspects of the urgent privacy and data-protection problem facing African Americans.
African Americans are active users of online platforms. Although roughly thirty percent of Black homes lack high-speed internet access and seventeen percent lack a home computer, Black Americans are well-represented among the 300 to 400 million users of Twitter and the billions of daily users of Meta (previously known as Facebook) platforms. African Americans accounted for nearly one-tenth of all Amazon retail spending in the United States in 2020. As the digital divide is closing among younger Americans, commentators extol a vibrant African American “cyberculture” of everyday life. Black people's recent civil-rights strategies include racial-justice advocacy that is digitally mediated. While Black users and designers of online technology were once a faint presence in popular and academic discussions of the digital age, today, “Black digital practice has become hypervisible to ... the world through ... Black cultural aesthetics ... and social media activism.” On a more mundane level, Black Americans turn to internet platforms for access to housing, education, business, employment, loans, government services, health services, and recreational opportunities.
It may appear that African American platform users, like other groups of users, are not overly concerned with privacy and data protection because of their seeming readiness to give away identifiable personal information. While some consumers indeed undervalue privacy, privacy-abandonment behaviors may not signal genuine indifference to privacy for several reasons. Low-income African Americans may decline privacy protections, such as smartphone encryption, due to the prohibitive costs of data-secure devices and services. Some consumers trust that Big Tech is sufficiently caring, comprehensively regulated, and responsive to closing gaps in privacy protection. Typical consumers experience corporate data practices as a black box. Terms of service and privacy policies, while available online, are lengthy, technical, and complex. Educated and uneducated users alike are poorly informed about the implications of joining a platform, and once on board a platform, stepping off to recover a semblance of control over data can sever important channels of communication and relationships.
I believe many African Americans do care about their data privacy, and that their understanding of the many ways it is unprotected is growing. The remainder of this Part describes the three constitutive elements of the Black Opticon of African American experience: (1) discriminatory oversurveillance (the panopticon), (2) discriminatory exclusion (the ban-opticon), and (3) discriminatory predation (the con-opticon). In so doing, it illustrates how attentive eyes within American society misperceive African Americans through warped lenses of racial discrimination.
A. Discriminatory Oversurveillance
Elements of the Black Opticon have been recognized for decades. In fact, since the dawn of the computer age, wary privacy scholars have emphasized that watching and monitoring individuals with the aid of technology threatens privacy--both for its tendency to chill and control behavior, and for its potential to efficiently reveal and disseminate intimate, personal, incriminating, and sensitive information. For example, Alan Westin's 1967 treatise, Privacy and Freedom, among the most influential law-related privacy studies of all time, noted the special susceptibility of African Americans to the panoptic threat. According to Westin, privacy denotes a certain claim that all people make for self-determination: “Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” Distinguishing physical, psychological, and data surveillance, Westin explicitly mentioned African Americans in relation to concerns about covert physical surveillance and discrimination by segregationists within a white power structure. Arthur R. Miller raised political surveillance as a privacy problem facing African Americans in his landmark 1971 Assault on Privacy. And a more obscure early book devoted to privacy in America, Michael F. Mayer's 1972 Rights of Privacy, decried the unlawful wiretapping surveillance of Martin Luther King Jr. that was revealed in the trial of Cassius Clay (Muhammad Ali). Mayer devoted a short chapter to the oversurveillance of the poor dependent on government housing and other benefits, foreshadowing Khiara M. Bridges's work on privacy and poverty.
Twitter, Facebook, Instagram, and nine other social-media platforms came under attack in 2016 for providing the location-analytics software company Geofeedia with access to location data and other social-media information--an illustration of platform-related oversurveillance. According to an American Civil Liberties Union report that year, police departments used software purchased from Geofeedia that relied on social-media posts and facial-recognition technology to identify protesters. For example, following the Black Lives Matter (BLM) protests sparked by the death of African American Freddie Gray while in police custody, Baltimore police reportedly used Geofeedia software to track down and arrest peaceful protesters with outstanding warrants. Police deliberately focused arrests within the majority Black community of Sandtown-Winchester, the precinct where Freddie Gray was apprehended and killed. Geofeedia continued to market its services as a way to track BLM protesters at a time when the FBI was a Geofeedia client and an FBI report indicated that a so-called “Black Identity Extremist” movement would be a target of surveillance. As imprecisely defined by the FBI, the label “Black Identity Extremist” could be affixed to an activist who merely protested police brutality. Fortunately for African Americans and all social-media users, Twitter, Facebook, and Instagram discontinued sharing location data and social-media feeds with Geofeedia following public backlash. But panoptic concerns about platforms and privacy will remain so long as efforts to subject Black people to special levels of efficient social control persist.
Today, government and nongovernmental surveillance practices and technologies of all kinds disparately impact communities of color. The Geofeedia example demonstrates how data sharing and disclosures by digital platforms can have far-reaching inequitable consequences for African Americans. The business of providing platform users' location data and social-media posts to third parties, including law enforcement, without express consent or transparency is aptly perceived by platform critics as violating users' legitimate information-privacy interests. Data-privacy interests are implicated whenever consumer data collected or shared for one purpose is used for other purposes without consent and transparency. A panoptic threat grows as the extent, frequency, and pervasiveness of information gathering through surveillance grows. The Black Opticon exists--and has long existed-- inasmuch as wrongful discrimination and bias persistently focus panoptic surveillance on African Americans, leading to oversurveillance. Location tracking, the related use of facial-recognition tools, and targeted surveillance of groups and protestors exercising their fundamental rights and freedoms are paramount data-privacy practices disproportionally impacting African Americans.
B. Discriminatory Exclusion
I now turn to another feature of the Black Opticon, namely, targeting Black people for exclusion from beneficial opportunities on the basis of race. Discriminatory exclusion requires obtaining information identifying a person as African American.
Such information is not hard to come by. In the 1950s, a brick-and-mortar business could obtain race information from the City Directory. In Atlanta, Georgia, for example, white-only businesses wishing to avoid soliciting business from African Americans could use race information published in the 1951 City Directory. The directory designated people known to be Black with a “c” for colored. The Directory also included information from which race might be deduced due to segregation in housing and employment, such as the entrant's street address and employment. African Americans who did not wish to be discriminated against might have aspired to keep their race private. But in the old South, neither civility norms nor laws protected race information from disclosure.
Today, similar information can be used to identify a person as African American. For example, residential addresses continue to serve as racial proxies. And even something as basic as a person's name can reveal their race. These racial proxies then facilitate discriminatory exclusion. For example, Dr. Latanya Sweeney, Harvard professor and former Chief Technologist at the FTC, uncovered that when she typed her name into Google an advertisement for instantcheckmate.com captioned “Latanya Sweeney Arrested?” popped up. When she searched for the more ethnically ambiguous “Tanya Smith,” the arrest-association advertisement disappeared. As Sweeney's work demonstrates, biased machine learning can lead search engines to presume that names that “sound Black” belong to those whom others should suspect and pay to investigate.
Online businesses have the capacity to discriminate and exclude on the basis of race, just as brick-and-mortar businesses have done. Discriminatory exclusion by government and in places of public accommodation is both a civil-rights and a privacy issue. In the 1960s and 1970s, legal commentators began to frame uses of information about race to discriminate and exclude Black people from opportunity as among the nation's information-privacy problems. For example, Miller pointed out in Assault on Privacy that psychological testing, which excluded some minorities from employment and school admissions, required test takers to respond to invasive personal questions. In the 1970s, when unfair credit practices could easily exclude people of color from lending opportunities, policy makers linked fair credit goals to consumer-information privacy. Privacy rights were understood to include the right to withhold information, to restrict sharing with third parties, and to access and correct credit-reporting information.
The forms of racism and inequality of opportunity that policy makers recognized in the 1970s permeate today's digital sphere, in exclusionary practices of the sort Didier Bigo would term ban-optic. Discriminatory practices (i.e., those that rely on racialized sorting by humans and machines) that reinforce racism and deny equal access to services and opportunities thrive on online platforms. Platforms have come under attack for targeted advertising that discriminates against consumers of color with respect to housing, credit, and services, for hosting racially biased advertisements, and for facilitating unequal and discriminatory access to ridesharing and vacation rentals.
Nonconsensual and discriminatory uses of personal information, like other unauthorized uses and disclosures of personal information, should be understood as information-privacy violations. For a time, advertisers on Facebook were able to select which Facebook users could and could not see their advertisements by race. The ability to target sectors of the market meant that African Americans could be excluded from commercial opportunities on the basis of their race alone. In November 2017, after Facebook claimed to have devised a system that would recognize and not post discriminatory housing advertisements, journalists at ProPublica were able to purchase housing advertisements that excluded classes protected by the Fair Housing Act, including African American users and users interested in wheelchair ramps. A representative from Facebook explained ProPublica's racially discriminatory housing advertisements as a technical failure. In 2019, showing that some data-privacy problems can and should also be framed as civil-rights violations, the U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by selling advertisements that discriminate against protected classes. Facebook's current policies prohibit discrimination based on race.
C. Discriminatory Predation
Personal data of people of color are also gathered and used to induce purchases and contracts through con jobs, scams, lies, and trickery. Discriminatory predation describes the use of communities of color's data to lure them into making exploitative agreements and purchases. This feature of the Black Opticon searches out and targets vulnerable African Americans online and offline for con-job inclusion. Predatory surveillance is the flip side of the exclusionary-surveillance coin.
Discriminatory predation makes consumer goods such as automobiles and for-profit education available, but at excessively high costs. Predation includes selling and marketing products that do not work, extending payday loans with exploitative terms, selling products such as magazines that are never delivered, and presenting illusory money-making schemes to populations desperate for ways to earn a better living. The FTC has gone after wrongdoers for the practice of targeting low-income individuals. The agency has noted that Native Americans, Latinos, African Americans, immigrants, and inmates and their families are disproportionately impacted by fraud. These populations are lured through false, unfair, or fraudulent online and offline advertising, marketing, and promotions for consumer goods, medical products, government services, education, employment, and business opportunities.
Recent litigation focused on the company MyLife.com. MyLife.com is an online enterprise that sells profiles of individuals, marketed for purposes including housing, credit, and employment-screening decisions. These services are particularly important to communities of color, where limited income, weak credit, and criminal-justice histories can combine as barriers to obtaining basic necessities. Privacy provisions of the Fair Credit Reporting Act (FCRA), along with provisions of the Restore Online Shoppers' Confidence Act and the Telemarketing Sales Rule, were deployed in a lawsuit brought by the FTC and the Department of Justice. The suit alleged that MyLife.com violated the FCRA by “failing to maintain reasonable procedures to verify how its reports would be used, to ensure the information was accurate, and to make sure that the information it sold would be used by third parties only for legally permissible purposes.” The suit also importantly alleged that defendant MyLife.com fraudulently enticed consumers into purchasing automatically renewing subscriptions to its services by providing them with false and unverified information about their own backgrounds and others, including criminal histories, and that MyLife.com lacked procedures for both determining the accuracy of information and providing user notices. Although the district court denied the defendant's motion to dismiss and granted the government partial summary judgment, the court did not grant summary judgment on the FCRA privacy-related claims. The suit resulted in an injunction and $21 million in civil penalties.
More enforcement lawsuits of this type, that make use of existing law and the FTC's unfair-trade-practice authority, could help deter online predatory practices and shrink the Black Opticon. I believe that future litigation enforcing new race-conscious privacy laws enacted to address discriminatory predation and the disparate impact of data abuses on people of color could help even more.
To reiterate, Part I has illustrated the three sets of data-protection problems that comprise the Black Opticon. The Geofeedia incident, discussed in Section I.A, demonstrated panoptic problems of oversurveillance. Oversurveillance undermines African Americans' sense of security and fair play by placing their lives under a level of scrutiny other groups rarely face, exacerbating the problem of unwarranted encounters with the criminal-justice system. Facebook's discriminatory advertisements, discussed in Section I.B, embodied ban-optic problems of racially targeted exclusion from opportunity. Ban-optic practices online encase African Americans in a racial caste system whereby roles and opportunities are fixed by perception of race rather than need, merit, or ability. Finally, the MyLife.com litigation, discussed in Section I.C, illustrated the con-optic problems of targeted fraud and deception. African Americans deserve the attention of marketplace opportunity, but on the same, more favorable terms extended to other groups.
The Black Opticon pays the wrong kinds of attention to African Americans, using the resources of internet platforms and other digital technologies to gather, process, and share data about who we are, where we are, and to what we are vulnerable. In Part II, I consider how and whether changes in the design and enforcement of privacy law could help combat the Black Opticon.
[ . . .]
Simone Browne innovated “the concept of racializing surveillance,” defined as “a technology of social control where surveillance practices, policies, and performances concern the production of norms pertaining to race and exercise of ‘a power to define what is in or out of place.”’ Digital platforms are racializing technologies in this sense. Despite some scholars' rejection of the panopticon metaphor that it enfolds, the Black Opticon is a useful, novel rubric for characterizing the several ways African Americans and their data are subject to pernicious forms of discriminatory attention by racializing technology online. Online attention can work to keep Black people in an historic place of social and economic disadvantage.
Digital society and online platforms “reinforce and reproduce racist social structures” through “software, policies, and infrastructures that amplify hate, racism, and white supremacy.” They cause social harms such as privacy loss, political harms such as threatening democratic discourse and choice, as well as the abuse of economic power. Platforms could in theory use their resources voluntarily to counteract these abuses. Instead, Big Tech struggles with self-governing its platforms to deal with racist content and discrimination in opportunity, services, and privacy. Firms gesture at change more than they fundamentally change. The paucity of people of color in management and leadership positions in Silicon Valley worsens the situation, since their absence excludes “advanced-degree holders [in ethnic studies] ... with deep knowledge of history and critical theory.”
This Essay advocates for treating some of the ills affecting African Americans on online platforms with privacy and data-protection reforms, while recognizing that the complete remedy demands “a coordinated and comprehensive response from governments, civil society and the private sector.” I believe the Black Opticon of panoptic, ban-optic, and con-optic discrimination is amenable to attack by well-designed, race-conscious legal reform. Which of the three pillars of the Black Opticon will prove most amenable to destruction through privacy law is an open question I have not attempted to answer here.
Racially equitable policies and aspirations emerge to varying degrees in proposed and enacted privacy and data-protection law, such as the VCDPA, the proposed FTC privacy bureau, and the proposed federal data-protection agency. These reform agendas have a grave purpose, as grave as the purposes that motivated the twentieth-century civil-rights movements. Understandings of the specific impact of diminished data privacy and inadequate data protection on African Americans will continue to unfold. But in the meantime, we should consciously seek to regulate platforms--and, indeed, all of the digital economy--with an agenda that centers nondiscrimination, antiracism, and antisubordination on behalf of African Americans.