Abstract

Excerpted From: Maurice R. Dyson, Algorithms of Injustice & the Calling of Our Generation: The Building Blocks of a New AI Justice in the Technological Era of Global Predatory Racial Capitalism, 5 Howard Human & Civil Rights Law Review 81 (Spring 2021).

 

I would like to thank Dean Holley-Walker, Howard University School of Law, Professor Darin Johnson, Kayla Strauss, the staff of the Howard Human & Civil Rights Law Review, and the beloved spirit and legacy of C. Clyde Ferguson Jr., which continually inspires and guides this Annual Symposium. And I am beyond grateful for your thought leadership, and for that of my colleagues who have embraced me in this circle, to speak on one of the most profound and unprecedented matters of precious import to the legal profession and to our nation at this pivotal moment in history.

Indeed, we have seen a great deal of development in the space of artificial intelligence and its profound implications for civil rights and civil liberties, but the stakes are far greater than society has yet acknowledged, and its profound responsibilities are correspondingly far deeper than we have yet assumed. We have already envisioned the benefits that could derive from AI technology capable of holding a mirror to ourselves and our conduct. It can allow us to improve our efficiency; our ability to diagnose, assess, and treat our bodies' health; our service delivery; and ourselves, when so informed and imbued with this intent. We can see its promise in improving our society's ability to meet the substantive needs of the human condition; but that is only if we vigilantly persist in the cause of our mutual understandings to speak to the root underlying conditions of our inequity. That inequity cannot be allowed to continue manifesting in laws and policies that become the technological equivalent of Jim Crow.

But what we confront today is a far greater golden opportunity to shape this emerging technology in ways that can help us to become more aware. To be more aware of the trajectory not just of our actions, but ultimately of our conscious and subconscious patterns of thinking and feeling, for it is then that we can really begin to understand how AI algorithms seek to exploit those hidden and overt biases within us. So legal technologists must now seek a means of making the new: one where loving correction comes through accountability, and where love, in practice, is not circumstantial to politics, nor transactional, conditional, or carceral in nature, in new AI applications.

Because we know we are branches of the same tree of humanity, we refuse to have the Rosewood and Tulsa tragedies of yesteryear become new proxies of algorithmic predatory capitalism today. The consciousness that produces these results seeks profit at any and all costs, at humanity's expense. And now is the time for humanity to take its stand, front and center, to unite and embrace a jurisprudence that requires truth, reconciliation, and accountability in law and technology.

But we must remember our rights have also been discursively constructed, and that is no less true in dialogues shaped by AI, where we see even greater parallels to our own lexicon. For instance, the rhetoric of strict scrutiny has never been just rhetorical, not for those on the receiving end of its inhumanity. And when it comes to AI, as with the law's language, context matters. Strict scrutiny is not only a jurisprudential interpretation in constitutional law. Strict scrutiny has become a reality--a massive state strict scrutiny, born of ever-increasing AI surveillance technologies.

That strict scrutiny mindset also seeks to be fatal in fact when it is rooted in the consciousness of Jim Crow. This same consciousness, which also produced the U.S. Capitol insurrection, undoubtedly reveals a nation that must be put into balance with itself. We must come to terms with a deeper truth that has become evident from the insurrection, which is so important in achieving systemic justice now.

Much of what we have seen in the attempts to undermine legitimacy has led to calls of “fake news.” But if, in this polarization, we could put politics aside for just a moment, we would realize an important question that never gets asked, nor answered, enough: setting aside any intention to distort political perception, what becomes of the rhetoric of fake news as a viable argument now, when deep fakes, secretive surveillance, facial recognition technology, voice spoofing, and targeted algorithms can rock the bedrock of democratic legitimacy?

When such technology exploits user preferences to manipulate the engineering of a Brexit vote so as to alter and reshape the geopolitical economic power structure of nations, to level a massive misinformation campaign in a domestic U.S. election, or to instigate ethnic tensions in Myanmar, can it truly be said that we are far away from the day this technology, which exploits destructive sentiments, might be used to further undermine law and accountability?

Just this week, we can read that Facebook is targeting right-wing paramilitary extremists with ads for tactical gear clearly intended for combat, at a time when the nation is crying out for reconciliation and healing. Facebook surveils 2.6 billion users and filed a patent to collect user face print information without their knowledge to sell to merchants, including a “trustworthiness score” for each face. We must refuse the digital caste system that AI racism and classism seek to produce. The Facebook-Cambridge Analytica scandal--where the personal data of eighty-seven million Facebook users was allegedly harvested in order to sway a presidential election--has raised, and continues to raise, alarms. Facebook's platform has also allegedly been used to help incite WhatsApp lynchings in India, and to spread QAnon misinformation and the hate of the Proud Boys. It is also said that Beijing surveils 1.4 billion people to develop a “social credit score,” which assesses each citizen's social and economic standing based on data, in order to discriminate in services and to track those of different faiths and backgrounds who may be deemed politically undesirable, as a form of enforcing authoritarian social control. Baltimore police used facial recognition to quash First Amendment rights to peaceably assemble by targeting those with outstanding warrants in the crowd at the Freddie Gray protests and arresting them, which in turn suppresses and chills demands for police accountability.

It is against this backdrop that the National Institute of Standards and Technology released a report analyzing 189 facial recognition algorithms from ninety-nine companies, which revealed higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of ten to one hundred; that is, errors on African American and Asian faces were ten to one hundred times more frequent than errors on Caucasian faces. The study revealed that African American female faces were the most likely to be misidentified disproportionately, and thus subject to the greatest possibility of false accusation. The Gender Shades Project and the great work of the Algorithmic Justice League have shown significant disparities in facial recognition technologies for people of color, with systems often performing best on lighter phenotypes. And while there is much talk of the promise of AI to help law enforcement with crime fighting, the reality is that AI algorithms not only misidentify, they lack predictive value for recidivism in law enforcement. When COMPAS' predictive recidivism scores of 7,000 people arrested in Broward County, Florida, were compared with the criminal histories of those same people over the next few years, “the score proved remarkably unreliable” in forecasting violent crime: “only twenty percent of the people predicted to commit violent crimes actually went on to do so.” The researchers also concluded that the algorithm was twice as likely to falsely flag Black defendants as future criminals as it was to falsely flag White defendants.
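To make the arithmetic behind those disparities concrete, here is a minimal sketch, in Python, of the kind of group-wise false-positive audit that underlies findings like the NIST and ProPublica results described above; the group labels and counts are hypothetical placeholders, not figures from either study.

```python
# A minimal sketch of a group-wise error audit. All counts below are
# hypothetical placeholders, not figures from the NIST or ProPublica studies.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    false_positives: int   # people wrongly flagged (e.g., wrong face match, wrongly predicted to reoffend)
    true_negatives: int    # people correctly not flagged

def false_positive_rate(g: GroupOutcomes) -> float:
    """FPR = FP / (FP + TN): how often the system wrongly flags someone."""
    return g.false_positives / (g.false_positives + g.true_negatives)

# Hypothetical audit data, broken out by demographic group.
audit = {
    "Group A": GroupOutcomes(false_positives=45, true_negatives=955),
    "Group B": GroupOutcomes(false_positives=5,  true_negatives=995),
}

rates = {name: false_positive_rate(g) for name, g in audit.items()}
for name, rate in rates.items():
    print(f"{name}: false positive rate = {rate:.1%}")

# The disparity ratio is the multiple by which one group bears more false accusations.
disparity = rates["Group A"] / rates["Group B"]
print(f"Disparity ratio: {disparity:.1f}x")  # here 9.0x; the NIST report described gaps of ten to one hundred times
```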

A judge in Wisconsin overturned a plea deal of one year in jail, which had been jointly agreed on by the prosecution and defense, and gave the defendant more jail time (two years in state prison and three years of supervision) because of the defendant's high “risk assessment” score. Is this an acceptable way to use an AI-determined risk assessment? This is not just an abuse of discretion. It is the abdication of discretion in the solemnity of the judicial function. To see that this technology is unreliable and unfair, consider Amazon's facial recognition technology, Rekognition, which allegedly falsely matched twenty-eight members of the U.S. Congress with criminal photo arrays. What have been the fruits of this technology in the hands of the self-serving? Predictive policing, sentencing, and bail reform show a disturbing pattern that relies on compromised policing practices. Such deeply flawed practices and over-policing in communities of color produce the tainted arrest data used to determine risk assessment scores. But make no mistake, dirty data has never led to clean outcomes. And clean, fair outcomes are even less probable with few Black and Brown people on code engineering teams. Indeed, we also remember Timnit Gebru, who, as one of the few Black women in her field, was allegedly fired from Google for speaking out about unethical AI.

Even more troubling is that risk assessment factors in AI discriminate on the basis of immutable characteristics, and now criminalize those characteristics in ways that manifestly violate notions of fundamental fairness in our justice system, when an individual has no control over the factors the algorithm employs. Should these factors equal probable cause? Or are we using AI as yet another means to criminalize poverty and race in the United States? Consider the immutable factors that are commingled indiscriminately in an algorithm that forms the basis for risk assessment scores. They include:

• Geography of birth
• Parents' marital status
• Parents' education
• Where you were raised & surrounding income level
• Number of arrests in your neighborhood
• Type of crimes committed in your community where you grew up
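
Taken together, a score assembled from such factors might look, in purely hypothetical form, like the sketch below; the feature names and weights are invented for illustration and reproduce no real tool's formula. The point is that nothing in the inputs describes anything the scored individual has actually done.

```python
# A purely hypothetical sketch of a risk score built only from circumstance
# factors like those listed above. Feature names and weights are invented;
# no real risk assessment tool's formula is reproduced here.

hypothetical_weights = {
    "born_in_high_poverty_zip": 2.0,
    "parents_unmarried": 1.0,
    "parents_without_degree": 1.0,
    "raised_in_low_income_area": 1.5,
    "neighborhood_arrest_count_scaled": 2.5,
    "neighborhood_crime_severity_scaled": 2.0,
}

def risk_score(person: dict) -> float:
    """Weighted sum over circumstance features; none of them is the person's own conduct."""
    return sum(hypothetical_weights[f] * person.get(f, 0.0) for f in hypothetical_weights)

# Two people with identical (non-existent) conduct but different birth
# circumstances receive very different scores.
person_a = {"born_in_high_poverty_zip": 1, "parents_unmarried": 1,
            "raised_in_low_income_area": 1, "neighborhood_arrest_count_scaled": 0.8}
person_b = {"neighborhood_arrest_count_scaled": 0.1}

print(risk_score(person_a))  # 6.5
print(risk_score(person_b))  # 0.25
```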

Is this approach determining danger and culpability based on conduct? No. By these factors alone, the algorithm would likely declare that I should not be here. Yet here I stand before you as the living, breathing truth that defies the mechanistic Calvinism of AI risk assessment factors. And yet here you are. We are as broad and diverse as we come. These risk assessment scores, and the factors they rest upon, simply cannot be left unquestioned or ignored in the long history of the U.S. as a predatory racial capitalist state. Indeed, their greatest reliability rests in their unreliability to be used fairly for people of color and the poor. And if this were not an egregious enough affront to our precarious rhetoric and unrealized notions of meritocracy, fairness, and reliability, we can see AI tech is rapidly being used in attempts to unilaterally allocate life opportunities for all Americans. You will have awakened to a world where it is not the Constitution, nor the rule of law, but an AI algorithm that determines unilaterally, without your knowledge, what rights you have and how you may or may not exercise them, given the private information exchanges defining the global marketplace. Think about how an algorithm can set the terms and conditions of your digital destiny. This is the era of the “score-ranked digital caste system,” and we see it already determining who gets what, including:

• Home mortgages
• Car loan and car insurance
• University admissions offers
• Credit financing
• Employment
• Health benefits
• Consumer rewards
• Police stops, arrests, and detention
• “At risk” student designations to justify surveillance in public schools

The history of fabricated evidence and racial hostility at the Capitol should be evidence enough to serve as a reflective mirror for the nation to see for itself the need to put transparent, accountable, replicable, reverse-engineered safeguards into the technology. But perhaps even that can be spoofed in deep neural networks. Simply put, the use of these factors criminalizes poverty without regard to any individual culpable conduct whatsoever. And that in itself presents a looming legitimacy crisis that cannot be easily resolved. That crisis of legitimacy is further amplified when the powers that be use, abuse, and lose our biometric data. And when, without our knowledge, it is used to harness, hack, and harm us, then the Republic has already lost the war on legitimacy. It has already lost the war in failing to regulate use, in failing to secure data, and in failing to validate that hacked or harnessed data is not used in any criminal adjudication or investigation. For what good is a “biometric match” or “facial match” to justify incarcerating anyone when there can be no clear guarantees of chain of custody or authenticity? But it can and should be used against any authority or entity who nonetheless proposes its continued use in prosecutions or investigations, because that proposal then demonstrates their impermissible bias. The same is true when reliable data can be illicitly harnessed and used unreliably to manufacture “biometric matches” at will. Legitimacy is further compromised when, as Timnit Gebru points out, AI models not only favor the wealthy, but leave an unacceptably large carbon footprint that worsens climate change and harms people of color the most. Legitimacy is compromised when a Facebook machine translation allegedly distorts a Palestinian's words from an innocuous “good morning” to “attack them,” leading to his arrest by Israeli police. Does this look like a friendly AI technology meant to make our lives better, or one that is de facto operating to further exacerbate divisions and inequality in our society? At what point does the promise of AI technology become cover for turning a blind eye to these realities?

We therefore reject a racially biased technology, and a racially biased use of it as well. We reject the use of this technology as currently practiced until a racial reconciliation based upon technological justice, not rhetoric, is first and finally made. It is a justice that begins with a national and international moratorium. And if that moratorium is violated, the violation should result in reparative treble damages and injunctive relief for the harm caused on platforms to those harassed by AI-induced racial profiling, or by misidentification that becomes the new digital justification for stop and frisk. A student denied entry into school or college based only on an algorithm, an Uber driver denied work because of faulty facial technology, or a CEO duped out of nearly $250,000 by a voice spoof begin to show the depth of concern. With such broad use of this technology, can we really ever now discern, authenticate, and therefore trust what actions are taken, and by whom--be they cybercriminals, supremacists, law enforcement, or others?

With the specter of this technology, can we ever discern from now on whether probable cause is technologically manufactured? Or whether clear and convincing evidence truly exists, when this technology can so easily fabricate evidence--a tactic our history of race relations in this country shows we have been far too inclined to use in order to deny one's liberty interest? When does doubt become reasonable doubt as a result of this increasing technological manipulation? But this technology genie has been released, and it is even in the hands of cybercriminals. Therefore, it is both reasonably prudent and urgent that we act with all deliberate intent, and we must endeavor from now on to be clear in our understanding that intent in the law must be aligned to an understanding of the impact such algorithms of injustice produce. They should be informed, scrutinized, and analyzed by their consequences on everyday people's lives.

In jury sequestration, how do we know jurors' psychological preferences are not exploited, unbeknownst to the juror, when the AI algorithm feeds their entertainment preferences in the shows they watch in the comfort of their own homes? If algorithms and code developers have learned to psychologically profile their users to manipulate an election, what has stopped, or will stop, others from infiltrating the psyche of prospective jurors? Witness tampering, juror tampering, and the fabrication of evidence may be taken to a whole new level where there is no transparency; and with such a high degree of secrecy as now exists over this technology, such questions must be answered forthwith with transparency and accountability.

In a world of increasing prevalence of deep fakes, voice spoofs, conspiracy theories, and suspect intentions, it is only in examining their impact on our society that we can ever be more certain the purported uses intended for such technologies remain in faithful congruity with our democratic principles, rather than becoming unmoored from truth itself. How can we discern from now on what is true and what is not? How may we be assured what is real and what is doctored? If this challenge is taken on seriously, then these questions cannot, and should not, be denied or delayed any longer, for they call into question a legal system where sound, principled adjudication is dependent upon sound fact finding. How can we be assured as a legal community and as a society that these technologies are not now being employed for misguided ends, given all of this dubious context and history?

If we can disrupt political elections, what does that say for judicial elections, sheriff elections, and those who must endure their consequences when shaping law and policy toward us? Does the technology become useful to target judges for recall, or can it be used to remove bias once and for all? There is much promise yet to be substantiated, but the pitfalls pose a far greater danger to the rule of law and the legal profession. The time to act has come; indeed, it is long overdue. This technology has already become increasingly pervasive and more sophisticated. We cannot legislate as if we operate on a blank slate.

Lives right now are being impacted, and the ability to discern the truth, which is so central to our principle of equal justice under the law, has now been profoundly called into question for all. We must independently evaluate AI use by strictly scrutinizing the code developers and the algorithm design based on the outcomes produced. We can strive to know by reverse engineering the programmed classifiers that these algorithms of injustice are trained on. We shall then know the true purpose, or “algorithmic functional intent,” by its outcome nexus. A good and productive use of AI technology, as a means to understand how to better serve users, does not bring forth corrupt ends of social control and division. But neither does a corrupt use of AI technology bring forth good fruit for a democracy and its people. A code developer who intentionally or foreseeably stirs up conflict, or exploits it to then sell ads that promote weapons to the perpetrator, is just as much an accomplice as the one who pulls the trigger. And they must take their victims as they find them and be liable to them to the full extent of the foreseeable and unforeseeable harm their actions cause.
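One way to operationalize this kind of outcome-based scrutiny is a disparate impact audit of the decisions an opaque system actually produces. The following is a minimal sketch, assuming a hypothetical set of decision records and using the familiar four-fifths (80%) benchmark from disparate impact analysis; none of the numbers reflect any real system.

```python
# A minimal sketch of an outcome-based audit: infer what an opaque algorithm
# is doing from the decisions it produces, via a disparate impact ratio of the
# kind used under the four-fifths rule. The decision records are hypothetical.

from collections import defaultdict

# Each record: (group label, whether the algorithm granted the favorable outcome)
decisions = [
    ("Group A", True), ("Group A", False), ("Group A", False), ("Group A", False),
    ("Group B", True), ("Group B", True),  ("Group B", True),  ("Group B", False),
]

granted = defaultdict(int)
total = defaultdict(int)
for group, favorable in decisions:
    total[group] += 1
    granted[group] += int(favorable)

selection_rate = {g: granted[g] / total[g] for g in total}
ratio = min(selection_rate.values()) / max(selection_rate.values())

print(selection_rate)                           # {'Group A': 0.25, 'Group B': 0.75}
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.33, well below the 0.80 benchmark
```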

Yes, in the ecosystem of deep fakes, every use of AI technology must be examined by its own materialist outcomes on people of color as upon nations. But such accountability requires not just a linear understanding of a causal nexus, when that nexus can be so cloaked and disguised by technology itself. Instead, it requires an inferential analysis rooted in the careful use of inferential legal tools such as res ipsa loquitur and the disparate impact principle. Indeed, the troubling outcomes of the AI algorithms employed on social media platforms “speak for themselves.” There are promises and pitfalls to this technology. The weight of the evidence shows not only that the technology is biased in design, but that its use can be as well. In looking at its impact in today's context, we must ask soberly, how likely is mass incarceration to increase or decrease with the exploitation of this technology in the hands of the supremacist politician, regulator, or police? Or how likely is it to be used to analyze those sworn to serve and protect for psychological bias against Blacks, or to shed light on any suspected supremacist affiliations based on their profiles or facial data sets, to target remediation or corrective efforts? Will it be used to combat racial gerrymandering or voter suppression, or will it be used to enforce them? How likely is it that we will use AI to constructively uncover and combat AI bias? Time will tell. But we seek a trust that aims to rebuild society on better terms for humanity, not a trust built with the aim to further exploit it to its own demise.

But we have nonetheless allowed an invasive technology that harms all communities, but especially communities of color. And we have done so with absolutely no oversight, no enforceable regulations, no legal standards, no benchmarks, no uniform best practices, no training, and no transparency--a state of affairs that so blatantly violates so many of our existing legal rights. We cannot stay asleep at the wheel any longer as a society or a profession. Perhaps all state and private code developers, and those who contract with them, should be cautioned not to use further a technology that can lead to massive legal exposure. Given the enormity of that exposure, it seems only appropriate that damages be calculated not merely to restore the victim, but to truly deter the wrongdoer, and that can only happen when we stop allowing tort judgments that are meager relative to the market capitalization of companies to be simply written off as the cost of doing business. That does not seem unreasonable when one considers AI can violate the law, if not its clear spirit and intent. Indeed, AI tech is a hydra whose deep-reaching tentacles violate the intent and spirit of so many of our laws, including the:

• First Amendment Freedom of Association;
• Fourth Amendment Unreasonable Searches without Probable Cause;
• Fifth Amendment Due Process Notice issue because of lack of transparency in disclosure or potential Brady evidence;
• Sixth Amendment Confrontation Clause problem where evidence is of a testimonial nature;
• Federal Rules of Evidence against prejudicial and propensity evidence under Rules 403, 404, and 609, and the rule against hearsay;
• Fourteenth Amendment Equal Protection Clause problems;
• Title VI, Title VII and Title IX disparate impact concerns;
• Federal and state antitrust & RICO laws;
• Defamation of Character and False Light;
• Invasion of Privacy based on intrusion upon seclusion, disclosure of private facts, and appropriation of name & likeness;
• Conversion, consumer & data rights protections;
• Informed consent principles; and
• False imprisonment and false arrest based on AI misidentification.

This behemoth of a problem in the law is precisely why a moratorium is needed now.

Transparency is desperately needed. The “proprietary work product” of private companies hawking their services to states continues to be a familiar mantra. But we deserve to see transparency into the impermissible risk factors that may have been incorporated into risk assessment systems such as the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”), the Public Safety Assessment (“PSA”), and the Level of Service Inventory-Revised (“LSI-R”). Such opaqueness leaves us with this question: What can we truly know? Where the scaffolding of a worldwide COINTELPRO must be dismantled, and FOIA records continue to be litigated with resistance and incomplete, piecemeal disclosure, what happens when the code is proprietary, when no open records laws apply, and when no transparency into the algorithm exists? This too speaks to the pressing need for a moratorium. That need is only further underscored when one considers, for instance, that data anonymization, data cleaning, and direct calibration are methods and techniques that are not sufficient to eradicate bias when word clusters and second-order associations still signal identities with immutable characteristics and affiliations--all of which can reflect and amplify bias. Linear models, and especially deep networks, still hide bias and should not be used to train linear classifiers. Adversarial learning can help, but it needs to take into account not just identity classifiers, but power dynamics in context. Inclusive curation may ensure greater accuracy for race and ethnicity, but that alone will not assure that use cases are appropriate as employed.
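To illustrate why anonymization alone fails, here is a minimal sketch, under invented data and correlations, of how a second-order association works: a model that never sees the protected attribute can still reproduce the disparity whenever a correlated proxy feature (here, a hypothetical zip-code flag) remains in the data.

```python
# A minimal sketch of why "anonymizing" data by dropping a protected attribute
# does not remove bias: a correlated proxy feature carries a second-order
# association that lets a model trained without the protected attribute
# reproduce the same disparate pattern. All data and correlations are invented.

import random
random.seed(0)

records = []
for _ in range(10_000):
    protected = random.random() < 0.5                          # attribute that gets "removed"
    proxy_zip = random.random() < (0.9 if protected else 0.1)  # correlated proxy that stays
    records.append((protected, proxy_zip))

# An "anonymized" model never sees `protected`, but if it keys on the proxy
# (as a model trained on biased historical labels tends to), the flagged
# population still skews sharply along the protected line.
def predict(proxy_zip: bool) -> bool:
    return proxy_zip

def flag_rate(group: bool) -> float:
    flagged = sum(predict(z) for p, z in records if p == group)
    count = sum(1 for p, _ in records if p == group)
    return flagged / count

print(f"Flag rate, protected group: {flag_rate(True):.0%}")   # roughly 90%
print(f"Flag rate, other group:     {flag_rate(False):.0%}")  # roughly 10%
```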

The conduct of many tech companies requires, in the words of one Republican president, that we “trust but verify.” Both compromised leaders and unelected bureaucrats across the nation have de facto made a new strict scrutiny a ubiquitous technological reality, and it must now become a two-way street. There must be a collective audit through a transparent, independently organized and recorded global crypto cyber ledger monitored by the world, including a new generation of civil rights lawyer-technologists and civil liberties tech watchdogs that are of the people, by the people, and for the people, wherein it must be disclosed who uses these technologies and for what purposes.

These local and global audits should be collaborative but also independent of government and private venture fiat, with legal, financial, and “technological expungement remedies” enforceable by “global digital citizens” or by those granted standing against whom the algorithm is demonstrably calculated to be adversely employed. A trusted, decentralized, worldwide algorithmic organization for global oversight is needed. And Congress must make quick work of a national moratorium until sensible legislation that builds off the Algorithmic Accountability Act can step effectively into the void and begin to bring order out of chaos.

But legislation aside, we find ourselves inevitably returning to that deeper truth. You know it. Perhaps you have seen it at dinner tables and in chats with colleagues of a different political stripe. We are embedded in our biases in ways that seem resistant to reasonable and rational discourse. We know we have a deeply rooted psychological need that births misguided allegiance to misguided ideals. We learn from our modern political context that rights and remedies are rhetorically constructed even without AI. But under AI, is it big tech, and not the rule of law, that seeks to determine what and how rights may be recognized, and by whom? AI is already a weaponized discursive force, employing and exploiting racially charged language, images, and views calculated to dehumanize all of humanity for the ignoble goal and unlimited appetite of avaricious profit-making at the expense of life.

When tech platforms exploit deeply rooted emotional predicates in ways that produce violence against their own people, it is time for the law to take notice and act with strict accountability. The law provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” But when that service provider distorts, curates, targets, and amplifies hate and grossly misinformed speech, or worse, intentionally or recklessly allows it to be weaponized against the vulnerable, or to undermine open democratic choice, it no longer operates as a neutral intermediary platform--it becomes a publisher deserving of no absolute immunity and of greater strict scrutiny as an accomplice to the atrocities its algorithms facilitate. If they hand the gun to someone else, they ought to be held responsible as if they pulled the trigger themselves. We still require immunity to preserve freedom of speech, but it should be removed for those with an inordinate amount of de facto market power to influence speech, as evidenced by market capitalization, global reach, past conduct, and the amplification of their algorithmic design and its actual or foreseeable impact on real people's lives.

As lawyers, we supplant concepts of emotions with “intent,” but in the world of AI, it is our words, emotional emoji icons, family photos, posts, likes, books, consumer habits, and legal and illegal actions that all matter, and they clearly evoke an energy that has been harnessed, coded, and engineered to drive our collective behavior. It is time to rewrite our own story and begin a new plan--one where we can find common cause with media feeds, rather than one calculated to spread niche paramilitary interests tied to violent supremacist views. Can we be more ambitious and try to create tech that builds rather than undermines civic society, and that strengthens rather than weakens our system of checks and balances? Can our algorithmic designs privilege a sense of love and community over fear--to build the bridges that bind, heal, and help each other become the best versions of ourselves? That is where the promise of AI can be truly realized, and not just the pretext of promising rhetoric. But what guarantees can we install in the tech, and in our oversight, to ensure that free speech won't be undermined by algorithms that place Black Lives Matter protests in social media feeds that invite violent confrontation? Suddenly, freedom of speech is not so free when guns are brought to rallies and weaponized algorithms--for the sake of targeted revenue and targeted chaos--are deployed with ill-advised or ill-intended ends.

Justice Brandeis said: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.” But it would seem evident that this is an assumption that no longer holds true, if it ever did at all. Is more speech better when it may mean more politically motivated conspiracy theories, falsehoods, and assaults on democratic institutions? We have yet to know who would do the monitoring of AI and how. How would algorithms be revised? We still have no plan, and that itself is telling as to whether this technology will actually prove up to its promise, or whether we will fall prey to the pitfalls of social control this technology may portend. Social control through algorithmic design is not a bridge too far when we consider we are a society that seems less interested in investing in education to enlighten its citizenry than we are in social control, as evidenced by investing billions in for-profit prisons, for-profit debtor jails, and for-profit probation.

Again, context matters, and it is significant that the recent authoritarian culture wars, exacerbated by exclusionary algorithmic design, cast doubt on the technology's unchecked use and its ability to realize its promise and potential. When it comes to tech companies, we must verify that a robust, good faith, effective, institutional implicit bias check was conducted of the organizational culture, the staff, the internal processes employed, and the final design of the AI algorithm, product, or service by independent auditors. What counts as “robust” for purposes of an implicit bias check should be determined in reference to both the procedural and substantive mechanisms and safeguards involved in all decision-making impacting data outcomes.

Interpretative policy guidance for determining “procedural & substantive robustness” must also be clear. For instance, we must know whether a defendant established that there is diversity of engineering staff involved in algorithm development, or whether there are equalizing decisional mechanisms for all staff to have an equal voice in the design process. We must know if there is an implicit bias check of the data driving machine learning. We need to know if there was a confidential in camera review of algorithm formula assumptions by potentially affected stakeholders and an independent technical review. Today, trade secrets are shielded through use of protective orders, in camera review, and monitoring of attendees. Proof of reasonable necessity still governs the need for protected disclosure, and that provides a framework to balance rights and responsibilities fairly and transparently here too. We must ask, was there an “algorithm use im

[. . .]

We can look in the mirror if AI allows us to reflect and change ourselves, but that will and intention is up to us, as is what intention we bring into AI to see that it is accurate, accountable, and constructive. Given the manifest destruction, division, and upheaval we have seen wrought across the globe by inflammatory content on tech platforms, it is not hard to see what intent may be operating in all of the dismissive ambivalence, half-hearted measures, and cosmetic fixes we see in headlines with no real, true desire to get it right for the American people. Instead, what we have now, at best with AI, is a “distorted mirror,” one that reveals the distortion in us as a society that is trained on weaponizing our biases rather than overcoming them, and on enforcing white supremacy rather than yielding to a vision where there is more than enough for everyone to meet our essential needs with basic human dignity for all--in recognizing we are children of our Creator, where each of us is just one of its many different facets of the collective One. We can become a healed society that is centered, balanced within itself, and can finally then see, with our own inner eyes, the tangible outward legal and illegal creations of our own hidden intentions to externally rebuild society. We have seen what kind of world is erected around us when our internal intentions operate from boundless greed and self-interest. It is now time to imagine what an external world looks like when we begin to build it from within--from the solemn, internal, sincere intention of loving our neighbor and planet as much as ourselves. That is where and how the true promises of AI will finally materialize into justice.

Now it is said, whether it is vector machine learning, which looks at multiple examples and variables to predict behavior, or deep neural networks, where several layers of interconnected but independent neurons find relational connections between data points, that each method supposedly defies our ability to fully understand AI's decision-making process and leaves us unable to predict AI's decisions based upon a discretionary analysis of complex factors. We often talk about this as the “Black Box Problem” AI presents for transparency and accountability, but a black box for whom, exactly? When it comes to Black Americans and people of color at the receiving end of algorithmic designs that harm, that simply cannot stand as a sufficient response--it never has and it never will.
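For the auditors and watchdogs contemplated above, the box need not stay entirely black. The following is a minimal sketch, assuming a hypothetical model, synthetic data, and the scikit-learn library, of permutation importance: one common outside-in probe that reveals which inputs an opaque model actually leans on, without requiring access to its internal parameters.

```python
# A minimal sketch of probing a "black box" from the outside: permutation
# importance measures how much accuracy drops when one input column is
# shuffled, revealing which features the opaque model relies on.
# The model and data here are hypothetical stand-ins (scikit-learn assumed).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2_000
X = rng.normal(size=(n, 3))                      # three hypothetical input features
y = (X[:, 0] + 0.1 * rng.normal(size=n)) > 0     # outcome driven almost entirely by feature 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {importance:.3f}")
# Feature 0 dominates: an outside auditor can see what the black box relies on,
# even without access to its internal parameters.
```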

The task before our generations is now clear. We must meet it head on. In his remarks, “The Calling of Our Generation,” Kennedy put it best as to the task before us when he declared:

In the long history of the world, only a few generations have been granted the role of defending freedom in its hour of maximum danger. I do not shrink from this responsibility--I welcome it. I do not believe that any of us would exchange places with any other people or any other generation. The energy, the faith, the devotion which we bring to this endeavor will light our country and all who serve it--and the glow from that fire can truly light the world.

Thank you.


Professor of Law, Suffolk University Law School; J.D., Columbia University School of Law; A.B., Columbia College, Columbia University.