 Abstract

Excerpted From: Maya Weinstein, School of Surveillance: The Students' Rights Implications of Artificial Intelligence as K-12 Public School Security, 98 North Carolina Law Review 438 (January 2020) (Comment) (387 Footnotes) (Full Document)

 

The publicity around gun violence in schools has increased since the late 1990s, marked by horrific examples of mass shootings in the halls of K-12 schools and on college campuses. While the exact number of school shootings is disputed, the impact of school violence is undeniable. A national poll conducted in 2018 revealed that one-third of parents now fear for their children's physical safety in school, a statistic that reflects a twenty-two percent increase since 2013. Another survey reported that "[t]wenty percent of parents say their child has expressed concern to them about feeling unsafe at their school." The March For Our Lives movement, started by survivors of the 2018 mass shooting at Marjory Stoneman Douglas High School, saw an estimated 800,000 people turn out for its rally in Washington, D.C., to advocate for gun violence prevention legislation. In 2018 alone, state legislatures considered more than 300 school safety bills, more than fifty of which were signed into law. Student safety has also gained considerable traction as a priority within the federal government. Citing the Marjory Stoneman Douglas shooting as an impetus, President Trump created the U.S. Department of Education's Federal Commission on School Safety in March 2018 to study violence in schools and provide recommendations for proactive measures at the federal, state, and local levels to prevent school violence, mitigate outcomes of violent actions, and facilitate efficient responses to violent situations. The recommendations incorporate "best practices for school building security," including the use of technologies like video surveillance and screening systems. In response to fears of additional school violence and calls for enhanced school security, schools have begun tightening security through the use of emerging technologies.
While basic security cameras have been used as monitoring devices in schools for years, some schools are looking to more advanced technologies to gain a greater level of control over the campus environment. Recognizing the market opportunity, technology companies are developing new devices they claim will prevent or reduce the likelihood of school shootings. These new devices, which include advanced cameras and body scanners, use biometrics and artificial intelligence ("AI") to recognize faces; detect weapons, gunshots, and other threats; and track individuals' locations in schools. For the purposes of addressing school security, the main focuses of this Comment are facial recognition, ballistic detection, threat assessment, and location tracking, which schools have begun introducing in recent years. Despite the purported promise of biometric and AI technologies to protect students, these innovations present troubling students' rights concerns. An inherent tension exists between the desire to protect students from violence through the installation of biometric and AI technologies and the rights students--children--must sacrifice in service of that goal, namely their fundamental right to privacy. These technologies are intrusive; they involve capturing images of children, recording fingerprints, scanning social media, and tracking everything from movements to facial expressions. This is a significant amount of information to be recorded and associated with young people. Students may not understand the extent to which their personal information is being collected and shared, and high-level surveillance may alter the nature of the educational environment. While schools certainly need to prioritize student safety, the degree to which new surveillance technologies compromise student privacy is alarming.

The threat to student privacy is even more concerning given that these technologies are in their infancy. There is little evidence that they are effective or accurate, and there is even less information regarding the types of risks they pose to students and how to mitigate them. School districts are investing in costly security systems and sharing student data with law enforcement and security companies all in the name of protecting students, but most of these technologies have not been proven to stop school shootings. Some critics have challenged the accuracy of devices, such as facial recognition scanners, particularly when it comes to identification of younger people--the main focus in K-12 schools--and people of color, who already experience surveillance and law enforcement intrusion at disproportionate rates. This Comment addresses key legal issues surrounding advanced security technologies in public K-12 schools, including the impact on student privacy rights under relevant laws. It also explores the effects these technologies have on the educational environment. It argues that, in using AI surveillance technology in schools, privacy must be balanced against security concerns; the apparent issues with efficacy and accuracy of the technology should be addressed before implementation; and Fourth Amendment case law, federal student privacy legislation, and state laws need to be further developed, with states leading the way, to ensure the protection of students' rights. The scope of the analysis is limited to public schools because these schools are subject to more government control than private schools.

Part I presents background on AI technologies and an overview of the technologies that are currently in use or are in development for school surveillance.

Part II addresses potential harms to students resulting from AI surveillance in schools, including the implications of accuracy and efficacy issues in AI algorithms.

Part III delves into the application of relevant privacy laws, specifically the Fourth Amendment, the Family Educational Rights and Privacy Act ("FERPA"), and state laws, and demonstrates that the law has not progressed to the point of effectively protecting students from AI surveillance. In the end, this Comment argues that schools and governments have more work to do to protect students from technological intrusions that undermine their basic rights.

[. . .]

This Comment is not an argument against the implementation of lifesaving technologies in K-12 schools; the reality is that the AI surveillance technologies discussed in this Comment have not been proven to effectively save lives or prevent violence in schools. Furthermore, the black-box nature of AI programs, coupled with a lack of accountability by lawmakers and the U.S. Department of Education, prevents people from knowing exactly how these technologies are making decisions, which only increases the risk of pernicious behavior and due process concerns. The risks to the academic environment and the long-term impacts of machine bias and privacy violations are sufficient reasons to pause the rapid acquisition of these technologies. Rather than emphasizing prevention through mental health assessments and awareness programs, these technologies merely respond to a crisis that already exists.

It is a critical time to discuss the problems, along with the excitement, that AI brings. The information available about the impact of AI surveillance in schools and how existing privacy laws apply to this type of surveillance is limited; students and guardians must have the necessary information and ability to make informed decisions. The United States Supreme Court has ruled that Americans have the right to privacy and that this right extends to schools. It should not be so easy to compromise the societal value we have placed on individuals' rights.

More than anything, this must become an interdisciplinary conversation. Engineers should continue to develop AI surveillance technology to be maximally effective and accurate. Social scientists should research the impact of constant surveillance and potential false signals on children and young people. They should work together to eliminate bias in AI as well as determine the extent to which technologies like facial recognition work on younger, developing faces. Lawyers should consider liability and keep elevating the conversation around due process concerns. Lawmakers should become more educated about the benefits and negative consequences of the technologies. In addition, educators, students, and parents should be involved in these conversations. There are already efforts to incorporate AI education into K-12 schools to ensure students have the knowledge necessary either to enter the field themselves or at least understand their roles and opinions in a world that is watching them.

At the very least, it is critical that the issues with AI are acknowledged. The impact that bias can have on students could be traumatic and long-lasting. There has simply not been enough research into the potential risks of school surveillance technologies. The rush to adopt them is the monetization of fear--doing whatever it takes to make people feel a little better, even if it does not work and puts education and student well-being at risk.

We are progressively normalizing the use of these technologies. When students go to school, they should feel safe. Nonetheless, we cannot turn the educational environment into schools of surveillance without doing proper reconnaissance first.
