Abstract

Excerpted From: Laura M. Moy, A Taxonomy of Police Technology's Racial Inequity Problems, 2021 University of Illinois Law Review 139 (284 Footnotes)

Imagine you are a city councilmember and the city police are considering adopting a new technological tool that may make the agency more effective. The tool sounds interesting and useful, but you are concerned about the possibility that if the agency adopts it today, a year from now--after precious funds have been spent on it and it has become deeply integrated into police practice--news will break that the tool is aggravating racial inequity. Fortunately, a recently passed ordinance requires the agency to get city council approval before it can adopt this new tool. You plan to use the approval process to drive a careful and deliberate analysis of the tool now, evaluating potential inequity problems before it is adopted. But how do you do that? How do you know what problems to look for, and how do you apply this analysis to an unfamiliar technology?

The city councilmember in this hypothetical might be looking to leverage a Community Control Over Police Surveillance (CCOPS) ordinance, versions of which have passed in over a dozen cities nationwide. And she would have been hearing about inequitable police technology for some time. For years, civil rights and racial justice organizations have been sounding the alarm that new technology tools may aggravate inequity--especially racial inequity--in policing. “Law enforcement agencies have long exercised their power disproportionately in communities of color, and this imbalance persists today,” dozens of organizations wrote in 2016, adding that “[n]ew technological tools that amplify police power can amplify existing biases in policing.” Particularly in the era of the Movement for Black Lives, warnings like this one have captured the attention of journalists, scholars, policymakers, and the public. In-depth explorations of inequity in the context of specific technologies have proliferated, spurring, for example, widespread scrutiny--and in some jurisdictions outright rejection--of facial recognition technology for police use.

At the same time, driven in large part by growing public awareness of racial inequity and the increasing understanding that technology can have bias built in, advocates, policymakers, and scholars have proposed a variety of procedural interventions. Andrew Selbst has proposed that police agencies considering predictive policing technology be required first to create “algorithmic impact statements” investigating the likely effects of the technology. A team from AI Now has recommended the adoption of “algorithmic impact assessments” to evaluate a variety of automated decision systems used by public agencies. Bills have been introduced in Congress that would mandate “automated decision system impact assessments” in certain contexts. And a number of cities have already instituted new measures intended to compel greater transparency and scrutiny of new police tools. In particular, over the past few years, more than a dozen cities have adopted CCOPS ordinances establishing oversight mechanisms for technologies classified as “surveillance technologies.” Under these ordinances, an agency wishing to adopt a new surveillance technology must first submit the proposal for review and obtain approval, typically from the city council.

As police agencies, city councilmembers, other policymakers, and the public consider various police technologies for adoption in their communities, they will benefit from a body of excellent research and scholarship regarding specific tools. For example, scholarly examinations of facial recognition and other policing tools in recent years have helped guide community-led scrutiny of these tools in cities around the country, including multiple jurisdictions that have rejected facial recognition outright.

But to date, conversations about the collision of police technology and inequity have not sufficiently equipped policymakers, police agencies, and community advocates with all of the tools necessary to analyze unfamiliar new technologies that may be brought before them. Existing literature often addresses the challenges and pitfalls associated with a particular technology without offering a more generalizable roadmap that can be applied to other types of technology. Other literature explains how biases may be built into algorithm-based tools--or how best to prevent or correct algorithmic bias--but without providing a problem classification system that outsiders to the discipline can use to analyze algorithmic and non-algorithmic tools alike. Neither approach is both specific enough to support a sophisticated analysis of a proposed new technology and general enough to be adapted to the analysis of multiple kinds of technology.

To bring clarity to these issues, this Article breaks down the ways in which technology may aggravate inequity into distinct problem types, offering a taxonomy designed to fill the gap and help policymakers, the public, and agencies understand and evaluate new technologies through an equity lens. The fact that police technology may aggravate inequity in policing is not just one monolithic problem, nor is it a series of completely unique problems that affect individual technologies differently. Rather, it comprises five major problems that appear repeatedly across different police technologies: when layered onto an existing police system, a new technology may replicate inequity, mask inequity, transfer inequity, exacerbate inequitable harms, and/or compromise inequity oversight.

The introduction of a new technology into a police system replicates inequity when the tool absorbs existing police inequity and reproduces it, further entrenching the underlying inequity by rigidly codifying it. Police technology masks inequity when it replaces some aspect of human decision-making understood to be inequitable with computer-assisted decision-making that is less obviously inequitable, thereby hiding the underlying inequity from outside observers. Police technology transfers inequity when it embeds inequity found outside the police system--such as inequity residing in the development process of a third-party vendor--which it then spreads to police agencies that adopt the technology. Police technology exacerbates inequitable harms when it augments the ability of police to do harm, so that when police officers exercise their power in an inequitable way, the disparate harm of the inequitable activity is amplified. And police technology compromises inequity oversight when it hampers the ability of legislative bodies, courts, and the public to exercise oversight over law enforcement agencies and effectively safeguard against injustice.

These classes of equity problems in police technology are not mutually exclusive, and they likely represent only one possible way of categorizing the problems discussed in this Article. But the taxonomy offers a useful framework for parties interested in investigating the possible link between technology and inequity to do so from several different angles. Naming and defining these five separate classes of inequity problems will help police agencies, policymakers, and scholars alike to thoroughly analyze proposed new police technologies through a racial equity lens and craft appropriate responses and protections to address anticipated problems.

To illustrate these classes of equity problems, I draw from real-world examples of circumstances in which the introduction of a new police technology has allegedly aggravated racial inequity in policing, with a focus on three case studies in particular:

• Police in many cities use predictive policing algorithms to find patterns in data about criminal activity and use those patterns to proactively deploy police to locations where crimes are statistically more likely to occur. But because the underlying data encodes existing racial inequity in policing, predictive policing may learn and replicate racial bias (the toy simulation following this list sketches how such a feedback loop unfolds).

• Many police forces use automated face recognition technology to help identify faces captured in photos and videos of crime suspects. But because face recognition technology often performs less well on faces of color, police face recognition technology may increase the likelihood that people of color will be wrongfully identified and prosecuted for crimes they did not commit (the back-of-the-envelope calculation following this list shows how even small error-rate gaps compound).

• Some police use fake cell phone towers, sometimes called “StingRays,” to identify or locate the phones of persons of interest. But because police often exercise their power in racially inequitable ways, StingRays' harmful disruption of the cell phone network may fall disproportionately on residents of minority neighborhoods.
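To make the first case study concrete, below is a minimal toy simulation of the feedback loop described in the predictive policing bullet. The neighborhood names, incident counts, and crime rates are entirely hypothetical, and the allocation rule (patrol wherever recorded crime is highest) is a deliberate simplification of how hotspot-prediction tools are used.

```python
# Toy simulation: a "predictor" trained on skewed historical data keeps
# sending patrols to the same neighborhood, and the resulting incident
# reports make next year's data even more skewed. All names and numbers
# are hypothetical.

TRUE_CRIME_RATE = {"Northside": 0.05, "Southside": 0.05}  # identical ground truth
recorded_incidents = {"Northside": 40, "Southside": 60}   # biased starting data

for year in range(1, 6):
    # The "prediction": patrol the neighborhood with the most recorded crime.
    hotspot = max(recorded_incidents, key=recorded_incidents.get)

    # Crime is largely recorded only where officers are present to observe
    # it, so only the patrolled hotspot accrues new incident reports.
    recorded_incidents[hotspot] += int(1000 * TRUE_CRIME_RATE[hotspot])

    print(f"Year {year}: patrols sent to {hotspot}; data now {recorded_incidents}")
```

Although both neighborhoods have identical true crime rates, Southside's initial edge in the recorded data means it is patrolled every year and its recorded count grows without bound, while Northside's never changes: the tool replicates and entrenches the original inequity in its training data.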
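Similarly, the second case study's claim about wrongful identification can be illustrated with simple arithmetic. The false-match rates below are hypothetical placeholders, not measurements of any real system; the point is only that a modest per-comparison disparity compounds sharply when a probe image is searched against a large photo gallery.

```python
# Back-of-the-envelope sketch: the probability of at least one false
# match when searching one probe face against a large gallery, under
# two hypothetical per-comparison false-match rates.

GALLERY_SIZE = 100_000  # number of photos each probe is compared against

false_match_rates = {
    "lower-error group": 1e-6,   # hypothetical per-comparison rate
    "higher-error group": 1e-5,  # 10x worse, purely illustrative
}

for group, fmr in false_match_rates.items():
    # P(at least one false match) = 1 - P(no false match in any comparison)
    p_wrongful_hit = 1 - (1 - fmr) ** GALLERY_SIZE
    print(f"{group}: chance of at least one false match = {p_wrongful_hit:.1%}")
```

With these illustrative numbers, a tenfold gap in per-comparison error rates becomes the difference between roughly a 10% and a 63% chance that a search returns at least one innocent person's photo.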

This Article proceeds as follows. Part II explains how police technology is situated in a context of racial inequity and argues that police technology must therefore be evaluated through a racial equity lens. Part III proposes and explains a working taxonomy of racial equity problems in police technology that defines the five classes of problems introduced above, drawing from real-world case studies to illustrate the taxonomy in action. Part IV explains how the taxonomy can be integrated into equity impact assessments tailored to the evaluation of new police technologies. Finally, Part V explains how the taxonomy proposed in this Article can be used to illuminate important equity considerations when new technologies are adopted in other contexts where inequity is a concern, such as education and hiring.

[. . .]

Fairness and rationality require that proposed new police technologies be evaluated through an equity lens. In an era of heightened public awareness both of racial disparities in policing and of the potential shortcomings of police technologies, this should be clear. Yet too often, conversations about racial equity challenges and police technology either overgeneralize or overspecify the problem, failing to provide a model that can be used to evaluate racial equity considerations across all police technologies.

This Article fills that gap. The taxonomy introduced and described above will help scholars, police agencies, policymakers, and the public alike understand the five classes of racial equity problems that may accompany the introduction of a new police technology, and it will help them apply a more sophisticated racial equity analysis to proposed new police technologies.

In addition, the time is ripe for the development of a police technology racial equity assessment to operationalize this goal, because cities across the country are adopting laws that establish new oversight hooks for communities and policymakers to be heard during the consideration of proposed new police technologies. Accordingly, this Article also offers a model police technology equity impact assessment that uses the taxonomy as a guide, illustrating the taxonomy's utility.
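As one illustration of what operationalizing the taxonomy might look like, the sketch below encodes the five problem classes as a screening checklist. The class names track the Article's taxonomy, but the screening questions and the EquityScreen structure are hypothetical examples, not the Article's model assessment.

```python
# A minimal sketch of the taxonomy as a screening checklist for a
# proposed police technology. The five class names follow the Article;
# the questions are hypothetical illustrations.

from dataclasses import dataclass, field

QUESTIONS = {
    "replicates inequity": "Is the tool trained on or calibrated to data produced by past, potentially inequitable, police activity?",
    "masks inequity": "Does the tool replace a visibly human judgment with a computed output that obscures how decisions are made?",
    "transfers inequity": "Could bias from a third-party vendor's development process be imported into the agency through this tool?",
    "exacerbates inequitable harms": "Does the tool amplify the harm done when police power is exercised inequitably?",
    "compromises inequity oversight": "Does the tool impede courts, legislators, or the public from auditing how it is used?",
}

@dataclass
class EquityScreen:
    technology: str
    answers: dict = field(default_factory=dict)  # problem class -> finding

    def record(self, problem_class: str, finding: str) -> None:
        self.answers[problem_class] = finding

    def unaddressed(self) -> list:
        # Classes the reviewing body has not yet considered.
        return [c for c in QUESTIONS if c not in self.answers]

# Hypothetical usage during a CCOPS-style review:
screen = EquityScreen(technology="automated license plate readers")
screen.record("replicates inequity",
              "Camera sites were chosen from historical stop data; mitigation needed.")
print("Classes still to assess:", screen.unaddressed())
```

A structure like this would let a city council walk through each of the five problem classes in turn and document which concerns have been examined and which remain open before approval.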

Finally, this Article explains how the proposed taxonomy and impact assessment tool can be used to evaluate new technologies through an equity lens in contexts beyond the criminal legal system. Our society is in the midst of a highly creative period when it comes to technology and innovation, with new tools cropping up nearly every day purporting to simplify decision-making and even combat inequity. To realize the promise of new technologies without aggravating existing inequity problems, we need a process to ensure we are conducting the right analysis and asking the right questions. This Article is a step in that direction.


Associate Professor of Law, Georgetown University Law Center, and Director of the Communications & Technology Law Clinic.

