Abstract

Excerpted From: Michele Estrin Gilman, Expanding Civil Rights to Combat Digital Discrimination on the Basis of Poverty, 75 SMU Law Review 571 (Summer 2022)

 

We live in a “datafied” society in which a vast network of public and private entities collects and combines our personal data. The digital exhaust people emit as they search and shop online, beam geolocation data from their smartphones, move through spaces under digital surveillance, and engage on social media is algorithmically combined with thousands of other data points into digital profiles. In turn, these digital profiles “serve as gatekeepers to life's necessities,” such as jobs, housing, healthcare, and education. Algorithms determine your credit score, affect your access to housing and employment, set the price of your insurance, and even decide whether the police will consider you a suspect. Numerous scholars and civil rights organizations have highlighted the potential for algorithmic bias in these profiling systems, and real-life examples of digital discrimination are ubiquitous--algorithms have administered lower quality health care to Black patients, learned to prefer male job applicants over female ones, excluded minorities from seeing certain housing advertisements, and more. As a result, numerous legislative proposals and emerging litigation strategies for countering algorithmic biases exist. These civil rights initiatives, however, have excluded a group of Americans who are particularly vulnerable to digital discrimination--people experiencing poverty.

American law generally does not protect people from discrimination based on their socioeconomic status (SES). As a constitutional matter, the Supreme Court has ruled that poverty is not an immutable characteristic and thus does not deserve heightened constitutional protection. As a result, any law that discriminates against the poor will survive constitutional review so long as it rests on a rational basis. As a statutory matter, federal and state civil rights laws protect against discrimination based on race, gender, disability, age, national origin, religion, sexual orientation, and genetic information, but they do not protect the poor. There are numerous reasons for this exclusion, including the American belief in the myth of meritocracy, which assumes a far greater capacity for social mobility than actually exists. This lack of legal protection has accelerated digital discrimination against the poor, fueled by the scope, speed, and scale of big data networks.

In the meantime, while low-income people are suffering in a datafied society, businesses amass large profits at their expense, and governments digitally deny them social safety-net supports. Algorithmic systems determine who will see online advertisements for desirable jobs and who will be tracked into low-wage work, who will obtain an affordable mortgage and who will be redlined into predatory loans, and who will obtain a college degree leading to a job and who will be targeted for high-interest loans to attend a for-profit school. Low-income people are usually on the losing end of these classification systems. Without their knowledge, they are sorted out of categories of credit-worthiness, tenant-worthiness, worker-worthiness, and more. At the same time, they are relentlessly targeted on the internet with offers for subprime financial products and services. Indeed, an entire sector of the consumer reporting industry exists to sell vulnerable consumers' data to interested businesses. To obtain public benefits, low-income people must navigate complex and often inaccessible online platforms that are not designed to meet their needs. These automated decision-making systems often deny or reduce benefits without transparency or due process, leaving thousands of people adrift without state support and not knowing why. Layered on top of this data profiling are surveillance tools, such as facial recognition technology, which are increasingly deployed in workplaces, schools, and public housing to control poor and minority populations. Digital surveillance of student computers feeds the school-to-prison pipeline; predictive policing algorithms reinforce and expand policies of over-policing and mass incarceration; and workplace algorithms monitor low-wage workers, shaping their performance in ways that cause physical and psychological injuries. In short, low-SES people disproportionately bear the brunt of harm in the datafied society.

As society makes greater efforts to rein in digital discrimination, the time is right to consider expanding the categories of protected groups under digital discrimination laws to include people of low SES. For this Article's purposes, digital discrimination laws include statutes addressing digital civil rights, data privacy, and algorithmic accountability. Part I of this Article describes the causes of algorithmic biases and maps the range of harms facing low-income people as a result of digital profiling, automated decision-making systems, and surveillance systems. Part II sets forth the landscape of existing antidiscrimination and data privacy laws and explains how the law currently provides no protection against SES discrimination in the digital context. It then provides an overview of proposed legislative reforms to enhance civil rights in digital privacy and algorithmic accountability. If enacted and enforced, these bills would certainly provide important new tools for combatting digital discrimination, but they would not directly address harmful practices that target, exclude, or surveil people experiencing poverty. Part III thus proposes that any new laws prohibiting digital discrimination include low SES as a protected characteristic. It considers arguments for and against legal recognition of SES in data-centric regimes and concludes that such recognition would provide a valuable counterweight against the opaque and unaccountable digital exploitation of low-income people, which undermines any vision of economic justice.

[. . .]

People experiencing poverty suffer digital discrimination based on their socioeconomic status. Algorithmic decision-making systems act as gatekeepers to the basic necessities of modern life, such as housing, jobs, healthcare, and education. In the United States, these systems lack transparency, and there are few mechanisms to hold the entities that deploy them accountable for their harm. Credit scoring algorithms embed financial hardship and thus reinforce poverty. Tenant screening algorithms weigh characteristics with no proven connection to renter reliability. Algorithms used in higher education favor the wealthy and prey on the poor. Digital advertising systems can feed or deny opportunities to people based on their financial vulnerability.

These examples are just the tip of the iceberg of algorithmic harms facing low-income people. Yet American law provides scant recourse to remedy these harms because poverty is not a protected characteristic under the Constitution or in antidiscrimination statutes. We are on the cusp of a wave of lawmaking to enhance data privacy and algorithmic accountability to rein in algorithmic bias against marginalized people. We should seize this moment and include socioeconomic status as a protected characteristic, similar to the protections afforded to people on the basis of their race, gender, disability, and other recognized categories. This would enhance economic opportunity for millions of Americans, advance the fight for racial justice, and generate the data to improve anti-poverty policymaking. It can also enhance technological innovation while furthering structural reforms for economic justice. Technology should be a tool to empower people rather than oppress them. Expanding civil rights to ban digital discrimination based on poverty is one step in the right direction.


Venable Professor of Law, University of Baltimore School of Law.