Excerpted from: Rachel DiBenedetto, Reducing Recidivism or Misclassifying Offenders?: How Implementing Risk and Needs Assessment in the Federal Prison System Will Perpetuate Racial Bias, 27 Journal of Law & Policy 414 (2019) (Note and Comment) (225 Footnotes)
Mr. Eric Loomis, a Wisconsin defendant, was charged with five criminal counts in connection with a drive-by shooting in La Crosse, Wisconsin. He pled guilty to two lesser counts: “attempting to flee a traffic officer and operating a motor vehicle without the owner's consent.” Neither crime, standing alone, warranted prison time. In the proceedings underlying State v. Loomis, the sentencing court consulted a state risk-assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”), which categorized Mr. Loomis as “an individual who is at high risk to the community.” The court sentenced him to six years in prison and five years of extended supervision.
On appeal, Mr. Loomis asserted that the risk-assessment program used during sentencing violated his due process rights to be “sentenced upon accurate information” and “not to be sentenced on the basis of gender.” The Wisconsin Supreme Court acknowledged that COMPAS could disproportionately categorize minority groups as high-risk offenders simply because of factors such as education or familial background. Yet the court declined to require disclosure of COMPAS's methodology, even though undisclosed inaccuracies could undermine a defendant's right to be sentenced on the most accurate information. Instead, the court concluded that COMPAS was not a determinative factor and that the defendant “had an opportunity to challenge his risk scores by arguing that other factors or information demonstrate their inaccuracy.” Although Mr. Loomis might have raised an equal protection claim on the basis of gender, he instead challenged the use of gender in the risk and needs assessment (“assessment”) algorithms used at sentencing. In response, the court cited statistical evidence that men and women recidivate at different rates. The court thus sidestepped the issue of gender bias, finding instead that Mr. Loomis had not demonstrated that the sentencing court relied on the COMPAS assessment in imposing his sentence.
Artificial intelligence has been used to send defendants to prison in situations where, had a human been the sole decisionmaker, the defendant might not have served any time at all. Compounding this issue, after Mr. Loomis serves his time, risk-assessment algorithms may also be used to determine the conditions of his parole. Artificial intelligence, devoid of human oversight, has been used to inform probation, sentencing, and parole decisions at the state level, and probation decisions at the federal level. Allowing courts to use risk-assessment tools as the foundation for their decision-making has been shown to result in disproportionate sentencing. Despite this, Congress has proposed and passed legislation permitting the use of artificial intelligence programs in the federal prison system, administered by the Bureau of Prisons (“BOP”).
Implementing a similar assessment instrument for offenders' post-conviction and prerelease decisions perpetuates inherent racial biases by disproportionately punishing minority groups. An idealistic, long-term solution will seek to address existing prejudice in mandatory minimums and racially driven policing. In the interim, software developers can aim to steer away from using unchangeable variables in these assessment tools and instead focus on an inmate's developmental, psychological, and behavioral changes while he is incarcerated.
Part I of this Note will begin by examining federal assessment instruments, as applied at each decision point. It will also advise against adopting similar risk assessment programs used on the state level.
Part II will compare current state risk assessment programs and studies to demonstrate the potential negative ramifications of those programs.
Part III will provide case studies demonstrating how assessment programs have inherent racial, economic, and gender biases, because the programs incorporate outdated factors that primarily focus on the initial point of incarceration.
Part IV will explore the underlying constitutional considerations of risk assessment programs, highlighting how the reliance on these instruments may violate an offender's right to due process and equal protection under the Fourteenth Amendment.
Part V will examine pieces of legislation Congress has either proposed or passed, as well as other proposed solutions to prevent racial and gender bias in assessment tools.
Part VI will argue against adopting existing assessment tools and instead propose both long-term and short-term solutions to reduce the inherent biases in determining an offender's risk of recidivism in the federal prison system.
[. . .]
Predicting an offender's risk of recidivism while incarcerated, in order to provide appropriate rehabilitative programs, proves a daunting task. Under the RNR model, probation officers, courts, and parole boards have used risk and needs factors to evaluate an individual's risk of re-offense. Although intended as uniform systems that bridge racial and economic disparities, these assessments merely reinforce the current biased system. Existing tools cannot be used across all settings without adjusting the risk-assessment tool itself, or altering the dynamic risk factors to account for time while incarcerated. Once the tools are adjusted accordingly, developers should provide full transparency to allow offenders to challenge the scientific validity of the assessments and to avoid constitutional violations. On a larger scale, policymakers should focus on racially and economically driven issues such as racial profiling, mandatory minimums, and income-inequality gaps. In the meantime, there are short-term solutions that are more realistic and plausible to combat bias in artificial intelligence. The aforementioned policy recommendations, such as addressing racially driven policing, eliminating mandatory minimums, providing full transparency, adjusting dynamic risk factors, and adopting pending legislation, focus on tackling inherent racial and economic bias as a whole, and specifically in the assessment instruments recommended for the federal prison system.
J.D. Candidate, Brooklyn Law School, 2020; B.A., Binghamton University.