Vernellia R. Randall, Simulated Victims, Real Consequences: AI and Racial Inequality in Sentencing, https://racism.org/articles/law-and-justice/criminal-justice-and-racism/138-sentencing/12871-simulated-victims-real-consequences (Date Last Visited: April 4, 2026)
In 2025, a court allowed something unprecedented. A murdered person appeared to speak, through an AI-generated video created by family members, at the sentencing of the person convicted of killing him. This was not evidence. It was not testimony subject to cross-examination. It was something else entirely: a carefully constructed emotional narrative, delivered through simulation, and received by the court as if it carried the weight of lived voice. The case is now on appeal. That matters. This is not settled law. It is a warning. I first became aware of this issue through a video by Caitlin Doughty.
The danger here is not that this practice is widespread. It is that it has been allowed at all.
The American criminal legal system has never been neutral in how it values victims. Decades of research and lived experience show that whose life is mourned, whose death is grieved, and whose loss demands punishment are all shaped by race. Crimes involving white victims are more likely to result in prosecution and harsher sentences. Black and Brown victims are less likely to be fully humanized in court or in public narratives. Victim impact statements, long before the introduction of artificial intelligence, have operated within this unequal structure.
AI does not enter this system as a corrective. It enters as an amplifier.
The shift from traditional victim impact statements to AI-generated victim voices is not a minor technological development. It is a fundamental transformation. Family members once spoke about their loss. Now, the victim appears to speak for themself. The difference is profound. A human statement describes grief. An AI simulation reconstructs presence. It creates the illusion that the person lost has returned, even if only briefly, to tell the court what that loss means.
That illusion carries power—emotional, psychological, and ultimately legal.
But the critical issue is not simply the use of AI. It is how AI interacts with race, and more specifically, with the perception of race.
In the United States, race operates as perception as much as identity. Courts do not respond to how individuals identify; they respond to how individuals are seen. Research consistently shows that people perceived as more non-white—whether through skin tone, facial features, name, or accent—receive harsher punishment, even when they identify differently. Judges and jurors make decisions in real time, influenced by what they see, hear, and feel. Perception is the mechanism through which bias operates.
This is where the AI-generated victim statement becomes particularly troubling.
Through AI, the victim is not simply presented but curated. The voice, tone, language, and demeanor are constructed to maximize emotional resonance. The victim is, in effect, optimized for empathy. Meanwhile, the defendant stands in the courtroom as they are, subject to unfiltered perception and all the biases that perception carries.
This creates a profound asymmetry.
On one side, a reconstructed victim whose humanity has been enhanced, shaped, and amplified. On the other, a living defendant whose humanity is filtered through the lens of racialized perception. That imbalance is not neutral. It is structural.
The question, then, is unavoidable: would the emotional impact of that AI-generated statement have been the same if the racial identities—or even the perceived racial identities—had been reversed? There is no way to prove how any individual judge would respond. But there is overwhelming evidence about how the system responds. Empathy is not distributed evenly. It is patterned, predictable, and deeply racialized.
And AI, rather than correcting that pattern, is positioned to intensify it.
If appellate courts affirm the use of AI-generated victim simulations, the consequences will not be evenly distributed. Families with greater resources will be more able to produce compelling, high-quality simulations. Victims who align more closely with dominant cultural narratives of innocence and respectability will be more easily humanized through AI. Cases involving those victims will carry greater emotional weight. Sentencing outcomes will reflect that weight.
What emerges is not simply a technological innovation, but a new layer of inequality—one that is harder to see because it operates through emotion rather than explicit rules.
The constitutional questions are real. Due process requires fairness in sentencing. At some point, emotional influence becomes undue prejudice. But courts have historically struggled to draw that line, especially when the emotion is framed as a legitimate expression of harm. AI complicates that struggle by making emotional expression more immersive, more persuasive, and more difficult to challenge.
The defense cannot cross-examine a simulation. It cannot meaningfully interrogate a constructed voice. It cannot disentangle authentic grief from engineered narrative. What enters the courtroom is an incontestable emotional artifact, and that artifact carries weight.
This case is on appeal. That is where the law will begin to take shape. Appellate courts may affirm, limit, or prohibit this practice. But whatever they do will set the trajectory. Early decisions in moments like this tend to become the foundation for future norms.
That is why this moment matters.
Because in a system where punishment is already shaped by race—by who is seen as innocent, who is seen as dangerous, and whose suffering is recognized—introducing a technology that amplifies perception is not a neutral act. It is a choice. And it is a choice that risks deepening inequality under the appearance of innovation.
The issue is not whether technology should enter the courtroom. It already has. The issue is whether courts will recognize that technology does not operate outside of bias. It operates through it.
If the law fails to confront that reality now, it will not eliminate racial disparities. It will encode them—more efficiently, more persuasively, and with the added authority of technological legitimacy.
Further Reading and Resources
The following resources provide additional context on the case, the law of victim impact statements, racial disparities in sentencing, and bias in artificial intelligence.
The AI Victim Impact Case
Associated Press, AI Video of Dead Man Speaking at Sentencing Raises Legal Questions, Associated Press (2025).
https://apnews.com/article/f47bfd50bd22469388082169ee77b7f0
(Date Last Visited: April 4, 2026)
CBS News, Murder Victim “Speaks” from Beyond the Grave Using AI at Sentencing, CBS News (2025).
https://www.cbsnews.com/news/chris-pelkey-murder-victim-ai-statement-sentencing/
(Date Last Visited: April 4, 2026)
Caitlin Doughty, Watching AI Testimony at a Real Murder Trial, Ask a Mortician (YouTube Channel) (2025).
https://youtu.be/GWir9notTag?si=bTs9xSJclvfm-_44
(Date Last Visited: April 4, 2026)
Victim Impact Statements and Sentencing Law
Payne v. Tennessee, 501 U.S. 808 (1991).
https://supreme.justia.com/cases/federal/us/501/808/
(Date Last Visited: April 4, 2026)
Racial Disparities in Sentencing
McCleskey v. Kemp, 481 U.S. 279 (1987).
https://supreme.justia.com/cases/federal/us/481/279/
(Date Last Visited: April 4, 2026)
Death Penalty Information Center, Race and the Death Penalty.
https://deathpenaltyinfo.org/policy-issues/race
(Date Last Visited: April 4, 2026)
Perception and Sentencing Bias
Jennifer L. Eberhardt et al., Looking Deathworthy: Perceived Stereotypicality of Black Defendants Predicts Capital-Sentencing Outcomes, Psychological Science (2006).
https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=1040&context=lsrp_papers
(Date Last Visited: April 4, 2026)
William T. Pizzi, Irene V. Blair, & Charles M. Judd, Discrimination in Sentencing on the Basis of Afrocentric Features, Michigan Journal of Race & Law (2005).
https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1148&context=mjrl
(Date Last Visited: April 4, 2026)
Implicit Bias in the Courts
Jerry Kang et al., Implicit Bias in the Courtroom, UCLA Law Review (2012).
https://faculty.washington.edu/agg/pdf/Kang%26al.ImplicitBias.UCLALawRev.2012.pdf
(Date Last Visited: April 4, 2026)
Bias in Artificial Intelligence
ProPublica, Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks (2016).
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
(Date Last Visited: April 4, 2026)
Vernellia R. Randall, Professor Emerita of Law, University of Dayton School of Law. This article was drafted with the assistance of ChatGPT, an AI language model.

