
Artificial Intelligence, recidivism scores and racial bias in the judicial system



With the rapid growth of technologies based on Artificial Intelligence, AI algorithms have made their way into the judicial system of the United States. One of the most popular uses of AI in the criminal law context is systems that compute a recidivism score, which aims to predict whether a defendant will commit violent offences following release. These systems were developed to speed up the assessment of risk factors and produce fairer results. Paradoxically, the results produced by recidivism prediction systems are far from just.

The recidivism score computed by AI-based software such as COMPAS is often treated by judges as the determining factor when deciding on the duration of a sentence or probation. However, as reported by ProPublica, only 20% of defendants predicted to commit a violent offence actually did so. The problem with the accuracy of the risk assessment becomes even more evident when viewed against the defendants’ racial background. For example, in Broward County, a criminal justice algorithm mistakenly assigned African Americans to the “high risk” category nearly twice as often as white defendants.
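To make these two figures concrete, here is a minimal sketch in Python using made-up numbers rather than the actual Broward County data. “Precision” is the share of defendants flagged as high risk who actually reoffended (the roughly 20% figure above), and the false positive rate is the share of non-reoffenders who were nonetheless flagged, computed separately for two hypothetical groups (the “nearly twice as often” disparity). The variable names and rates are illustrative assumptions, not ProPublica’s dataset.

```python
# Illustrative only: made-up data, not the Broward County records.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
group      = rng.integers(0, 2, n)                # two hypothetical groups, 0 and 1
high_risk  = rng.binomial(1, 0.3 + 0.3 * group)   # assumption: group 1 is flagged more often
reoffended = rng.binomial(1, 0.2, n)              # outcomes, independent of the flag here

# Precision: of everyone labelled high risk, how many actually reoffended?
precision = reoffended[high_risk == 1].mean()

# False positive rate per group: of those who did NOT reoffend,
# how many were still labelled high risk?
fpr = {g: high_risk[(group == g) & (reoffended == 0)].mean() for g in (0, 1)}

print(f"precision (flagged defendants who reoffended): {precision:.2f}")
print(f"false positive rate, group 0: {fpr[0]:.2f}")
print(f"false positive rate, group 1: {fpr[1]:.2f}")
```

With these illustrative numbers, only about 20% of flagged defendants actually reoffend, and non-reoffenders in group 1 are flagged roughly twice as often as those in group 0, mirroring the two patterns ProPublica reported for COMPAS.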

Judicial bias in the US already adversely affects, inter alia, African Americans, who are six times more likely to be sentenced than white defendants. Blindly following the AI’s judgements makes this situation even worse. In fact, almost 50% of the black defendants labelled by COMPAS as higher risk did not re-offend. A high recidivism score may lead to longer incarceration and, after serving a sentence, to difficulties in finding a job.

Does this mean that criminal justice algorithms are discriminatory by nature? Not necessarily. Artificial Intelligence is only as good as the data it has been trained on. The algorithm does not discriminate directly, as it does not explicitly consider a defendant’s race. However, because it weighs factors such as community ties or the duration of pretrial incarceration, groups that are already discriminated against receive higher risk scores. Moreover, the algorithms were trained on cases decided by humans and therefore reflect years of bias in the judicial system.
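To illustrate the proxy effect described above, here is a minimal sketch on synthetic data (not COMPAS and not real defendants). A simple model that never sees the group label still assigns one group higher risk scores, because its only input, the length of pretrial detention, and its training labels, re-arrests, are both assumed to reflect biased human decisions. All rates and feature choices below are hypothetical.

```python
# Minimal sketch on synthetic data: a model that never sees the group label
# can still score one group higher when its input feature and its training
# labels both reflect (hypothetically) biased human decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # two hypothetical groups, 0 and 1

# Assumption: group 1 is detained longer before trial due to biased decisions.
pretrial_days = rng.poisson(5 + 10 * group)

# Both groups reoffend at the same underlying rate ...
reoffended = rng.binomial(1, 0.3, n)
# ... but re-arrest (the training label) is assumed more likely for group 1.
rearrested = reoffended * rng.binomial(1, 0.4 + 0.4 * group)

# Train on pretrial detention only; the group label is never an input.
model = LogisticRegression().fit(pretrial_days.reshape(-1, 1), rearrested)
scores = model.predict_proba(pretrial_days.reshape(-1, 1))[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
```

On this made-up data the model’s average risk score for group 1 is roughly double that for group 0, even though the underlying reoffending rate is identical in both groups; the disparity comes entirely from the biased detention and arrest patterns baked into the training data.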

Depending on how they are used, AI systems can either cement prejudices or help us recognise the existence of bias and contribute to combating it. Treating systems like COMPAS as an oracle marks the path toward a dystopian future in which inhumane judicial systems endorse discrimination. On the other hand, with appropriate human oversight and anti-discrimination training, such systems may indeed produce fairer results. Furthermore, examining the machine’s decision-making process may give us insight into how judicial bias arises. Nonetheless, weighing all the advantages and potential risks, the wide-scale use of such systems in courts seems premature. As long as there is no definitive answer on how to prevent racial discrimination by AI systems like COMPAS, they should be approached very carefully. And even once such systems are made fair, their assessment should never be treated as the sole factor in deciding a person’s future.


If you want to learn more about this topic and the stories of people affected by AI bias, read ‘There’s software used across the country to predict future criminals. And it’s biased against blacks.’ by Julia Angwin et al., https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.










