Almost politically acceptable criminal justice risk assessment

Criminology & Public Policy

Abstract

["Criminology & Public Policy, Volume 19, Issue 4, Page 1231-1257, November 2020. ", "\n\nResearch Summary\nIn criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that usually it is impossible optimize simultaneously all of the usual group definitions of fairness. In policy settings, one necessarily is left with tradeoffs about which many stakeholders will adamantly disagree. The result is a contentious stalemate. In this article, we offer a different approach. We do not seek perfectly accurate and perfectly fair risk assessments. We seek politically acceptable risk assessments. We describe and apply a machine learning approach that addresses many of the most visible claims of “racial bias” to arraignment data on 300,000 offenders. Regardless of whether such claims are true, we adjust our procedures to compensate. We train the algorithm on White offenders only and compute risk with test data separately for White offenders and Black offenders. Thus, the fitted, algorithm structure is the same for both groups; the algorithm treats all offenders as if they are White. But because White and Black offenders can bring different predictors distributions to the White‐trained algorithm, we provide additional adjustments as needed.\n\n\nPolicy Implications\nInsofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims even if they are unsupported by the facts. The results can be a politically acceptable risk assessment tools.\n\n"]