Artificial Intelligence Research and Ethics Community Calls for Standards in Criminal Justice Risk Assessment Tools

PAI Staff

April 26, 2019

The Partnership on AI Publishes Report Documenting Minimum Requirements for Responsible Deployment

San Francisco, CA, April 26, 2019 – The Partnership on AI (PAI) has today published a report that gathers the views of the multidisciplinary artificial intelligence and machine learning research and ethics community and documents the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system. AI tools of this kind, used to decide whether to detain or release defendants, are in widespread use across the United States, and some legislatures have begun to mandate their use. Lessons drawn from the U.S. context also have broad applicability in other jurisdictions as the international policymaking community considers the deployment of similar tools.

While criminal justice risk assessment tools are often simpler than the deep neural networks used in many modern artificial intelligence systems, they are basic forms of AI. As such, they present a paradigmatic example of the high-stakes social and ethical consequences of automated AI decision-making.

This report outlines ten largely unfulfilled requirements that jurisdictions should weigh heavily prior to the use of these tools, spanning topics that include validity and data sampling bias; bias in statistical predictions; choice of the appropriate targets for prediction; human-computer interaction questions; user training; policy and governance; transparency and review; reproducibility, process, and recordkeeping; and post-deployment evaluation. Based on the input of its partners, [1] PAI recommends that policymakers either avoid using risk assessments altogether for decisions to incarcerate, or find ways to resolve the requirements outlined in this report via future standard-setting processes.

Though advocates of such tools suggest that these data-driven AI predictions will reduce unnecessary detention and yield fairer, less punitive decisions than existing processes, an overwhelming majority of the Partnership’s consulted experts agree that current risk assessment tools are not ready for decisions to incarcerate human beings. Within PAI’s membership and the wider AI community, many experts further suggest that individuals can never be justly detained on the basis of their risk assessment score alone, without an individualized hearing. [2]

“This report, written by experts on fairness and bias in machine learning, describes a number of challenges facing the use of pretrial risk assessment that must be addressed in addition to the many important critiques already raised by civil rights groups and impacted communities,” said Logan Koepke, Senior Policy Analyst at Upturn. “This report shows that pretrial risk assessment tools cannot safely be assumed to advance reformist goals of decarceration and greater fairness. It also highlights, at a statistical and technical level, just how far we are from being ready to deploy these tools responsibly. To our knowledge, no single jurisdiction in the U.S. is close to meeting the ten minimum requirements for responsible deployment of risk assessment tools detailed here.”

For AI researchers, foreseeing and mitigating possible negative consequences of AI (both unintended and malicious), and rooting technology development and deployment in the context of its potential use, have become central challenges of the field. Doing so requires a cautious approach to the design and engineering of systems, as well as careful consideration of the ways in which they may affect communities and the harms that may result. Criminal justice is one domain where it is imperative to exercise maximal caution and humility in the deployment of statistical tools. PAI and its partners are concerned that U.S. judicial systems may have failed to adequately address these challenges prior to widespread deployment.

“As research continues to push forward the boundaries of what algorithmic decision systems are capable of, it is increasingly important that we develop guidelines for their safe, responsible, and fair use,” said Andi Peng, AI Resident at Microsoft Research.

Across the report, the challenges of using these tools fall broadly into three primary categories:

  1. Concerns about the accuracy, bias, and validity of the tools themselves
    • Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, this report suggests that it is a serious misunderstanding to view these tools as objective or neutral simply because they are based on data.
  2. Issues with the interface between the tools and the humans who interact with them
    • In addition to technical concerns, these tools must be held to high standards of interpretability and explainability to ensure that users (including judges, lawyers, and clerks, among others) can understand how the tools’ predictions are reached and make reasonable decisions based on these predictions.
  3. Questions of governance, transparency, and accountability
    • To the extent that such systems are adapted to make life-changing decisions, both the tools and the decision-makers who specify, mandate, and deploy them must meet high standards of transparency and accountability.

This report highlights some of the key challenges with the use of risk assessment tools for criminal justice applications. It also raises some deep philosophical and procedural issues which may not be easy to resolve. Surfacing and addressing those concerns will require ongoing research and collaboration between policymakers, the AI research community, civil society groups, and affected communities, as well as new types of data collection and transparency. It is PAI’s mission to spur and facilitate these conversations and to produce research to bridge such gaps.

“This report highlights how algorithmic decision-making is not neutral and unbiased just because it is data-driven,” said Alice Xiang, Research Scientist at the Partnership on AI. “Before important decisions about the liberty of individuals are automated, we must consider how these tools might further entrench rather than alleviate societal inequities.”

About The Partnership on AI

The Partnership on AI (PAI) is a global multistakeholder organization that brings together academics, researchers, civil society organizations, companies building and utilizing AI technology, and other groups working to realize the promise of artificial intelligence. The Partnership was established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. Today, PAI convenes more than 80 partner organizations from around the world to be a uniting force for good in the AI ecosystem.

Media Relations Contact:

Peter Lo

Senior Communications Manager

peter.lo@partnershiponai.org

650-597-0858


[1] Though this report incorporated suggestions or direct authorship from around 30-40 of our partner organizations, it should not under any circumstances be read as representing the views of any specific member of the Partnership. Instead, it is an attempt to report the widely held views of the artificial intelligence research community as a whole.

[2] Many of our civil society partners have taken a clear public stance to this effect. See The Use of Pretrial ‘Risk Assessment’ Instruments: A Shared Statement of Civil Rights Concerns, http://civilrightsdocs.info/pdf/criminal-justice/Pretrial-Risk-Assessment-Full.pdf (shared statement of 115 civil rights and technology policy organizations, arguing that all pretrial detention should follow from evidentiary hearings rather than machine learning determinations, on both procedural and accuracy grounds); see also Comments of Upturn; The Leadership Conference on Civil and Human Rights; The Leadership Conference Education Fund; NYU Law’s Center on Race, Inequality, and the Law; The AI Now Institute; Color Of Change; and Media Mobilizing Project on Proposed California Rules of Court 4.10 and 4.40, https://www.upturn.org/static/files/2018-12-14_Final-Coalition-Comment-on-SB10-Proposed-Rules.pdf (“Finding that the defendant shares characteristics with a collectively higher risk group is the most specific observation that risk assessment instruments can make about any person. Such a finding does not answer, or even address, the question of whether detention is the only way to reasonably assure that person’s reappearance or the preservation of public safety. That question must be asked specifically about the individual whose liberty is at stake — and it must be answered in the affirmative in order for detention to be constitutionally justifiable.”) PAI notes that the requirement for an individualized hearing before detention implicitly includes a need for timeliness. Many jurisdictions across the US have detention limits at 24 or 48 hours without hearings.
Aspects of this stance are shared by some risk assessment tool makers; see Arnold Ventures’ Statement of Principles on Pretrial Justice and Use of Pretrial Risk Assessment, https://craftmediabucket.s3.amazonaws.com/uploads/AV-Statement-of-Principles-on-Pretrial-Justice.pdf.