
To Prevent Algorithmic Bias, Legal and Technical Definitions around Algorithmic Fairness Must Align

Alice Xiang

March 23, 2020

PAI research highlights a divergence between legal and machine learning terminology related to fairness and bias. These two communities must collaborate and align in order to effectively prevent bias and promote fair algorithmic practices.

From racially disparate risk assessments in the criminal justice system to gender-discriminatory hiring decisions in the workplace, examples of potential bias in high-stakes algorithmic decision-making have raised public awareness of the issue and prompted the creation of algorithmic fairness as a subfield of machine learning (ML). Despite this recognized need to mitigate algorithmic bias in pursuit of fairness, a gap remains in the conversation between current machine learning approaches and anti-discrimination law. Although the ML fairness community frequently uses legal terminology to motivate its research, its use of these terms often departs from their original legal meanings.

This dissonance in definitions becomes especially problematic given the role the courts and legal system might play in creating accountability mechanisms around algorithmic bias. Without legal compatibility, biased algorithms might not be found to discriminate illegally, while efforts to mitigate algorithmic bias might, ironically, be deemed illegally discriminatory. If we seek to prevent the proliferation of biased algorithms, the law must account for the technical realities of the bias mitigation methods developed by the ML fairness community, and vice versa.

In my paper On the Legal Compatibility of Fairness Definitions, co-authored with Inioluwa Deborah Raji, we reveal the misalignment between ML definitions of fairness and associated legal concepts and terminology, and identify lessons both communities can draw from these tensions. In particular, the paper explores divergences between concepts in the algorithmic fairness literature and key terms from U.S. anti-discrimination law, including protected class, disparate treatment, disparate impact, and affirmative action, as well as broader legal fairness principles such as intersectionality and procedural justice.

A key example of these divergences lies in how specific groups are treated in order to prevent discrimination and bias. The ML community, for example, often refers to members of protected classes as those in “minority and marginalized groups,” while legal approaches to fairness focus on equal treatment regardless of attributes such as race and gender. In fact, many landmark anti-discrimination cases featured white male plaintiffs arguing against policies that sought to benefit women or minorities. As the ML community develops technical definitions of fairness, researchers should consider whether they would want those definitions to be invoked by those in the majority against those in the minority.

The paper also identifies specific lessons from machine learning approaches to fairness that could benefit the legal community as laws evolve to reflect advances in technology. For instance, technical justifications for using protected variables (such as demographic data) to effectively measure and audit discrimination could inform legal discourse on this topic. ML fairness research also reaches beyond the specific domains articulated in current anti-discrimination laws and can open up conversations around new anti-discrimination protections.
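To make that point concrete, here is a minimal sketch, not drawn from the paper, of what a basic discrimination audit looks like: it compares selection rates across demographic groups, which is only possible when the protected attribute is recorded. The data and function names are hypothetical; the 0.8 cutoff follows the “four-fifths rule” commonly cited in U.S. employment discrimination guidance.

```python
# Illustrative sketch: auditing selection rates for disparate impact
# requires access to the protected attribute itself.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions and self-reported demographic labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
print("Potential disparate impact" if ratio < 0.8 else "Within four-fifths threshold")
```

Without the group labels, neither the selection rates nor the ratio can be computed, which is the crux of the argument for collecting demographic data in service of auditing.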

Alignment on terms and definitions is a fundamental first step towards effective accountability and governance, and it must happen in both communities. Shared understandings prevent unintended consequences, such as the passage of policies and laws that are impossible to implement, or that inadvertently make certain technical attempts to address algorithmic bias illegal. Alignment also helps policymakers, lawmakers, and judges evaluate decisions and distinguish between reasonable and unreasonable technical arguments.

Building on these concerns around legal compatibility, in this law review article I discuss potential solutions for reconciling legal and technical approaches to mitigating algorithmic bias. In particular, I address the tension between the law’s preference for bias mitigation methods that are blinded to protected class attributes and the technical necessity of using such variables, or their proxies, to mitigate bias. The article will be presented at WeRobot 2020, the premier international conference on law and policy relating to robotics.
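As an illustration of that technical necessity, the sketch below shows the “reweighing” approach from the fairness literature (not the article’s proposal): each training example receives a weight chosen so that group membership and outcome are statistically independent in the reweighted data. Computing those weights requires the protected attribute at training time, even if the deployed model never sees it. The data here are hypothetical.

```python
# Illustrative sketch of reweighing: weights make group and label
# statistically independent, so computing them requires group labels.
from collections import Counter

def reweighing_weights(labels, groups):
    """weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group "a" receives favorable outcomes more often.
labels = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
weights = reweighing_weights(labels, groups)
print([round(w, 2) for w in weights])  # can be passed as sample weights when fitting a model
```

The point is not this particular technique but the general pattern: most bias mitigation methods need the protected attribute, or a proxy for it, somewhere in the pipeline, which sits uneasily with a legal preference for blindness to that attribute.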

In addition, our team is launching a new multi-stakeholder project around documenting the legal, reputational, and other challenges organizations face when attempting to use demographic data in service of fairness objectives. Please look out for future announcements of PAI’s research and convenings in this space. 
