Crucial Yet Overlooked: Why We Must Reconcile Legal and Technical Approaches to Algorithmic Bias
If we’re not careful, existing law may lead to a world where our efforts in the machine learning community to mitigate algorithmic bias are deemed illegal.
Algorithmic tools are not new technology. Credit scores, which use data to predict a person’s likelihood of paying their debts, have been used in the United States since the 1960s.
What is distinct, however, is the extent to which algorithms have proliferated, increasingly affecting a wide variety of important decisions. Although algorithmic decision-making was popularized under the assumption that algorithms would be more objective than human decision-makers, there is a growing realization that they might suffer from many of our same biases, given that they are often trained on past decisions or determinations made by humans.
In “Reconciling Legal and Technical Approaches to Algorithmic Bias,” a forthcoming paper in the Spring 2021 issue of the Tennessee Law Review and a finalist for the Best Paper Award at WeRobot 2020, I analyze whether the technical approaches often used to combat algorithmic bias are compatible with U.S. anti-discrimination law.
My research leads me to believe that the techniques proposed by many of us in the algorithmic fairness community are likely to be viewed with suspicion under existing jurisprudence, particularly the equal protection doctrine.
Awareness of these legal and technical hurdles is especially crucial for those working to address algorithmic bias through policy, including elected officials, judges, legal scholars, and tech policy experts. Without careful attention to this challenge, I foresee an unfortunate scenario in which judges and policymakers, unaware of these tensions, issue rulings or regulations that effectively tie the hands of the artificial intelligence and machine learning practitioners working to mitigate bias. If we don’t address these tensions, algorithms will remain biased and our ability to reduce that bias will be diminished.
Given these findings, I recommend a path toward greater compatibility, proposing causality as the key concept for enabling courts to distinguish between permissible and impermissible uses of protected class variables in the algorithmic development process.
Existing Strategies (and Their Limits) for Mitigating Algorithmic Bias
Intuitively, algorithmic bias detection involves measuring whether the scales are tilted in one group’s favor, and algorithmic bias mitigation involves putting a thumb on the scale in favor of the disadvantaged group.
Putting a thumb on the scales in this way is controversial, however, and may even be legally prohibited. This has made it difficult to adopt the many technical approaches in the algorithmic fairness literature and has led to the prevalent practice of simply excluding protected class variables and close proxies from the model development process.
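To make this intuition concrete, here is a minimal sketch (with purely hypothetical scores and group labels, not drawn from any real system) of what measuring the tilt and putting a thumb on the scale can look like in practice:

```python
import numpy as np

# Hypothetical model scores and protected-group labels, purely synthetic.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.55, 0.15, 500),   # group A
                         rng.normal(0.45, 0.15, 500)])  # group B
group = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])

def selection_rates(decisions, group):
    """Fraction of positive decisions within each group."""
    return decisions[group == 0].mean(), decisions[group == 1].mean()

# Bias detection: measure how far the scales tilt toward one group.
decisions = scores >= 0.5                       # one threshold for everyone
rate_a, rate_b = selection_rates(decisions, group)
print(f"One threshold:  A={rate_a:.2f}  B={rate_b:.2f}  gap={rate_a - rate_b:.2f}")

# Bias mitigation: a "thumb on the scale" in the form of a lower threshold
# for the disadvantaged group. This explicit use of group membership is
# precisely what raises legal questions.
decisions = np.where(group == 1, scores >= 0.45, scores >= 0.5)
rate_a, rate_b = selection_rates(decisions, group)
print(f"Group-aware:    A={rate_a:.2f}  B={rate_b:.2f}  gap={rate_a - rate_b:.2f}")
```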
The exclusion approach has also surfaced in policy. A recently proposed rule from the Department of Housing and Urban Development (HUD), which would have established the first U.S. regulatory definition of algorithmic discrimination, would have created a safe harbor from disparate impact liability for housing-related algorithms that do not use protected class variables or close proxies. The final rule removed this safe harbor in response to public comment (including a comment from the Partnership on AI), but the proposal suggested a worrisome trend.
An abundance of recent scholarship has shown that simply removing protected class variables and close proxies does little to ensure that the algorithm will not be biased. This approach, known in the machine learning literature as “fairness through unawareness,” is widely considered naive and can, ironically, exacerbate bias.
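The failure mode is easy to reproduce. In the hypothetical sketch below (entirely synthetic data and invented variable names), the protected attribute is excluded, yet the model reconstructs the disparity from a correlated proxy and a substantial selection-rate gap remains:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: a protected attribute, a correlated proxy, and an outcome
# shaped by a historical disparity.
rng = np.random.default_rng(1)
n = 4000
race = rng.integers(0, 2, n)                     # protected attribute
zip_code = race + rng.normal(0, 0.3, n)          # proxy strongly correlated with race
income = rng.normal(0, 1, n) - 0.5 * race        # disparity baked into the data
label = (income + rng.normal(0, 0.5, n) > 0).astype(int)

# "Fairness through unawareness": drop race, but keep the proxy.
X = np.column_stack([zip_code, income])
pred = LogisticRegression().fit(X, label).predict(X)

gap = pred[race == 0].mean() - pred[race == 1].mean()
print(f"Selection-rate gap with race excluded: {gap:.2f}")   # the gap persists
```

Because both the proxy and the historical outcomes encode the underlying disparity, dropping the protected attribute does not remove it; the model simply recovers it from whatever correlates with it.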
This tension around bias has long been known to the legal community, but the question of how the law should approach balancing the scales of bias in the algorithmic fairness context has not been explored until now.
A Proposed Legal Path Forward for Algorithmic Bias Mitigation
A lack of legal compatibility creates the possibility that biased algorithms might be considered legally permissible while approaches designed to correct for bias might be considered illegally discriminatory.
So what can be done when bias mitigation techniques, as a technical matter, require protected class variables, while the law prefers decision-making that is blind to protected class attributes?
Because the central question in evaluating liability under anti-discrimination law is whether the protected attribute caused the decision or the disparity, I propose that bias mitigation techniques rooted in causality can allow practitioners and judges to distinguish between permissible and impermissible bias mitigation under the equal protection doctrine.
This is especially vital given that the absence of protected class variables does not necessarily imply a less biased algorithm, and in fact the inclusion of protected class variables can often improve both the fairness and accuracy of the algorithm.
Causal inference provides a potential way to reconcile these techniques with anti-discrimination law. In U.S. law, discrimination is generally understood as making decisions “because of” a protected class variable. Indeed, in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., the case that motivated HUD’s proposed rule, the Court required a “causal connection” between the decision-making process and the disproportionate outcomes. Instead of examining whether protected class variables appear in the algorithm or its training data, a causal standard would permit techniques that use protected class variables with the intent of negating causal relationships in the data tied to race.
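What such a technique might look like varies, and the following is only a rough sketch under strong linearity assumptions (with hypothetical, synthetic data), not the specific method analyzed in the paper. The idea is to use the protected class variable at training time to estimate and remove its influence on the other features, so that the resulting model no longer acts on variation attributable to race:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Same kind of synthetic setup as above: race influences both a proxy feature
# and the outcome. All data and names here are hypothetical.
rng = np.random.default_rng(2)
n = 4000
race = rng.integers(0, 2, n).astype(float)
zip_code = race + rng.normal(0, 0.3, n)
income = rng.normal(0, 1, n) - 0.5 * race
label = (income + rng.normal(0, 0.5, n) > 0).astype(int)
X = np.column_stack([zip_code, income])

# Use race at training time to estimate how much of each feature it explains,
# then keep only the residual variation, i.e., the part of each feature not
# attributable to race under a simple linear causal assumption.
influence = LinearRegression().fit(race.reshape(-1, 1), X)
X_resid = X - influence.predict(race.reshape(-1, 1))

model = LogisticRegression().fit(X_resid, label)
pred = model.predict(X_resid)
gap = pred[race == 0].mean() - pred[race == 1].mean()
print(f"Selection-rate gap after removing race-driven variation: {gap:.2f}")
```

Here the protected attribute is used only to cancel its estimated influence on the inputs; it never enters the deployed decision rule. That framing, which asks what a decision was made “because of,” is closer in spirit to the causal standard than a blanket prohibition on ever touching the variable.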
Moving from correlation to causation is challenging, particularly in machine learning, where the typical goal is to leverage correlations to make accurate predictions. But doing so offers a way to reconcile technical feasibility with legal precedent while providing protections against algorithmic bias.
As practitioners, we must reach across the aisles of our respective legal, technical, and policy disciplines and begin a conversation about resolving the incompatibilities between how the technical and legal communities approach bias mitigation.
For more on this research, watch my presentation of this work at the WeRobot 2020 conference: