Explainable AI in Practice Falls Short of Transparency Goals
PAI’s research reveals a gap between how machine learning explainability techniques are being deployed and the goal of transparency for end users.
Machine learning systems that enable humans to understand and evaluate their predictions or decisions are key elements of transparent, accountable, and trustworthy AI. Known as Explainable AI (XAI), these systems could have profound implications for society and the economy, potentially improving human-AI collaboration in sensitive, high-impact deployments in areas such as medicine, finance, the legal system, autonomous vehicles, and defense. XAI has also been proposed as a way to help address bias and other potential harms in automated decision making.
While organizations and policymakers around the world are turning to XAI to address a range of AI ethics concerns, PAI’s research has found that deployed explainability techniques are not yet up to the task of enhancing transparency and accountability for end users and other external stakeholders.
PAI’s recent research paper, Explainable Machine Learning in Deployment, co-authored by Umang Bhatt (PAI Research Fellow) and Alice Xiang (PAI Research Scientist), is the first to examine how ML explainability techniques are actually being used. Based on a series of interviews with practitioners, PAI found that XAI in its current state best serves as an internal resource for engineers and developers, who use explainability to identify and reconcile errors in their models rather than to provide explanations to end users. As a result, there is a gap between explainability in practice and the goal of transparency: current explanations primarily serve internal audiences rather than external ones.
This gap has important implications for AI governance. PAI has found that XAI techniques need further improvement before they can work as intended and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
The paper identifies the limitations of XAI as it is currently deployed and provides a framework for improvement. To better promote transparency, we argue that organizations need to consider more thoroughly the target audience for XAI: who the explanations are being created for, what those stakeholders need to know, and why. PAI is pleased to present these findings at the 2020 ACM Conference on Fairness, Accountability, and Transparency in Barcelona on January 30, 2020. We are also convening researchers and practitioners in New York City in February 2020 to workshop solutions on this topic. To begin advancing the goals of XAI, we invite you to apply the insights from the full paper.