Partnership on AI Research, Publications & Initiatives

Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

Paper

This paper draws upon existing work by academics, labor unions, and other institutions to explain why organizations should prioritize worker well-being. In doing so, it explores the need for a coherent AI and workforce well-being framework. It also attempts to account for different forms of AI integration into the workplace, outlining the different instances in which workers may encounter AI and the technological aspects of AI that may impact workers.

AI and Shared Prosperity Initiative

Ongoing Project

The AI and Shared Prosperity Initiative conducts research and gathers multidisciplinary input to develop and disseminate practical frameworks that companies developing and deploying AI should adopt to ensure that AI progress advances broadly shared prosperity, rather than the economic betterment of a few to the detriment of many. The project strives to equip our Partners with practical approaches for making AI development and deployment inclusive by design.

The Role of Demographic Data in Addressing Algorithmic Bias

Research Project

A lack of clarity around acceptable uses of demographic data has frequently been cited by PAI Partners as a barrier to addressing algorithmic bias in practice. This has led us to ask, “When and how should demographic data be collected and used in service of algorithmic bias detection and mitigation?” In response, the Partnership on AI is conducting a research project exploring how constraints on access to and usage of demographic data act as a barrier to detecting bias. We are presently conducting a series of interviews to better understand the challenges that may prevent the detection or mitigation of algorithmic bias.
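
To make concrete why this data matters: many bias-detection methods begin by disaggregating a performance metric across demographic groups, which is impossible without some record of group membership. A minimal sketch of that idea follows, using pandas and scikit-learn; the column names are hypothetical.

    # Minimal sketch of disaggregated evaluation; the column names
    # ("label", "prediction", "group") are hypothetical.
    import pandas as pd
    from sklearn.metrics import accuracy_score

    def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
        """Compute accuracy separately for each demographic group."""
        return df.groupby(group_col).apply(
            lambda g: accuracy_score(g["label"], g["prediction"])
        )

    df = pd.DataFrame({
        "label":      [1, 0, 1, 1, 0, 1],
        "prediction": [1, 0, 1, 0, 0, 0],
        "group":      ["a", "a", "a", "b", "b", "b"],
    })
    print(accuracy_by_group(df, "group"))  # a large gap between groups signals possible bias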

Publication Norms for Responsible AI

Ongoing Initiative

One important aspect of “responsible AI” is the question of when and how to publish novel research in a way that maximizes its beneficial applications while mitigating potential harms. As AI/ML is applied in increasingly high-stakes contexts and touches more and more parts of our everyday lives, it becomes ever more important to consider the broader social impact of AI/ML research and to mitigate the risks of malicious use, unintended consequences, and accidents, so that we can all enjoy the many potential benefits of this transformative technology. The Partnership on AI is undertaking a multistakeholder project that aims to facilitate the exploration and thoughtful development of publication practices for responsible AI.

Bringing Facial Recognition Systems To Light

Paper

Understanding how facial recognition systems work is essential to examining the technical, social, and cultural implications of these systems. This paper and interactive graphic describe how a facial recognition system works, clarifying the methods and goals of facial detection, facial verification, and facial identification. The paper explains that systems that analyze and categorize facial characteristics are not a part of facial recognition systems, because they do not verify or predict someone’s identity. The paper additionally includes a list of questions that policymakers and other stakeholders can use to elicit additional technical and related information about facial recognition systems.
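
As a rough illustration of the distinctions the paper draws, the sketch below separates verification (a one-to-one comparison against a claimed identity) from identification (a one-to-many search over a gallery). The embed() function stands in for a face-embedding model and is an assumption, not a real API; detection, which locates faces in an image, would happen upstream of both.

    # Conceptual outline only; embed() is a hypothetical face-embedding model.
    import numpy as np

    def embed(face_image: np.ndarray) -> np.ndarray:
        """Hypothetical: map a detected face crop to a feature vector."""
        raise NotImplementedError

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(face, claimed_reference, threshold=0.8) -> bool:
        """Verification: does this face match one claimed identity?"""
        return similarity(embed(face), embed(claimed_reference)) >= threshold

    def identify(face, gallery: dict, threshold=0.8):
        """Identification: search a gallery of known faces for the best match."""
        query = embed(face)
        best_name, best_score = None, threshold
        for name, reference in gallery.items():
            score = similarity(query, embed(reference))
            if score >= best_score:
                best_name, best_score = name, score
        return best_name  # None if no gallery entry clears the threshold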

Closing Gaps In Responsible AI

Ongoing Project

Operationalizing responsible AI principles is a complex process, and currently, the gap between intent and practice is large. To help fill this gap, the Partnership on AI has initiated Closing Gaps in Responsible AI, a multiphase, multistakeholder project aimed at surfacing the collective wisdom of the community to identify salient challenges and evaluate potential solutions. The first phase of our project is the Closing Gaps Ideation Game, an interactive exercise that solicits experiences and insights from the technology community in order to surface challenges and evaluate solutions for the organizational implementation of responsible AI. These insights can, in turn, inform and empower the changemakers, activists, and policymakers working to develop and realize responsible AI.

ABOUT ML - Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

Ongoing Project

As machine learning (ML) becomes more central to many decision-making processes, including in high-stakes contexts such as criminal justice and banking, the companies deploying such automated decision-making systems face increased pressure for transparency into how these decisions are made. Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles (ABOUT ML) is a multi-year, iterative, multistakeholder project of the Partnership on AI (PAI) that will work towards establishing evidence-based ML transparency best practices throughout the ML system lifecycle, from design to deployment, starting by synthesizing existing published research and practice into recommendations on documentation practice.
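
To make “documentation practice” concrete: one recurring recommendation in the published work that ABOUT ML synthesizes is to record a system’s intended use, data provenance, and known limitations in a structured artifact. The sketch below is a hypothetical illustration of such a record; its field names are illustrative and are not the ABOUT ML specification.

    # Hypothetical structured documentation record; field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class SystemDocumentation:
        name: str
        intended_use: str
        out_of_scope_uses: list
        training_data_summary: str
        evaluation_metrics: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    doc = SystemDocumentation(
        name="loan-screening-v2",
        intended_use="Ranking loan applications for human review",
        out_of_scope_uses=["Fully automated denial of credit"],
        training_data_summary="2015-2019 applications from one U.S. region",
        evaluation_metrics={"auc": 0.87},
        known_limitations=["Not validated outside the training region"],
    )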

Explainable Machine Learning in Deployment

Paper

Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used. We find that, in its current state, XAI best serves as an internal resource for engineers and developers rather than as a way of providing explanations to end users. Additional improvements to XAI techniques are necessary in order for them to work as intended and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
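
As an example of the internal, engineer-facing use the paper finds most common, a developer might compute global feature importances to debug a model before deployment. The sketch below uses scikit-learn’s permutation importance on synthetic data; it explains the model to its builders, not to the people affected by its decisions.

    # Sketch of an engineer-facing explanation: permutation feature importance.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")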

AI and Media Integrity Steering Committee

Steering Committee

The AI and Media Integrity Steering Committee is a formal body of PAI Partner organizations focused on projects to confront the emergent threat of AI-generated mis/disinformation, synthetic media, and AI’s effects on public discourse.

On the Legal Compatibility of Fairness Definitions

Paper

Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. We go further, demonstrating that these definitions often misunderstand the legal concepts they purport to draw upon and consequently co-opt legal language inappropriately. In this paper, we present examples of this misalignment, discuss the differences between ML terminology and its legal counterparts, and consider what both the legal and ML fairness communities can learn from these tensions. We focus on U.S. anti-discrimination law, since the ML fairness research community regularly references terms from this body of law.
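
For readers unfamiliar with the definitions at issue, two of the most frequently discussed are stated below in standard textbook form (not the paper’s own notation), where \hat{Y} is the model’s prediction, Y the true outcome, and A a protected attribute:

    % Demographic parity: predictions are independent of the protected attribute.
    P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \forall\, a, b

    % Equalized odds: the same independence, conditional on the true outcome.
    P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y)
        \quad \forall\, a, b,\ y \in \{0, 1\}

Demographic parity, for example, is frequently compared to U.S. disparate-impact doctrine, and the fit between the two is exactly the kind of question the paper scrutinizes.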

SafeLife 1.0: Exploring Side Effects in Complex Environments

Research Project

As reinforcement learning agents begin to be deployed in real-world, high-stakes scenarios, it is critical to ensure that they operate within appropriate safety constraints. PAI’s new SafeLife project addresses this complex challenge, creating a publicly available reinforcement learning environment that tests the ability of trained agents to operate safely and minimize side effects. SafeLife is part of a broader initiative at PAI to develop benchmarks that incorporate safety, fairness, and other ethical objectives.
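
As a rough sketch of what testing for side effects can look like in practice, the loop below evaluates a fixed policy in a gym-style environment and accumulates a per-step side-effect score. The environment id and the info["side_effects"] key are assumptions for illustration; consult the SafeLife repository for its actual interface.

    # Illustrative only: assumes the classic gym API and a hypothetical
    # info["side_effects"] score reported by the environment.
    import gym

    def average_side_effects(env_id: str, policy, episodes: int = 10) -> float:
        """Average per-episode side-effect score under a fixed policy."""
        env = gym.make(env_id)
        totals = []
        for _ in range(episodes):
            obs, done, total = env.reset(), False, 0.0
            while not done:
                obs, reward, done, info = env.step(policy(obs))
                total += info.get("side_effects", 0.0)  # hypothetical key
            totals.append(total)
        return sum(totals) / len(totals)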

Human-AI Collaboration Framework & Case Studies

Case Studies

Best practices for collaborations between people and AI systems, including those concerning transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy, depend on a nuanced understanding of the nature of those collaborations. With the support of the Collaborations Between People and AI Systems (CPAIS) Expert Group, PAI has drafted a Human-AI Collaboration Framework to help users consider key aspects of human-AI collaboration technologies. We have also prepared a collection of seven case studies that illustrate the Framework and its applications in the real world.

Human-AI Collaboration Trust Literature Review: Key Insights and Bibliography

Report

In order to better understand the multifaceted, important, and timely issues surrounding trust between humans and artificially intelligent systems, PAI has conducted an initial survey and analysis of the multidisciplinary literature on AI, humans, and trust. This project includes a thematically tagged Bibliography of 80 aggregated research articles, as well as an overview document presenting seven key insights. These key insights, themes, and aggregated texts can serve as fruitful entry points for those investigating the nuances of the literature on humans, trust, and AI, and can help align understandings of trust between people and AI systems. They can also help inform future research.

Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent

Policy Paper

PAI believes that bringing together experts from around the world who represent different cultures, socio-economic experiences, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. In order to fulfill their talent goals and host conferences of international caliber, countries around the world will need laws, policies, and practices that enable international scholars and practitioners to contribute to these conversations. Based on input from PAI Partners, AI practitioners, and PAI’s own research, PAI’s policy paper on Visa Laws, Policies, and Practices offers recommendations that will enable multidisciplinary AI/ML experts to collaborate with international counterparts. PAI encourages individuals, organizations, and policymakers to implement these policy recommendations in order to benefit from the diverse perspectives offered by the global AI/ML community.

AI, Labor, and the Economy Case Study Compendium

Case Study

The impact of artificial intelligence on the economy, labor, and society has long been a topic of debate among policymakers, business leaders, and the broader public, particularly in the last decade. To help elucidate these areas of uncertainty, the Partnership on AI’s Working Group on “AI, Labor, and the Economy” conducted a series of case studies across three geographies and industries, using interviews with management as an entry point to investigate the productivity impacts and labor implications of AI implementation.

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System

Report

Gathering the views of PAI’s multidisciplinary AI and ML research and ethics community, this report documents the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system and concludes that current risk assessment tools are not ready to be used in decisions to incarcerate human beings. The report includes ten requirements that jurisdictions should weigh heavily prior to using these tools.