
Challenges for Responsible AI Practitioners and the Importance of Solidarity

Bogdana Rakova

March 8, 2021

Recent years have seen an explosion in the study of responsible artificial intelligence (AI), with more resources than ever offering guidelines for mitigating this technology’s harms and equitably distributing its benefits. At the same time, the distance between proposed best practices in AI and those currently prevalent in the field remains great. The challenge of implementing these practices is perhaps clearest at leading tech companies, where responsible AI initiatives have, in recent months, resulted in more public setbacks than visible change.

For those in the AI community working outside of these companies, such setbacks can make it feel like there is little to do but watch from afar with dismay. This community, however, has an important role to play in enacting positive change. New research, conducted by researchers at Partnership on AI (PAI) together with Spotify and Accenture, identified dominant work practices that act as obstacles to responsible AI projects, along with organizational-level recommendations for transforming them.

In discussions of these recommendations, several common themes emerged: the ability of responsible AI practitioners to veto AI systems, the role and balance of internal and external pressure in motivating corporate change, the need for channels of communication between different people within organizations, and the challenge of sequencing these actions. To me, these findings point to the contribution the broader responsible AI community can make. Individuals committed to responsible AI can reach out and step up in solidarity with industry practitioners who share their beliefs.

For our paper, we conducted 26 semi-structured interviews with industry practitioners working on projects related to responsible AI concerns. Our qualitative analysis mapped out which current organizational structures empower or hinder responsible AI initiatives, what future structures would best support such initiatives, and the key elements needed to transition from the present to an aspirational future. Crucially, it is not only those within organizations who can support this transition. All individuals can contribute by establishing networks of shared belief that serve as alternatives to the current dominant structures.

Amid growing public awareness, AI practitioners are taking an increasingly active role in addressing algorithmic bias and its associated harms. This is true on both an individual level and an organizational one, with a growing number of technology companies making commitments to machine learning (ML) fairness or responsibility. In practice, however, many employees tasked with developing responsible AI processes are left without a clear path to operationalizing their work, limiting the impact of these initiatives.

Some of the practitioners we interviewed specifically described stress-related challenges stemming from their responsible AI work. Additionally, a number of interviewees left their organizations between late 2019, when we conducted the interviews, and December 2020, when this research was accepted for publication. Even more recently, Margaret Mitchell, a leader of Google’s Ethical AI team, was fired amid continued controversy over the dismissal of her former colleague Timnit Gebru.

Common obstacles faced by the practitioners we interviewed included lack of accountability, ill-informed performance trade-offs, and misalignment of incentives within decision-making structures. These obstacles can be understood as a result of how organizations answer four key questions: When and how do we act? How do we measure success? What are the internal structures we rely on? And how do we resolve tensions? These are questions that every organization must have a process for answering when developing responsible AI practices.

As organizations seek to scale responsible AI practices, they will have to transition from prevalent or emerging approaches to these questions to those of an aspirational future. Importantly, not all emerging practices will necessarily lead to that future. To help envision this transition, we conducted a workshop based on the Two-Loops Theory of Change model at an ML conference. Workshop participants were given a responsible AI scenario based on prevalent industry practices and tasked with identifying both current barriers and possible solutions. During the exercise, the following themes emerged:

1. The importance of being able to veto an AI system.

Multiple participant groups mentioned that before considering how the fairness or societal implications of an AI system can be addressed, it is crucial to ask whether an AI system is appropriate in the first place. They recommended designing a veto power that is available across many levels, from individual employees via whistleblower protections to internal multidisciplinary oversight committees to external investors and board members. The most important design feature is that the decision to cease further development is respected and cannot be overruled by other considerations.

2. The role and balance of internal and external pressure to motivate corporate change. 

The different and synergistic roles of internal and external pressure were another theme across multiple participant groups’ discussions. Internal evaluation processes have more access to information and may provide higher levels of transparency, while external processes can leverage more stakeholders and build momentum by forming coalitions. External groups may also be able to apply pressure more freely than internal employees, who may worry about repercussions for speaking up.

3. Building channels for internal and external communication centered on participation and inclusion.

Fundamentally, organizations are groups of people, and creating opportunities for different sets of people to exchange perspectives was another key enabler identified by multiple workshop groups. One group recommended a regular town hall where employees can provide input on organization-wide values in a semi-public forum.

4. Sequencing these actions will not be easy because they are highly interdependent.

Many of the workshop groups identified latent implementation challenges, since these organizational enablers work best in tandem. For example, whistleblower protections for employees, and a culture that supports their creation, would be crucial to ensure that people feel safe speaking candidly at town halls.

My co-authors and I share these themes as a starting point to spark experimentation. Pooling the results of organizations that try these recommendations would accelerate learning for everyone and help scale responsible AI practices toward positive societal outcomes.

We hope that this work offers a deeper understanding of the challenges faced by responsible AI practitioners. Our core motivation was to identify enablers that could shift organizational change toward adopting responsible AI practices. Ultimately, reaching out and stepping up in solidarity could inspire actionable conversations and collaborations that positively influence the organizations where AI systems are being developed.
