PAI Research Promotes Responsible Collaborations between People and AI Systems
AI systems promise to augment human perception, cognition, and problem-solving abilities. They also pose risks of manipulation, abuse, and other negative consequences, both foreseen and unintended. Today's AI technologies interact with people increasingly often, and in ever more varied contexts and modalities: a person might seek mental health support from a chatbot, ride in an autonomous vehicle, or learn from an intelligent tutoring system. It is therefore vital that we think of AI systems not as isolated technical systems, but as technologies embedded in people's lives.
To help developers and users better understand the nature of these interactions, PAI's Collaborations Between People and AI Systems (CPAIS) Expert Group [1] has conducted a series of research projects focused on reviewing relevant literature, developing case studies with practitioners, and producing useful tools and high-level insights. Developing best practices for these collaborations is a key part of PAI's mission to advance the responsible and socially beneficial development and deployment of AI.
The final outputs from this research, described in more detail below, include a Framework of questions that help users consider the key elements of these collaborations, a series of case studies that illustrate the Framework "in action," a review of the literature on trust in human-AI interactions, and a discussion of the key insights produced by that review. Together, these outputs demonstrate the variety of technologies that bring people and AI together and highlight key elements that developers and users should consider. To learn more about this research and PAI's work to advance responsible and socially beneficial AI systems, see our Research and Publications page.
[1] The CPAIS Expert Group consists of over 30 representatives from across technology, academia, and civil society. Within these sectors, group members represent varied disciplinary training and roles (e.g., policy, research, product).
1. CPAIS Framework and Case Studies
Best practices for human-AI collaboration systems must address important issues such as transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy, all of which depend on a nuanced understanding of the nature of those collaborations. With that in mind, PAI developed a Human-AI Collaboration Framework containing 36 key questions that surface the features one should consider when thinking about human-AI collaboration. To illustrate the Framework's application, PAI collected seven case studies from AI practitioners, designed to highlight the variety of real-world collaborations between people and AI systems. The case studies, listed below, describe each technology and its use, followed by the authors' answers to the questions in the Framework:
- Virtual Assistants and Users (Claire Leibowicz, Partnership on AI)
- Mental Health Chatbots and Users (Yoonsuck Choe, Samsung)
- Intelligent Tutoring Systems and Learners (Amber Story, American Psychological Association)
- Assistive Computing and Motor Neuron Disease Patients (Lama Nachman, Intel)
- AI Drawing Tools and Artists (Philipp Michel, University of Tokyo)
- Magnetic Resonance Imaging and Doctors (Bendert Zevenbergen, Princeton Center for Information Technology Policy)
- Autonomous Vehicles and Passengers (In Kwon Choi, Samsung)
The Framework questions are designed to get users thinking about the various elements of human-AI collaboration, drawing attention to the specific nuances of different AI technologies, including their distinct implications and potential social impacts. Together, the Framework and the Case Studies that illustrate its application can help researchers, developers, and policymakers identify and think through key elements of responsible AI development and deployment.
2. CPAIS Trust Literature Review – Insights and Bibliography
Trust is an essential element of any collaboration, and the dynamics of trust between people and artificial intelligence systems are no exception. As a first step toward understanding this complex topic, PAI conducted a survey and analysis of 78 multidisciplinary articles on AI, humans, and trust. The analysis surfaced key themes and high-level insights, and it revealed important knowledge gaps and areas for further research. The articles and their abstracts are aggregated in a Bibliography and tagged according to one of four relationships with trust: ways of understanding trust, means of promoting trust, the entity receiving trust, and the impact of trust.
Our literature review reveals a range of different definitions of AI, as well as differing assumptions about why trust matters. Many of the articles were published before the Internet became ubiquitous and before the social implications of AI became a central research focus. Perhaps as a consequence, the promotion of trust was often presented simplistically in the literature we reviewed, and discussion of the important idea of institutional trust, that is, trust in the organizations and institutions developing AI technologies, was underrepresented. The articles also highlight the importance of context for understanding the dynamics of trust between people and AI systems, as well as the need to consider the value of distrust rather than assuming that trust in AI is a universal good.
This project's key insights, themes, and aggregated texts can serve as fruitful entry points for researchers, users, and developers, and can help build a shared understanding of trust between people and AI systems. This work can also inform future research, which should investigate the gaps identified in our bibliography to improve our understanding of how human-AI trust facilitates, or sometimes hinders, the responsible implementation and application of AI technologies.
Taken together, these projects highlight the complexity of human-AI collaboration and the importance of treating AI systems as features embedded in, not separate from, human social systems. They reveal opportunities for more nuanced thinking about essential issues such as trust, and they can contribute to more thoughtful and responsible decision making around the development and deployment of AI systems in everyday life. We encourage developers, researchers, consumers, and policymakers to take advantage of the provocations offered by the Human-AI Collaboration Framework and to leverage the insights and opportunities presented by the literature on trust.