Protecting Public Discourse from AI-Generated Mis/Disinformation
The workshop took place in the Council Chamber of the BBC's Old Broadcasting House in London.
This post was authored in collaboration with Laura Ellis (Head of Technology Forecasting, BBC) and Sam Gregory (Program Director, WITNESS)
Today’s artificial intelligence (AI) technologies can create increasingly realistic video, audio, and text.
An AI-generated video created to mislead viewers might go viral on a social network. Synthetic audio sent to newsrooms may be interpreted as legitimate evidence. AI-generated comments on a news article could be produced at a volume that influences public opinion. In each of these cases, AI presents new threats to the veracity and credibility of information – potentially distorting public discourse. As AI becomes more sophisticated and its techniques more accessible, how can organizations across technology, media, civil society, and the academic research community work together to coordinate strategies around the emergent threat of AI-generated mis/disinformation?
The Partnership on AI, the BBC, and WITNESS co-hosted a workshop with other leading institutions in London at the end of May to begin answering this question. WITNESS, a global human rights organization working proactively to address new threats to trustworthy information, and the BBC, a well-known media institution concerned with the effects of mis/disinformation, provided valuable expertise and leadership. Both organizations are members of PAI’s Social and Societal Influences of AI Working Group, from which this workshop emerged. Participants included senior-level decision makers from media organizations; key product, policy, and threat management leads from technology companies and social networks; researchers studying relevant technologies; and others working on these challenges in civil society.
The meeting aimed to:
- Connect news and media organizations, key technology companies, researchers, and others
- Build a better understanding of the threats organizations face now and will face in the future
- Promote the development of potential solutions to those threats, and explore how they relate to existing technical and journalistic approaches as well as the global contexts of mis/disinformation
- Identify tactics for better communication and coordination between participants
- Enable participants to work together on the long-term, positive development of AI in the context of mis/disinformation
A series of provocations set the stage for breakout sessions on key issues:
Authentication and Provenance
- For outputs from journalistic entities, how can we agree on the common use of tools to signal to audiences/platforms/machines that content is bona fide? How do platforms then go on to authenticate content?
- Can we do this in ways that avoid the reputational damage that comes from brand misappropriation? What issues arise from sharing the metadata that accompanies content to facilitate this at an industry level? Is such a solution something we want, and is it something that can be done at scale?
- How would news organizations and platforms evaluate signals of authenticity attached to user-generated content, and what would be the pros and cons, globally and across a diverse media ecosystem, of having more of these signals established at capture? Where would such signals come from?
Coordination*
- How can we better coordinate around mis/disinformation threats, including emergent AI-generated threats?
- Is it possible to develop a way to share information about mis/disinformation and synthetic media that will improve responsiveness across newsrooms and the broader media ecosystem?
- What would coordination or better communication look like? Could it develop out of or build upon our existing partnerships? Do we need to build something technically? Who would it include, taking into account entities of different sizes around the world?
Synthetic Media Detection and Alerts
- What would a shared detection system look like? What are ways to enhance collaboration on detection and on training data for detection?
- How should we use the technology and contacts we have to create ways to detect synthetic media? What is the optimal way to maximize access to detection systems, including for smaller or less well-resourced news organizations, such as those in the Global South?
Alerting the Wider World
- How do we open up the dialogue with audiences, signal harmful content, and help to spread truthful materials?
- What research exists, and what is needed, on how to communicate with the public about new forms of audio-visual manipulation that are invisible to the naked eye?
The group outlined tactical next steps for each of these breakout challenge areas. Potential future directions include improving and standardizing methods for confirming media provenance, sharing detection techniques, researching how to signal manipulated content to the public, and holding a semi-regular coordination convening where organizations can share information about current threats and plan for future ones.
These workshop themes transcend organizational, geographic, and sectoral boundaries, and addressing them requires multistakeholder collaboration. The Partnership on AI, alongside the BBC and WITNESS, remains committed to expanding participation in this work to other organizations and perspectives not represented in this initial workshop. To become involved in PAI’s AI and media work, get in touch with Claire Leibowicz at AImedia@partnershiponai.org. We look forward to continuing this work with our Partners, and we welcome the participation of others in the creation of best practices for this timely issue area.
Workshop Participants:
- Agence France-Presse (AFP)
- British Broadcasting Corporation (BBC)
- Canadian Broadcasting Corporation (CBC)
- First Draft
- Graphika
- Internews
- Microsoft
- The New York Times
- Thoughtful Technology Project
- XPRIZE
- WITNESS
- Academics from SUNY Albany & The University of Sheffield
*While this workshop was scoped to the emergent threat of AI-generated media, we often found ourselves contextualizing that threat with other forms of manipulated media that do not require AI techniques.