
The Partnership on AI Steering Committee on AI and Media Integrity

Terah Lyons

September 5, 2019

Advances in AI and computer graphics over the last several years are now being harnessed to create and disseminate manipulated or fabricated images, audio, and video content, often referred to broadly as synthetic media. These new content generation and modification capabilities have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions—especially given that some of these techniques may be used maliciously as a source of misinformation, manipulation, harassment, and persuasion.

The ability to create synthetic or manipulated content that is difficult to distinguish from real events underscores the urgent need to develop new capabilities for detecting such content and for authenticating trusted media and news sources. AI techniques are being developed to detect and defend against synthetic and modified content. However, further investment and collaboration will be required to advance and apply these techniques, and to strengthen capacity in the organizations and communities affected by these developments. We believe that mounting an effective response will require focused coordination by diverse institutional actors, including companies, non-profit organizations, governments, and the research community. A coordinated effort can support the development of methods, policies, and programs designed to minimize deception and enhance the integrity of media.

To address this challenge, the Partnership on AI is assembling a new Steering Committee on AI and Media Integrity. This Steering Committee, made up of organizations spanning civil society, technology companies, media organizations, and academic institutions, will focus on a specific set of activities and projects directed at strengthening the research landscape related to new technical capabilities in media production and detection, and at increasing coordination across organizations affected by these developments. It also aims to be a venue in which individual organizations can step forward to invest in building the field, and have their work strengthened by diverse expertise in a collaborative setting. Initial members of this Steering Committee will include First Draft, WITNESS, XPRIZE, CBC/Radio-Canada, the BBC, The New York Times, Facebook, and Microsoft, among other PAI Partner organizations to be announced later.

This Steering Committee builds upon the essential work of an Expert Group we launched last year to conduct early investigation of these topics. Co-chaired by WITNESS and Facebook, the group comprises a broad coalition of organizations convened from across the Partnership, and is focused on pressing issues at the intersection of AI and the media, including mis- and disinformation and synthetically generated media. Together, our community has been working to promote a more global conversation on these issues; to help news organizations and platforms better understand how to authenticate content and enhance detection capabilities; to understand and bolster the research landscape related to new technical capabilities in media production and synthetic media detection; to build coordination capacity across newsrooms and the broader media landscape; and to highlight the potential implications of media preparedness issues for under-represented or vulnerable communities. The new Steering Committee aims to further galvanize institutions to commit to sharing more information, resources, and expertise toward these goals.

The first project undertaken by the Steering Committee on AI and Media Integrity will be the oversight and governance of the Deepfake Detection Challenge: an open-source benchmarking project we are announcing today with Facebook and other organizations, intended to advance the detection of AI-generated “deepfakes,” which leverage AI techniques to fabricate realistic videos of fictional events. The Steering Committee and associated organizations will independently oversee the Challenge, including determining competition governance and scoring participants. We are especially glad to see technology industry actors taking these concerns seriously and playing a leadership role in catalyzing further investment in technical development for the AI research field, in collaboration with civil society and other outside experts. These are shared challenges that will require collective investment and attention for the entire ecosystem to make progress.

The Deepfake Detection Challenge will be the first of several significant projects overseen by the Steering Committee on AI and Media Integrity that gather organizations to collaborate and advance investment in the development of technology tools to support information integrity. We look forward to leading shared progress with our community in these critical areas for the AI research field.
