A Report on the Deepfake Detection Challenge

Claire Leibowicz

March 12, 2020

Expanding upon our previous blog post, the Partnership on AI and members of the AI and Media Integrity Steering Committee are proud to share a report on our collective efforts in the Deepfake Detection Challenge (DFDC). In the report, we present six key insights and associated recommendations that can inform future work on synthetic media detection, many of which extend more broadly to AI and its impact on media integrity. We also document PAI’s involvement with the DFDC and share what we learned about conducting meaningful multistakeholder work on AI’s development.

These insights and recommendations highlight the importance of coordination and collaboration among actors across the information ecosystem. Journalists, fact-checkers, policymakers, civil society organizations, and others outside the largest technology companies are confronting the potential malicious use of synthetic media around the world, and they need increased access to useful technical detection tools and other resources for evaluating content. At the same time, these tools and resources must remain inaccessible to adversaries working to generate malicious synthetic content that evades detection. Overall, detection models and tools must be grounded in the real-world dynamics of synthetic media detection and in an informed understanding of their impact and usefulness.

PAI will continue to iterate on initiatives that touch on synthetic media detection while also attending to the other aspects of AI and media integrity that warrant collective attention and multistakeholder engagement. In 2020, we seek to ensure more coordinated, diverse governance of technical tools and systems built for and around AI-generated mis/disinformation, and to increase access to such tools and knowledge.

For those considering future work on synthetic media detection, and ways to conduct multistakeholder work with PAI, we invite you to read our report, “The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity.”
