When Is It Appropriate to Publish High-Stakes AI Research?
By Claire Leibowicz, Steven Adler, and Peter Eckersley
While openness is a long-standing and deeply held value in AI research — for instance, the Partnership on AI (PAI) includes a commitment to open research among its foundational tenets — it has to some extent been in tension with recent precautionary efforts to mitigate the potential unintended consequences and malicious uses of new machine learning technology.
In early March, we co-hosted a dinner with OpenAI and other members of the AI community to explore this tension and to ask whether review processes prior to the open publication of AI research might be workable and productive. The dinner was prompted by OpenAI's recent decision to publish its advanced language model GPT-2 only as a research paper, without releasing the trained model weights. [1]
Participants simulated a review panel at an imaginary technology company and considered whether to publish a hypothetical, innovative AI advance (comprising a research paper, code, training data, and a final trained neural network model) that might have malicious applications. Groups considered major breakthroughs in generative video synthesis, automated software vulnerability detection, and conversational and reading comprehension models. The Social and Societal Influences of AI (SSI) Working Group also conducted the simulation at its recent meeting.
Several common decision factors emerged among simulation participants grappling with hypothetical AI advances. As the AI community debates and experiments with review processes, some relevant considerations resulting from this dinner and the SSI Working Group meeting include the following:
Standard risk assessment processes
Organizations should assess risks, model potential threats, and calculate trade-offs. How likely and serious are potential malicious uses? What resources would they require? Would they occur in centralized institutions or via individual actors? How effectively can they be mitigated? What opportunities are lost by delayed publication? How inseparable are good uses from malicious ones? Are there ways that the design of a research program or the publication strategy could change the balance?
Review process timeframes
Whether a review process can accomplish anything useful may depend on timeframes; even if one party does not publish, other labs may be making similar progress and choose to publish. A decision not to publish may therefore only delay the availability of similar research by a few months or years. Because it may be unrealistic to expect review processes to unilaterally prevent the publication of results altogether, it may be more productive to think in terms of delays to publication and the mitigations for unintended consequences that such delays make possible. That said, several participants noted how difficult it is to keep large-scale AI developments under wraps: news leaks can force publication decisions before an organization is ready.
Responsible disclosure processes
If an institution pursued a delayed publication process, it would be important to focus on what responsible disclosure processes and mitigations could be deployed during the publication lag. Responsible disclosure is a practice developed by the computer security community: upon discovering bugs that leave software vulnerable to attack, researchers give the software's vendors notice and a set amount of time to fix them before publishing details. Though the analogy to malicious uses of AI systems is not exact (only a small minority of security bugs are as widespread and structural as, say, the emergence of synthetic video), thinking in terms of this framework may help decision-makers. For instance, how much lead time is necessary to implement defenses against malicious uses? How effectively can those defenses be implemented by a small number of centralized actors given advance notice?
Precaution during research design and scoping
Many participants noted that it is easier to shape the consequences of technology — thereby mitigating potential misuses — at the design stage, rather than once a research project is completed. If a project could be expected to have serious negative consequences, it may be better to avoid doing it in the first place than to spend considerable resources and then not publish or to be faced with “putting the genie back in the bottle.” Flagging risks early also allows more time for mitigation and affords more opportunities to adjust the project to change the balance of outcomes towards positive ones.
The ideas above are not mutually exclusive, nor are they an exhaustive list of ways to balance openness with the values informing precautionary measures.
A short survey conducted after the dinner yielded two insights:
- There is not yet consensus on proposed AI research review norms. In our unscientific sample of 20 AI researchers, significant groups aligned with each of our three survey responses: one group felt that openness remains the best norm in almost all cases, another believed that a review process prior to publication might be appropriate and useful, and a third felt that research should be shared only within trusted groups.
- However, there is consensus that if the community chooses to restrict or reduce research openness, norms and review parameters should be standardized across the AI community rather than designed independently by individual organizations.
Several participants urged the AI community to grapple with these tensions before today’s state-of-the-art AI technologies become more broadly accessible (whether through improvements in open-source technologies, reductions in computing costs, or other means). PAI is therefore continuing to organize conversations among our Partners on these topics and may produce a longer synthesis document on possible practices that should be considered.
Insights from this simulation will guide future research from PAI’s SSI Working Group as well as PAI’s broader policy and research agenda. We welcome additional thinking on research norms that preserve the benefits of openness while ensuring AI is used for positive purposes. Contact Claire Leibowicz at claire@partnershiponai.org to get involved. This event was an early example of the workshops, dinners, and other rapid-response convenings PAI will hold in which participants share knowledge, debate questions percolating within the AI community, and work toward actionable solutions. We look forward to exploring similar challenges at future events with our Partners.
[1] This discussion about the preeminence of openness has been underway for some years in parts of the AI community but has been accelerated by OpenAI’s approach to GPT-2. GPT-2 predicts the next word in a document better than previous neural networks, allowing it to compose fairly human-like, context-aware text in response to a very wide range of prompts. The decision to withhold the trained model prompted a great deal of controversy in the machine learning community, but it is likely to be just the beginning of longer debates around publication and precaution with advanced AI systems. PAI’s mission calls us to engage with such debates and help make progress in response to them.