CFP: Navigating the Broader Impacts of AI Research (NeurIPS 2020 Workshop)
We're pleased to announce the Call for Participation for our workshop on Navigating the Broader Impacts of AI Research. This workshop is a part of the 2020 Neural Information Processing Systems conference (NeurIPS), and the event will be held virtually along with other workshops.
Paper submission deadline: October 12, 2020
Workshop date: December 12, 2020
Submit via CMT
Following growing concerns with both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.
These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and they take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism: some within and outside the community see them as a crucial first step toward integrating ethical reflection and review into the research process, fostering necessary changes to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.
This debate reveals that even as the AI research community is beginning to grapple with the legitimacy of certain research questions and critically reflect on its research practices, there remain many open questions about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. This event complements other NeurIPS workshops this year devoted to normative issues in AI and builds on others from years past, but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.
The workshop will include contributed papers. All accepted papers will be allocated either a virtual poster presentation or a virtual talk slot. Authors will have the option to have final versions of workshop papers and talk recordings linked on the workshop website.
Submissions can be up to 4 pages, excluding references and supplementary materials, and should be formatted using the provided NeurIPS general submission template. Papers should not include any identifying information about the authors, to allow for anonymous review. Previously published work (or work under review) is acceptable, with the exception of previously published machine learning research.
We invite submissions relating to the role of the research community in navigating the broader impacts of AI research. Workshop paper submissions can include case studies, surveys, analyses, and position papers, including but not limited to the following topics:
Mechanisms of ethical oversight in AI research: What are some of the practical mechanisms for anticipating future risks and mitigating harms caused by AI research? Are such practices actually effective in improving societal outcomes and protecting vulnerable populations? To what extent do they help in bridging the gap between AI researchers and those with other perspectives and expertise, including the populations at risk of harm?
Analysis of the strengths and limitations of the NeurIPS broader impact statement as a mechanism for ethical oversight
Reflections on experiences with this year’s NeurIPS ethical oversight process
Ideas for alternative ethical review procedures, including how such determinations should be made and who should be involved in these determinations 
Assessments of the strengths and limitations of research ethics and institutional review boards, particularly with respect to the formulation of research questions and the broader impact of research findings
Examples of how other fields engaged in high-risk research have handled the issue of ethical oversight (e.g., nuclear energy, nanotechnology, synthetic biology, geoengineering, etc.)
Lessons from research traditions that work directly with affected communities to develop research questions and research designs
Challenges of AI research practice and responsible publication: What practices are appropriate for the responsible development, conduct, and dissemination of AI research? How can we ensure widespread adoption?
Surveys of responsible research practice in AI research, including common practices around data collection, crowdsourced labeling, documentation and reporting requirements, declaration of conflict of interest, etc.    
Limitations and benefits of the conference-based publication format, peer review, and other characteristics of AI publication norms, including alternative proposals (e.g., gated or staged release)   
Collective and individual responsibility in AI research: Who is best placed to anticipate and address potential research impacts? What should be the role of AI researchers and the AI research community? And how do we get there?
Discussions of the role and obligations of different stakeholders (e.g., conference organizers, institutions, funders, researchers, users/customers, etc.) in ensuring ethical reflection and anticipating impacts of AI research 
How does the lack of diversity in the AI research community contribute to the problem of overlooking or underestimating potential harms?
Proposals for how to empower the impacted populations to shape research agendas, practices, and publication norms 
What makes for a quality ethical reflection? How can researchers prepare themselves for ethical reflection?
How do the obligations of researchers and practitioners differ when considering the potential impacts of their work? Are there meaningful differences across research and applied contexts?
Reflections on how ethical review could be integrated into different parts of the research pipeline, such as the funding process, IRB requirements, etc.
Anticipated risks and known harms of AI research: How should researchers identify the relevant risks posed by their work and what should be the different dimensions of concern? How can we ensure that researchers are well aware of known harms caused by related research and make sure that the field is responsive to the needs and concerns of affected communities?
Examples of effective and ineffective mechanisms for identifying relevant risks, including ethical review of research proposals and pre-publication review 
Proposals for creative approaches to understanding the impacts of research and prioritizing the protection of affected communities  
Case studies of AI research that had harmful impacts
Examples of AI research that had unanticipated consequences
Submit here by October 12, 2020. Authors will be notified of acceptance by October 30, 2020.