Eurograd message

Message posted on 17/12/2024

thematic track Governing the Use of AI in Scientific Research at Eu-SPRI 2025

Governing the Use of AI in Scientific Research

Thematic track at Eu-SPRI 2025: https://euspri2025.de/index.php/callforpapers/

Convenors: Laurens Hessels, Anne van Doore, Kieron Flanagan

Deadline for abstract submission: 31 January 2025

Machine learning techniques have accelerated the development of artificial intelligence (AI) in recent years, leading to a proliferation of new applications. AI technologies are transforming many domains of our society and economy, such as mobility, entertainment and public administration. The rapid development of AI is also generating many new ways of using AI in scientific research. Until recently, the use of AI was concentrated in data analysis and programming, but the introduction of user-friendly generative AI tools such as ChatGPT has opened up new applications in other parts of the research process, such as literature review and text writing. We can safely assume that AI will have disruptive effects on the science system in the coming years.

AI brings a number of great potential benefits, in particular increasing the productivity of research (OECD, 2023). Large language models can accelerate the research process by generating summaries of notes or even carrying out complete literature scans. AI can also facilitate efficient data mining and qualitative analysis, or assist in writing grant applications. Generative AI could also help reduce geographic inequality between researchers by improving the writing quality of articles and correcting language errors.

However, there are also serious risks involved. The risk of discrimination is significant, particularly because AI models are trained on a sometimes unjust reality based on stereotypes and biases (Nicoletti & Bass, 2023). Critics have also raised concerns about the incompleteness of summaries (Barak-Corren et al., 2024), the numerous errors that occur, and thus the unreliability and inconsistency of systematic reviews (Gwon et al., 2024), as well as the loss of control over copyright (Appel et al., 2023). Additionally, directly copying or paraphrasing from a chatbot raises questions about plagiarism (Ciaccio, 2023).
The use of machine learning algorithms also tends to make the research process less explainable, undermining the principle of replicability (Ball, 2023). Finally, the use of AI consumes so much energy that it is highly uncertain whether its carbon footprint is proportionate to the additional benefits AI provides. Altogether, it is clear that the use of AI in science deserves scrutiny. To enable researchers to use AI in a responsible way, professional guidelines or public policies will probably be necessary. Some scientific publishers and funding agencies have formulated (tentative) guidelines, largely focused on AI in the writing process, but in many situations researchers are left without any guidance. This raises a number of questions with science policy and governance implications, such as:

  • What does AI mean for the nature of scientific knowledge and for the identity of researchers (across different fields)?
  • How is AI changing the social processes by which the scientific community determines which knowledge claims are valid and which methods reliable? Will the large-scale use of AI require new ways to ensure scientific integrity and reliability?
  • What competencies do the scientists of the future need, given the rise of AI?
  • What are the implications of the rise of AI for academic career paths and job markets? Will we need fewer scientists in future?
  • What new opportunities does AI offer for open science and transdisciplinary collaboration? And where does it pose a challenge to them?
  • How can research security be ensured when scientists make extensive use of AI, which partly depends on services and tools from foreign private providers?

In this track we aim to bring together researchers from science policy studies, STS, philosophy of science and related fields interested in the effects of AI on research practices in academic institutions and other public sector organizations, and in the potential of public policies and organizational policies for facilitating and ensuring the responsible use of AI.


EASST's Eurograd mailing list Eurograd (at) lists.easst.net Unsubscribe or edit subscription options: http://lists.easst.net/listinfo.cgi/eurograd-easst.net

Meet us via https://twitter.com/STSeasst

Report abuses of this list to Eurograd-owner@lists.easst.net
