EASST Panel 61, Madrid, July 6-9 "THE POLITICS OF AI-BASED SECURITY - PREDICTING AND IMAGINING THE FUTURE"
Apologies for cross-posting
Please find below the abstract for our panel at EASST 2022. We very much look forward to receiving your submissions by the 1st of February.
With very best wishes,
Jens and Daniel
THE POLITICS OF AI-BASED SECURITY - PREDICTING AND IMAGINING THE FUTURE

Jens Hälterlein (Centre for Security and Society, University of Freiburg)
Daniel Marciniak (Max Planck Institute for Social Anthropology, Halle (Saale))

Abstract:
Advancements in the AI subfield of machine learning are already transforming security practices in various contexts, including military operations, policing, intelligence work, private security, and pandemic management. In many cases, the use of AI-based security technologies aims at predicting the future based on probabilistic calculation. In law enforcement, for instance, AI can be used to pinpoint likely places and times of future crimes, terrorist attacks, and social unrest, or to identify individuals at high risk of becoming a future (re)offender, terrorist, or victim (Benbouzid 2019, Brayne 2021, Hälterlein 2021). Moreover, in the course of the Covid-19 pandemic, AI has been increasingly used for epidemiological modelling of how the disease spreads under different scenarios. These technoscientific predictions render the future knowable in order to act upon it. They are political in the sense that they legitimize certain interventions and delegitimize others.

While tech companies highlight the merits of equipping security actors with these seemingly powerful tools, and both activists and critical scholars raise awareness of the dangers their use can pose with regard to data protection and discrimination against minorities, AI-based security technologies have also become a matter of concern for policy-making. In recent years, many governments and supranational organisations have published strategy papers in which they present their visions of the future development and application of AI. These visions, articulated by various actors, at once describe possible technoscientific futures and prescribe technoscientific futures that ought to be attained or avoided.
They aim to legitimize investments in and/or stricter regulation of AI-based security technologies.

In conversation with the conference theme, we invite scholarship that seeks to discuss the practice and politics of technoscientific futures. We welcome presentations that address at least one of the following questions:
- How are AI-based security technologies used to predict or forecast likely future events? How do they relate to non-AI-based practices of prediction?
- How do these technoscientific practices of prediction relate to (pre-existing) practices of transforming or governing the future (pre-emption, prevention, pre-mediation, contingency planning, risk insurance, etc.)?
- What are the obstacles to doing research in this field, and how can we deal with them?
- How are questions of security addressed within the imagination of futures of AI? Presentations may examine imaginaries at all levels: national governments, international organisations, NGOs, communities of practice, scientific communities, corporations, social movements, and not least science fiction.
- How do desirable futures relate to criticisms of and resistance to AI-based security technologies (lethal autonomous weapon systems, biometric facial recognition, predictive policing, etc.)? What alternatives are imagined?
- What is the impact of the politics of imagining futures of AI on the politics of predicting futures with AI?
Daniel Marciniak, PhD
Anthropology of AI in Policing and Justice
Max Planck Institute for Social Anthropology
Advokatenweg 36, 06114 Halle (Saale), Germany