Message posted on 27/06/2018

Deadline Extension: The Cultural Life of Machine Learning: An Incursion into Critical AI Studies

Apologies for cross-posting

The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
Preconference Workshop, #AoIR2018
Montréal, Canada
Urbanisation Culture Société Research Centre, INRS (Institut national de la
recherche scientifique)
Wednesday October 10th 2018

Deadline for Abstracts: July 7th 2018 (extended from June 30th 2018)


Keynote: Orit Halpern (Department of Sociology and Anthropology, Concordia
University)

Machine learning (ML), deep neural networks, differentiable programming and
related contemporary novelties in artificial intelligence (AI) are all leading
to the development of an ambiguous yet efficient narrative promoting the
dominance of a scientific field, as well as a ubiquitous business model.
Indeed, AI is very much in full hype mode. For its advocates, it represents a
"tsunami" (Manning, 2015) or "revolution" (Sejnowski, 2018), terms indicative
of a very performative and promotional, if not self-fulfilling, discourse. The
question, then, is: how are the social sciences and humanities to dissect such
a discourse and make sense of all its practical implications? So far, the
literature on algorithms and algorithmic cultures has been keen to explore
both their broad socio-economic, political and cultural repercussions, and the
ways they relate to different disciplines, from sociology to communication and
Internet studies. The crucial task ahead is understanding the specific ways by
which the new challenges raised by ML and AI technologies affect this wider
framework. This would imply not only closer collaboration among
disciplines, including those of STS for instance, but also the development of
new critical insights and perspectives. Thus a helpful and precise
pre-conference workshop question could be: what is the best way to develop a
fine-grained yet encompassing field under the name of Critical AI Studies? We
propose to explore three regimes in which ML and 21st-century AI crystallize
and come to justify their existence: (1) epistemology, (2) agency, and (3)
governmentality, each of which generates new challenges as well as new
directions for inquiries.

In terms of epistemology, it is important to recognize that ML and AI are
situated forms of knowledge production, and thus worthy of empirical
examination (Pinch and Bijker, 1987). At present, we only have internal
accounts of the historical development of the machine learning field, which
increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and
fall (Minsky and Papert 1969; Vapnik 1998) and rise (Hinton 2006), concluding
with the diverse if as-yet unproven applications of deep learning. Especially
problematic in this regard is our understanding of how these techniques are
increasingly hybridized with large-scale training datasets, specialized
graphics-processing hardware, and algorithmic calculus. The rationale behind
contemporary ML finds its expression in a very specific laboratory culture
(Forsythe 1993), with a specific ethos or model of open science. Models
trained on the largest datasets of private corporations are thus made freely
available, and subsequently détourned for the new AI's semiotic environs of
image, speech, and text, promising to make the epistemically recalcitrant
landscapes of unruly and unstructured data newly manageable.

As the knowledge-production techniques of ML and AI move further into the
fabric of everyday life, they create a distinctly new form of agency. Unlike
the static, rule-based systems critiqued in a previous generation by Dreyfus
(1972), modern AI models pragmatically unfold as a temporal flow of
decontextualized classifications. What then does agency mean for machine
learners (Mackenzie, 2017)? Performance in this particular case relates to the
power of inferring and predicting outcomes (Burrell, 2016); new kinds of
algorithmic control thus emerge at the junction of meaning-making and
decision-making. The implications of this question are tangible, particularly
as ML becomes more unsupervised and begins to impact on numerous aspects of
daily life. Social media, for instance, are undergoing radical change, as
insightful new actants come to populate the world: Echo translates your
desires into Amazon purchases, and Facebook is now able to detect suicidal
behaviours. In the general domain of work, too, these actants leave permanent
traces, not only on repetitive tasks but on broader intellectual
responsibilities.

Last but not least, the final regime to explore in this preconference workshop
is governmentality. The politics of ML and AI are still largely to be
outlined, and the question of power for these techniques remains largely
unexplored. Governmentality refers specifically to how a field is organised: by
whom, for what purposes, and through which means and discourses (Foucault,
1991). As stated above, ML and AI are based on a model of open science and
innovation, in which public actors, such as governments and universities, are
deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is
that while the algorithms themselves may be openly available, the datasets on
which they rely for implementation are not, hence the massive advantages for
private actors such as Google or Facebook who control the data, as well as the
economic resources to attract the brightest students in the field. But there
is more: this same open innovation model makes possible the manufacture of
military AI with little regulatory oversight, as is the case for China, whose
government is currently helping to fuel an AI arms race (Simonite 2017). What
alternatives or counter-powers could be imagined in these circumstances? Could
ethical considerations stand alone without a proper and fully developed
critical approach to ML and AI? This workshop will try to address these
pressing and interconnected issues.

We welcome all submissions that connect with one or more of these three
categories of epistemology, agency, and governmentality; we also welcome other
theoretically and/or empirically rich contributions.

Interested scholars should submit proposal abstracts, of approximately 250
words, by July 7th 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may
represent works in progress, short position papers, or more developed
research. The format of the workshop will focus on paper presentations and
keynotes, with additional opportunities for group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture Société
Research Centre of INRS (Institut national de la recherche scientifique). The
Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a
20-minute train ride from the Centre Sheraton on the STM Orange Line (enter at
the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue
Sherbrooke.

For information on the AoIR (Association of Internet Researchers) conference,
see https://aoir.org/aoir2018/; for other preconference workshops at AoIR
2018, see https://aoir.org/aoir2018/preconfwrkshop/.

Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick),
and Thomas Crosbie (Royal Danish Defence College).
___
EASST's Eurograd mailing list
Eurograd (at) lists.easst.net
Unsubscribe or edit subscription options: http://lists.easst.net/listinfo.cgi/eurograd-easst.net

Meet us via https://twitter.com/STSeasst

Report abuses of this list to Eurograd-owner@lists.easst.net
