Message posted on 24/11/2018

CFP EGOS 2019: Work in the Age of Intelligence Subtheme

Sub-theme 35: Work in the Age of Intelligence: Augmentation, Agency and
Infrastructure
Convenors:
Ingrid Erickson
Syracuse University, USA
imericks@syr.edu
Margunn Aanestad
University of Oslo, Norway
margunn@ifi.uio.no
Carsten Østerlund
Syracuse University, USA
costerlu@syr.edu
Call for Papers

The relationship of work to technology has long been studied (e.g., Barley,
1986; Orlikowski, 1992; Trist & Bamforth, 1951), from the roboticization of
factory lines (e.g., Argote et al., 1983; Grint & Woolgar, 2013; Smith &
Carayon, 1995) to the integration of information and computing technology into
knowledge work (e.g., Hanseth et al., 2006; Leonardi & Bailey, 2008; Osterlund
& Carlile, 2005). As more and more digital technology becomes elemental to
modern forms of work, it is sometimes difficult to separate tasks from tools,
procedures from platforms. Today, not only is work primarily digital and
computational, but it is fast becoming algorithmic with the introduction of
artificial intelligence into existing procedures and practices (Brynjolfsson &
McAfee, 2014). For instance, radiologists can now leverage artificial
intelligence to analyze patients’ scans instead of relying on their trained
eyes alone; these machines, using intelligent algorithms, are reported to have
a higher rate of tumor recognition than even the most well-trained experts
(Aerts, 2017; Prevedello et al., 2017).

Noting that there are more and more instances of organizations utilizing
artificial intelligence for strategic and operational ends, this sub-theme
seeks to better understand these relationships by drawing in empirical
scholarship that studies work at this particular human-technology frontier.
Underlying our desire to convene this conversation are three driving
questions:

1. Where and how is artificial intelligence being used in contemporary
   organizations?
2. How do these examples help us understand shifts in work practices (i.e., are
   artificial agents new collaborators, embedded technical constraints, or
   something else entirely)?
3. How can enquiries into working with smart agents reveal what is
   intrinsically human about modern forms of work?

Artificial intelligence (AI) is a current buzzword in business, but it is a
technology that has a long history (McCorduck et al., 1977). In some ways a
simple calculator displays ‘intelligence’ in its seemingly cognitive
ability to calculate sums rapidly. Yet, today’s reference to the term tends
to connote the predictive, rather than the mere processing, power of
computation (Chen et al., 2012). Of course, prediction is still a function of
processing, but more importantly it is also derivative of the analysis of
great stores of past data. These digital traces of the past, when run through
powerful machines, reveal patterns. It is these patterns that make up the
ingredients of algorithms, which are essentially recipes linking past patterns
to potential future patterns. AI appears in our daily lives every day when, for
example, Amazon recommends books that you might like based on a current
selection. Scale this up a bit and you have the example of an autonomous
vehicle – a machine that is able not only to see links between Items A and B,
but to string a multitude of these relations together and act on them in real
time, essentially simulating a human driver who can navigate a complex
terrain. The sophistication of the ‘intelligence’ of an autonomous vehicle
extends beyond a simple recommendation; instead, it is a result of both
predictive power and machine learning, a computational process whereby a
computer learns from environmental feedback. As this feedback comes in, the
machine ‘learns’ and gradually improves its operations, ad infinitum.
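
As a minimal, purely illustrative sketch of the mechanism just described, consider
a toy item-based recommender that turns past co-occurrence patterns into a
prediction and then adjusts those patterns in light of feedback. All data, item
names, and functions below are hypothetical and are written in Python only for
brevity; they stand in for the far more elaborate systems discussed in this call.

from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: the "digital traces of the past".
histories = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
]

# Count how often pairs of items co-occur; these counts are the "patterns".
co_counts = defaultdict(float)
for basket in histories:
    for x, y in combinations(sorted(basket), 2):
        co_counts[(x, y)] += 1.0
        co_counts[(y, x)] += 1.0

def recommend(current_item, counts):
    """A 'recipe' linking a past pattern to a predicted future choice."""
    candidates = {y: c for (x, y), c in counts.items() if x == current_item}
    return max(candidates, key=candidates.get) if candidates else None

def learn_from_feedback(current_item, suggested, accepted, counts, step=1.0):
    """Crude feedback loop: strengthen or weaken a pattern after each outcome."""
    delta = step if accepted else -step
    counts[(current_item, suggested)] = max(0.0, counts[(current_item, suggested)] + delta)

suggestion = recommend("book_a", co_counts)   # here, "book_b"
learn_from_feedback("book_a", suggestion, accepted=True, counts=co_counts)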

The intersection of work and artificial intelligence is occurring along a
complex spectrum, ranging from the increased use of recommender
systems in decision sequences (as hinted at in the Amazon example above) to
the incorporation of fully fledged intelligent machines, as in the case of
autonomous vehicles upending the jobs of truck drivers or robots conducting
surgery. Of course, these variations mirror the wide diversity of work tasks
today, but they also reflect the information infrastructures (Bowker et al.,
2009; Monteiro & Hanseth, 1996) in which the AI is embedded. While it is
conceptually powerful to think of the direct relationship between artificial
intelligence and work, rarely do they come together without a mediator. These
intermediaries provide platforms for necessary activities to run, they help to
integrate disparate technologies with one another, and, when functioning
properly, they fade into the background and become embedded in the norms and
rules that govern an organization or a culture. To a financial analyst, the
practice of utilizing AI may occur within the use of a predictive analytics
package on an organizationally mandated data platform – perhaps one that
optimizes a complex set of portfolios by visualizing them in such a way that a
quick decision can be rendered easily. A truck driver, on the other hand, has
quite a different experience of AI. Not only is he or she enveloped by AI in
material form, but experientially the driver is likely limited to a narrow
set of options well before the engine is even turned on. Is the driver then an
agent of the machine and the analyst a collaborator? These are not only
questions of task design, perceived efficiency, and financial optimization but
also of a worker’s agency and the boundaries in which they are intended (or
allowed) to act.

In recent years information infrastructures have become more widely studied,
with a particular interest in the ways that their inherent digital
extensibility supports generativity and innovation (e.g., Forman et al., 2014;
Yoo et al., 2012). Less well studied, however, is the way that information
infrastructures encode certain practices because of their reliance on
algorithms and artificial intelligence. We see this emphasis in our proposed
sub-theme as a way to take up the mantle of prior work on infrastructures, but
also to provide a forum, in line with the general theme of the annual
Colloquium, to consider how AI may be challenging (or enlightening)
organizations as work increasingly relies on, and is organized through,
information infrastructures.

We encourage submissions that address the broad subject of automation and work
from an equally broad array of disciplinary scholars. We invite papers that
deal with (but are not limited to) the following topic areas:
AI in the collective
AI knowledge work
AI now and then
Algorithmic infrastructures
Algorithmic phenomena in the organization of work
Breakdowns in AI and work
Designing AI-Human practices
Dynamic relationships between AI and humans
Methodological implications of algorithmic phenomena
Nature of coordination and collaboration in the age of the “smart
machine”
Predictions in practice
Roboticization and hybrid agency
Sociomaterial theorizing about new forms of work

Short papers should focus on the main ideas of the paper, i.e. they should
explain the purpose of the paper, theoretical background, the research gap
that is addressed, the approach taken, the methods of analysis (in empirical
papers), main findings, and contributions. In addition, it is useful to
indicate clearly how the paper links with the sub-theme and the overall theme
of the Colloquium, although not all papers need to focus on the overall theme.
Creativity, innovativeness, theoretical grounding, and critical thinking are
typical characteristics of EGOS papers.
Your short paper should comprise 3,000 words (incl. references, all appendices
and other material).
Due: Monday, January 14, 2019, 23:59:59 CET [Central European Time]

References
Aerts, H.J.W.L. (2017): “Data Science in Radiology: A Path Forward.”
Clinical Cancer Research, 24 (3), 532–534.
Argote, L., Goodman, P.S., & Schkade, D. (1983): “The Human Side of
Robotics: How Workers React to a Robot.” Sloan Management Review, 24 (3),
31–41.
Barley, S.R. (1986): “Technology as an Occasion for Structuring: Evidence
from Observations of CT Scanners and the Social Order of Radiology
Departments.” Administrative Science Quarterly, 31 (1), 78–108.
Bowker, G.C., Baker, K., Millerand, F., & Ribes, D. (2009): “Toward
Information Infrastructure Studies: Ways of Knowing in a Networked
Environment.” In: J. Hunsinger, L. Klastrup & M. Allen (eds.): International
Handbook of Internet Research. Dordrecht: Springer, 97–117.
Brynjolfsson, E., & McAfee, A. (2014): The Second Machine Age. Work, Progress,
and Prosperity in a Time of Brilliant Technologies. New York: W.W. Norton &
Company.
Chen, H., Chiang, R.H.L., & Storey, V.C. (2012): “Business Intelligence and
Analytics: From Big Data to Big Impact.” MIS Quarterly, 36 (4),
1165–1188.
Forman, C., King, J.L., & Lyytinen, K. (2014): “Special Section Introduction
– Information, Technology, and the Changing Nature of Work.” Information
Systems Research, 25 (4), 789–795.
Grint, K., & Woolgar, S. (2013): The Machine at Work. Technology, Work and
Organization. Hoboken, NJ: John Wiley & Sons.
Hanseth, O., Jacucci, E., Grisot, M., & Aanestad, M. (2006): “Reflexive
Standardization: Side Effects and Complexity in Standard Making.” MIS
Quarterly, 30, 563–581.
Leonardi, P.M., & Bailey, D.E. (2008): “Transformational Technologies and
the Creation of New Work Practices: Making Implicit Knowledge Explicit in
Task-Based Offshoring.” MIS Quarterly, 32 (2), 411–436.
McCorduck, P., Minsky, M., Selfridge, O.G., & Simon, H.A. (1977): “History
of Artificial Intelligence.” In: IJCAI ‘77 Proceedings of the 5th
International Joint Conference on Artificial Intelligence, Cambridge, USA,
August 22–25, 1977. San Francisco: Morgan Kaufmann Publishers, 951–954.
Monteiro, E., & Hanseth, O. (1996): “Social Shaping of Information
Infrastructure: On Being Specific about the Technology.” In: W.J. Orlikowski
(ed.): Information Technology and Changes in Organizational Work. London:
Chapman and Hall, 325–343.
Orlikowski, W.J. (1992): “The Duality of Technology: Rethinking the Concept
of Technology in Organizations.” Organization Science, 3 (3), 398–427.
Osterlund, C., & Carlile, P. (2005): “Relations in Practice: Sorting through
Practice Theories on Knowledge Sharing in Complex Organizations.” The
Information Society, 21 (2), 91–107.
Prevedello, L.M., Erdal, B.S., Ryu, J.L., Little, K.J., Demirer, M., Qian, S.,
& White, R.D. (2017): “Automated Critical Test Findings Identification and
Online Notification System Using Artificial Intelligence in Imaging.”
Radiology, 285 (3), 923–931.
Smith, M.J., & Carayon, P. (1995): “New technology, automation, and work
organization: Stress problems and improved technology implementation
strategies.” International Journal of Human Factors in Manufacturing, 5 (1),
99–116.
Trist, E.L., & Bamforth, K.W. (1951): “Some social and psychological
consequences of the Longwall Method of coal-getting: An examination of the
psychological situation and defences of a work group in relation to the social
structure and technological content of the work system.” Human Relations, 4
(1), 3–38.
Yoo, Y., Boland, R.J., Lyytinen, K., & Majchrzak, A. (2012): “Organizing for
Innovation in the Digitized World.” Organization Science, 23 (5),
1398–1408.
