
Understanding the Role of Color in the Sciences

From amazingly colorful antique relics to the attempts to standardize colors in biomedical imaging, color has gained relevance in the sciences. Yet the epistemic role of color, its long-standing neglect due to historically symbolic and partly gendered ascriptions, and the function of color in visualization for scientific purposes have not received much attention in the sciences or the humanities to date. The internal use of color in the sciences raises different epistemological questions from those that arise with images for external communication. The choice and symbolism of color in the latter case is guided to a greater degree by a need for simplification and considerations as to the expectations of a broader public. Coloured images for internal scientific use emerge during the research process itself (as a medium for self-reflection) or are produced in devices and used for intersubjective communication and to obtain feedback from the scientific community. Digital publishing has enhanced the use of color in scientific images, in contrast with the costly use of color in print media, whilst the globalization of the scientific community challenges the idea of universal color symbolism. All these issues raise the need for color awareness.

The conference “On the Epistemic Dimension of Color in the Sciences”1 invited speakers and participants to investigate the epistemic dimensions of color in the sciences, across the disciplines and across history: it was a meeting of researchers with expertise ranging from the digital and life sciences to gender studies and art history. They all shared an interest in reflecting on the historical understanding of color and on its contemporary uses in science and technology.

The conference kicked off with a keynote by art historian Ulrike Boskamp (Free University, Berlin), held at the site of the +ultra. knowledge & gestaltung exhibition at the Martin Gropius Bau Berlin, where the Cluster of Excellence Image Knowledge Gestaltung presented its research between 30.09.2016 and 8.01.2017. The exhibition provided a fitting context for the opening event of the conference and for Boskamp’s talk “Coding and Gendering Color: Scientific, Epistemological and Aesthetic Discourses in 18th Century France,” which laid the ground for recurrent comments on gender aspects of scientific color use throughout the conference: Boskamp discussed David Batchelor’s thesis on the longue durée of what he calls “chromophobia” (Batchelor 2000), tracing its movement from antiquity to the Renaissance (as already discussed in Jacqueline Lichtenstein’s ground-breaking study The Eloquence of Color, 1993) and into modern science. According to Batchelor, western cultures follow a binary concept of color versus line, coding the line (as in drawing and alphabetical text) in relation to cognition and the (white) male, versus color as directly addressing the senses and emotions, thereby categorizing it as female and (especially within the context of 19th-century archaeological studies),2 as “oriental.”

Starting out by acknowledging the overall tendency of this color code with reference to the two central characters in Fifty Shades of Grey (fig. 1) (James 2012), Boskamp complicated this straightforward picture. She demonstrated how, after the Cartesian understanding of color as “just” light, Newton’s color theory made it possible for color to enter the scientific stage and become an object of cognition in physics (fig. 2). The experimental approach to color, entangled with concepts of physically measurable color harmony (with the then primary colours yellow, red and blue, fig. 3), led to yet another shift, sparked by Rousseau, among others. He built an argument on the opposition between mere ‘pleasure’ in such scientized, corrupted color harmony (thereby female) and real ‘passion’ created by the use of the line in art. The justification of the hierarchic opposition between color and line had thus shifted from associating the (achromatic) line with cognition to associating it with masculine passion.


Figure 1: Christian Grey, together with Anastasia Steele one of the two main characters in E. L. James’ popular novel Fifty Shades of Grey (2011, in the cinemas in 2015), as the embodiment of chromophobia. Dakota Johnson and Jamie Dornan, press photograph for Fifty Shades of Grey (last access 13.06.2016), provided by Ulrike Boskamp.
Figure 2: An illustration from the poem “La Peinture,” published by Antoine Marin Lemierre in 1769, gives an idea of the status of colour around mid-century. While the material origins of colour are signified by a personification of Terra, the earth, and the production of pigments by chemistry, the ruling of colour and its harmonies in painting is represented by a dominating figure personifying physics, or physical optics, carrying a prism, complemented by a rainbow in the sky.
Le Coloris, Etching by Nicolas Ponce after Charles Nicolas Cochin (fils), from Antoine Marin Lemierre: La Peinture. Poëme en trois chants, Paris 1769, p. 20. Provided by Ulrike Boskamp.
Figure 3: The preoccupations of the colour theorists of this time also entered the realm of art. François Boucher, the most prominent painter of the era, like other contemporary painters, serially applied a combination of the three primary colours to the garments in his paintings, completely independent of the motif.
François Boucher, Autumn Pastorale, 1749, oil on canvas, 259,9 x 198,6 cm, London, The Wallace Collection. Provided by Ulrike Boskamp.

The first session of the second conference day, held at the central laboratory of the Cluster of Excellence Image Knowledge Gestaltung in Berlin-Mitte, was dedicated to the evolution of standards in analogue and digital color print and projection. The basis of these media, which to this day work with a triad of colors, as Ricardo Cedeño Montaña (Institute for Cultural History and Theory, Humboldt University and BWG) showed, ultimately lies in Young’s, and later Helmholtz’s, color receptor theory. Cedeño Montaña’s main point, however, was that the step from analogue television to digital cameras meant bringing luminance and chromaticity (fig. 5) – once kept separate within the body of the TV set – together towards the human eye. This, he stated, closed a circuit initiated by the CIE, the Commission Internationale de l’Éclairage (International Commission on Illumination), which in its Colorimetric Resolution of 1931 constructed a standard observer with a standard perception of luminance and chromaticity. This was also a topic discussed by Wolfgang Coy (Computer Sciences, Humboldt University and BWG): the standard observer was developed together with the so-called horseshoe, the spectrum of differentiable colors measured in wavelengths, which at a certain point merge to become white. This horseshoe followed a universalist concept, assuming that “the tested 20 Caucasian males” were representative of any culture and historical situation an observer could be embedded in. Interestingly though, as Coy showed, the crossing wavelength that resulted in white moved within the diagram of the horseshoe as it evolved over the following decades.
The presentation of the concept of the standard observer was followed by a discussion about the im-/possibility of a transcultural and ahistorical perception of color, sparked by a doubtful biologist: wouldn’t people brought up in Ireland or in the Amazonian rainforest be able to differentiate more shades of green than people brought up in the Arctic?
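The separation of luminance and chromaticity in analogue colour television that Cedeño Montaña described can be illustrated with the standard luma/chroma transform (ITU-R BT.601 coefficients); the sketch below is purely illustrative and was not part of the conference material.

```python
def rgb_to_yuv(r, g, b):
    """Split an RGB colour into luminance (Y) and chromaticity (U, V),
    as in analogue colour TV (ITU-R BT.601 coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return y, u, v

# A neutral white carries all of its signal in the luminance channel,
# which is why black-and-white sets could simply ignore U and V:
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
print(round(y, 3), round(u, 3), round(v, 3))  # → 1.0 0.0 0.0
```

The two chroma channels vanish for any grey tone, making backward compatibility with monochrome receivers a matter of discarding two signals.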


Figure 4: Low reproducibility of quantitative values and inter-observer agreement rates pose problems in clinical settings. Could colour be playing a role? Grey, jet and hot colour scales. Images provided by Aldo Badano, 11/18/2016.
Figure 5: CIE xy 1931 chromaticity diagram including the Planckian Locus. Wikipedia, PAR~commonswiki, 3 Jan 2012, last access 11/16/2016, provided by Ricardo Cedeño Montaña.
Figure 6: Nominal Data: Atom Colors. Produced and provided by Daniel Baum.

The keynote by Aldo Badano (Center for Devices and Radiological Health, FDA, USA) addressed this issue of individual or collective learning of color perception and interpretation within the realm of medicine: in his talk on “Color Visualization in Medical Images,” the Chair of the AAPM task group “Requirements and Methods for Color Displays in Medicine” discussed his empirical project on the question of whether the use of color in medical images was relevant, for example, in reaching a correct diagnosis (fig. 4). Until a few years ago, before the medical imaging community became interested in consistency in image visualization, little was known about the assumed effectiveness of color, or about whether there was any reliable difference in performance between the gray scale and the so-called jet scale (using rainbow colors). Badano’s group (Zabala-Travers et al. 2015) found that attitudes towards the performance of the two scales varied widely among clinicians, and that these expectations were not borne out by experimental results when comparative tests were run on, for example, the detection and localization of cancer. Badano stated in the discussion that the role of training was of rising relevance, as color imaging became more frequent while training with such visualizations did not keep pace; on the other hand, the same holds for the reversed situation, especially as medical practitioners move between countries, continents and thus medico-technical cultures.

Taking a clearer position on the effectiveness of the rainbow color map than Badano’s results allow, Daniel Baum (Zuse Institute, Berlin and BWG), in his talk on “Data Visualization Perspective on the Use of Color,” discussed its disadvantages, such as its non-linearity and lack of perceptual order. For those professionally visualizing data, the main questions in the choice of a color scale are what the type of data attribute is, what the task to be carried out entails, whether the data are 2D or 3D, and who the audience is. Besides educated choices, contingent ad hoc decisions may lead to perpetuated color codes, as exemplified in the case of atom colors, which are the result of August Wilhelm von Hofmann using croquet balls as a model in an 1865 lecture (fig. 6).
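The rainbow map’s lack of perceptual order can be made concrete in a few lines of Python. The piecewise formula below is a common textbook approximation of the jet colour map, not the exact scale used by any particular device:

```python
def jet(x):
    """Piecewise-linear approximation of the 'jet' rainbow colour map,
    mapping x in [0, 1] to an (r, g, b) triple."""
    clamp = lambda v: max(0.0, min(1.0, v))
    return (clamp(1.5 - abs(4 * x - 3)),
            clamp(1.5 - abs(4 * x - 2)),
            clamp(1.5 - abs(4 * x - 1)))

def luminance(rgb):
    """Relative luminance of a linear-RGB triple (ITU-R BT.709 weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Sample the scale: luminance rises and then falls, so 'brighter'
# does not mean 'higher data value' anywhere along the map.
lums = [luminance(jet(i / 10)) for i in range(11)]
peak = lums.index(max(lums))
print(0 < peak < 10)  # → True: the brightest colour sits mid-scale
```

Because the brightest point lies in the middle of the scale rather than at an end, equal steps in the data do not map onto ordered steps in brightness, which is precisely the disadvantage Baum highlighted.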

A session oriented towards cultural history started with Linda Báez Rubí (The Warburg Institute, London; Instituto de Investigaciones Estéticas, UNAM and BWG), who discussed the appearance of the Virgen de Guadalupe in 1531, a painting of the Virgin Mary on a mountain north of Mexico City. The different stories about the inexplicable appearance of the image over the following century led to an entanglement of physical theory and proof of God’s existence. To convince the Pope and the Congregation of Rites of the 16th-century apparition, Luis Becerra Tanco, a Creole Jesuit, mathematician and astronomer at the University of Mexico, explained the apparition according to the model of optical geometry, which draws on the medieval theory of perspectiva communis. In 1756 the painter Miguel Cabrera reasoned: since the colors in the image didn’t change over time, it had to have been created by God – and vice versa. Nils Güttler (ETH Zurich), in his talk on Justus Perthes’ map workshop in Gotha, situated the “Perthes style” of maps at the turn of the 20th century between science and marketing. The symbolism and political iconography included a distinction between Europe, with golden-yellow borders, and Africa in red, a color then connoted, among others in Rudolf Steiner’s work, as bellicose. Map coloring was female labor in all map workshops, with 160 women in Perthes’ workshop alone – the pedagogical discourse in girls’ and boys’ schools early on steered girls towards coloring and boys towards technical drawing, which in the discussion of course allowed for a loop back to Boskamp’s talk. This linking of color and female work was later transposed into photography and film, where the colorists were also mainly female.


In the second half of the 19th century, pink was still worn rather by boys.
“Boy with whip,” anonymous, American School, circa 1840–1850, Honolulu Museum of Art, Wikimedia Commons, last access 4/14/2017, provided by Dominique Grisard.


Margrit Vogt (Institute for Language, Literature and Media, University of Flensburg) and Dominique Grisard (Honorary Visiting Fellow, City University London), in their respective talks, analyzed the history of scientific studies on the cultural use of color. Vogt drew attention to the fact that only from around 1900 did colors begin to be produced as stable colors, through a mix of technique and science, consumer culture and the arts, which, together with the introduction of electrical light at the beginning of the 20th century, helped change the focus on color: its relevance in art and science was no longer the essence of color as a static phenomenon, but rather the visual effect of one color in relation to another. The conference closed with Grisard’s talk on scientific theories that have tried to explain a supposedly female color preference for pink, in evolutionary theory as well as in psychology, since the early 1990s (fig. 7). The phenomenon was referred to in evolutionary psychology as “archaization,” locating the sources of this preference in the female biological constitution, as already indicated in 19th-century biology – again looping back to the beginning of the conference and the keynote on the historical linkage of femininity and color.

1 The conference took place on Nov. 17th and 18th, 2016 at the Cluster of Excellence Image Knowledge Gestaltung (BWG), Humboldt University, Berlin, organized by its research associate Bettina Bock von Wülfingen and co-chaired by the BWG-members Jochen Hennig, John Nyakatura, Kathrin Amelung and Martin Grewe.

2 Alexander Nagel (Department of Anthropology, Smithsonian Institution, National Museum of Natural History, Washington, D.C.), an expert, inter alia, on the history of the “whitening” of Near Eastern antique architecture and sculpture in 19th- and 20th-century archaeology, was unable to attend for health reasons.


Valuation Studies – presentation for EASST Review

Valuation as a social practice

The mission of Valuation Studies is to foster conversations in the new transdisciplinary and emerging field of studying valuation as a social practice. This field is interested in examining practices and settings where the value or values of something are established, assessed, negotiated, provoked, maintained, constructed and/or contested. The journal seeks to provide a meeting ground for studies of valuation emerging in different disciplinary settings, utilising different theoretical perspectives and methodological approaches. The open access policy of the journal and its transdisciplinary agenda facilitate intellectual exchange and debate transgressing disciplinary and geographical confines.

Valuation practices are abundant in modern societies, where anything from restaurants to scientific publications may be subject to elaborate and distinct (e)valuations. Many objects, and the valuation practices that they are subject to, have featured in the journal since its first issue, published in 2013. Examples (randomly drawn by looking at the first article in each previous issue) include: tomatoes (Heuts and Mol 2013), restaurants and online consumer reviews (Mellet et al. 2014), waste, recycling, and urban regeneration (Glucksberg 2014), impact investment (Barman 2015), the Eurovision song contest (Krogh Petersen and Ren 2015) and tropical biodiversity (Foale et al. 2016). The journal’s most recent issue, published in December 2016, focused on exemplars in classic literature and hip-hop music (Dekker 2016), lean management at a children’s hospital (Hauge 2016), and fiction writers dealing with rejection (Fürst 2016).

The relevance of the journal is also visible in popular valuation metrics for scholarly publication. A vast majority of the articles are, for instance, already cited in scholarly texts published in other academic outlets. The traffic to the journal site is, moreover, large and growing quickly. In the first few months of 2017 (Jan–mid March), no less than 31 of the articles published were downloaded on average more than once a day. Crude metrics such as citations and downloads seem to indicate that there is a large and growing interest in the topic of the journal and the contributions published on its pages.

Scientific profile

The journal provides a space for the diffusion and assessment of research that is produced at the interface of a variety of approaches from several disciplines, including: sociology, economic sociology, science and technology studies, management and organisation studies, social and cultural anthropology, market studies, institutional perspectives in economics, accounting studies, cultural geography, philosophy, and literary studies. This broad scope is also manifest in the many disciplines represented among the current Editors-in-chief (Claes-Fredrik Helgesson and Fabian Muniesa), the editorial office (Lotta Björklund Larsen and Amelia Mutter), the current board of editors (Liliana Doganova, Martin Giraudeau, Claes-Fredrik Helgesson, Hans Kjellberg, Francis Lee, Alexandre Mallard, Andrea Mennicken, Fabian Muniesa, Ebba Sjögren, and Teun Zuiderent-Jerak), and the advisory board, consisting of 30+ scholars from a variety of relevant fields.

To encourage interdisciplinary exchange, Valuation Studies refrains from making a strong programmatic claim as to how processes of valuation are to be studied or what specific empirical areas are to be focused on. Valuation Studies welcomes papers using or combining a variety of methods, from ethnographic accounts to quantitative appraisal to conceptual interpretation. However, the journal encourages contributors to focus on the pragmatic aspects of valuation activities wherever they take place and to foster dialogue between different approaches working on this broad topic. Although various forms of economic valuation are of central interest to the journal, an overarching idea is that processes of valuation are not always quantitative or economic. Moreover, they regularly involve a number of different concerns and agencies (economic and non-economic, quantitative and qualitative). The journal assembles papers that provide insight into the multiplicity and disputability of valuation practices, metrics and processes, and into the consequences of valuation practices in terms of how they might resolve, defer or indeed foster conflicts.

Publication process

The standard peer review of the journal is double-blind. Submitted original articles are first pre-screened by the Editors-in-chief and then assigned to a member of the board of editors as handling editor. Two, or sometimes three or four, reviewers are selected and contacted for each original article. Reviewers are selected among the members of the journal’s advisory board as well as the broader research community. Since 2012, over 80 scholars have performed peer-review duties for the journal.

Valuation Studies is only published in electronic form where the entire issue as well as individual articles are made available as downloadable PDF files. Everything is published as full open access from day one and authors retain copyright to their work. The homepage is operated by Linköping University electronic press, which also takes care of archiving the journal. The journal has since the start been financially supported with competitively awarded grants from the Swedish Research Council.

Information about the journal and new issues is disseminated through a variety of channels. There is a journal newsletter, a Twitter feed (@Val_Studies), and a Facebook page, ensuring that work published in the journal is disseminated widely. Moreover, editors of the journal have repeatedly taken the initiative to organise conference sessions and streams related to the theme of the journal at relevant conferences. Recent examples include a seven-session panel with more than 30 papers at the joint 4S/EASST conference in Barcelona in 2016 and an upcoming sub-theme with 28 papers at EGOS in Copenhagen in July 2017.


The journal welcomes contributions of different kinds and origins. Apart from traditional journal articles, the journal welcomes short opinion pieces or research notes, interviews, staged debates, or indeed longer than normal journal articles.

If you wish to submit an article or propose a different form of contribution, please visit the website or send an email to the journal’s editors.

Collaboration and other forms of productive idiocy

As an ‘Integrative Research Center’ at a Technical University in one of Germany’s highly industrialized regions, the Munich Center for Technology in Society (MCTS) is both privileged and cursed to do STS research in the proverbial belly of the beast. Recently re-branded as an ‘entrepreneurial university,’ TUM is a hub for EU H2020 proposals, EIT KICs, and industry-led consortia set up to solve societal grand challenges. The MCTS has, from the beginning, been invited and expected to participate in these techno-scientific initiatives, to do social research, and to speak in the name of individuals, communities, publics, and societies. Willingly or not, during its short existence the MCTS has already become a laboratory for more or less experimental approaches to integrating STS research into natural science and engineering projects.

At the MCTS, we nurture these collaborations in a variety of different roles – as epistemic and political allies, as inter- and transdisciplinary counterparts, as idiotic collaborators, and as an institutional hub of social science expertise. We understand these collaborations as one generative way to engage with – and intervene in – the technoscientific setup of our current and future common worlds. Knowing about the dangers of “ELSI-fication and its analytical pitfalls” (Williams, 2005: 342) – mostly related to a certain idea of the STS scholar as the informed and critical outsider “challenging the exclusive role of technical specialists” (Williams, 2005: 342) – we are therefore experimenting with different scopes and scales of engagement to craft situated interventions.

A common thread that links these different engagements is our commitment to both disruptive criticism and experimental co-creation. The fields we work in are as diverse as the ways we engage with them. In this short article, we tell three stories of collaborations with our techno-scientific partners and reflect upon the different scopes and scales of collaboration that are at stake.

Starting collaborations from mutual disconcertments 

Ruth Müller, Michael Penkler, Georgia Samaras

In this first story, we meet a group of biologists, social scientists, and humanities scholars who try to think and work together to get a grasp on a shared research topic: environmental epigenetics. This topic seems to both escape and engage all of their traditional disciplinary frameworks, while at the same time raising significant epistemological, social and political questions. Epigenetics is the study of changes in gene expression that are not caused by mutations in the genetic code itself. Rather, epigenetics explores how chemical modifications on the DNA effect changes by regulating which genes can be accessed and transcribed, and to what degree. An important branch of epigenetics is environmental epigenetics, which investigates how stimuli from the environment can induce epigenetic alterations. The notion of environmental stimuli includes toxins and food, but also social experiences and lifestyle practices, all of which are thought to possibly affect gene expression and hence health and illness. An epigenetic perspective therefore renders the environment, and the way we live in it, as crucial for what becomes of our genes and, by extension, for our chances of health or risk of disease (Landecker & Panofsky, 2013; Pickersgill et al., 2013). This has significant implications for biology and beyond.

First, while the genome of genetics was mostly conceived as a stable, central blueprint for the organism – the so-called “book of life” (Kay, 2000) – the genome of epigenetics has become a “dynamic and reactive system” (Fox Keller, 2015: 10). Second, such a conceptualization of the genome as responsive rather than fixed renders social positions and their situated environmental exposures as an important factor for understanding the biosocial becomings of health and illness (Kenney & Müller, 2017; Mansfield, 2012; Meloni, 2015). Recent studies even suggest that epigenetic effects might not be limited to the exposed generation, but could be passed on by inter- and transgenerational epigenetic inheritance.

Here the ambivalent potential of environmental epigenetics becomes apparent: On the one hand, environmental epigenetics could present a tool for social and environmental justice work, pointing out the molecular scars of inequality, possibly across generations. On the other hand, there is also the potential for deterministic logics to emerge that view the biology of individuals and groups as determined by the environments they or their parents live(d) in.

This mutual disconcertment (Kenney, 2015; Verran, 2001) about the ambivalent politics of epigenetic epistemology has been the starting point for international biosocial collaborations set in motion here at the MCTS over the last two years. Collaborative intervention is the goal, and sharing social and epistemic resources is key to the process. What marks these collaborations is a sense of urgency that is not mandated but experienced. We organize workshops, conference sessions, and public events together, and write commentary pieces in life science and public health journals (e.g., Hanson & Müller, 2017), textbook chapters, and project applications, all of which pose the following questions: How can we handle this responsibly? How can environmental epigenetics become a tool for social and environmental justice rather than further distinction, division, and discrimination?

This type of collaboration depends massively on the generosity of individual scholars, particularly senior life science scholars, who give credibility to the social science interventions, and who, through continued collaboration, turn them gradually into biosocial interventions. In our case, it is further supported by a recently formed network of epigenetics researchers from different institutions here in Munich, which actively seeks out interdisciplinary dialog; and by the great interest shown by students here at TUM, who become involved in the project on different levels. A collaboration like this one is a long-term process, held together by shared intellectual fascinations and political stakes. It is a messy process of partial translations, ambiguity and compromise. But above all, it is an exciting process of transgressing disciplinary boundaries and re-imagining how we could understand life, health, and illness differently, together.

Dwelling on the traps of collaboration 

Ignacio Farías, Claudia Mendes, Hannah Varga

Our second story is about our participation in the Horizon 2020-funded Lighthouse Project ‘Smarter Together. Smart and Inclusive Solutions for a Better Life in Urban Districts’ – a project aimed at the large-scale and integrated implementation of ‘co-created’ smart infrastructures in districts of Munich, Vienna, and Lyon. Co-creation here is both a central goal and narrative, and this ambiguity is our entry point to the story.

Our role in this large-scale collaborative project is that of both ‘participation experts’ and STS scholars concerned with ‘technical democracy’ (Callon et al., 2009), ‘material participation’ (Marres, 2011) and ‘experimental collaborations’ (Estalella & Sanchez Criado, 2015) – three STS concepts that play ‘too well’ into expert understandings of co-creation. More specifically, we have been in charge of writing recommendations for these three cities on key principles of participatory co-design processes – which we did in an extremely well-received policy document that politely invited city officials to be humbler, and which is now probably resting in the drawers of these city administrations. And, more importantly, we have been organizing co-design processes in Munich’s Stadtteillabor (Farías, 2017), focusing on key ‘smart’ interventions foreseen in the project: multi-modal mobility stations, sharing and delivery of ‘smart’ district boxes, and intelligent street lamps.

Over the last year, we have run four co-design processes, each involving three to six workshops and resulting in prototypes and recommendations. In different ways, these have challenged the goal and scope of the planned interventions, as the inchoate publics we helped to constitute have turned out to be more concerned about the urban conditions affecting the planned interventions than they were about the interventions themselves.

This has raised the question of how to sensitize our technical partners to public concerns and propositions that fall outside of the scope of the project and of what they expect from the co-design process, namely, gathering reliable information about user behaviour, as well as cool and crazy ideas for new services. Resisting the trap of the pre-defined role as ‘participation experts’ who are brought onboard to engage and handle the public, we have time and again surprised ourselves by coming up with ways to conversely engage the experts – how to ‘trap’ them into situations where they have no other option than to take these issues into account. To this end, we have come up with ‘idiotic’ games to be played in our workshops that are aimed at deactivating expertise, invited ‘critical’ experts to challenge gamification strategies and data security arrangements, and set other friendly traps for our partners and colleagues.

The figure of the trap (Gell, 1996; Corsín Jimenez, forthcoming) is extremely helpful for thinking about this form of collaboration. Reflecting on animal traps, Gell implies that these are second-order observation devices containing a model of how the trapper observes how the animal observes its Umwelt. Setting traps, we have discovered, requires us to think like experts, to blend ourselves into their environments, so that we can lure them into spaces where they will seriously engage with idiotic requests and rationales. Traps are not a form of sabotage. Quite the contrary: We use them to honour the very concept of ‘Smarter Together’ as it invites us to think ‘with and against’ each other.

Ontological experiments and (idiotic) interventions 

Marcus Burkhardt, Andrea Geipel, Nikolaus Pöchhacker, Jan-Hendrik Passoth

Our last story takes us into one of the construction sites of our algorithmic future. STS is especially well equipped to fuel (idiotic) interventions in values in design (Knobel & Bowker, 2011): our work inside the belly of the beast allows us to infra-reflexively produce and amplify issues as well as shape and laterally reframe controversies in ‘ontological experiments’ (Jensen & Morita, 2015). In a project that we work on in collaboration with the Bavarian public broadcasting agency, we took on the role of an active stakeholder in the agile software development process of a big data-driven recommender system and used this role to develop institutional and coded interventions to escape potential filter bubbles and data biases.

Public broadcasting plays a distinctive role in the European ecology of media production and distribution. A multi-level policy and governance system tries to balance the mandate of public broadcasting to support a diverse range of opinions and free access to basic information needs with the economic interests of commercial press and broadcasting. In such an environment, building software for non-linear distribution of content like apps or media platforms is legally and politically tricky. Not only do commercial actors carefully watch the potential emergence of publicly financed competition, but data driven services are officially not allowed to discriminate against types of users or create echo chambers and filter bubbles that limit the diversity of content.

By turning issues that are intensely discussed in STS under the notions of critical data studies (boyd & Crawford, 2012), algorithmic culture (Striphas, 2015), and data and knowledge infrastructure (Edwards et al., 2009) into tools for intervening in the software development process, we try to creatively and productively alter its potential outputs. As ethnographers, we also study the implementation of metrics and we follow and map the organizational flows of data and meta-data to both understand the politics of personalization and produce more or less effective means of counter-politics. Over the last year, we have especially studied the design and implementation of a recommender system, a set of algorithms that, based on previous activities, selects and plays additional content: “If you liked this, you might also like that.” There are basically two ways of running those systems: Collaborative filters select items based on what other users selected under similar conditions, whereas content-based filters select items based on similar or fitting meta-data. Both would, if simply implemented, undermine the mandate of public broadcasting: they would produce content recommendations that follow a logic of ‘more of the same,’ not of ‘more diversity,’ recommending parliamentary debates only to those who already watch them and music and entertainment to the masses.
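The shared ‘more of the same’ tendency of the two filtering strategies can be sketched in a few lines of Python. This is a toy illustration with hypothetical items, tags, and viewing histories, not the broadcaster's actual system:

```python
from collections import Counter

# Hypothetical catalogue: item -> set of meta-data tags
ITEMS = {
    "debate_1":  {"politics", "parliament"},
    "debate_2":  {"politics", "parliament"},
    "pop_show":  {"music", "entertainment"},
    "quiz_show": {"entertainment"},
}

# Hypothetical viewing histories: user -> list of watched items
HISTORIES = {
    "alice": ["debate_1"],
    "bob":   ["pop_show", "quiz_show"],
    "carol": ["debate_1", "debate_2"],
}

def content_based(user):
    """Rank unseen items by meta-data overlap with the user's history."""
    seen = set(HISTORIES[user])
    profile = set().union(*(ITEMS[i] for i in seen))
    candidates = [i for i in ITEMS if i not in seen]
    return max(candidates, key=lambda i: len(ITEMS[i] & profile))

def collaborative(user):
    """Recommend what users with overlapping histories also watched."""
    seen = set(HISTORIES[user])
    counts = Counter()
    for other, history in HISTORIES.items():
        if other != user and seen & set(history):
            counts.update(i for i in history if i not in seen)
    return counts.most_common(1)[0][0] if counts else None

# Both strategies recommend yet another parliamentary debate
# to the user who already watches debates:
print(content_based("alice"))   # -> debate_2
print(collaborative("alice"))   # -> debate_2
```

A diversity-oriented intervention of the kind described here would work against exactly this dynamic, for instance by penalizing rather than rewarding tag overlap, or by deliberately mixing in items from categories under-represented in a user's history.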

We used our embeddedness in the software design process to develop and experiment with different forms of intervention and problematization. Instead of just studying these emerging data ecologies and mourning the rise of the machines, we seek to open these black boxes of algorithmic culture. What is more, we actively paint them in bright colors: The politics of platforms can at least be contested, data bias and discrimination can be highlighted and addressed, algorithms can at least partially be made accountable – maybe not in general and from the outside, but in particular and from within.

Collaboration without caveats and hyphens

Suggestions for carving out ‘third spaces’ (Fisher, 2003) or para-sites (Marcus, 2000) mostly follow what Jörg Niewöhner has so lucidly called “co-laboration”: They are based on “non-teleological joint epistemic work without the commitment to a shared outcome” (Niewöhner, 2015: 236), based on combined but separate epistemic activities held together by sharing a common (problem) space. This is an essential part of our work, but we also try to collaborate without caveats and hyphens – to find common (political) grounds that enable us to work and think together and to follow the same objectives even if we might draw different consequences. The research space we share with engineers and scientists certainly allows for co-laboration in all the ecological relationships that can be characterized as work that is temporarily joint but epistemically separate. But it also allows us to maintain object- and issue-oriented collaborations based on doing things together and a commitment to a shared outcome. The three stories we told in this short paper are only examples. Others could have been told. In all of them, we nurture these collaborations as ways of engaging with and intervening in the technoscientific setup of our common worlds.

Innovation & Society: The diversity of innovation practice

In 2010, with the launch of the Innovation Union initiative, the European Commission declared the continent to be in a state of ‘innovation emergency’: “We need to do much better at turning our research into new and better services and products if we are to remain competitive in the global marketplace and improve the quality of life in Europe” (EC, 2016). This call for ’more innovation’ has become commonplace across countries, sectors, and organizations. Hardly a day passes without a government or organization launching an innovation strategy. Indeed, it seems as if every government or institutional initiative must answer to a ubiquitous innovation imperative in order to be desirable, economically defensible, and modern (Godin, 2012; Pfotenhauer and Jasanoff, 2017; Rammert et al., 2016). For STS scholars – whether interested in politics and the state, organizations, urban life, changing epistemic and work cultures, or broader questions of justice, responsibility, and democracy – this raises a range of critical questions.

At the MCTS, researchers across various groups analyze the politics, practices, promises, and pressures of innovation in a range of settings. What connects these researchers in their diverse projects is an interest in how the innovation pressure is reconfiguring society and its organizations in fundamental ways. That is, what does it do to societies if every university, every firm, every region, engineer, and government initiative needs to be innovative? Conversely, the projects share an interest in how innovation, despite common rhetoric and instruments, is made up of diverse practices and attempts at meaning-making that are shaped by unique social, political, and organizational factors. In other words, what do governments, firms, institutions, or individuals really do when they say they are becoming innovative?

Traveling imaginaries of innovation 

Sebastian Pfotenhauer, Alexander Wentland, Luise Ruge

In a joint project with colleagues from the U.S. and Denmark, we investigate the circulation of innovation models around the globe, focusing particularly on the ‘best practice’ models of MIT and Silicon Valley as prominent templates for reorganizing universities and regions. In this cross-country comparative study, we investigate how actors in various places envision fundamentally different things under the notion of innovation – what it is, what it is for, how it works, and who needs to be involved. We draw on the concept of sociotechnical imaginaries to show how implementations of the ‘same’ innovation model – and with it the notion of ‘innovation’ itself – are co-produced with locally specific diagnoses of a societal deficiency and equally specific understandings of acceptable remedies (Jasanoff and Kim, 2009). Analytically, the focus on supposedly standardized models in a comparative setting provides a lens onto the social and political underpinnings of innovation. This approach offers new possibilities for theorizing how and where culture matters in innovation policy: It responds to growing concerns from within the innovation studies community about the limits and prescriptiveness of existing theoretical frameworks, and takes seriously the history of failed attempts to emulate ‘success models’ like MIT or Silicon Valley elsewhere. Our approach suggests that the ‘success’ and ‘failure’ of innovation models are not a matter of how well societies are able to implement a supposedly sound universal model, but more about how effectively they articulate their imaginaries of innovation and tailor their strategies accordingly. This study ties into other ongoing MCTS projects like the reorganization of universities under the banners of ‘excellence’ or ‘entrepreneurship.’

Socio-technical futures and Industry 4.0 

Uli Meyer

Another project studies how ideas of innovation and technological progress get translated into socio-technical futures, and how these futures in turn influence society in the present. Socio-technical futures are (usually primarily technical) descriptions of what the future could possibly look like, interwoven with narratives about how certain technological developments will benefit society. They tend to start as mere descriptions of technological possibilities. If successful, they turn into socio-technical promises and even requirements. This dynamic can unfold like a self-fulfilling prophecy: Because more and more people subscribe to a particular future, society performatively develops in this direction (Dierkes et al., 1996; Jasanoff and Kim, 2009; Lente and Rip, 1998). Socio-technical futures are thus both the result and a driver of the ubiquitous innovation imperative: They can only unfold because of society’s general orientation toward innovation, but at the same time they act as an important stabilizing element in the innovation discourse. Examples of past socio-technical futures are Moore’s Law for the semiconductor industry, HDTV, or the information superhighway.

A recent and extremely prominent example in Germany is the concept of Industry 4.0 (known more commonly in the U.S. as the Next Production Revolution or the industrial internet). The basic promise and claim of Industry 4.0 is that industries are undergoing large-scale digital transformations due to the growing introduction of information and communication technology. This includes, among other things, the introduction of cyber-physical systems like co-bot workspaces, the increased self-organization of machines on platforms like the internet of things, and emerging ecologies of distributed innovation and digital fabrication. Only this fourth industrial revolution, so the story goes, will secure economic competitiveness and societal welfare in highly industrialized countries. It links industrial performance to the idea of software ‘updates’ and places recent and future developments in the context of a series of ongoing industrial revolutions by way of a teleological narrative. What is more, it also caters to the promise of re-industrialization of high-wage, post-manufacturing economies. In our project, we ask why and how socio-technical futures like Industry 4.0 gain momentum and become dominant and, as a result, influence governments, firms, institutions, individuals, or society as a whole. To do so, we analyze the role and activities of different types of organizations – e.g., governmental agencies, firms, associations, unions – in such processes. At the level of individual companies, we ask how they try to translate abstract socio-technical futures into their own organization, and how this in turn influences their inter-organizational networks.

Test beds: testing the future 

Franziska Engels, Alex Wentland, Sebastian Pfotenhauer

At the crossroads of the previous two projects, the question arises of how practices and promises become universally desirable – or, asked differently, when and how models become models. That is, when do we consider a local practice as sufficiently understood in order to be seen as standardizable, package-able, transferable, or scalable (Hilgartner, 2015; Latour, 1990)? One particularly interesting innovation practice in this regard is ‘test beds’ (and related concepts such as ‘living labs’ or ‘real-life laboratories’). ‘Test beds’ have emerged as a prominent innovation model across geographical regions, scales, and technical domains.

Feeding on the popular ‘grand challenges’ discourse and the growing insight that adequate responses to these challenges will require complex transformations, test beds promise to ‘pilot,’ or ‘test,’ sociotechnical futures under ‘real-world conditions’ while at the same time providing a stepping stone and a vehicle capable of bringing this very future about. Most widely invoked in the context of sustainable energy transitions, test beds are deemed particularly useful for areas that are characterized by a high degree of complexity and uncertainty and that require experimental space for new forms of collaborative innovation activity. In a joint project with the Berlin Social Science Center (WZB), we explore and problematize the notion of test beds in energy contexts at various scales, including sustainable energy campuses and regional energy initiatives. The project investigates how the test bed approach marks a shift in the conceptual understanding of how innovation operates and at what scale, and who ought to be involved in this collective innovation endeavor. Test beds, moreover, imply normative changes in the relationship between innovation and society, as society both acts as the laboratory for innovation and, at the same time, is enrolled at an early stage to performatively enact the future that it is supposed to test. In particular, we explore how test beds operate with a tacit expectation of scalability that requires social work and specific forms of vision alignment (Engels et al., 2017). Again, this project ties synergistically into various other projects at the MCTS, such as the role of urban laboratories and participatory infrastructures in ‘smart cities.’

Innovation scripts in firm settings 

Judith Igelsböck, Uli Meyer

The dominance of certain prominent role models and discourses around innovation, such as Silicon Valley and the inevitable rise of Industry 4.0, bears witness to isomorphic tendencies of imitation and homogenization in the innovation landscape. Against the background of a pervasive innovation imperative and the fear of becoming the ‘next Kodak,’ industrial organizations thus face a permanent pressure to innovate. But what does this actually mean to individual organizations? What do firms really do when they decide to – or feel pressured to – become innovative? Where do they get their ideas from and how do such ideas spread? In one project, we seek to understand how ‘innovation scripts’ function as a mode of normalization within and across industrial fields. While innovation is closely interwoven with paradigms of creativity and novelty, industrial organizations tend to follow similar scripts and thus innovate in similar ways. This project is an empirical quest for the innovation scripts that guide innovation activities in terms of the human and non-human ‘agents of change’ mobilized to perform innovation, the distribution of roles and responsibilities among them, and the innovation settings in which innovation is supposed to be taking place (Akrich, 1992). The analysis attempts to contribute to a theoretical understanding of where the ideas about how to innovate come from, how such ideas circulate and manifest, and how this dynamic impacts society.

Innovation in inter- and transnational settings 

Mascha Gugganig, Nina Witjes, Nina Frahm, Verena Kontschieder, Federica Pepponi

Cutting across the aforementioned projects sits another set of questions around how science and innovation function when they explicitly seek to straddle cultural and jurisdictional boundaries. Throughout a range of projects, researchers at the MCTS explore how science, technology, and innovation play out in – and help configure – international settings, for example, in the making of institutions, identities, discourses, or representations. For instance, which visions of Europe are advanced through robotics or food innovation in EU-funded research consortia? How do national understandings regarding the need for and limits of new robotics or food technologies differ? How does this add up to one cohesive European approach (if at all)? What does it mean to foster regimes of responsible innovation in international settings such as the OECD or the EU? And how do knowledge practices and technology enter into international relations, e.g., in the form of remote sensing and security technologies? Across these projects, ongoing work seeks to address tensions between tendencies to standardize and harmonize innovation practice on the one hand, and the immutable diversity of innovation’s socio-cultural embedding on the other. It builds on long-standing comparative research traditions in STS research (Jasanoff, 2005) as well as STS literature on infrastructures and standardized regimes (Barry, 2006; Timmermans and Berg, 1997).

The Munich Center for Technology in Society (MCTS): Raising the stakes for STS in Germany

The main portal of the Technical University of Munich (TUM) at Arcisstr. 21, 80333 Munich, during the chestnut blossom on 01.05.2012; Photo: © Astrid Eckert / TU Muenchen

Greenfield, brownfield, center field?

Imagine the following scenario: You are approached by the leadership of a country’s leading technical university to set up a completely new center for Science and Technology Studies (STS) as a central part of its future strategy. You are guaranteed both strong support from the university management and a significant amount of start-up funding to make a splash in the German landscape. You are given the explicit mandate to recruit international early-career scientists to positions that hitherto did not exist in Germany (tenure track), scientists who ought to represent different strands and schools within STS – young faculty who are both dedicated STS-ers and compatible with an ambitious technical university. And imagine you were given free rein to establish a new portfolio in research, teaching, and public dialogue – not just about science and technology but, according to expectations, also with scientists and engineers.

On the surface, this seems like an almost surreal carte-blanche opportunity for a field that has struggled since its inception with a lack of institutionalization and institutional support. Yet it presents both a daunting task and responsibility: How do you represent an international field that has grown considerably in breadth, depth, and scholarly traditions? How do you include, revive, and/or break with existing STS traditions in Germany, where the field has suffered years of institutional impasses? How do you position STS ‘in the belly of the beast’ – that is, in the midst of an institution that embodies, in almost exemplary form, STS’s subject-matter of technoscience? And how do you balance scholarly independence at a center-in-the-making with high hopes that it will contribute added value and services to its host institution?

Such were the opportunities and challenges when Sabine Maasen was entrusted with the task of establishing the Munich Center for Technology in Society (MCTS) as its director-elect and professor of sociology of science in April 2014. In keeping with this balancing act, the MCTS embraced its genealogical roots at the Technical University of Munich (TUM), where it could build on a long-standing tradition of philosophy of science and technology, history of technology in conjunction with the Deutsches Museum (Box 1), and the Carl von Linde-Academy for interdisciplinary education of students in science and engineering. At the same time, however, the MCTS embarked on a two-year international hiring spree to construct a strong and internationally connected STS center that could capitalize on flourishing STS research and practice taking place elsewhere. What is more, from the start, it affirmed its envisioned role as an integrative pillar at a technical university, courageous enough to combine critical analysis with co-shaping and interventionist activities in today’s TechnoSociety.



Building on strong foundations: Philosophy and History of Science and Technology at TUM

Particularly in Germany, technical universities have been a stronghold of philosophers of science and technology for nearly a century. Installed originally as a unifying counter-measure to growing disciplinary fragmentation, philosophy of science and technology has a long tradition of investigating the epistemological and ethical foundations of science and engineering. This initial emphasis on foundational issues has partly given way to analysis of, and reflection on, the conceptual and normative frameworks of current scientific and technological developments. Either way, the philosophical inquiries envisioned at MCTS remain deeply and directly informed by scientific and engineering practice. A focus on Artificial Intelligence, Big Data, and, more generally, ICTs was a key characteristic of research and teaching in philosophy at TUM and MCTS until the retirement of MCTS founding director Klaus Mainzer in March 2016. Concepts like ‘information,’ ‘complexity,’ and ’cognition’ have been, and remain, the nucleus of epistemological and ethical inquiries, as senior members of the group continue to work on data-intensive sciences, intelligent environments, or the pragmatic dimensions of computer simulations. The position of a full professor of philosophy of science and technology is currently open, and whoever joins the MCTS in the near future will be a pillar of the STS community at TUM.

Likewise, another trademark of the German academic landscape is the traditionally strong institutional affinity between history and technology. Engineers have been engaged with history at technical universities since the early 20th century. At the outset, they mainly constructed heroic tales of great inventors and inventories of major inventions. After World War II, reflexive accounts by historians replaced such a perspective. Historians are storytellers – and historians of technology have stories to tell about technology, its progression, application, and the impact it has on the life of past and present societies. A broad understanding of technology is at the core of historical research at the MCTS, including both the making and becoming of artefacts, techno-social systems, technical knowledge, and technology-mediated practices. Historians of technology at MCTS analyze their material in its specific spatio-temporal configuration by working with historical sources, including archival material, physical objects and structures, texts, statistics, and images of various provenance, but also sounds, signals, codes, all forms of virtual information, and much more. They are interested in theoretical resources from the humanities and social sciences, using them not only to interpret primary sources but also to give their narratives time-specific sense and significance. Historical research at the MCTS ranges from 19th century logistical infrastructure to evidence practices of technical security and technologically enhanced plants in the 20th century.


This decidedly broad mission reflects, on the one hand, the MCTS’s youth and continuing state of becoming. With every new member, we add and explore ever-new nuances of intellectual entrepreneurship and passion. At the same time, a recognizable and unique intellectual profile of the MCTS as a whole gradually emerges from the amalgamation of individual interests. On the other hand, the MCTS’s mission recognizes that we are ‘standing on the shoulders of giants’ and necessarily need to situate our activities as part of an established scholarly community of Science and Technology Studies. In its diversity, STS has become known as a field that investigates knowledge and knowledge-making in its heterogeneous forms and fashions – be they scientific, technical, or symbolic; embodied in objects (e.g., instruments) or material systems (e.g., industries); narrowly expert-centered or broadly inclusive of other stakeholders; universalizing and standardizing (e.g., through indicators and infrastructures) or bound by contingent local practice (e.g., different ways of knowing and valuing); embedded as part and parcel of politics, law, and economics as well as inextricably linked to, e.g., popular, religious, or aesthetic culture (see the article, “Innovation and Society,” below).

In short, the processes and practices interlinking science, technology, and society are non-linear, contested, and time-bound – they are “constitutive” of modern life. Science and technology both inform the ways social life is ordered and they enact certain ideas of desirable order, progress, and futures. Developments in science and technology regularly cut to the very heart of the social, political, and legal categories that order our modern states, international relations, diplomacy, community, and citizenship. At the same time, they order the categories that we employ to interpret individual and collective rights such as fair procedures, bodily integrity, sustainable development, and many others. From molecular biology to health care, from social media to cyber espionage, from evidence-based policy-making to innovation-based economic growth, science and technology are not only constitutive elements of social order, but also constitutional. It is in this sense that we at the MCTS talk about present societies as TechnoSocieties.


A comprehensive scope

Given this breadth in scope and ambition, STS today needs a broad spectrum of epistemic and methodological resources to understand the many ways in which science, technology, and society are constitutive of one another. At the MCTS, our members’ rich variety of disciplinary backgrounds provides us with a toolbox of options and approaches for inquiry. Our expertise covers various social sciences (e.g., sociology, political science, anthropology, public policy, and geography) and humanities (philosophy and history of science and technology), in part enriched by additional expertise in the natural sciences (e.g., physics and biology), or engineering (e.g. informatics and systems engineering). This mix allows us to tackle research projects as thoroughly socio-technical and normative challenges, and to work towards analyses and solutions that recognize their social, political, environmental, and industrial implications. It also acknowledges that science in democratic societies needs to reflect a range of voices and interests, both in terms of disciplinary approaches and trans-disciplinary openings. No major problem today can be tackled by scientific experts and/or technical means alone. Rather, technoscientific developments become subject to contestations and negotiations with the wider public as well as with political, industrial, and bureaucratic actors.

In this spirit, the MCTS also considers itself to be part of today’s innovation culture, practicing a culture of ‘critical engagement’ and ‘engaged critique’ across its projects, teaching, and public outreach. While mobilizing a critical intellectual distance to detect normative or epistemic assumptions that may lead to barriers, misunderstandings, or conflict, we also engage in co-creative and ‘co-laborative’ practices (see the article, “Collaboration and other forms of productive idiocy,” below). In our view, this is the unique opportunity that STS affords today: it is – or can be – at the same time both an analytic practice and a practice of intervention, two forms of engagement that enrich each other but which also need to be balanced and investigated regarding the epistemic and normative assumptions that guide them.



By way of example, the chair in sociology of science puts ‘Exploring TechnoSociety’ front and center in its research program. This research group examines how diverse societal actors explore their ‘technological existence’ by (re-)engineering ever-new interfaces between the social and the technical, an emerging roboticized life-world being but one object of study. Members of the group participate in the co-shaping of social robotics with engineers and actors in the care professions and industries. They analyze societal promises (e.g., empowerment of the elderly) and perils (e.g., the instrumentalization of citizens and experts as co-producers and legitimators of socio-technical developments). Here, scholarly analyses of the ambivalences attached to a post-technocratic regime go hand in hand with collaborative research practices, continuously informing and correcting one another.


Building blocks of an STS center

The MCTS is lucky to be able to pursue its vision and mission with considerable resources. Since April 2014, the center’s scientific staff has grown to more than 60 members, including 35 graduate students, spread across eight units (see below). In addition to becoming a stand-alone hub for STS in Germany, the MCTS is also an ‘integrative research center’ within TUM, tasked with bringing STS insights to bear at various other TUM departments and schools, and providing a crystallization point for social science research. This implies, among other things, that every professor at the MCTS is also affiliated with one or two other departments at TUM. This institutional structure enables us to forge strong links with science and engineering as well as management and the political sciences at TUM. The MCTS units are:

  • Sabine Maasen, Chair in Sociology of Science, co-affiliated with the School of Governance as well as with the School of Education
  • Karin Zachmann, Chair in History of Technology, co-affiliated with the School of Education
  • Chair in Philosophy of Science and Technology (currently vacant), co-affiliated with the School of Governance
  • Ignacio Farías, Professor of Participatory Technology Design, co-affiliated with the Department of Architecture
  • Ruth Müller, Professor of Science and Technology Policy, co-affiliated with the School of Life Sciences Weihenstephan
  • Sebastian Pfotenhauer, Professor of Innovation Research, co-affiliated with the School of Management
  • Uli Meyer: Group leader, Reorganizing Industry Lab
  • Jan-Hendrik Passoth: Group leader, Digital Media Lab



The dual focus on core STS education and integration with other TUM faculty is also reflected in the MCTS’s comprehensive teaching portfolio, which addresses a wide range of specific groups. MCTS education activities comprise, first, a study program at the Bachelor level on “Science, Technology, and Society,” open to all students at TUM. This study program is intended to train students in the natural sciences and engineering to appreciate technical problems as socio-technical ones and to broaden their range of responses. Second, the MCTS features two Masters programs: “Science and Technology Studies” (M.A. STS) and “Responsibility in Science, Engineering, and Technology” (M.A. RESET). This differentiation responds to the different backgrounds and interests of graduate students, who may prefer a greater academic or professional orientation, respectively. Third, the MCTS has a PhD Program on “TechnoScienceSocieties” that offers PhD students a range of dedicated short courses and workshops alongside research and teaching opportunities. Finally, the MCTS has established a catalogue of “STS plug-in modules” custom-tailored for Masters programs at other TUM departments, including the modules “Data Science in Society,” “Responsible Governance in Science, Technology, and Society,” “What Future of Mobility? Engaging Technologies, Politics, Economic Scenarios, and Practices,” as well as “Technoscience and the City.”

Being a young center also means being visible and being vocal. Like most STS centers, the MCTS offers a range of regular events such as public research colloquia, workshops, and the Munich Lecture on Technology in Society. Moreover, during its short existence, the MCTS has already hosted a range of workshops and symposia targeting audiences from inside TUM, the global STS community, as well as other stakeholders, e.g., from industry and politics. A short snapshot of recent activities can be found in Box 2. At the same time, the MCTS offers a rich playing field to participate in, experiment with, and critically interrogate novel forms of inter- and transdisciplinary collaborations. In “Collaboration and other forms of productive idiocy,” (see below) we offer a brief review of recent experiences and formats of MCTS collaborative activities.


In-reach and outreach at MCTS: Some recent examples.

Makeathon on 3D Printing in Prosthetics

This four-day MCTS event explored the digital production chain for upper and lower limb prostheses. It brought together researchers from the fields of industrial design and STS with users and other stakeholders from the prosthetics and 3D printing industries. The Makeathon covered the entire process from ideation to the actual production of 3D-printed physical models. As a powerful example of applied STS research, it opened up new perspectives on sociotechnical change and user-centered design, inspiring collaboration beyond the event itself.

Sensor Publics: On the Politics of Sensing and Data Infrastructures 

What happens when sensing and data infrastructures, from satellites to self-tracking devices, become objects of public concern? This two-day MCTS event brought together researchers working at the intersection of STS, sociology, critical security studies, and engineering to engage with claims that our societies are witnessing a proliferation of sensors, from satellites to smart-city devices. Featuring keynotes from two leading STS scholars, an interdisciplinary mix of research papers, participatory workshops, and a demonstration of how to hack a satellite, this event sought to critically explore and test propositions about the affordances of sensing technologies for political participation.

IGSSE Forum – Science and technology in, with, and for society

Taking responsibility for running a mandatory three-day event for the TUM International Graduate School for Science and Engineering (IGSSE), the MCTS engaged with about 120 TUM PhDs and post-docs from the natural sciences and engineering. Together, we analyzed the meaning and relevance of inter- and transdisciplinary interactions for different scientific areas. We discussed topics such as science and technology policy, responsible research and innovation, and the democratization of science and technology. Among other things, the participating PhD students were challenged to conceptually transform their own research posters (and, by extension, projects) around socio-technical questions, after having been provided a range of STS concepts, ideas, and tools.

Rethinking the Genome – Epigenetics, Health & Society 

This interdisciplinary panel discussion brought together international experts from the life sciences, social sciences, and humanities to discuss the opportunities and challenges of epigenetic research for science and society with an engaged audience. The panel discussion was followed by a two-day workshop focused on investigating the concept of biosocial plasticity. Researchers explored the narrative and epistemological formations that enable and limit the thinking and doing of biosocial plasticity in science and society.

These points further speak to the twofold challenge facing the MCTS on its path ahead: finding our voice as part of both the STS community and the technical community at TUM. Regarding the STS community, the MCTS will – by design – likely never offer a unified answer to questions about theoretical commitments, topics and sites of interest, or visions for STS as a field. To do so would be both unrealistic and undesirable. Yet we are actively seeking common intellectual ground and empirical overlaps, for example in a research group, the Engineering Responsibility Lab, which includes researchers from all MCTS units. Regarding the technical community at TUM, as STS researchers at a technical university, we are faced with the boon and bane of being both part of driving sociotechnical developments and being critical of them. Yet we consider this tension to be a positive sign: In our opinion, the founding of the MCTS reflects a growing desire for greater institutional reflexivity at (technical) universities. These universities feel the need to position themselves vis-à-vis societal challenges grand and small. The case of the MCTS in particular demonstrates how a traditional technical university can invent new institutional structures (integrated research centers, joint tenure-track based appointments) and follow through with substantial resources to respond to these challenges in an ‘entrepreneurial’ way. This opportunity will continue to excite and challenge us – our professional identities and careers, the MCTS as a networked organization, as well as our visions for the practice of STS. From here, we are already imagining what the seemingly carte-blanche scenario will have grown into in five years’ time. We invite you to stay tuned.

Is STS all Talk and no Walk?

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public – if not outright invented – at least four common post-truth tropes:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable – but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor or otherwise restrict our own application of these insights, which lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School District (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has spent the last half-century opposing. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive-aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes with the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game – perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.

Matters of Fact(ization), Matters of Capitalization, and Matters of Care

Alternative facts are old news in STS. We can illustrate this by going back to one of STS’ beginnings in England in the 1920s. In a controversy conducted by means of popular pamphlet sales, the universal facts of the nature of human nature argued by philosopher Bertrand Russell contended with the alternative facts of the nature of human nature presented by biologist JBS Haldane. Argued through the medium of classical Greek mythology in the figures of Icarus and Daedalus, the alternative fact-making mechanisms of philosophy and natural science were pitted against each other, each claiming its facts as a guide to the future.1 This controversy was, like the controversy around Trump’s alternative facts that Howe writes of, fuelled by an emotional contagion, albeit that Icarus and Daedalus gave such populism a more decorous Englishness compared to the raw twenty-first-century American version.

The ugly phrase ‘alternative factizations’ more precisely names what was going on in that English re-run of the tensions between fathers and sons, but in the 1920s articulating the STS trick of turning things into processes, nouns into verbs, still lay in the future. Yet there is no doubt that informed relativizing readings of alternative factizations were done in the 1920s, just as they were in 2017. When it comes to facts, careful and care-filled readings of the evidence, comparison of contesting analytic concepts, and articulation of opposing views about felicities, or the absence thereof, in rhetorical style and so on, are an ordinary part of collective life in liberal democracies. Sorting out alternative facts proposed by experts is something that the demos can do now, and could already do at the beginning of the twentieth century.

In 2017, as in the 1920s, it does not require STS analysts to jump up and down to initiate such readings. But such readings do require that the institutional landscapes of democracy in liberal polities be vibrant and cared for.2 When they – parliaments, bureaucracies, academies, for example – are starved of care and resources, factizations easily go off the rails of democracy. That we as STS analysts – citizens with special response-abilities, and bearers of the responsibilities of academics – currently feel a need to write about alternative facts is a worrying sign.

STS has been offering rich and complex accounts of facts and how they work, focussing on difference as Law suggests we do, for nearly fifty years now. There I am pointing to another beginning of STS, one perhaps more familiar to today’s practitioners: the advent of the sociology of scientific knowledge (SSK) and the empirical programme of relativism (EPOR) in mid-twentieth-century Britain. Building on this while mobilizing quite other beginnings in French thought, Latour was prescient in warning that while the standard epistemological game of relativising critique might be politically satisfying, it was also very bad politics, challenging us to lift our politico-epistemic game.3 He argued for a shift in focus from matters of fact to matters of concern as the object known in politico-epistemics,4 and went on to propose a machinery for redesigning modern institutions around values.5 Yet AIME is no more useful than Greek mythology, SSK, or EPOR when it comes to dealing with and in the myriad here-and-now puzzles and problems that we face in grappling with the everyday politico-epistemic work of articulating policy goods – the on-going bread-and-butter work of STS analytics. The fatal flaw in these standard STS analytics is the conceptualization of the STS analyst: an unnoticed positioning of the analyst as removed observer prevails. Flagging the shifts in reworkings of contesting matters of factization, of the sort that Pontille and Torny point to, not only has the effect of realising truth as provisional and tentative, but can also be a means of pointing to the ineluctably partial situation of the analyst.

Contempt bred of familiarity with matters of factization is even more dangerous in the absence of interruption. But I propose that more dangerous still is the stealthy instituting of rule through, and as, matters of capitalization in contemporary liberal democracies, as objects of governance.6 A particular worry is the way matters of capitalization parade themselves as matters of factization.7 The demos in liberal democracies is only at the very beginning of learning to do informed relativising readings of matters of capitalization.8 Related to the urgent need for democratic ‘capacity building’ in this regard is recognizing and making explicit the work of doing matters of care.9 And here more than ever, the unrecognized conceptualizing of the STS analyst as removed observer gets in the way. When the figure of the analyst in the epistemic practices of STS is a removed observer, just as she is in the epistemic practices of both factizing and capitalizing, there is no means to constitute a generative analytic tension.

Careful and care-filled readings are needed to distinguish matters of factization and the forms of its concepts from matters of capitalization and the forms of concepts through which that is accomplished. Teasing out what is inside matters of factization, and inside matters of capitalization, approaching their working concepts as companions in the here and now, is a matter of care. The figure of the analyst here is recognisably a partial participant in the situations analysed. Expounding sturdy STS epistemic practices that make a virtue of that is a priority.

Here I am proposing yet another STS beginning. This is the situationism articulated in separate times and places by Mannheim and Dewey, who in their different ways never forgot their experiences of being partial participants in total wars. Identifying this alternative STS beginning sets up a contrast to Bernal in particular, currently a heralded STS originator. As a Marxist, Bernal managed to maintain his removed observer position in war from the privileged position of operational headquarters. STS needs to care for itself in attending to its many beginnings, keeping the tensions between them explicit.


1 JBS Haldane (1923). Daedalus or Science and the Future, Kegan Paul, Trench Trubner & Co. Ltd., London; Bertrand Russell (1924). Icarus or The Future of Science, Kegan Paul, Trench Trubner & Co. Ltd., London.

2 Landscapes of Democracy, an emerging ethnographic research project convened by Endre Dányi (Department of Sociology, Goethe University, Frankfurt am Main) and Michaela Spencer (Northern Institute, Charles Darwin University), focussing on institutional political practices of liberal democracy in Germany and Northern Australia.

3 Bruno Latour (2004). “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry 30, no. 2, 225-248; see also Bruno Latour (2013). An Inquiry into Modes of Existence: An Anthropology of the Moderns, Massachusetts: Harvard University Press, p 2-6.

4 Bruno Latour (2008). What is the Style of Matters of Concern? Amsterdam: Van Gorcum

5 Bruno Latour (2013). An Inquiry into Modes of Existence: An Anthropology of the Moderns, Massachusetts: Harvard University Press

6 Wendy Brown (2015). Undoing the Demos. Neoliberalism’s Stealth Revolution, New York: Zone Books.

7 See Verran 2015: 376 (Helen Verran, (2015). “Enumerated Entities in Public Policy and Governance” in Mathematics, Substance and Surmise. Ernest Davis and Philip Davis (eds). Springer International Publishing Switzerland.)

8 Fabian Muniesa, Liliana Doganova, Horacio Ortiz, Álvaro Pina-Stranger, Florence Paterson, Alaric Bourgoin Véra Ehrenstein, Pierre-André Juven, David Pontille, Başak Saraç-Lesavre, Guillaume Yon (2017). Capitalization. A Cultural Guide, Paris: Presses des Mines, Collection Sciences.

9 Maria Puig de la Bellacasa, (2017), Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press; Social Studies of Science 2015, Vol. 45(5) “The Politics of Care in Technoscience”; and Annemarie Mol, Ingunn Moser, Jeannette Pols (eds.)(2015) Care in practice: on tinkering in clinics, homes and farms Bielefeld: transcript.

The little tools of difference

Alternative facts: quite a problem for STS and cognate disciplines. Here’s my situated halfpenny worth.

Fifty years ago, the discipline learned from Kuhn that correspondence theory works poorly, and opted instead for the workability of pragmatism. Forty years ago, this became the shaping of science by social interests. There wasn’t room for distortion (there was no benchmark for truth), but perhaps it was possible to distinguish between legitimate and illegitimate interests. Thirty years ago, the discipline was thinking about performativity. Truths are truths because realities are enacted to match them in locations such as laboratories. And then more recently some started to try to distinguish appropriate from inappropriate ways of knowing. Perhaps different kinds of expertise deserve different rights? Or different modes of existence are in need of diplomacy?

As I said, a situated story. But it also seems to me that in its fifty years, parts of the STS that I know best have started to come full circle. First it tore down philosophical stipulations about scientific method, and began instead to describe the complex practices of science. Always, to be sure, in the face of accusations that it was undermining truth, propriety, and/or civilisation. But now STS is being tempted by stipulation again. Not cognitive stipulation, but social stipulation. We are being asked to order the institutions of knowing so that something like truth will triumph. It is as if a new Comtean class of social intellectuals has stepped up to the podium, or perhaps I mean into the agora.

Okay, with alternative truths on the rampage, politics is deeply disheartening. But as you can tell from the irony, I doubt that we need new forms of rule-based stipulation. First, these lack a degree of political realism. People sometimes attend to intellectuals, but in politics, capital P, STS is just a sideshow, so who is going to listen? I’m not sure. Second, in one way I am also grateful that this is so, for while STS can surely make a difference, the prospect of STS as philosopher king is pretty scary. Do we think that we are to be trusted to regulate the generation of truth? Do we think that any elite is to be trusted? Sorry, but I am a sceptic. And then, three, perhaps most strangely, I fear we are forgetting STS 101. We are forgetting that the world and its institutions are contingent, that there is no purity, and that rules do not govern; that the world and its truths are messy practices and struggles. But if this is right then stipulation is a prince that will never rule. But there is an alternative, for we are at our strongest when we work to understand those struggling practices and their specificities; and (remembering that whatever we do is also performative) when we try to intervene in modest ways in particular places. Directly by standing up and shouting, or by writing, voting, commenting, criticising, persuading or seducing. (The modes of analytical-political practice are many). Or indirectly (perhaps this is our unique selling proposition) by re-articulating and reframing. By chipping away at common sense to show that other ways of being might be possible if (for instance) you want better disability care, or clean water in the villages of Zimbabwe.

Notice that I have talked of intervening. Large parts of STS are good at this. They know about mess and contingency. They know themselves to be situated too. And then, fired by a mix of curiosity and outrage, they make specific interventions about: nuclear waste; toxic dumping; public interpretations of science; the social agendas of primate research; bioprospecting; the struggle between care and control in health; technological genderings; dominatory legal practices; epistemicides; the colonial character of some environmentalisms; the dangers of monopoly claims to order; and land-use pressures on indigenous agricultures. But you don’t need my list. Please make your own.

The lessons? Again, please list your own. But for me the answer leads to specificity and difference.

Specificity. General nostra and high moral indignation are exciting but mostly (there are no rules) their reach is limited. General anything won’t do because there is no general. There are specificities pretending to be generalities, yes, but that is different. Indeed, this is precisely the problem. For STS tells us that such generalities are done and redone here, or there, in particular places and practices. In this newspaper, classroom, web site, office, ballot box, farm or at this border-crossing point. Always in specific material practices. And STS is good at understanding such specificities. It is good at insisting that practices and their truths are not general. And it is also pretty good at crafting possible alternative practices too. Not alternative truths but alternative practices. Creating what Kristin Asdal calls little tools. But what might it craft? What kinds of little tools?

That is for you to say. What kind of a difference do you want to make? But in the face of alternative truths I find myself joining those who craft specific practices for recognising and articulating difference. The object being to generate the discomforts of friction by creating practices of multiplicity. Because, and here we come to the point, performative success is easy when it encounters no resistance. Alternative truths prosper in social and technical monocultures that choke whatever does not fit, ecosystems populated by little tools that seal off otherness. But if this is right then well-intentioned general rules are less important than the proliferation of friction-making material tools for opening up and articulating uncomfortable differences. I am saying that we need to put effort into serious attempts to craft and seed these in endlessly many specific places and sites of struggle. Of course there is no single answer. But our discipline knows about the material specificities of struggle. It knows about practices for disrupting self-evidence and making disconcerting differences. And it knows about interference. In short, it knows about creating little tools for disrupting alternative truths. Clearly there is urgent work like this to be done.

Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles

Since the 17th century, scientific knowledge has been produced through a collective process, involving specific technologies used to perform experiments, to regulate modalities of participation for peers or lay people, and to ensure the validation of facts and the publication of major results. In such a world, guided by the quest for a new kind of truth against previous beliefs (see Howe’s piece, this issue), various forms of misconduct – from subtle plagiarism to the outright fabrication of data and results – have largely been considered minimal, if not nonexistent. Yet some “betrayers of the truth” have been alleged in fraud cases at least from the 1970s onward (Broad and Wade 1982), and the phenomenon is currently a growing concern in many academic corners – scientific journals, funding bodies, learned societies, analysts – leading to an extensive literature. More recently, the revelation of an industry of publications manipulated behind the scenes by pharmaceutical firms (Sismondo, 2009) has strengthened doubts about the reliability of the “gold standards” of proof, while the disappointing results of specifically designed replication studies have led to a replication crisis in some experimental disciplines (e.g. psychology, clinical medicine). Simultaneously, the growing industry of “predatory publishing” has reshaped the very definition of a peer-reviewed journal (Djuric, 2015).

In this context, “post-publication peer review” (PPPR) has often been lauded as a solution, its promoters valuing public debate over in-house validation by journals, and the judgment of a crowd of readers over that of a few selected referees (Pontille and Torny 2015). Along those lines, the public voicing of concerns about a result, a method, a figure or an interpretation by readers, whistleblowers, academic institutions, public investigators or authors themselves has become commonplace. Some web platforms, such as PubPeer1, have even developed alarm-raising and fact checking as new forms of scholarly communication. Faced with numerous alerts, journals have adopted dedicated editorial formats to notify their readers of emerging doubts affecting articles they had published.

This short piece focuses exclusively on these formats, which consist in “flagging” articles to mark their problematic status. Acting and writing are tightly coupled here: to flag an article is to publish a statement about an original paper, in the same journal, as part of its publishing record2. Instead of crossing out texts like deeds in law, or archiving the various versions of a single text as in Wikipedia, the flag statement does not alter the original paper. As a result, the links between the two documents and the free availability of the statement designed to alert audiences are crucial3.

In the last twenty years, three ways of flagging articles have become commonly used by journals: expression of concern, correction, and retraction. These written acts enact peculiar forms of verification that occur alongside, even against, the traditional fact checking process in science. Designed to alert the journal’s readership, they are not meant to test the accuracy of published articles as in usual scientific research or misconduct investigations. Rather, they perform a critical, public judgment about an article’s validity and reliability.

An “expression of concern” casts doubt on an article and warns readers that its content raises issues. In most cases, it describes information that has been given to the journal, leading it to alert its readers to an ongoing investigation, but it does not directly pronounce on the validity of the work4.

A “correction”, by contrast, always states that the core validity of the original article remains, even though parts of its content have been lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term “mega-correction”5 to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, must validate them before the publication of the so-called (mega)correction. If they do not, journals sometimes publish editorial notes instead of corrections.

Finally, a “retraction” aims to inform readers that the article’s validity and/or reliability no longer stands. Far from being an erasure, it is conceived of as the final step in the publishing record of the original article. A retraction is either conducted in close collaboration with the authors6, or against them7, upon the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…).

Briefly described, these written formats dedicated to flagging articles raise three main questions: their regulation, their timeframe and their reversibility. As in other matters regarding academic publication, organizations of journal editors and publishers have issued many recommendations about these new formats: when to publish them, who should be contacted beforehand, what should be included in the text of the flag, who should sign them (Teixeira da Silva and Dobranszki, 2017). COPE has even produced gigantic flowcharts8 aimed at helping editors; nevertheless, according to the literature, editors have not been very compliant with them (Hesselman et al, 2016).

Moreover, the guidelines focus on very specific decision moments and do not treat the temporal dynamics of the flags: an expression of concern can be written 10 or 20 years after9 the original paper, long after it has had an impact on the literature; or, conversely, it may be followed by a rapid correction by the authors, then a second expression of concern and finally a retraction. It may also lead to “in limbo” papers, which persist with their expression of concern for years, nobody seeming able to resolve the concern, or even to care about it.

What, then, of the reversibility of these flags? Corrections can themselves later be corrected, an expression of concern can itself be retracted after 15 years10, and some have proposed that “good faith” retractions could be combined with the publication of “replacement”11 papers, while other retractions would remain permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations take no notice of their “zombie” status (Bar-Ilan and Halevi, 2017).

Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles a positive trend (Fanelli, 2013). In this vision, the very concrete effects of PPPR reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, any piece of scientific information comes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively pretend that peer-reviewed knowledge should be the anchor with which to face a “post-truth” world?

Indeed, the sociology of ignorance has shown us that merchants of doubt (Oreskes and Conway, 2011) have built sophisticated ways to fight against scientific consensus, while undone science (Hess, 2016) deprives our societies of the benefits of specific knowledge. For these authors, good science, i.e. organized facts emerging from a mass of publications, is a precious commons that has to be nurtured and protected. By contrast, for most STS scholars, science is what results once a scientific paper is published (see Fuller's piece, this issue). Despite their differences, both agree on the importance of focusing on what can be done with scientific articles, whether or not this is apprehended in normative terms.

Through this piece, we have suggested that STS should also add the political economy of academic publications to its “to do list” in order to try to make small differences (see Law's piece, this issue) in the “post-truth” debates. We should do so for three reasons: one, it is a key element in the changing definition of truthiness; two, it highlights the continuing inventions of scientific collectives to build technologies of factization; three, the current movement of science reform, of which article flags are a part, could be used, and much more effectively than classical STS results, to defund and deny research12, which is currently at the heart of the tactics of “alternative facts” promoters.

The social order of facts vs. truths

Mr. Donald Trump and the notions of post-truth, alternative facts, fake news, etc. have become icons of a social order that has long been creeping through Western democracies. The faces of Pia Kjærsgaard (Denmark), Marine Le Pen (France), Frauke Petry (Germany) and Geert Wilders (the Netherlands) are among the European icons of attempts to establish a new social order based on populist thought. They are characterized, among other things, by an opportunistic engagement with scientific knowledge.

Merton (1957) noted long ago that scientists generally tend to feel that politics ignores their findings. It is nothing new that the political use of scientific knowledge is selective. Rather than caring about scientists' feelings, it is crucial in the current situation to care about what kind of social order scientific expertise contributes to establishing and maintaining, and to what extent science as an institution matters to democratic societies. Shapin and Schaffer (1985) convincingly reconstructed the early days of natural philosophy, the predecessor of science. Natural philosophy and experimental knowledge production developed in 17th-century England, which was haunted by civil wars. With the function of constructing facts that were free of religious, political and ethnic interests, natural philosophy was to constitute the epistemic foundation for a united society; something religious and ethnic truths had not been able to deliver. Through its three constitutive technologies (linguistic, social and material), the facts constructed in natural philosophy, and later by science, would be unfaithful to all religions and ethnic traditions and to their attempts to install their truths in society. Science's constructed facts would serve all of them just as much as they would be a nuisance to all of them. In such a society, people of different religious beliefs and different ethnic kin could refer to the same scientific facts as a shared common ground. That facts were later revealed to be infused with political, economic, personal and other powers and interests does not change their core function as a common ground.

Why not? Because, in contrast to the truths forwarded by religious, ethnic and other social groups, facts can be challenged by evidence. For our current discussions, the distinction between truths and facts in my re-telling of Shapin and Schaffer's account is crucial. While truths are mobilized by authoritative institutions, such as churches and monarchs, facts are produced through the aforementioned constitutive technologies of science. Facts rely on evidence and can thus be challenged by new facts that are produced in comparable, scientific ways and that also forward evidence. Truth, on the other hand, needs no evidence, and cannot be challenged. Truth is true, full stop. Unless you don't believe it, in which case it is just rubbish.

In this sense, it is incorrect to characterize Mr. Trump's epistemic ethics as post-truth. Trump has no trouble with truths. He has trouble with facts. Populist ideologies rest on convictions that are not open to factual tests. They cannot be challenged by evidence. The utterance that Mr. Trump's inauguration had a larger audience than that of his predecessor was forwarded as a truth. It was not a fact, since it did not rely on evidence that could potentially be challenged. It was not even an alternative fact. It was a truth. Which does not mean that it was true. That is the point with truths: you cannot test whether they are true. Only facts can be tested – by empirical evidence.

Because truths can be neither tested empirically nor discussed, they end conversation and debate. Truths are thus dangerous as means of political power in democratic societies. The replacement of truth by facts is not just a historical matter of post-revolutionary England centuries ago. The still existing power of facts over truths in politics is a central constituent of maintaining the social order of democracy. By mobilizing his utterances as truths, Mr. Trump challenges this social order and ends conversation with people who do not share the mobilized truths. The social order of truth has people divided into separate social groups, each caring for its own truth, each protecting its own truth. In the worst case, even fighting for their truth, against others.

The role of science as the core producer of constructed facts is crucial for the social order of democracy. This social order needs facts that are constructed and that are mobilized as such not only within science, but very much so in politics. Science studies' emphasis on the constructed nature of scientific facts does not undermine the function of science. On the contrary, it supports it. It contests scientific approaches that like to talk about and treat their facts as truths, and that in doing so challenge the social order that grants science a crucial – if only modest – function in democratic societies. Contemplating the Science Wars, Latour (2004) noted that “we need to get closer to facts, not farther away from them”. Indeed, we need to get closer to constructed facts, and farther away from truths. This is our means to fight populism, and this is why science studies is most needed.