Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles

24 Apr
David Pontille, Didier Torny

Since the 17th century, scientific knowledge has been produced through a collective process, involving specific technologies used to perform experiments, to regulate the modalities of participation by peers or lay people, and to ensure the validation of facts and the publication of major results. In such a world, guided by the quest for a new kind of truth against previous beliefs (see Howe's piece, this issue), various forms of misconduct – from subtle plagiarism to the outright fabrication of data and results – have largely been considered marginal, if not nonexistent. Yet "betrayers of the truth" have been alleged in many fraud cases at least from the 1970s onward (Broad and Wade 1982), and the phenomenon is currently a growing concern in many academic corners – scientific journals, funding bodies, learned societies, analysts – generating an extensive literature. More recently, the revelation of an industry of publications manipulated behind the scenes by pharmaceutical firms (Sismondo, 2009) has strengthened doubts about the reliability of the "gold standards" of proof, while the disappointing results of specifically designed studies have led to a replication crisis in some experimental disciplines (e.g. psychology, clinical medicine). Simultaneously, the growing industry of "predatory publishing" has reshaped the very definition of a peer-reviewed journal (Djuric, 2015).

In this context, "post-publication peer review" (PPPR) has often been lauded as a solution, its promoters valuing public debate over in-house validation by journals, and the judgment of a crowd of readers over that of a few selected referees (Pontille and Torny 2015). Along those lines, the public voicing of concerns about a result, a method, a figure or an interpretation by readers, whistleblowers, academic institutions, public investigators or the authors themselves has become commonplace. Some web platforms, such as PubPeer1, have even developed alarm-raising and fact checking as new forms of scholarly communication. Facing numerous alerts, journals have generalized dedicated editorial formats to notify their readers of emerging doubts affecting articles they have published.

This short piece focuses exclusively on these formats, which consist in "flagging" certain articles to mark their problematic status. Acting and writing are tightly coupled here: to flag an article is to publish a statement about the original paper, in the same journal, as part of its publishing record2. Instead of crossing out texts, as with deeds in law, or archiving the various versions of a single text, as in Wikipedia, the flag statement does not alter the original paper. As a result, the links between the two documents and the free availability of the statement designed to alert audiences are crucial3.

Over the last twenty years, three ways of flagging articles have come into common use by journals: the expression of concern, the correction, and the retraction. These written acts enact peculiar forms of verification that occur alongside, even against, the traditional fact-checking process in science. Designed to alert the journal's readership, they are not meant to test the accuracy of published articles, as in usual scientific research or misconduct investigations. Rather, they perform a critical, public judgment about an article's validity and reliability.

An "expression of concern" casts doubt on an article and warns readers that its content raises some issues. In most cases, it describes information that has been given to the journal, which led it to alert its readers to an ongoing investigation, but it does not directly pronounce on the validity of the work4.

By contrast, when it comes to a "correction", it is always stated that the core validity of the original article remains, some parts of its content being lightly or extensively modified. In some cases, the transformations have been carried to such an extent (e.g. every figure has been changed) that some actors have ironically coined the term "mega-correction"5 to characterize them. Contrary to an expression of concern, the authors of the article are fully aware of these modifications and, even if they have not written it, necessarily validate them before the publication of the so-called (mega)correction. If they do not, journals sometimes publish editorial notes instead of corrections.

Finally, a "retraction" aims to inform the readership that the article's validity and/or reliability no longer stands. Far from being an erasure, it is conceived of as the final step in the publishing record of the original article. A retraction is conducted either in close collaboration with the authors6, or against them7 upon the request of someone else who is explicitly named (e.g. a journal editor-in-chief, a colleague, a funding body…).

Briefly described, these written formats dedicated to flagging articles raise three main questions: their regulation, their timeframe and their reversibility. As in other matters regarding academic publication, organizations of journal editors and publishers have issued many recommendations about these new formats: when to publish them, who should previously be contacted, what should be included in the text of the flag, who should sign them (Teixeira da Silva and Dobránszki, 2017). COPE has even produced gigantic flowcharts8 aimed at helping editors; nevertheless, according to the literature, editors have not been very compliant with them (Hesselmann et al., 2016).

Moreover, guidelines focus on very specific decision moments and do not address the temporal dynamics of the flags: an expression of concern can be written 10 or 20 years after9 the original paper, long after it has had an impact on the literature; or, conversely, it may be followed by a rapid correction by the authors, then a second expression of concern and finally a retraction. It may also lead to "in limbo" papers, which persist with their expression of concern for years, nobody seeming able to resolve the concern, or even to care about it.

What, then, is the reversibility of these flags? Corrections can later be corrected themselves, an expression of concern can itself be retracted after 15 years10, and some have proposed that "good faith" retractions could be combined with the publication of "replacement"11 papers, while the others would remain permanent. Besides, there is life after death for scientific publications: retracted papers are still cited, and most of their citations take no notice of their "zombie" status (Bar-Ilan and Halevi, 2017).

Instead of incorrectly equating the prevalence of retractions with that of misconduct, some consider the proliferation of flagged articles a positive trend (Fanelli, 2013). In this vision, the very concrete effects of PPPR reinforce scientific facts already built through peer review, publication and citation. Symmetrically, as every published article is potentially correctable or retractable, any piece of scientific information comes with uncertainty. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively pretend that peer-reviewed knowledge should be the anchor with which to face a "post-truth" world?

Indeed, the sociology of ignorance has shown us that merchants of doubt (Oreskes and Conway, 2011) have built sophisticated ways to fight against scientific consensus, while undone science (Hess, 2016) deprives our societies of the benefits of specific knowledge. For these authors, good science, i.e. organized facts emerging from a mass of publications, is a precious commons that has to be nurtured and protected. By contrast, for most STS scholars, science is what results once a scientific paper is published (see Fuller's piece, this issue). Despite their differences, both agree on the importance of focusing on what can be done with scientific articles, whether or not it should be apprehended through normative views.

Through this piece, we have suggested that STS should also add the political economy of academic publication to its "to do list" in order to try to make small differences (see Law's piece, this issue) in the "post-truth" debates. We should do so for three reasons: one, it is a key element in the changing definition of truthiness; two, it highlights the continuing inventions of scientific collectives to build technologies of factization; three, the current movement of science reform, of which article flags are a part, could be used – much more effectively than classical STS results – to defund and deny research12, which is currently at the heart of "alternative facts" promoters' tactics.


Bar-Ilan, J., & Halevi, G. (2017). Post retraction citations in context: a case study. Scientometrics (online first).

Broad, W., & Wade, N. (1982). Betrayers of the Truth. Fraud and Deceit in the Hall of Science. New York: Simon & Schuster.

Djuric, D. (2015). Penetrating the omerta of predatory publishing: the Romanian connection. Science and Engineering Ethics, 21(1), 183-202.

Fanelli, D. (2013). Why Growing Retractions Are (Mostly) a Good Sign. PLOS Medicine, 10(12): e1001563.

Hesselmann, F., Graf, V., Schmidt, M., & Reinhart, M. (2016). The Visibility of Scientific Misconduct: A Review of the Literature on Retracted Journal Articles. Current Sociology (online first).

Hess, D. J. (2016). Undone Science: Social Movements, Mobilized Publics, and Industrial Transitions. MIT Press.

Oreskes, N., & Conway, E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Publishing USA.

Pontille, D., and Torny, D. (2015). From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review. Human Studies, 38(1): 57-79.

Sismondo, S. (2009). Ghosts in the Machine: Publication Planning in the Medical Sciences. Social Studies of Science, 39(2), 171-198.

Teixeira da Silva, J. A., & Dobránszki, J. (2017). Notices and Policies for Retractions, Expressions of Concern, Errata and Corrigenda: Their Importance, Content, and Context. Science and Engineering Ethics, 23 (2): 521-554.

Author information

David Pontille is a senior researcher at the Centre de Sociologie de l'Innovation in Paris (CNRS UMR 9217). His interests focus on writing practices, the technologies of research evaluation, and the maintenance of infrastructures dedicated to urban mobility.
Didier Torny is a senior researcher at the Centre de Sociologie de l'Innovation in Paris (CNRS UMR 9217). He currently develops a political economy of academic publication, while continuing his research on controversies affecting preventive health policies.
