For 350 years, the scientific world has been shaped by a model of publishing research results that has emphasised journals (and their editors) as the unique forum for the accredited source of scientific information. This model was based entirely upon the skills of editors and, more recently, peer reviewers to select, evaluate, edit, and publish the articles describing the most relevant research results. Since the foundation of the ‘Journal des Savants’ in 1665 (no longer active) and ‘Philosophical Transactions’ (started in 1665 and still publishing: https://royalsociety.org/journals/publishing-activities/publishing350), scientists have known that the only way to convey information to the community is to write a manuscript, send it to an appropriate journal, and wait, wait... first for reviewers’ opinions, suggestions, and comments, and then for the final editorial decision on the manuscript.
Getting published in today's competitive environment is a matter of survival for the typical research scientist: without published papers, no certified (peer-reviewed) information is delivered to one’s peers; no tenure or promotion is awarded; no funding is secured; and no result is returned to society. That is the message we have learnt from our mentors, graduate supervisors, funding agencies, research centres, and university managers. Competition and the attribution of priority for discoveries are deeply rooted in scientific activity, and competition is clearly fiercer nowadays than three centuries ago. The main consequence of this publishing framework is that scientific facts do not flow immediately from their generators (scientists) to their users (research peers and then society in general).
The advent of digital information technology, with its low cost, fast pace, instantaneous dissemination, lack of geographic barriers and frontiers, wide reach, lack of centralised control, and continuous innovation, has changed this scenario and brought possibilities not imagined a few years ago: immediate visibility, accessibility, and continuous tracking of scientific articles. From the point of view of traditional journals, digital innovation appears to be a disruptive event, posing many apparent threats to the current, consolidated publication model, in which decisions are highly concentrated in the hands of editors and the editorial system ‘certifies’ published scientific articles. So far, this system has been considered the worst form of publication except for all the other forms that have been tried. One enduring question remains: how to decide what is good for publication and what should be rejected.
Much intellectual effort will certainly have to be deployed to resolve it, but new insights are emerging from the now ubiquitous storm of digital innovation, which provides an alternative pathway: a very large group of individuals (research peers, post-docs, graduate students, and readers from other fields) interactively reads through open scientific articles and adds comments, corrections, and suggestions for improving the text, with all of these activities being shared, tracked, measured, and permanently recorded on servers at relatively low cost to funding agencies and society in general. This is the closest we get to a ‘permanent collective review’ of the scientific literature.
Important driving forces oppose each other in this model: on the one hand, funding agencies and public investors need immediate and transparent results; on the other, mainstream publishers would prefer this movement to proceed at the slowest possible pace.
This resistance notwithstanding, how feasible is the implementation of such an innovation in the near future? Well, it is right around the corner. Preprints are the start-up of this new game! Learning from more than 20 years of successful open publishing practice on the physics arXiv (https://arxiv.org/help/general), some innovators have launched initiatives that are gradually changing the science publishing landscape (in the biosciences, bioRxiv and PeerJ Preprints are worth mentioning). For most fields of knowledge, there is now some kind of preprint server in full operation (for a good list, see https://osf.io/preprints/). The consolidation of such a publishing model, of course, poses challenges to the editors of traditional journals and their publishing practices. The most pressing consequence is a change in the ‘modus operandi’ of the editor: rather than passively controlling the flow of manuscripts to the journal, in other words, choosing which ones are admitted and which go into the garbage, editors must carefully watch preprints (and the impact they are causing!) and then convince authors to submit them to their journals. This is a far more active and challenging role for editors, who have grown very comfortable with the three-century-old publishing model of ‘I am knocking at your door, please let me in’!
The time has come to rethink the current framework for publishing research results in traditional journals such as Memórias do Instituto Oswaldo Cruz. For more than 100 years (we will be 110 next 12th December), Memórias has been committed to offering both authors and readers the most up-to-date science publishing practices. It could not be otherwise in these times of ‘digital storm’. The editorial board of Memórias has decided that this journal will now accept articles that have already been deposited as preprints. This decision is in line with recent developments in the scientific editorial scenario, but it also seeks to lend support to a relevant decision by an important player in Brazilian research publication: SciELO has embraced preprints. Last February, this organisation announced its strategic plan to launch a preprint initiative (details can be found here: http://blog.scielo.org/blog/2017/02/22/scielo-preprints-a-caminho/). This is very welcome news, and we eagerly await its start-up. Memórias wants to be a partner in, and to collaborate towards, the success of SciELO preprints. This will certainly be an important event in the recent history of Brazilian scientific publishing.
Adeilton Brandão
Claude Pirmez