In an ideal world, publishing the results of scientific research might be compared to an algorithm that includes the following steps:
a) obtain all the resources needed to complete a project (on time!) for testing a scientific hypothesis;
b) write a concise, objective text describing the tested hypothesis, methods, data collection, analysis to reject or support the hypothesis, and conclusions;
c) be aware of your responsibility as a scientist, and strictly comply with the ethical statements and good practices of science communication;
d) decide whether you will communicate the research results through a scientific peer meeting, a book, or a specialised journal;
e) choose an appropriate scientific meeting (congress, symposium, conference), a publisher for the books, or a specific journal (for single articles);
f) regardless of the chosen medium, carefully delimit the research community for which you think the results are most suitable and among which they might generate interest and feedback;
g) wait for the comments, critical analysis, and peer reports on either successful or failed attempts to reproduce your newly published scientific results;
h) if necessary, respond kindly and rigorously to the comments, criticism, and peer reports on failed reproducibility attempts;
i) do not be angry with or desperate about the actions and demands of step h;
j) continue to think and identify new scientific problems, and try to solve them through new projects and hypothesis testing;
k) go to step a.
It is expected that the above algorithm will be rigorously executed by a diligent scientist. The real world, however, has imperfections. In this world, one characteristic can sometimes induce a bypass of these steps: the intense competition for resources and prestige among scientists. This is not a problem ipso facto. Regarding the dualism of scarce resources and increasing demand, the line of economic thought initiated by Adam Smith points out that competition leads to the efficient use of scarce resources, which clearly benefits society (Stiglitz 1991).
What does this have to do with scientific publishing in the contemporary research environment? After all, science and its derived or associated processes (e.g. editorial activity, research communication, scholarly associations) are self-regulated; thus, there is no need for an external agent to supervise the process. Scientists generate new facts and themselves decide which of these facts are worth publishing (most editors of science journals, as well as peer reviewers, are practicing scientists). What is wrong with this? Well, as far as the formal regulation of competition is concerned, there is no external agent (free of any conflict of interest) to take care of what we might call ‘the virtuous code of science practice’. Indeed, the emergence of research integrity committees in some countries (e.g. the Office of Research Integrity, ori.hhs.gov, of the US Department of Health and Human Services) is the beginning of such regulation. Ideally, however, science is a borderless enterprise, which means that worldwide regulation is not easily feasible or even possible.
The paramount question we may now ask is: Is there really a need for such a regulator? Some facts behind the doors of the scientific world might prompt us to think so (indeed, the disclosure of most of these facts led to the creation of research integrity committees), namely:
1. plagiarism and absence of credit for others’ work;
2. fraudulent papers submitted as original research work;
3. theft of data/projects/ideas from fellow lab colleagues or collaborators;
4. use of anonymous peer review to prevent competitors from publishing their research work;
5. lack of acknowledgement of people who have supported the research work;
6. inflation of the scope of research work, for example, claiming to have solved a problem that is either beyond the capacity of the methodology used or not covered by the experimental design;
7. in the case of publishers (important agents in the scientific world): launching journals for the sole purpose of making easy money, without commitment to publishing ethics, scientific rigour, or professionalism;
8. submission of manuscripts to journals recognised as ‘money gatherers’ (i.e., predatory journals);
9. giving too much value to the ‘publication score equation’: Successful researcher A = X papers published in T years with Y impact factor and Z citations;
10. fractionation of research results into many ‘new scientific articles’.
Because some of these practices are accepted and even encouraged in some countries (for example, publishing many papers within short periods of time is a requirement for a scientist to be considered successful in Brazilian universities and research centres), any research integrity committee/office can only partially deal with these issues. This directs the problem back to the scientists themselves: they must reach consensus on what is legitimate and acceptable in ideal science practice.
The majority of the practices listed above are clearly unacceptable (the first eight items, for example), but the last two still have many advocates. Without undermining the power of competition in promoting the efficient use of scarce and finite resources, scientists and their organisations must convey a clear message to the world: even though some practices may seem legitimate to some members of the scientific community, they cannot be accepted under the ‘common good’ principle, that is, that actions should result in benefit to society.
Adeilton Alves Brandão | Editor
Stiglitz JE. The invisible hand and modern welfare economics. NBER Working Paper No. W3641. 1991. Available from: http://www.nber.org/papers/w3641.pdf.