For some years now, academic science has suffered from a disease: an enormous increase in the number of scientific publications without a corresponding advance in knowledge. Findings are sliced as thin as salami and sent to different journals to produce more articles.
These spurious achievements of academia, mountains of unappreciated and unread publications, are without doubt a glut of write-only articles. It is a publish-and-perish process in which most works simply get lost.
If we think of academic articles as a kind of scientific currency, backed by the gold bars held at the central bank of true science, then we are witnessing a phenomenon of article inflation, an authentic scientometric bubble.
The situation was described as early as 1981 in the journal Science, in a piece criticizing the shrinking length of articles and the abuse of so-called least publishable units (LPUs). Things have only gotten worse since then.
Why scientific publications are necessary
We do not question the need to publish scientific results. Science is a public matter that must be discussed in the public square, that is, in workshops, conferences and scientific journals.
Moreover, today anyone can publish anything in any corner of the web. Prior screening by a responsible program committee or editorial board is therefore beneficial.
Filtering adds value insofar as it makes the core of science (the gold bars) more accessible, precisely because little remains after the filter. Conversely, the larger and less filtered the bubble, the less accessible its core becomes.
Scientific publications should be a remedy for information overload (a term popularized by Alvin Toffler in his 1970 book Future Shock). Instead, academia has created an artificial need to publish, driven not by the advancement of knowledge but by the advancement of professional careers. Academia has succumbed to infoxication.
Scientific productivity metrics
Science is expensive. Governments and private investors rightly expect that the money spent on scientists’ salaries will pay off. It is therefore desirable to promote good scientists and research centers, while discouraging bad ones.
Now, in our modern industrial society, we think we can achieve this goal by measuring productivity. But scientific productivity is not like industrial productivity. Ideas cannot be measured like bricks.
The current scientific productivity metrics are aimed at evaluating the quality of the publications and, through them, the quality of the researcher.
The quality of a publication is estimated by the impact factor of the journal in which it appears: roughly, the average number of citations that the journal’s recent articles have received (a worked formula follows the list below). The assumptions implicit in this measurement procedure are:
A publication is good if it is published in a good journal.
A journal is good if it has received enough attention from scientists.
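To make the arithmetic concrete, here is the standard two-year definition used by Journal Citation Reports (the text above does not spell out the exact window, so this is offered as the usual convention):

\[
\mathrm{IF}_y = \frac{C_y}{N_{y-1} + N_{y-2}}
\]

where \(C_y\) is the number of citations received in year \(y\) by items the journal published in the two preceding years, and \(N_{y-1}\) and \(N_{y-2}\) are the numbers of citable items published in those years. For example, a journal that published 200 articles over 2013 and 2014, and whose articles collected 600 citations in 2015, would have a 2015 impact factor of 600/200 = 3.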
In other words, a positive correlation between impact factor and scientific quality is assumed. The idea is interesting, but it has many negative side effects: popularity is favored over quality, fast science is promoted, the Matthew effect is triggered, local and regional forums are destroyed, and so on.
The root of the problem
The main problem underlying all this is that the impact factor is used as an indicator of quality. Supporters of scientometrics will argue that, for all its shortcomings, it is the best system we can have because it is based on objective measurements. This is reminiscent of the drunk who searched for his keys under the lamppost because it was the only place with light, even though he had in fact lost them several meters away.
Scientometrics exhibits the tendency inherent in all performance indicators: to measure what can be measured and set aside what cannot, so that the measurable acquires excessive importance.
Scientometrics can probably avoid some of its worst effects by improving its measurement systems. But in the end the problem is the very conception of academia as a feedback system: the insistence on measuring scientific productivity and feeding those measurements back into the system. This is exactly what Goodhart’s Law states: when a measure becomes a target, it ceases to be a good measure.
It is practically inevitable: scientists and publication venues will adapt to ensure their own survival, developing strategies such as salami science, self-citation, citation rings among friends, and so on.
All of these strategies combine to create an unethical and unscientific culture in which political skills are rewarded too much, while imaginative approaches, unorthodox ideas, high-quality results and logical arguments are rewarded too little. And all of it inflates the scientometric bubble, making the gold bars of the most valuable science less accessible.
We cannot do without human judgment
There is only one way out of this vicious circle: to recognize that quality is something that essentially cannot be measured, that it lies beyond numbers and algorithms, and that it can only be judged by humans, despite the fallible nature of their judgment.
The postulate that there is a positive correlation between impact factor and scientific quality is far from proven. The belief that citation statistics are inherently more accurate than human judgment, and therefore outweigh the possible subjectivity of peer review, is unfounded: “Using just the impact factor is like using just weight to judge the health of a person.”
Objective measurements can certainly aid human judgment. But we delude ourselves if we think we can avoid corruption and achieve blind justice by means of mathematical formulas.
There is no algorithmic solution to the problem of measuring scientific quality. That is why the San Francisco Declaration on Research Assessment emphasizes “the need to eliminate the use of journal-based metrics, such as the Journal Impact Factor, in funding, appointment and promotion decisions; the need to assess research on its own merits and not on the basis of the journal in which it is published.”
It is much easier to collect a few figures than to think seriously about what a researcher has accomplished. As Lindsay Waters says, it is simpler to rely on anonymous numbers to fire someone or reject a research project than to have to give them a reasoned explanation of a negative value judgment.
The human factor in science assessment
Our main interest is to raise awareness about the problem. Thousands of scientists have signed the San Francisco Declaration, but we believe the message deserves to be spread more widely: the scientometric bubble is unethical and detrimental to science.
The overwhelming weight that numbers and formulas are acquiring in the academic world works to the detriment of genuine evaluation of the quality of individual works. We need an alternative to the publish-or-perish culture.
But of course, evaluating through impact factors and journal rankings is so cheap… In fact, the real beneficiaries of numerical evaluation are neither researchers nor science itself, but the evaluation agencies, which can replace scientists (capable of peer review) with mere bureaucrats (capable of counting citations).
There is also a threat to the ethical values that shape how researchers approach their scientific activity. This perverse way of evaluating scientific productivity encourages scientists to worry about publishing so as not to perish, instead of pursuing truer and more reliable knowledge.
Researchers, pressed to survive within this system, will prefer popularity to intrinsic value and will consider where to publish more important than what to publish.
The obsession with finding quantitative and algorithmic methods to evaluate scientific productivity hides an intellectual cowardice: the evaluator’s abdication of the responsibility to make a personal judgment on the scientific quality of the work under evaluation. The evaluator thus ends up becoming an obedient but absurd bureaucrat who merely applies mathematical formulas. Replacing the human factor with an objective metric in the evaluation of science will not prevent corruption.
Human judgments are fallible, but at least they don’t promote this scientometric bubble that threatens to paralyze the advance of knowledge by hiding the gold bars of true science under a huge overload of publications.
This article is a translated and abridged version of “The scientometric bubble considered harmful”, published in Science and Engineering Ethics in February 2016. Interested readers may also consult the original English manuscript.