This post is also available in Dutch.
When I was younger, my idea of doing a PhD was spending years studying and thinking, developing and testing theories, like the great inventors I learned about in school. But once I started my own PhD, I realized that reality is quite different.
“Publish or perish.” “Four years, four papers.” You hear these phrases often in academia, and they allude to the constant pressure researchers feel to produce output (i.e., publications). So where does this pressure come from?
A scientist’s worth
Research costs money. To secure funding, researchers have to apply for grants. Who receives those grants is determined by a set of assessment criteria established by the funding agency, and these criteria largely revolve around a researcher’s “productivity,” that is, their output (publications). For example, NWO (the Netherlands Organisation for Scientific Research), the Netherlands’ largest funding agency, highlights a researcher’s productivity in one of its main funding schemes for recently graduated PhDs (Veni). The agency pre-selects candidates based on a score in which key output counts for 50%, compared to 5% for the actual research idea.* And funding agencies aren’t the only ones: universities also rely on similar metrics to assess researchers’ performance.
Researchers aren’t just valued by the number of publications they have, but also by how often those publications are cited in other scientific articles, which is taken as an indicator of how important their work is. Both the number of publications and the number of citations are at the core of the Hirsch index, or h-index: a metric developed to quantify a scientist’s quality.
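For readers curious about the arithmetic, the h-index is easy to compute. The sketch below is a minimal illustration of the standard definition (a researcher has index h if h of their papers have at least h citations each); it is not any agency’s official tooling:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers
    with at least h citations each (Hirsch's definition)."""
    h = 0
    # Sort citation counts from highest to lowest and walk down the list.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has at least `rank` citations
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h-index 4:
# four papers with at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))
```

Note how the metric rewards a body of consistently cited papers: one blockbuster paper with 1,000 citations still yields an h-index of only 1.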
Another metric used to quantify scientists is the impact factor (IF), originally designed to evaluate journals rather than individual researchers. Like the h-index, the IF attempts to approximate importance, in this case by averaging the number of times a journal’s recent articles are cited. There is also an author impact factor (AIF), which takes the average number of times a researcher’s work is cited per year. Scientists are thus also evaluated by how often their own work, and the work of the journals they publish in, is cited.
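The averaging behind the impact factor is equally simple. The sketch below assumes the conventional two-year window used for journal impact factors (citations received this year to articles from the previous two years, divided by the number of those articles); the AIF applies the same idea to a single author’s papers:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    articles published in the previous two years, divided by the number
    of citable articles published in those two years."""
    if articles_prev_two_years == 0:
        return 0.0  # a journal with no recent articles has no meaningful IF
    return citations_this_year / articles_prev_two_years

# A journal whose last two years of articles (100 in total) were cited
# 300 times this year has a two-year impact factor of 3.0.
print(impact_factor(300, 100))
```

Because it is an average over a whole journal, the IF says little about any individual article in it, which is one reason critics object to using it to judge researchers.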
A scientist is only as good as the ruler you use to measure them
Scientists love numbers, right? So what’s wrong with valuing them based on their output? The problem is that it makes the kind of research I imagined as a child impossible. As this article nicely points out, if you value researchers by their number of publications, then in order to survive and stay in the system they will focus on exactly that: upping their publication count. Going after big, complicated, unpredictable, but often more important research questions becomes risky: such projects take longer, and their results may turn out messy or inconclusive, which unfortunately usually makes them harder to publish. At a time of increasing attention to the societal relevance of research, the focus on publications encourages researchers to think smaller and more strategically.
Another problem with overvaluing publications is that not every type of research or research project will fare equally well with those metrics. In some fields, research can take much longer before culminating in a publication, and publication requirements differ. Thus, a one-size-fits-all measure is not appropriate.
The pressure to publish can also lead to questionable research practices, such as tampering with results to make them publishable. There are currently many movements, such as open science, that aim to discourage these practices… but one way to nip the problem in the bud would be to stop reducing researchers to their publications.
So why has so much importance been given to publications? Are they really a good indicator of a good scientist? Likely, they are simply one of the easiest things to measure. It is indeed hard to think of quantifiable attributes of a good researcher, but perhaps more research should be done on precisely this question. In a recent survey, NWO asked researchers how they would like to be evaluated. Notably, researchers called for more value to be given to education, leadership, collaboration, and the societal relevance of their work.
The tide is turning
Luckily, there are already initiatives to move away from output-centered metrics (e.g., https://sfdora.org/read/, https://scienceintransition.nl/en). NWO is currently re-evaluating its assessment system and has already begun phasing out the use of the h-index. In the pre-selection criteria mentioned above, 45% of the score is determined by a section in which researchers can describe their (non-output) academic achievements and research motivation. Ghent University and Radboud University have also started to move away from quantitative measures in favor of more qualitative ones that weigh, for example, the importance of the research done (but read our blog about how this can disadvantage basic research). Hopefully more universities and funding agencies will follow suit.
*This pre-selection is currently in a pilot phase and applies only to the domains Social Sciences and Humanities and Applied and Engineering Sciences. During the actual selection, publications are considered within the category “Quality of the applicant,” which is weighted at 40%.
This blog was originally written in English.