
DEBATE: We must not be afraid to make mistakes

The fact that researchers publish research results that cannot be replicated is not in itself a sign of a crisis in health science research. Researchers have an obligation to publish their findings, even at the risk of drawing incorrect conclusions, says Lars Bo Nielsen, Dean of Health, Aarhus University.

By Lars Bo Nielsen, Dean of the Faculty of Health, Aarhus University

Health science research for the benefit of the individual patient and society is the product of long-term processes in research environments around the world. The final results – which may seem simple and may also be felt in everyday life – are therefore most often based on extremely large bodies of knowledge created in the interplay between basic research environments and clinical research and development units. Such results rest on countless experiments, attempts, mistakes and successes that others have brought to fruition. Precisely because of this long process, in which many partial results and many researchers together build our knowledge, it is essential that we can rely on the results. These could be basic discoveries in the laboratory just as well as assessments of the value of a given intervention for a patient.

This premise is challenged, writes Professor and Consultant Anders Perner in his feature article in the Danish newspaper Berlingske (the article is in Danish). In fact, he thinks that we are dealing with a health science crisis when journals such as Nature report on research results that cannot be replicated.

There can be no doubt that we as research institutions have a crucial task in the form of raising the quality of research and preventing poor scientific practice, not least at a time where leading journals are being criticised for publishing research results that cannot be replicated.

But we must also uphold the right to make mistakes. Because researchers naturally make mistakes. Hypotheses turn out not to hold water, and correlations that were expected to appear fail to materialise. Good research is often characterised by unexpected results that send other researchers in a completely new direction. This can mean earlier research results ending up in the rubbish bin, not because the research was bad, but because the analyses were done on an incorrect – or, more often, simplified – basis. There is nothing questionable about making mistakes in your research. Mistakes will be corrected sooner or later – if not by the researchers themselves, then by others. Formulating hypotheses that turn out to be wrong is a normal and essential part of the research process.

For the same reason, I would urge caution when it comes to declaring a crisis in the health sciences. I would not want to represent a research environment where researchers were afraid to publish their findings and hypotheses because they might turn out to be wrong. Instead, we must be meticulous in our choice of methodologies and describe them precisely to other researchers. We must take it upon ourselves to draw attention to the limitations and the possible visible (and invisible) biases in our results and interpretations. For we must not hesitate to communicate what we find and how we interpret those findings. To do research is to publish, and that includes negative results. It is part of the global code of research ethics that results must be published.

At the same time, we in the research environment should discuss how we respond to the publication rat race. These days there is a widespread (incorrect) perception that a health science PhD dissertation must always be based on three articles, preferably with the PhD student as primary author. We ought to have the courage to advise a junior researcher to publish one excellent, strong article – perhaps sharing primary authorship with other (international) expert environments – rather than several smaller articles, each of which can be attacked. We need to support junior researchers in joining forces to publish significant results arising from interdisciplinary collaboration across research environments and national borders. In this way we can help the coming generation of researchers focus on international impact and the quality of their research rather than the length of their own publication lists.

In this discussion, we must recognise that we find ourselves in a situation where the number of publications and bibliometric data carry a lot of weight, not least when it comes to getting one's own research financed. Here we must continue to insist that peer review in the proper sense – ideally with patient involvement in assessing the worth of clinical research and whether to support it – cannot be based on bibliometrics alone. The best assessment of research, i.e. of its value for other researchers and thus, ultimately, for the individual patient and society, takes place as it always has: by reading articles, not simply counting them.

The column appeared in Altinget on 15 March 2018.