Publication bias in psychology: what it is and why it is a problem

Psychology, and in particular its research arm, has been in crisis for several years, which does nothing for its credibility. The problem is not limited to the failure to replicate classic experiments; it also extends to how new articles get published.

The big problem is that there appears to be a significant publication bias in psychology. In other words, whether an article gets published seems to depend more on factors such as its interest to the general public than on the results and scientifically relevant information it offers the world.

Today we will try to understand the seriousness of the problem: what it involves, how this conclusion was reached, and whether it is something unique to the behavioral sciences or whether other disciplines stand at the same crossroads.

    What is publication bias in psychology?

    In recent years, several psychology researchers have warned of the lack of replication studies in the field, which has raised the possibility of a publication bias in the behavioral sciences. Although suspicions had existed for some time, it was not until the late 2000s and early 2010s that evidence emerged that psychological research had problems that could lead to the loss of valuable information for the discipline.

    One of the first suspicions about the problem arose from Daryl Bem’s experiment in 2011. The experiment itself was simple:

    It used a sample of volunteers who were shown 48 words and then asked to write down as many of them as they could remember. Once that was done, they completed a practice session in which a subset of those 48 words was presented again and they were asked to write them out. The hypothesis was that participants would recall better precisely those words that would later be practiced, even though the practice came after the recall test.

    Following the publication of this work, three other research teams separately attempted to reproduce its results. Although they essentially followed the same procedure as the original work, they did not obtain similar results. While those failures would have allowed some conclusions to be drawn, the striking thing is that all three research groups had serious trouble getting their results published.

    First, because these were replications of an earlier work, scientific journals gave the impression of being interested only in something new and original, not a “simple copy” of something that already existed. Added to this, the negative results of the three new experiments were more readily read as the product of methodologically poor studies than as data that might mean a step forward for science.

    In psychology, studies that confirm their hypotheses, and therefore obtain more or less clear positive results, seem to end up behaving like rumors: they spread easily through the community, sometimes without anyone consulting the original source or reflecting carefully on the conclusions and caveats raised by the author himself or by critics of the work.

    When attempts to replicate previous studies with positive results fail, those replications routinely go unpublished. Even after running an experiment showing that a classic finding could not be reproduced, the authors themselves often avoid publishing it, since it is of no interest to journals, and so it is never recorded in the literature. This allows what is, technically, a myth to keep circulating as scientific fact.

    On the other hand, there are habits entrenched in the research community, questionable ways of proceeding so widespread that they have become almost routine: modifying experimental plans to obtain positive results, deciding on the sample size only after checking whether the results are significant, and citing only those previous studies that support the current hypothesis while omitting or ignoring the ones that contradict it. One of these practices is illustrated in the sketch below.
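    To see how damaging just one of these practices can be, consider the habit of deciding the sample size after checking significance, often called “optional stopping”. The Python sketch below is purely illustrative (it comes from none of the studies discussed here): it simulates a researcher who re-tests the accumulating data every few participants and stops as soon as p < .05, even though no real effect exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_study(peek_every=10, n_max=100):
    """One simulated study on pure noise (the null hypothesis is true).
    The researcher runs a t-test every `peek_every` participants and
    stops, declaring success, as soon as p < .05."""
    data = rng.normal(loc=0.0, scale=1.0, size=n_max)
    for n in range(peek_every, n_max + 1, peek_every):
        if stats.ttest_1samp(data[:n], popmean=0.0).pvalue < 0.05:
            return True   # "significant" result found, stop collecting data
    return False

n_studies = 5_000
hits = sum(optional_stopping_study() for _ in range(n_studies))
print(f"False-positive rate with peeking: {hits / n_studies:.1%}")
# Far above the nominal 5%: repeated peeking multiplies the chances
# of crossing the significance threshold at least once by luck.
```

    The exact inflation depends on how often one peeks and how long data collection can continue, but any uncorrected peeking pushes the false-positive rate above the advertised 5% level.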

    While the behaviors just described are questionable yet, up to a point, understandable (though not for that reason tolerable), there are also cases of outright manipulation of study data to guarantee publication. These can only be described as openly fraudulent, revealing a total absence of scruples and professional ethics.

    One of the most embarrassing cases in the history of psychology is that of Diederik Stapel, whose fraud is considered to be of biblical proportions: he went so far as to invent all the data in some of his experiments. To put it plainly, like someone writing a novel, this gentleman simply made up his research.

    This reflects not only a lack of scruples and a scientific ethic conspicuous by its absence, but also a complete lack of empathy towards those who used his data in subsequent research, lending those studies, to a greater or lesser extent, a fictitious component as well.

    Studies that have highlighted this bias

    In 2014, Kühberger, Fritz and Scherndl analyzed around 1,000 psychology articles published in 2007, selected at random. The analysis overwhelmingly revealed a clear publication bias in the behavioral sciences.

    According to these researchers, the effect size and the number of participants in a study should, in theory, be independent of each other. Their analysis, however, found a strong negative correlation between these two variables in the selected studies: studies with smaller samples reported larger effect sizes than studies with larger samples.

    In the same analysis, it was also shown that the number of published studies with positive results far exceeded that of studies with negative results, at a ratio of about 3:1. This indicates that it is the statistical significance of the results, more than their real benefit to science, that determines whether a study gets published. Both findings are illustrated in the simulation below.
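    A small simulation helps make both findings intuitive. The sketch below is a hypothetical illustration, not the method of Kühberger and colleagues: every simulated study measures the same modest true effect, but only studies reaching p < .05 get “published”. Because a small sample reaches significance only when, by chance, it overestimates the effect, the published record shows exactly the negative correlation between sample size and effect size described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
TRUE_D = 0.3  # the same modest true effect (Cohen's d) in every study

pub_n, pub_d = [], []          # sample sizes and effect sizes that get "published"
n_studies = 10_000
for _ in range(n_studies):
    n = int(rng.integers(10, 200))            # sample size varies across studies
    sample = rng.normal(TRUE_D, 1.0, size=n)  # data centred on the true effect
    if stats.ttest_1samp(sample, 0.0).pvalue < 0.05:   # the journal's filter
        pub_n.append(n)
        pub_d.append(sample.mean() / sample.std(ddof=1))  # observed Cohen's d

r = np.corrcoef(pub_n, pub_d)[0, 1]
print(f"Published: {len(pub_n)} of {n_studies} studies")
print(f"Sample size vs. published effect size: r = {r:.2f}")  # clearly negative
```

    Here the filter is written explicitly into the code; in real journals it operates through editorial preferences and authors’ self-censorship, but the statistical footprint it leaves on the literature is the same.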

    From what we can see, however, psychology is not the only science suffering from this kind of bias towards positive results. One could in fact call it a phenomenon widespread across all the sciences, although psychology and psychiatry are the disciplines most likely to report positive results while leaving aside studies with negative or moderate results. This was shown in a review by the sociologist Daniele Fanelli of the University of Edinburgh, who examined around 4,600 studies and found that, between 1990 and 2007, the proportion of positive results increased by more than 22%.

      Is a failed replication such a bad thing?

      There is a mistaken belief that a failed replication invalidates the original result. The fact that one study followed the same experimental procedure and obtained different results does not necessarily mean that the new research is methodologically flawed, nor that the results of the original work were exaggerated. Many reasons and factors can cause results to differ, and each of them gives us a better understanding of reality, which is, after all, the goal of all science.

      New replications should not be viewed as harsh criticism of the original work, nor as a simple “cut and paste” of it with a different sample. It is thanks to replications that a previously studied phenomenon becomes better understood, and that the conditions under which the phenomenon fails to recur, or occurs differently, can be identified. Once the factors that condition the appearance of the phenomenon are understood, better theories can be developed.

      Preventing publication bias

      Solving the situation in psychology, and in science in general, is difficult, but that does not mean the bias must simply be accepted or left to grow. Making sure that all useful data reach the scientific community requires effort from every researcher and a greater tolerance on the part of journals towards studies with negative results. To this end, some authors have proposed a series of measures that could help put an end to the situation:

      • Moving away from null hypothesis significance testing.
      • A more positive attitude towards non-significant results.
      • Improved peer review and publication processes.

      Bibliographical references:

      • Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS ONE, 9(9), e105825. doi: 10.1371/journal.pone.0105825
      • Blanco, F., Perales, J. C., & Vadillo, M. A. (2017). Can psychology save itself? Incentives, bias and reproducibility. Psychology Yearbook of the Valencian Society of Psychology, 18(2), 231-252. http://roderic.uv.es/handle/10550/21652 doi: 10.7203/anuari.psicologia.18.2.231
      • Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS ONE, 5(4), e10271. doi: 10.1371/journal.pone.0010271
