There is a crisis in social psychology. A recent study has shown that many important discoveries from the past cannot be reproduced. How can that be?
Last year, a large group of scientists from all over the world repeated 100 famous experiments from social psychology. What did they find? Only 39 of these experiments yielded the same result. A small identity crisis ensued. Is social psychology unreliable? Well, no, but there are inaccuracies that cannot be ignored. These inaccuracies stem from statistics and from the process of scientific publication.
Imagine you want to know who spends more time on Snapchat in the Netherlands: boys or girls. To find out, you randomly select 20 boys and 20 girls from all over the country. Guess what: the boys, on average, spend 5 more minutes per day on Snapchat than the girls!
How can you tell that this finding reflects a true difference in Snapchat use between genders, and not just an accidental selection of a couple of Snapchat-addicted boys? To answer this question, scientists often compute the statistical p-value: the chance that you would have found a difference at least this large if the actual average Snapchat time of boys and girls across the Netherlands were identical.
If the p-value is smaller than 5%, the researcher says: "this finding is probably not caused by an accidental uneven selection of boys and girls, and therefore likely indicates a true difference between the two groups". She then calls the result significant.
This statistical rule has a flip side. Even when there is truly no effect, there is still a 5% chance of an accidental 'significant' result. In other words: one out of twenty studies of a non-existent effect will come out significant purely by chance.
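You can see this flip side in a quick simulation. The sketch below (all numbers hypothetical: 20 children per group, both groups drawn from the same distribution, so there is no true difference) runs 10,000 of these Snapchat studies and counts how often chance alone produces a 'significant' result:

```python
import random
import statistics

random.seed(42)

def simulate_study(n=20, minutes_mean=60, minutes_sd=15):
    """One Snapchat study where boys and girls truly spend the SAME time."""
    boys  = [random.gauss(minutes_mean, minutes_sd) for _ in range(n)]
    girls = [random.gauss(minutes_mean, minutes_sd) for _ in range(n)]
    # Two-sample t statistic (equal sample sizes, pooled variance)
    mean_diff = statistics.mean(boys) - statistics.mean(girls)
    pooled_var = (statistics.variance(boys) + statistics.variance(girls)) / 2
    se = (2 * pooled_var / n) ** 0.5
    return mean_diff / se

# For 20 per group (df = 38), |t| > ~2.02 corresponds to p < 0.05.
n_studies = 10_000
significant = sum(abs(simulate_study()) > 2.02 for _ in range(n_studies))
print(f"False positives: {significant / n_studies:.1%}")  # close to 5%
```

Even though the two groups are identical by construction, roughly one study in twenty crosses the significance threshold.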
Aside from this uncertainty in statistics, there are imperfections in the way scientific publishing works. Scientific journals like to publish papers that report on a significant effect. They are less eager to print research that did not find any effect.
Given this, it is a real possibility that of all the research done on a non-existent effect, only those studies that accidentally found a significant result get published. It is not surprising, then, that a large share of findings in social psychology cannot be reproduced – they may have been accidental hits picked up by journals eager to print sensational stories.
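This publication filter can also be sketched in code. Assuming again a non-existent effect (both groups drawn from one distribution, all numbers hypothetical), the snippet below 'publishes' only the significant studies and looks at the effect sizes they report:

```python
import random
import statistics

random.seed(7)

def null_study(n=20, minutes_mean=60, minutes_sd=15):
    """One study of a non-existent effect: both groups share one distribution."""
    a = [random.gauss(minutes_mean, minutes_sd) for _ in range(n)]
    b = [random.gauss(minutes_mean, minutes_sd) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return diff, diff / se  # observed difference in minutes, and its t statistic

results = [null_study() for _ in range(10_000)]
# Journals print only the 'significant' hits (|t| > ~2.02 for p < 0.05, df = 38)
published = [diff for diff, t in results if abs(t) > 2.02]

print(f"{len(published)} of 10,000 null studies get published")
avg_effect = statistics.mean(abs(d) for d in published)
print(f"Average reported difference: {avg_effect:.1f} minutes")
```

The published literature ends up reporting differences of roughly ten minutes per day, even though the true difference is exactly zero: only the lucky extremes survive the filter.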
If you are dealing with inaccuracies in statistics and the publication process, how can you tell which scientific findings can be trusted? This is where theories come in. Any scientific study should start with a theory, which yields predictions (hypotheses). You can test these hypotheses in an experiment, based on which you adjust your theory.
But in social psychology, and in cognitive neuroscience too, theory often plays too small a role. Accidental hits are commonly published in isolation, without a good theory to explain them. And if you do not have a proper way to explain a finding, how can you expect it to be reproduced?
This raises questions as to what constitutes a proper theory. Aside from that, I have left a few important factors out of the equation here, such as the pressure to publish in science. Nevertheless, we ought to start our experiments with more precise theories and predictions. If we do so, we will be able to place accidental findings in perspective.
This means that every scientific study must be in service of theory. So remember: if you don’t have a good reason to expect a difference in Snapchat use between boys and girls, you should not be carrying out that study in the first place.
This blog was written by Jeroen. Edited by Angelique.