Written Evidence Submitted by Professor Stephen McKay, University of Lincoln

(RRE0066)

 


  1. I am a professor of social research at the University of Lincoln, and held a similar post at the University of Birmingham before that. I wanted to ensure that the committee was aware that the issues under discussion have a lengthy history in social science research – a longevity that perhaps testifies to the difficulties of making progress in this area – and cover a broad range of academic disciplines.

 

BREADTH

  2. In 1962, Leroy Wolins of Iowa State University wrote to the American Psychologist with news of his [unnamed] graduate student’s attempts to obtain raw data from articles published during 1959-61. That was, perhaps, the first attempt to check the reproducibility of published results in the broad social sciences, and the results were far from encouraging: most of the authors who replied reported that the raw data had been misplaced, lost or inadvertently destroyed (Wolins, 1962).

 

  3. Of course, in the 1960s the use of data was a much more complex undertaking than it is today, so perhaps the lack of access to data is understandable. However, fast forward to 1986, and the publication of results from the replication attempts of the ‘Journal of Money, Credit and Banking project’. The project team had similar difficulties in obtaining responses and data/software, concluding that ‘inadvertent errors in published empirical articles are a commonplace rather than a rare occurrence’ (Dewald, Thursby and Anderson, 1986; p. 587).

 

  4. Some 30 years later, authors within economics could still note that ‘Because we were able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is often not replicable’ (Chang and Li, 2017; p. 2). Within psychology, replication efforts have also not been encouraging, with one large study replicating 36% of results, and often with effects about half the size of those in the original papers (Open Science Collaboration, 2015).

 

  5. It is routinely noted across different disciplines that replication studies are rarely published (Hubbard and Vetter, 1996; Reid, Soley and Wimmer, 1981). However, many disciplines are at least now recognising the issues – e.g. the September 2021 issue of the journal Linguistics was devoted to this topic (Sönning and Werner, 2021).

 

CAUSES

  6. The importance of replication in taking forward empirical results is broadly acknowledged (see Table 4.1 in Hubbard, 2015). There are several overlapping sets of issues that contribute to the concern that, in much quantitative research in the social and behavioural sciences, ‘most published research findings are false’ (Ioannidis, 2005). In particular: difficulties in accessing original data and code, publication bias against null results, and the many researcher degrees of freedom available when analysing data.

 

  7. Christensen and Miguel (2018) looked in detail at how these issues have played out within economics, whilst open science supporter Nate Breznau explains why change is needed in sociology (Breznau, 2021).

 

  8. The extreme case of outright fraud is also possible, but perhaps rather less frequent (e.g. Diederik Stapel: https://www.apa.org/science/about/psa/2011/12/diederik-stapel).

 

WAYS FORWARD

  9. There are various well-known means of addressing some of the issues listed above. These include:

Accessing data – making data open

Accessing code – making software/code open source, or similar

Publication bias – encouraging publication of null results

Researcher degrees of freedom – pre-registered study plans

 

 

REFERENCES

Breznau, N. (2021). Does sociology need open science? Societies, 11(1), 9.

Chang, A. C., & Li, P. (2017). Is economics research replicable? Sixty published papers from thirteen journals say 'usually not'. https://cfr.pub/forthcoming/papers/chang-li-2018.pdf

Christensen, G., & Miguel, E. (2018). Transparency, reproducibility, and the credibility of economics research. Journal of Economic Literature, 56(3), 920-980.

Dewald, W. G., Thursby, J. G., & Anderson, R. G. (1986). Replication in empirical economics: The Journal of Money, Credit and Banking project. The American Economic Review, 76(4), 587-603.

Hubbard, R. (2015). Corrupt research: The case for reconceptualizing empirical management and social science. London: Sage Publications.

Hubbard, R., & Vetter, D. E. (1996). An empirical comparison of published replication research in accounting, economics, finance, management, and marketing. Journal of Business Research, 35(2), 153-164.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251). http://dx.doi.org/10.1126/science.aac4716

Reid, L. N., Soley, L. C., & Wimmer, R. D. (1981). Replication in advertising research: 1977, 1978, 1979. Journal of Advertising, 10, 3-13.

Sönning, L., & Werner, V. (2021). The replication crisis, scientific revolutions, and linguistics. Linguistics, 59(5), 1179-1206. https://doi.org/10.1515/ling-2019-0045

Wolins, L. (1962). Responsibility for raw data. American Psychologist, 17, 657-658.

 

(September 2021)