I attended only part of the second session with Dr Oransky and Dr Clark, and I may have missed something, but in the section I heard, I felt there was some confusion on the topic of peer review, which is clearly of interest to the Committee.

When asked about the quality of peer review, Dr Clark emphasised how good it was, and how well-trained and serious peer reviewers were. Mr Stringer seemed surprised and asked if this was the case in other countries. It seemed that Dr Clark was talking about the NIH’s peer review system for its grants. I suspect their peer reviewers are carefully selected and trained. Funders in general are particularly scrupulous because of the high-stakes nature of the decisions.

Peer review for journals is rather different, and far more varied.

A key point is that it is unpaid labour – like a lot of academic work, it is something done on a voluntary basis to keep the system running. Anyone can refuse to review if they want, and some researchers do no reviewing. Most people do some, motivated partly by a sense of duty, partly by curiosity and interest in others’ research, and partly by feeling that if nobody acts as gatekeeper, then bad work will get published. There’s been quite a lot of discussion on social media as to whether peer reviewers should be paid, but I’m not personally convinced that would be a solution. There is a sense, though, that publishers make a lot of profit from journals, and benefit from free labour from academics, which seems unjust.

As Dr Ferguson noted, peer review can take a lot of time to do thoroughly – on average I spend around 3-4 hours reviewing a paper, but there is a huge range, and it can take several days, especially if checking for complete reproducibility. Another problem is that these days papers can be of such complexity that it is hard to find anyone who can thoroughly understand all the methods, and reviewers end up taking things on trust.

It is very much down to journal editors to allocate reviewers and supervise the review process: in this regard they have substantial power. Editors vary greatly in how seriously they take the job (I wrote a blogpost about this 11 years ago: http://deevybee.blogspot.com/2010/09/science-journal-editors-taxonomy.html).

The best reviewers can add value to a paper, providing a critical eye and suggestions for improvement. The worst can be ignorant, biased or lazy. A good editor will moderate reviewer comments. Serious problems arise when editors themselves are corrupt and play the system for their own benefit – a phenomenon that is not common, but one that I have a particular interest in exposing.

This is not a national issue – peer review cuts across boundaries, and reviewers are often asked to comment on work submitted to non-UK journals. (This is also true, to some extent, of grant reviewers.)

More generally, though, as I mentioned in my session, it could be argued that peer review comes too late to be useful – a system such as registered reports, where peer review is applied to the project protocol before data is collected, is preferable.

Currently, peer review can let through flawed work and, perhaps more seriously, can block valuable work that has taken years to do, on the grounds that it is not ‘exciting’. The difficult question remains how best to have a filter that keeps out rubbish but does not trap good work just because it is not ‘groundbreaking’. There is a lot of interest in modifications to the system – especially post-publication peer review and open peer review (reviews are currently mostly anonymised and unpublished).

Thanks for giving me the opportunity to contribute to the Committee’s deliberations. I am happy to elaborate on any of these points or answer any other questions you may have.

1 December 2021