The (Frustrating) State of Peer Review

I’ve been meaning to write this article for a long time. As an author of technical papers, every time I receive fewer than 3 reviews of one of my submissions, I wonder just how many people were asked to review it before this minimal number finally came through. As an associate editor who has served on several editorial boards, every time a request for review is declined or ignored, I wonder whether that person is pulling his or her reviewing weight, both with respect to his or her own submissions and with respect to how often he or she takes advantage of the whole system of peer-reviewed literature. As a program committee member for conferences and workshops, every time I am inundated with 5 to 20 submissions to review in a very short time, I wonder why more people aren’t sharing the load.

Let me step back a moment to explain, for those who don’t know, how papers find their way into journals and conferences. The world of science is built on the foundation of sharing information. The traditional way to do this has been through peer-reviewed scientific publications. (I will not discuss the merits of informal publication through blogs, email, and other non-peer-reviewed channels.) What “peer review” means here is that, for a submitted article to be published, it must pass the important hurdle of being read and recommended for publication by a number of other knowledgeable people in the submission’s field.

After an author submits a paper to Journal X, the process is as follows. The editor-in-chief looks at the paper and decides which editorial board member is best qualified to oversee its review. That associate editor then chooses a number of reviewers, and the choice can be made in several ways: the editor might know one or more experts personally; the paper’s references can be scanned for other authors in the field; or the editor can use one of the sophisticated web tools available to publishers today that index potential reviewers by name, field of expertise, past reviewing for the journal, and so on. Through any of these means, reviewers are chosen; the typical minimum is 3. These reviewers are contacted by email. Some may decline, in which case more reviewers are chosen. Reviewers are typically asked to complete the review in 6 to 8 weeks. When the reviews are received, the associate editor reads them and decides what to advise: the paper can be rejected, accepted, or sent back to the authors to revise and resubmit, in which case the review process is repeated.
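To make this loop concrete, here is a minimal sketch in Python of the recruitment process just described: the editor keeps requesting reviewers until enough complete their reviews. The function name and the decline and dropout rates are placeholders I’ve invented for illustration, not any publisher’s actual tooling or figures.

    import random

    DECLINE_RATE = 0.16   # assumed fraction who decline the request outright
    DROPOUT_RATE = 0.26   # assumed fraction who agree but never deliver

    def recruit_reviews(needed=3, seed=0):
        """Request reviewers one at a time until `needed` reviews arrive;
        return how many requests that took."""
        rng = random.Random(seed)
        requested = completed = 0
        while completed < needed:
            requested += 1
            if rng.random() < DECLINE_RATE:
                continue      # declined immediately upon request
            if rng.random() < DROPOUT_RATE:
                continue      # agreed, then never completed the review
            completed += 1
        return requested

    print(recruit_reviews())  # 4 requests for 3 reviews with this seed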

A conference submission differs from the journal process described above in a few ways. Because of time constraints (the time from receipt to an accept/reject decision might be as little as a month), often only program committee members perform the reviews, and a minimum of 2 reviews might be acceptable. There is usually no option to “revise and resubmit”: papers are simply accepted or rejected, though perhaps “accept with suggested revisions” is also an option.

So, what is wrong with peer review? Let me first say that this article can best be described as a “flame”: I vent my frustrations, but I don’t know the remedies or alternatives. In fact, as frustrating as some aspects of peer review have been to me (and, I’m sure, to many other authors, reviewers, and editors), the system ultimately works well for readers of these papers, because the quality of published papers directly reflects the quality of the peer review process, even as it stands today.

My main complaint is a burning suspicion that the task of reviewing is not shared fairly. By fairly, I mean that I think there are an awful lot of authors out there who are not pulling their reviewing weight. It’s easy to calculate a fair “reviewing weight”. If an author submits n papers per year, then at a rate of 3 reviewers per submission, that author should be reviewing 3n papers per year. We can refine this by noting that co-authorship spreads the burden: each paper still consumes 3 reviews, but they can be shared among its authors. For example, if author A co-authors 3 publications per year, and the average number of authors per paper is 3, then author A should review 3×3/3 = 3 papers that year, one for each submission. (This assumes the co-authors will also review their share, so beginning graduate students without adequate knowledge to review do not count.)
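That arithmetic is simple enough to state as a function. The sketch below encodes only this article’s working assumptions (3 reviewers per paper, reviews shared equally among co-authors); the function name is mine, not a standard anyone has adopted.

    def fair_review_load(papers_per_year, avg_authors_per_paper=1,
                         reviewers_per_paper=3):
        """Reviews an author owes per year: each paper consumes
        reviewers_per_paper reviews, shared among its co-authors."""
        return papers_per_year * reviewers_per_paper / avg_authors_per_paper

    print(fair_review_load(3))      # solo author, 3 papers/year -> 9.0 reviews
    print(fair_review_load(3, 3))   # 3 papers/year, 3 co-authors -> 3.0 reviews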

I don’t have general statistics on authors not pulling their weight. It’s understandable that all of us must sometimes decline to perform a review because of other commitments. However, if the reason is that the prospective reviewer is too busy writing his or her own papers to review anyone else’s, then I’d call that an imbalance of the reviewing load. Table 1 shows some reviewing statistics from a small sample of reviews requested for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI); I make no claim that this represents broader statistics for TPAMI or any other journal. The table shows that about 16% of requested reviewers decline immediately upon request. Of the 84% who agree to perform a review, only 74% actually complete it, so of all requested reviewers, only 62% deliver a completed review. What this means is that, for every paper needing at least 3 completed reviews, 3/0.62 = 4.8 reviewers must be requested. If these statistics are representative, editors should be requesting about 5 reviews per paper to have a good chance of receiving 3.

Table 1. Review statistics for a small sample of TPAMI submissions.

                  Requested   Declined   Agreed   Completed
    number            77         12        65        48
    percentage        --        15.6      84.4     62.3 (of requested)
                                                   73.8 (of agreed)
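The percentages in Table 1, and the requests-per-paper estimate above, follow directly from the raw counts; a few lines of Python reproduce them from the sampled numbers:

    import math

    requested, declined, agreed, completed = 77, 12, 65, 48

    print(f"declined:  {declined / requested:.1%}")                # 15.6%
    print(f"agreed:    {agreed / requested:.1%}")                  # 84.4%
    print(f"completed: {completed / agreed:.1%} of agreed")        # 73.8%
    print(f"completed: {completed / requested:.1%} of requested")  # 62.3%

    # Requests needed to expect 3 completed reviews: 3 / 0.623 = 4.8, so 5.
    print(math.ceil(3 / (completed / requested)))                  # 5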

Let’s consider other reasons a prospective reviewer might decline. One is a lack of expertise in the exact field of the paper. I suggest that this explanation is valid only up to a point. If the field mismatch is large, declining is legitimate. However, if reviewers decline simply because they are not doing work extremely similar to the submission’s, then this may be more of an excuse than a valid reason. I say this for three reasons. 1) The paper has at least 3 reviewers, so together they can provide adequate coverage of the paper even though no single reviewer has 100% expertise overlap with it. 2) Although a prospective reviewer may not have worked on the same problem, any good scientist knows the fundamentals of technical experimentation and publication, and so can assess the clarity of the writing, the depth of the background material, the quality of the experimentation, and the soundness of the conclusions. 3) Most review forms have a space for reviewers to state how well acquainted they are with the field, so the editor can take this into account when weighing the review.
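To put a toy number on point 1: suppose each of 3 reviewers independently covers 60% of a paper’s topics (a figure I’m inventing purely for illustration). Then the chance that a given topic is seen by no reviewer is 0.4^3, so the panel jointly covers about 94% of the paper:

    # Toy model for point 1: independent partial coverage by 3 reviewers.
    per_reviewer = 0.60                 # invented per-reviewer coverage
    reviewers = 3
    joint = 1 - (1 - per_reviewer) ** reviewers
    print(f"{joint:.0%}")               # -> 94%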

Another frustration is the following. Consider a technical field, XYZ. The field is small, with only 10 or so researchers who publish in it and are thus visible as potential reviewers. When a paper is submitted in this field, the editor finds 3 XYZ experts to review it. Because these few people review papers from within their own small group, several problems can occur. A minority of XYZ researchers who approach a problem differently may never have their work accepted by the status-quo majority. Conversely, if the submitting authors are respected “incumbents”, any submission, regardless of merit, might gain acceptance from the group members who review it. Indeed, a recent article in Science [1] recognized that, “Teams publishing in high-impact journals have a high fraction of incumbents.” However, the same article goes on to say, “The temptation to work mainly with friends will eventually hurt performance.”

But I think the worst consequence of peer review in this small XYZ community is the following. The few researchers always recommend accepting one another’s submissions because they believe their field is worthy of publication. This may be so, but a small field can indicate one of three things: 1) the field is new and set to grow; 2) the field has shrunk, and these are the remaining researchers in an area perhaps past its time; or 3) the field has stayed the same size for many years, indicating little interest in it outside the small community. I suggest that cases 2 and 3 are problems that inbred peer review will not reveal, so papers will continue to be published in broad-audience venues despite very small interest in them.

As I’ve said, despite these problems, I believe most good work is published and most peer-reviewed published work is good. If you disagree with this statement or with any of the frustrations I’ve shared, or just wish to add other comments, please send email, and, with minimal peer review, these opinions can be published and shared with other readers.

 

References:

[1] Albert-László Barabási, “Network Theory: The Emergence of the Creative Enterprise,” Science, vol. 308, 29 April 2005, pp. 639-641.

Lawrence O'Gorman

Avaya Labs, Basking Ridge, NJ, logorman@avaya.com