What is the right benchmark number, though? Journal acceptance rates are low to begin with, and these figures are conditional on surviving desk rejection, which accounts for at least half of all rejections. Some of these are probably courtesy reports, as others have said.

So 1/20 is low, but only by a couple of papers.

Moments computed from a binomial distribution make a good benchmark. Of these, the mode, the single most likely outcome, is the easiest to interpret.

Assuming JFE's unconditional rejection rate is 91% and referee outcomes are i.i.d., the mode is floor(p*(n+1)). For some selected values of n, the most likely outcome would be:

Refereed  Rejected  Accepted
   5         5         0
  10        10         0
  20        19         1
  30        28         2
  40        37         3
  50        46         4
  60        55         5
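The table above can be reproduced with a short sketch, assuming the 91% rejection rate and the floor((n+1)p) mode formula:

```python
import math

p = 0.91  # assumed unconditional rejection rate (after desk rejection)
for n in (5, 10, 20, 30, 40, 50, 60):
    rejected = math.floor(p * (n + 1))  # mode of Binomial(n, p) rejections
    print(f"{n:>3} {rejected:>3} {n - rejected:>3}")
```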

An observation of 20 rejections out of 20 reviews is not that unusual: it has probability 0.91^20 ≈ 15%, making it the fourth most likely outcome. But zero acceptances out of 30 or 40 or more reports would be rare.
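A quick check of where the all-reject outcome ranks for n = 20, again assuming a 9% acceptance probability per paper:

```python
from math import comb

n, p = 20, 0.91  # assumed rejection rate
# pmf over k = number of acceptances out of n reports
pmf = [(comb(n, k) * (1 - p) ** k * p ** (n - k), k) for k in range(n + 1)]
# print the four most likely outcomes, best first
for prob, k in sorted(pmf, reverse=True)[:4]:
    print(k, round(prob, 4))
```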

A referee with 60 reports has most likely accepted 5 and rejected 55. The chance of that referee rejecting all 60 is very low (0.91^60 ≈ 0.3%). So a referee who rejects everything is either 1) biased toward rejection, or 2) receiving unusually bad papers (i.e., facing a true rejection probability above 91%).
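The all-reject probabilities are just p^n; a sanity check under the same 91% assumption:

```python
p = 0.91  # assumed unconditional rejection rate
# probability that an i.i.d. referee rejects every one of n papers
for n in (20, 30, 40, 60):
    print(n, f"{p ** n:.4f}")
```

At n = 60 this is about 0.0035, matching the ~0.3% figure above; even at n = 30 it is only about 6%.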