That is useful, but it might not quite get at the opportunity-cost angle. I chose to be a part of the crowd-sourced project because I have a low opportunity cost (in reality, I did not participate in the project because I was too busy). Nevertheless, that finding highlights the relevance of a "good" journal review process: potential convergence in empirical choices driven by anonymous peer pressure. That also makes me wonder to what extent "publication pressure" drives empirical choices. For example, if I know that a likely referee strongly disagrees with the use of a certain empirical method, I might avoid it. So this might be another source of endogeneity that does not quite show up in the paper's results: dispersion would be inflated relative to reality.
Some of the co-authors obviously lack experience, judging by their SSRN pages. I wonder whether co-authors' findings deviate from the "average" by more if they are less experienced. If so, the paper might be exaggerating the issue, because lower-ability co-authors may have seen this project as an opportunity to publish; hence, self-selection. Put differently, opportunity costs are higher for high-quality researchers. The journal peer-review process, on average, can correct for some of this non-standard error as long as the referees are "experts" in the field.

Good point. The paper also shows that dispersion is similar if the sample is restricted to high-quality scholars.