If I can't run the code, I recommend reject.
Do you ask for the author's code as a referee?
-
Journals do not require code or data for submission. Maybe they should, but they don't. Requesting code is going to slow down the review process, especially if the author does not have it cleanly archived. Most editors are probably not looking for ways to slow things down.
If you think something is "fishy", you must have some reason. If you can't clearly articulate that reason in a referee report, then perhaps you are "fishing". Write very clearly what you find odd and what you'd like to see in a revision that would convince you the authors are correct (this does not mean you need to recommend R&R, but I assume you plan to, since you're thinking of requesting code).
I do think requiring code and data (if available) at submission is a good idea worth considering as long as anonymous referees can be trusted not to steal it for their own work. That's not the system we have now, though.
Of course, you can always decide that you won't review any paper if you can't have the code and data, but you should tell that to editors when they ask you for a report.
-
I have a referee report and a good idea of the identity of one of the reviewers, who is my enemy. The journal does not require data for submission or publication, but the a-hole is requesting the data. I have spent years collecting the data and will not release my data to him, sorry.
-
That's a really wicked question... so if reviewers do not look at the code, serious errors can only be raised through post-publication review, perhaps leading to retractions?
lol yes imagine what a fallen world that would be. One in which papers are not considered infallible scripture once published, but part of a provisional, ongoing process of replication, testing, and refinement.
-
No one is talking about the author making the code available once the paper is published. The thread is about reviewers requesting code and data.
-
A coauthor and I once had a particularly acrimonious set of interactions with a referee across 2 journals and 5 rounds of review.
The referee -- who was a moderately big fish -- treated my coauthor and me as though we were borderline criminals and almost certainly trying to pull something dishonest. Yet, at the second journal, he kept giving us R&Rs until he recommended accepting the paper.
After a couple of near-accusations of dishonesty, my coauthor and I fully commented up our (complex) code and provided it and the data to the editor under the proviso that only he and the referees could look at the code and results. (We had purchased the data, so the data set couldn't be posted publicly.)
I am pretty sure that the troublesome referee never actually unpacked our code and ran it. I can't recall how I knew, but I knew. Our code did exactly what we claimed it did. We did nothing dishonest.
The paper was finally accepted after 4 rounds of review at the 2nd journal.
-
This
I have a referee report and a good idea of the identity of one of the reviewers, who is my enemy. The journal does not require data for submission or publication, but the a-hole is requesting the data. I have spent years collecting the data and will not release my data to him, sorry.
-
A much better solution than the clown show that is economics.
The American Journal of Political Science has found a good solution to this problem: after acceptance, they have a team check whether the code and data provided actually replicate the results in the paper:
https://ajps.org/ajps-replication-policy/
-
Cochrane writes:
“Author's interest
Authors often want to preserve their use of data until they've fully mined it. If they put in all the effort to produce the data, they want first crack at the results.
This valid concern does not mean that they cannot create redacted slices of data needed to substantiate a given paper. They can also let referees and discussants access such slices, with the above strict non-disclosure and agreement not to use the data.
In fact, it is usually in authors' interest to make data available sooner rather than later. Everyone who uses your data is a citation. There are far more cases of authors who gained notoriety and long citation counts from making data public early than there are of authors who jealously guarded data so they would get credit for the magic regression that would appear 5 or more years after data collection.
Yet this property right is up to the data collector to decide. Our job is to say "that's nice, but we won't really believe you until you make the data public, at least the data I need to see how you ran this regression." If you want to wait 5 years to mine all the data before making it public, then you might not get the glory of "publishing" the preliminary results. That's again why voluntary pressure will work, and rules from above will not work.”
There is a fundamental problem with older HRMs requesting data from (untenured) younger LRMs. Instead of chastising younger scholars as hoarders of data, why not offer an incentive-compatible solution? It should not be that hard with modern technology. NDAs and slices won’t solve the problem.
-
I like the recent development in some journals of refusing to send out for review any papers that do not use easily accessible data and provide the code.
As a referee, sometimes I do not need to see the code to know things are not how they should be. And any paper that uses confidential proprietary data is a reject for me. But for those blurry ones where things are kind of how they should be, but the results look weird, yes, you should ask for the code. It should be in the authors' interest to provide it too: if there is a simple bug, everyone's better off once that's fixed.
-
The American Journal of Political Science has found a good solution to this problem: after acceptance, they have a team check whether the code and data provided actually replicate the results in the paper:
https://ajps.org/ajps-replication-policy/
Well, I had a similar process with an Econ journal, but the "team" was a PhD student of the editor. Of course the guy wanted to show his advisor how smart he is and went through the code like a bloodhound. We ended up with numerous requests, almost like a fourth referee report (on top of the three we had before acceptance).
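For anyone wondering what that kind of check actually amounts to: once the code and data are deposited, it is mostly mechanical. Here is a rough sketch in Python; the file name, variable names, and the "published" coefficient are all made up for illustration, not taken from any actual paper or from the AJPS workflow.

import pandas as pd
import statsmodels.formula.api as smf

PUBLISHED_COEF = 0.42   # hypothetical coefficient on 'treatment' from the paper's main table
TOLERANCE = 1e-3        # allow for rounding in the published table

df = pd.read_csv("data.csv")                                # authors' deposited data
fit = smf.ols("outcome ~ treatment + control1 + control2", data=df).fit()
replicated = fit.params["treatment"]

if abs(replicated - PUBLISHED_COEF) <= TOLERANCE:
    print(f"OK: replicated {replicated:.3f} matches published {PUBLISHED_COEF:.3f}")
else:
    print(f"MISMATCH: replicated {replicated:.3f} vs published {PUBLISHED_COEF:.3f}")

Real replication packages are obviously bigger than this, but the basic check is no more involved.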