Crowdsourced paper showing how you can get anything out of the data, even without publication pressure: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3961574
Non-standard errors
-
Harsh take on the paper by AG yesterday.
details?
He basically said that dispersion in results is expected given the ambiguity of the hypotheses. For instance, different researchers may have different definitions of market efficiency in mind. But that only strengthens the message of the paper. Weird take.
-
Is this based on the Wisconsin standard?
Hello. From what I've heard, this seems to be about a policy report for a leading overseas policy research institute.
If the literature we consulted when classifying R&D types only used data through 2006, then rather than citing that, I think it is more appropriate to refer to the actual state of Korea's innovative AT firms in 2019. This is not an academic study, so talking about 2019 is the right call.
Also, we used KOSPI data for 2010-2020; if there were related literature studying the KOSPI over 2010-2017 it would make sense to cite it, but that does not seem to be the case either.
The scope of the study and the scope and limitations of the data are spelled out over more than a page.
And at the final stage I had already told the editor-in-chief in person that, for that reason, I would leave everything else alone and touch only the readability.
-
Disagree. As long as the different research teams use different tests, this is a standard multiple hypothesis testing problem
This is quite different. Multiple testing is about sequential tests of related hypotheses until significance is found, as a response to publication incentives; dispersion in results across those tests is due to chance. In this experiment, dispersion is due to different choices made by different researchers with no incentive whatsoever to find significance.
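To make the distinction concrete, here is a minimal sketch in Python (the data-generating process, the winsorization choices, and all numbers are hypothetical, not taken from the paper): one dataset, several equally defensible pre-committed analysis choices, no search for significance, and the point estimates still disperse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: a noisy outcome with a small true mean effect of 0.5.
x = rng.normal(loc=0.5, scale=5.0, size=500)

# Equally defensible researcher choices: how aggressively to
# winsorize outliers before estimating the mean effect. Each
# "team" commits to one choice up front -- no p-hacking.
winsor_fracs = [0.00, 0.01, 0.05, 0.10]

estimates = []
for frac in winsor_fracs:
    lo, hi = np.quantile(x, [frac, 1 - frac])
    estimates.append(np.clip(x, lo, hi).mean())

# Dispersion across choices appears even though every team ran
# exactly one pre-committed test on the same data.
print("estimates by choice:", np.round(estimates, 3))
print("std. dev. across choices:", round(float(np.std(estimates)), 3))
```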
-
What are the implications? It means that estimation error has two components: sampling error plus dispersion due to choices. Hence, standard errors are over-reported rather than under-reported, and many hypotheses that are rejected should have been accepted.
No, standard errors only account for sampling error. Therefore, reported uncertainty underestimates true uncertainty. Consequently, many hypotheses that have been rejected should not have been.
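In symbols, the decomposition being argued here (a sketch; the notation is mine, and it treats the two error components as independent):

```latex
% Sketch of the argued decomposition; sigma_s = sampling standard
% error (what is reported), sigma_c = dispersion across defensible
% researcher choices (the "non-standard error", unreported).
\[
  \sigma_{\text{total}}^{2} \;=\; \sigma_{s}^{2} + \sigma_{c}^{2}
  \qquad\Longrightarrow\qquad
  \sigma_{\text{total}} \;\ge\; \sigma_{s},
\]
so a $t$-statistic computed with $\sigma_{s}$ alone overstates
significance whenever $\sigma_{c} > 0$.
```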
-
You mean ‘accepted should not have’
-
No. Dispersion in outcomes is higher than what we think it is based only on sampling error. A high value of the test statistic may seem highly unlikely under the null, so the null is rejected; but it is in reality within confidence bounds once we account for the vast variety of researcher choices that could have produced it, so the null should not have been rejected (a false discovery).
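A numeric sketch of that false-discovery mechanism (all numbers are hypothetical, chosen only to show the effect):

```python
from scipy.stats import norm

# Hypothetical numbers, chosen only to illustrate the mechanism.
estimate = 0.50        # point estimate
se_sampling = 0.20     # reported standard error (sampling only)
sigma_choice = 0.25    # dispersion across defensible researcher choices

# Using the reported SE alone, the null looks comfortably rejected.
z_reported = estimate / se_sampling                    # 2.50
p_reported = 2 * norm.sf(z_reported)                   # ~0.012

# Accounting for choice dispersion, the same estimate is unremarkable.
se_total = (se_sampling**2 + sigma_choice**2) ** 0.5   # ~0.32
z_total = estimate / se_total                          # ~1.56
p_total = 2 * norm.sf(z_total)                         # ~0.12

print(f"reported: z={z_reported:.2f}, p={p_reported:.3f}")
print(f"total:    z={z_total:.2f}, p={p_total:.3f}")
```

With sampling error alone, the estimate clears the 5% bar comfortably; once choice dispersion enters the error budget, the same estimate no longer does.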
-
Some of the co-authors obviously lack experience, judging by their SSRN pages. I wonder if co-authors' findings deviate from the "average" by more when they are less experienced. If so, the paper might be exaggerating the issue, because lower-ability co-authors may have seen this project as an opportunity to publish; hence, self-selection. Put differently, opportunity costs are higher for high-quality researchers. The journal peer-review process, on average, can correct for some of this non-standard error as long as the referees are "experts" in the field.
-
Good point. The paper also shows that dispersion is similar if the sample is restricted to high-quality scholars.