Oh man, p-values of 0.05 and 0.04 in Table 2. Nothing else significant. Hanging by a thread here. Is this the "run your regression and pray" kind of paper?
The Gender Problem in Economics is SOLVED!
-
Table 4, col. 4: all coefficients, including the 0.08 effect of interest, are insignificant and the sample size is tiny, yet the abstract says that "[t]he intervention significantly impacted female students' enrollment in further economics classes, increasing their likelihood to major in economics by 8 percentage points." LMAO what a joke.
-
Table 4, the last column with controls: the confidence interval is wider than 16 percentage points while the main effect is 8 percentage points. A very imprecise estimate, and not significant: a symmetric interval that wide around an 8-point estimate necessarily includes zero.
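To make that explicit, here is a minimal back-of-the-envelope sketch, assuming a symmetric interval and a normal approximation; the numbers (8 pp estimate, >16 pp interval width) are the ones quoted in this thread, not read directly from the paper's tables:

```python
# Sketch: an 8 pp point estimate whose 95% CI is wider than 16 pp must have a
# lower bound at or below zero, i.e. it cannot be significant at the 5% level.
# Numbers come from the thread above, not from the paper itself.

point_estimate = 0.08   # 8 percentage points, as claimed in the abstract
ci_width = 0.16         # ">16%" interval width quoted above, assumed symmetric

half_width = ci_width / 2
lower, upper = point_estimate - half_width, point_estimate + half_width
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")  # [0.00, 0.16]; zero sits on the boundary

# Implied standard error and t-statistic under a normal approximation
se = half_width / 1.96
print(f"implied SE ~ {se:.3f}, t ~ {point_estimate / se:.2f}")  # t ~ 1.96 at best;
# a wider interval than 0.16 means an even smaller t-statistic
```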
-
"As shown in Table 2, in 2015, the proportion of women who enrolled
in Intermediate Microeconomics within a year of Principles is around 13 percent and is the same across treatment and control groups."Two things are weird about this. One, why do the authors paint over this? One class has 11% and the other 15%, they are not "the same".
Two, and this is the confusing part, why would you have two pre-treatment cohorts in 2015 that are separate from the treatment and control cohorts in 2016? Think of it this way: there are two professors, Peter and Paul. The treatment happens in 2016, in Paul's class.
So the 2015 class difference is going to help boost the post-treatment difference in a DiD (this "control group" just means you were in class A, Professor Peter's class, rather than class B, Professor Paul's class, in the prior cohort before the treatment). A potentially spurious difference in 2015 means that even if the 2016 classes had approximately the same enrollment rate between Peter and Paul, you may still find a significant difference in the DiD (see the back-of-the-envelope sketch at the end of this post). It is not obvious this is much better than not doing a DiD at all.
Also note that the balance test fails, which shows how tenuous this result is. You could have also written the paper to show even stronger results that
a) the speech treatment causes students in the treatment class to have been born internationally rather than in America, twenty years earlier (p-value 0.00 rather than 0.05)
b) the speech treatment causes students to go back one year in time and enroll in Principles as freshmen rather than juniors (p-value 0.00 rather than 0.05)
Or maybe international students are more likely to major in econ, and students who choose to do econ as freshmen rather than juniors are more likely to major in econ.
But who knows.
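For concreteness, here is a minimal sketch of the DiD point above. The 2015 rates (11% vs 15%) are the ones quoted in this thread; which class is treatment, and the identical 13% rates assumed for 2016, are illustrative assumptions, not numbers from the paper:

```python
# Illustration only: a spurious pre-treatment gap shows up mechanically in a
# difference-in-differences estimate, even with zero post-period difference.

pre_treat, pre_control = 0.11, 0.15    # 2015 cohorts; assignment to treatment assumed
post_treat, post_control = 0.13, 0.13  # 2016 cohorts; assumed equal for illustration

did = (post_treat - post_control) - (pre_treat - pre_control)
print(f"DiD estimate: {did:+.2f}")  # +0.04, i.e. 4 pp, despite no post-period difference
```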
-
I am a man. Actually, a result like this is only credible if the data and code are publicly available. The joke is to realize that top-5 journals publish mainly empirical work rather than mathematical work, and that most empirical papers do not release their data and code. So yes, there is a considerable credibility problem in this corner of economics.
-
If you have doubts about the paper, why don’t you run a robustness check and write a comment? That’ll improve our discipline. Disparaging a paper in an anonymous forum doesn’t make economics better.
-
Is it really first-year courses that prevent women from joining PhD programs?
Why would self-respecting and talented women be encouraged to enter our field if all they see is nonsensical research grabbing all the headlines?
Remember, academia pays cr@p compared to top industry jobs. The real gratification is intellectual pursuit. If academia abandons intellectual rigor, what's the compensating differential to attract future talented women into the profession?
In the end, we scare all the good ones away and attract more rent seekers.