The absolute state of Stata users ljl
Data Colada: highly cited QJE paper falls apart when the data is analyzed properly
-
Also, I would bet that the major inference problems in RCTs come from clustering, not heteroscedasticity. Specifically, using vce(cluster whatever) when the number of clusters is not big enough to justify the asymptotics.
Quant psych people have shown, going back to Winer, that you don't need a ton of clusters in randomized experiments (Jake Westfall has a paper on this?), but I don't think the lab assumptions hold up as well in an RCT.
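For what it's worth, a minimal sketch of the small-G problem and the usual patch (variable names made up; boottest is Roodman's user-written wild-cluster-bootstrap command from SSC, not official Stata):

    // cluster-robust SEs lean on the number of clusters going to infinity;
    // with a handful of schools the t-stats are overconfident
    reg y treat, vce(cluster school)

    // wild cluster bootstrap (Cameron, Gelbach & Miller 2008) as a check
    ssc install boottest        // user-written, by David Roodman
    boottest treat, reps(9999)  // bootstrap p-value for _b[treat] = 0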
-
Stata spits out different parameter estimates for mixed-effects models/HLM than R, and I'm not sure why. Weird differences in dfs and t's are one thing; different parameter estimates suggest something else. It's easy to figure out what algorithm R uses, not so much Stata.
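One guess, not verified against any particular model: Stata's -mixed- fits by maximum likelihood unless told otherwise, while R's lmer defaults to REML, and with unbalanced data that alone shifts the variance components and, through them, the fixed effects. A sketch with made-up variables:

    // Stata's default estimation criterion for mixed models is ML:
    mixed y x || school:
    // add the reml option to match lmer's REML default in R:
    mixed y x || school:, reml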
It is embarrassing how people rely on off-the-shelf algorithms to make strong claims and don't even bother to look under the hood. My students taking undergrad metrics have wondered about this issue when comparing R to Stata.
-
Our field is full of unfounded assumptions; why is this any different?
-
"It turns out that there are five main ways to compute robust standard errors. STATA has a default way, and it is not the best way.
In other words, when in STATA you run: reg y x, robust you are not actually getting the most robust results available. Instead, you are doing something we have known to be not quite good enough since the first George W Bosh administration (the in-hindsight good old days).
The five approaches for computing robust standard errors are unhelpfully referred to as HC0, HC1, HC2, HC3, and HC4. STATA's default is HC1. That is the procedure the QJE article used to conclude regression results are inferior to randomization tests, but HC1 is known to perform poorly with 'small' samples. Long & Ervin 2000 unambiguously write "When N<250 . . . HC3 should be used". A third of the simulations in the QJE paper have N=20, another third N=200."
Economists are just second-raters all around.
Theorists are second-rate, failed mathematicians.
Econometricians are second-rate, failed statisticians.
Even reg monkeys are second-rate, failed data scientists.
-
This would deserve to be pinned. I bet almost no one is aware of this inadequate default. I surely was not.
If you did your estimates in a modern language you would be aware of this issue, since you have to explicitly choose the standard errors; there is no default. I'm surprised reg monkeys haven't heard of randomization tests.
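Even in Stata both fixes are one line (a sketch with placeholder y and x; permute is the built-in randomization-test command):

    // the default "robust" is HC1; HC2/HC3 are one option away
    reg y x, robust      // HC1
    reg y x, vce(hc3)    // HC3, per Long & Ervin (2000) for small N

    // basic randomization test: permute the (randomized) regressor
    permute x _b[x], reps(999): reg y x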
-
You can look at Stata's documentation and see what it does. Don't blame guns, blame gun owners.
-
...
Economists are just second-raters all around.
Theorists are second-rate, failed mathematicians.
Econometricians are second-rate, failed statisticians.
Even reg monkeys are second-rate, failed data scientists.
This is amazingly self-aware. Are you sure you're an economist?
-
Doesn't matter. Whether the QJE paper is right or wrong, it will have zero impact on the real world anyway.
The article is just a pretext for rejecting experimental papers reviewers wanted to reject anyway. There are tons of papers like this; it's nothing new. It's just rare that a pretext paper is this sloppy.
-
I am a Stata monkey and I am angry at the Stata developers. Why do you call one option "robust"? It just leads users to pick that one.
They are doing the same thing now with the whole treatment-effects and diff-in-diff suite. What a garbage decision, programming it all up point-and-click so nobody ever needs to read the documentation.
-
Also, don't forget that in Stata merge m:m does not do a cross product as you would expect. It does what amounts to an arbitrary m:1 merge, pairing rows within each key by whatever the current sort order happens to be. When confronted with this weird behavior, the Stata account on Twitter replies that it is documented in the manual, with a smiley face, as if it were legitimate so long as it is documented.
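If you actually want all pairwise combinations within each key, -joinby- is the command that does it (a minimal sketch; filenames and the key variable are made up):

    // merge m:m pairs rows within a key by sort order, which is essentially
    // meaningless; joinby forms the full within-key cross product instead
    use master.dta, clear
    joinby id using other.dta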