Is Andrew really so dense?
Gelman vs. Sunstein

No one is harming statistics as much as Gelman with his campaign against statistical tests...
Then it's not just Gelman but a good chunk of statisticians who are unhappy about the misuse of statistical testing, including frequentists who do not believe p-values are useless the way Bayesians do.
And Gelman is absolutely right about Sunstein, even though replication most threatens the academics with power.

Gelman is so overrated. Isn't he the guy who says we shouldn't use robust standard errors, and that we should always use random effects rather than FE because the data estimate theta (in Stata's notation), so we should let the data choose?
I'd have quit statistics if I'd said such stupid things. And, oh yeah, he's also wrong about p-values. Before someone says, "But but Bayesian," please tell me how to obtain a credible interval in a panel-data analysis with an unspecified error distribution and heteroskedasticity and serial correlation of unknown form.
P-values are useful information. We just need people to use them properly.

Imagine being this dumb...

To be concrete about why you're wrong: if the interval is not credible, why is the p-value credible? If you have rational expectations about the inferential problems created by issues with the standard errors, can you adjust the reported confidence interval? Is a noisy signal worthless? Honestly, you have no vision.

Because I can at least make the p-value robust to problems that I know are present in the vast majority of applications. How about this: flip through the past five years of the top 10 journals that publish empirical micro work, and tell me how many use Bayesian methods. Then tell me what the market failure is. None of the top empirical micro people use Bayesian methods, and they all report p-values. We all know what's what.
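For what it's worth, "making the p-value robust" in this literature usually means cluster-robust (sandwich) standard errors, clustered by panel unit, which tolerate heteroskedasticity and arbitrary within-unit serial correlation. A minimal numpy sketch (the function name, the CR1 small-sample factor, and the normal approximation for p-values are illustrative choices, not anything the poster specified):

```python
import math
import numpy as np

def cluster_robust_ols(y, X, groups):
    """Pooled OLS with cluster-robust (CR1) standard errors.

    Clustering by panel unit is the standard fix for heteroskedasticity
    and within-unit serial correlation of unknown form. p-values use a
    normal approximation to the t statistic.
    """
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    # "Meat" of the sandwich: sum of per-cluster score outer products.
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        sg = X[groups == g].T @ resid[groups == g]
        meat += np.outer(sg, sg)
    G = np.unique(groups).size
    c = (G / (G - 1)) * ((n - 1) / (n - k))   # CR1 small-sample factor
    V = c * bread @ meat @ bread
    se = np.sqrt(np.diag(V))
    z = np.abs(beta) / se
    p = np.array([1 - math.erf(zi / math.sqrt(2)) for zi in z])  # 2*(1 - Phi(|z|))
    return beta, se, p
```

With a unit-level error component, the usual i.i.d. standard errors would be too small; the clustered ones are not, which is the sense in which the reported p-value is "robust."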

Yes, that's a common misuse of the statistic and a potential avenue for p-hacking.
What if readers could see, without any extra calculation, those ultra-wide confidence intervals that almost include 0 sitting right next to the p < .05 stars, instead of just the point estimate? P-hacked or not, would people take those results seriously if an economically irrelevant magnitude were compatible with the observed data?
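To make that concrete, here is a tiny sketch (normal approximation, hypothetical numbers) of a "starred" estimate whose interval nearly touches zero:

```python
import math

def summarize(beta_hat, se):
    """95% CI and two-sided p-value under a normal approximation."""
    lo, hi = beta_hat - 1.96 * se, beta_hat + 1.96 * se
    p = 1 - math.erf(abs(beta_hat) / se / math.sqrt(2))  # 2*(1 - Phi(|z|))
    return (lo, hi), p

# Hypothetical estimate that earns a p < .05 star:
(lo, hi), p = summarize(2.0, 1.0)
print(f"CI = ({lo:.2f}, {hi:.2f}), p = {p:.3f}")  # CI = (0.04, 3.96), p = 0.046
```

Anything from an economically negligible 0.04 to a huge 3.96 is compatible with the data here, which the bare star hides.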
Also, RE will give exactly the same point estimates as FE if done correctly.
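The equivalence being claimed is presumably the Mundlak (correlated-random-effects) device: once group means of the regressors are added, the slope on x coincides exactly with the within (FE) slope. A numpy sketch with simulated data (names and numbers illustrative; pooled OLS is used for simplicity, though Mundlak's result covers RE-GLS as well):

```python
import numpy as np

rng = np.random.default_rng(1)
G, T = 30, 4
g = np.repeat(np.arange(G), T)
alpha = rng.normal(size=G)             # unit effect, correlated with x below
x = alpha[g] + rng.normal(size=G * T)  # regressor correlated with the unit effect
y = 0.5 * x + alpha[g] + rng.normal(size=G * T)

def group_mean(v):
    # Per-unit mean of v, broadcast back to the full panel.
    return (np.bincount(g, weights=v) / np.bincount(g))[g]

# Within (FE) estimator: OLS on group-demeaned data.
xd, yd = x - group_mean(x), y - group_mean(y)
beta_fe = (xd @ yd) / (xd @ xd)

# Mundlak device: pooled OLS of y on [1, x, group mean of x].
# The coefficient on x is numerically identical to the FE slope.
Z = np.column_stack([np.ones_like(x), x, group_mean(x)])
beta_cre = np.linalg.lstsq(Z, y, rcond=None)[0][1]
```

The equality is exact, not approximate: the demeaned regressor is orthogonal to both the constant and the group means, so the two estimators solve the same normal equation for the slope.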