A Bayesian redesign of the first probability/statistics course

The traditional calculus-based introduction to statistical inference consists of a semester of probability followed by a semester of frequentist inference. Cobb (2015) challenges the statistical education community to rethink the undergraduate statistics curriculum. In particular, he suggests that we should focus on two goals: making fundamental concepts accessible and minimizing prerequisites to research. Using five underlying principles of Cobb, we describe a new calculus-based introduction to statistics based on simulation-based Bayesian computation.

> calculus-based introduction to statistics based on simulation-based Bayesian computation.

Good stuff. I recall reading a long discussion on Gelman's blog about this. He has been championing simulation/computation using Stan as a uniform and elegant way to teach statistics too.
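For concreteness, here is a minimal sketch of what simulation-based Bayesian computation can look like in a first course (a hypothetical classroom example, not taken from the paper): approximate the posterior for a binomial proportion by drawing from the prior, simulating data, and keeping the draws that reproduce the observed count.

```python
import random

# Rejection-sampling approximation of the posterior for a binomial
# proportion: draw theta from a uniform prior, simulate n trials, and
# keep the draws whose simulated count matches the observed one.
# (Hypothetical data: 12 successes in 20 trials.)
random.seed(1)
n, y_obs = 20, 12
posterior = []
while len(posterior) < 2000:
    theta = random.random()                            # uniform prior draw
    y_sim = sum(random.random() < theta for _ in range(n))
    if y_sim == y_obs:                                 # keep matching simulations
        posterior.append(theta)

posterior.sort()
lo, hi = posterior[50], posterior[1949]                # ~95% central interval
print(round(sum(posterior) / len(posterior), 2), round(lo, 2), round(hi, 2))
```

No calculus is needed to run this, yet the retained draws approximate the Beta posterior exactly in the limit, which is presumably the pedagogical appeal.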

I would like to see some big shot taking the lead in redesigning undergrad econometrics teaching. Most undergrad metrics textbooks cover the same content as 30 years ago.
"Mastering Metrics" is an interesting approach but too much about applied economics academic research. That is not what undergrads choosing economics are interested in.
> I would like to see some big shot taking the lead in redesigning undergrad econometrics teaching. Most undergrad metrics textbooks cover the same content as 30 years ago.
> "Mastering Metrics" is an interesting approach but too much about applied economics academic research. That is not what undergrads choosing economics are interested in.

Mastering Metrics is meant as an undergrad econometrics text, not for a first-ever stats intro.

> I would like to see some big shot taking the lead in redesigning undergrad econometrics teaching. Most undergrad metrics textbooks cover the same content as 30 years ago.
> "Mastering Metrics" is an interesting approach but too much about applied economics academic research. That is not what undergrads choosing economics are interested in.
> Mastering Metrics is meant as an undergrad econometrics text, not for a first-ever stats intro.

Which part of "I would like to see some big shot taking the lead in redesigning undergrad econometrics teaching" did you not understand?

> I would like to see some big shot taking the lead in redesigning undergrad econometrics teaching. Most undergrad metrics textbooks cover the same content as 30 years ago.
> "Mastering Metrics" is an interesting approach but too much about applied economics academic research. That is not what undergrads choosing economics are interested in.

Again, please tell me why it might be problematic in this case to cover the same content as 30 years ago. This content is the foundation for quantitative economic work.

> Probability theory + frequentist inference has been popular for years. Why change?
> https://www.econjobrumors.com/topic/replicationineconisdisaster

Dude, this has nothing to do with the difference between frequentist and Bayesian approaches.

It does. NHST, with its stringent p < .05 decision rule, led to rampant publication bias and explains the comically distorted distribution of z-values in journal articles.

> Probability theory + frequentist inference has been popular for years. Why change?
> https://www.econjobrumors.com/topic/replicationineconisdisaster
> Dude, this has nothing to do with the difference between frequentist and Bayesian approaches.
> It does. NHST, with its stringent p < .05 decision rule, led to rampant publication bias and explains the comically distorted distribution of z-values in journal articles.
> https://marginalrevolution.com/marginalrevolution/2020/12/thedistributionofonemillionzvalues.html

Any publication rule that conditions on the value of the results is going to lead to the same thing. If we woke up tomorrow and everyone used credible intervals, referees would complain that your 95% credible intervals cover zero. The basis of the problem is that having a "positive" result is the criterion for publication, not whether one uses frequentist or Bayesian methods.
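The selection mechanism is easy to demonstrate by simulation (a made-up illustration, not the analysis behind the linked posts): if every true effect is zero and only "significant" results get published, the published z-values show exactly the hollowed-out distribution being complained about.

```python
import random

# Simulate publication selection on significance: all true effects are
# zero, each study reports z ~ Normal(0, 1), and only studies with
# |z| > 1.96 are "published".  The published distribution is bimodal
# with a gap around zero, regardless of the inferential framework.
random.seed(0)
all_z = [random.gauss(0.0, 1.0) for _ in range(100_000)]
published = [z for z in all_z if abs(z) > 1.96]

share = len(published) / len(all_z)
print(f"published share: {share:.3f}")                        # close to 0.05
print(f"min |z| among published: {min(abs(z) for z in published):.2f}")
```

Swapping the filter for "95% credible interval excludes zero" would produce the same gap, which is the point of the post above: the distortion comes from conditioning publication on the result, not from the choice of frequentist versus Bayesian machinery.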

> Any publication rule that conditions on the value of the results is going to lead to the same thing. If we woke up tomorrow and everyone used credible intervals, referees would complain that your 95% credible intervals cover zero. The basis of the problem is that having a "positive" result is the criterion for publication, not whether one uses frequentist or Bayesian methods.

This. In frequentist statistics, if you have a test statistic and a decision rule, you can calculate the p-value and interpret it as you wish. There is no need to check whether it is smaller than 0.05 or any other threshold. This is what I teach students in my statistics classes.
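To illustrate the point being taught (a small sketch, assuming a z test against a standard normal null), the p-value is just a number you compute and report; no 0.05 cutoff appears anywhere in the calculation.

```python
import math

def two_sided_p(z: float) -> float:
    # Two-sided p-value for a z statistic under a standard normal null:
    # P(|Z| >= |z|), computed via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2.0))

# Report the p-value itself and let the reader interpret it;
# no significance threshold is involved.
print(round(two_sided_p(1.96), 3))   # ≈ 0.05
print(round(two_sided_p(2.58), 3))   # ≈ 0.01
```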

> Any publication rule that conditions on the value of the results is going to lead to the same thing. If we woke up tomorrow and everyone used credible intervals, referees would complain that your 95% credible intervals cover zero. The basis of the problem is that having a "positive" result is the criterion for publication, not whether one uses frequentist or Bayesian methods.

There would be no reason to adopt such a rule, since Bayesian inference doesn't require any declaration of significant versus insignificant. Also, credible intervals only exist to make Bayesian methods palatable to insufferable frequentists; it's unnecessary to make them the focal point of inference.
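For readers unfamiliar with the term: a credible interval is read directly as a posterior probability statement, and it falls out of the posterior draws for free. A minimal sketch with made-up data (uniform prior, 12 successes in 20 trials, so a Beta(13, 9) posterior):

```python
import random

# 95% credible interval for a proportion, taken as the central quantiles
# of draws from the Beta(13, 9) posterior (uniform prior + 12/20 data).
random.seed(2)
draws = sorted(random.betavariate(12 + 1, 20 - 12 + 1) for _ in range(10_000))
lo, hi = draws[249], draws[9749]     # 2.5% and 97.5% quantiles
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

Nothing in the calculation forces a significant/insignificant declaration; whether one then elevates the interval to the centerpiece of inference is the stylistic question the post above is pushing back on.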