Currently doing a PhD but haven't explored Bayesian stuff much. There's a course being offered next fall and I'm wondering whether it has much application in theoretical/applied econometrics.
How useful is Bayesian econometrics?
-
Three advantages: 1. It shapes your thinking in stats and metrics; 2. It has a computational focus with general takeaway messages, which is exactly what you'll need in most situations you'll encounter; 3. You'll simply be conversant in this area, so you won't have to ask the question you asked in your original post. You'll be able to choose the tools you need, or advise others on the appropriate tools they would need (Bayesian or not).
Downsides: nothing, really.
And I'm saying this without doing Bayesian stuff myself.
-
I'm wondering whether it has much application in theoretical/applied econometrics.
Traditionally the Bayesian approach has been the escape route for when your time-series sample is too small to estimate your VAR models the old frequentist way. Quite frankly, there aren't many applications beyond that.
I'm willing to change my mind if someone can point me to a recent top-5 publication using Bayesian metrics (but not Bayesian VAR).
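For intuition on why Bayes is the small-sample escape route: with a conjugate Gaussian prior on the VAR coefficients, the posterior mean is a ridge-type estimator shrunk toward the prior mean, and a Minnesota-style prior shrinks toward a random walk. A minimal numpy sketch, assuming a bivariate VAR(1) with a known error variance for simplicity (the sample size, prior tightness `lam`, and all other numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a short bivariate VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.7]])
T, k = 40, 2                      # short sample, where plain OLS gets noisy
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + rng.normal(scale=0.5, size=k)

X, Y = y[:-1], y[1:]              # lagged regressors and targets

# OLS, the "old frequentist way": (X'X)^{-1} X'Y
A_ols = np.linalg.solve(X.T @ X, X.T @ Y).T

# Conjugate Gaussian prior centered on a random walk (Minnesota flavor):
# prior mean A0 = I, prior precision lam * I, error variance sig2 taken as known.
# Posterior mean: (X'X / sig2 + lam I)^{-1} (X'Y / sig2 + lam A0)
sig2, lam = 0.25, 4.0
A0 = np.eye(k)
A_bayes = np.linalg.solve(X.T @ X / sig2 + lam * np.eye(k),
                          X.T @ Y / sig2 + lam * A0).T

print("OLS estimate:\n", A_ols.round(2))
print("Posterior mean (shrunk toward a random walk):\n", A_bayes.round(2))
```

The shrinkage is the whole trick: with T = 40 the OLS coefficients are noisy, while the posterior mean is pulled toward the random-walk prior, which is what makes larger systems estimable at all.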
-
Very. With the current ML hype, Bayesian methods are the future of econometrics.
Bayesians are like bitcoiners: for 300 years they've been saying "few people are Bayesian now but everybody will be Bayesian in the future".
To be frank, do not worship Bayesian methods; instead, treat Bayesian updating as a benchmark tractable learning scheme. We could then modify it by, say, incorporating recency bias etc., which would be quite interesting.
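One tractable way to build in that recency bias is exponential forgetting: discount the accumulated evidence before each conjugate update so that recent observations weigh more. A minimal Beta-Bernoulli sketch in Python (the discount factor `delta` and the change-point setup are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Beta-Bernoulli updating with "recency bias": decay the pseudo-counts by
# delta before each update so old evidence is discounted.
# delta = 1.0 recovers standard Bayesian updating.
def filtered_means(x, a0=1.0, b0=1.0, delta=0.95):
    a, b, means = a0, b0, []
    for xi in x:
        a = delta * a + xi          # decayed successes + new observation
        b = delta * b + (1 - xi)    # decayed failures + new observation
        means.append(a / (a + b))   # current posterior mean
    return np.array(means)

# Success probability jumps from 0.2 to 0.8 halfway through the sample;
# the forgetting learner tracks the change, plain Bayes averages over it.
x = np.concatenate([rng.binomial(1, 0.2, 200), rng.binomial(1, 0.8, 200)])
print("plain Bayes, final mean:    ", filtered_means(x, delta=1.0)[-1].round(2))
print("with forgetting, final mean:", filtered_means(x, delta=0.95)[-1].round(2))
```

With delta = 0.95 the effective memory is roughly 1/(1 - delta) = 20 observations, so the final estimate sits near 0.8 rather than the full-sample average of about 0.5.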
-
Under regularity conditions, if the parameters have a finite dimension then the Bayesian posteriors asymptotically shrink to the maximum likelihood estimator, and therefore to the parameter of interest. Moreover, any centrality measure of the posterior (posterior mean or mode) has the same asymptotic distribution as maximum likelihood. In practice there is absolutely no advantage of one over the other; they work the same. Any other statement is just cheap talk from illiterate people.
However, if the parameters have infinite dimension (e.g., a nonparametric curve modeled by sieves), Bayesian estimators can fail in important ways (inconsistency); there are well-known examples (Diaconis and Freedman, 1986), and only in some specific cases do Bayesian nonparametric estimators work well.
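The finite-dimensional result described above is the Bernstein-von Mises theorem, and it is easy to watch numerically: in a conjugate Bernoulli model the posterior mean and the MLE differ by O(1/n), even under a deliberately misplaced prior. A minimal sketch (the true value and prior are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 0.3                       # true success probability

# Beta(2, 5) prior, deliberately centered away from theta
a0, b0 = 2.0, 5.0

for n in [10, 100, 1000, 10000]:
    x = rng.binomial(1, theta, size=n)
    s = x.sum()
    mle = s / n                              # maximum likelihood estimate
    post_mean = (a0 + s) / (a0 + b0 + n)     # posterior mean under the Beta prior
    print(f"n={n:>6}  MLE={mle:.4f}  posterior mean={post_mean:.4f}  "
          f"gap={abs(mle - post_mean):.5f}")
```

The gap shrinks at rate 1/n, so at any reasonable sample size the prior is washed out and the two estimators are interchangeable, exactly as stated.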
-
Machine learning is a term that encompasses many different statistical problems, from the selection of OLS regressions out of a very large pool to certain nonparametric regression or classification methods, mostly related to neural networks. And no, they are not Bayesian in general. You can use Bayes or another approach; Bayes is more demanding in terms of assumptions (as maximum likelihood is also more demanding), at least for nonlinear models.
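To make the "Bayes or another approach" point concrete, here is a minimal scikit-learn sketch of both routes on the same pool of candidate regressors (the toy data and tuning values are my own, purely illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, BayesianRidge

rng = np.random.default_rng(0)

# 100 observations, 20 candidate regressors, only the first 3 matter
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=1.0, size=n)

# Frequentist route: the lasso penalty typically zeroes out irrelevant coefficients
lasso = Lasso(alpha=0.1).fit(X, y)
print("lasso nonzero coefficients:", np.flatnonzero(lasso.coef_))

# Bayesian route: Gaussian priors shrink all coefficients toward zero instead,
# without setting any of them exactly to zero
br = BayesianRidge().fit(X, y)
print("BayesianRidge coefficients (first 5):", br.coef_[:5].round(2))
```

Same selection problem, two philosophies: the lasso makes a hard frequentist selection, while the Bayesian route expresses the regularization through priors and returns a full posterior over the coefficients.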
-
Oh, this is a nice way of putting it (the finite- vs infinite-dimensional distinction above).
-
Do people actually use mismatched priors and posterior distributions in practice? The trade-offs in terms of what you can say about the models seem mostly not worth it.
-
You don't need to show Bayesian approaches are equivalent to frequentist approaches "under" certain assumptions and/or conditions. The reason the whole Bayesian ecosystem still appeals to a huge chunk of statisticians is that its underlying philosophy begs to differ from the frequentist one, even though Bayesians admit the practical limitations. But they have come up with data-driven fixes recently.