Once a Bayesian always a Bayesian
Bayesian priors are a joke

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. I have respect for Bayesian inference, but I think it will work out a lot better in fields like computer science than in economics. Most economists say they feel uncomfortable starting to analyze data with a subjective prior. All the texts I have seen start the analysis by assuming some kind of "uninformative prior". But then, if you use the uninformative prior, the results often end up being the same as in classical analysis. Only the interpretation of the results changes.

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. [...]
Lol, so you really think that by NOT doing Bayesian analysis you are NOT using any prior assumptions? I think you should read your Casella again (if you ever have).

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. [...]
Lol, so you really think that by NOT doing Bayesian analysis you are NOT using any prior assumptions?
Yes, I think so. The apologia that "you all have assumptions anyway, so why not just use Bayesian analysis" is just appalling.

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. [...]
1) Prior beliefs in Bayesian analysis are no different from the prior beliefs embedded in the likelihood function: that regressors enter the model linearly, that certain variables are Gaussian, etc.
2) The best way to set a prior is neither subjective nor objective. You should instead try to use priors that are either a) based on previous historical data, or b) hierarchical in nature, so you can combine inference across multiple data sets. This is one of the main advantages of Bayesianism.
3) It is totally wrong to say that a Bayesian analysis under an objective prior is 'no different to classical analysis'. The class of problems that can be solved using frequentist methods is tiny. Being a Bayesian means you can fit models that are much more complex than frequentists can, because unlike them you have principled methods to eliminate nuisance parameters, you can combine multiple types of information (continuous, categorical, etc.) in one model, and you don't need to rely on bootstrapping for confidence intervals, so there is no problem with time series analysis, etc. etc.
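Point 2a can be made concrete with a conjugate toy model. The sketch below is my own illustration, not anything from the thread; the `normal_update` helper and all the numbers are made up. It encodes last year's observations as a normal prior and updates it with this year's data:

```python
import numpy as np

def normal_update(prior_mean, prior_var, data, noise_var):
    """Conjugate normal-normal update with known observation noise:
    the posterior precision is the sum of prior and data precisions."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

# Prior built from historical data (made-up numbers).
historical = np.array([1.1, 0.9, 1.3, 1.0])
prior_mean = historical.mean()                        # 1.075
prior_var = historical.var(ddof=1) / len(historical)  # variance of that mean

# Update with this year's data; the posterior mean lands between the
# prior mean and the new sample mean, weighted by their precisions.
new_data = np.array([1.4, 1.2, 1.5])
post_mean, post_var = normal_update(prior_mean, prior_var,
                                    new_data, noise_var=0.04)
```

A frequentist analysis of `new_data` alone would throw the historical information away; the Bayesian update pools both sources automatically, which is exactly the point of 2a.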

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. [...]
1) Prior beliefs in Bayesian analysis are no different from the prior beliefs embedded in the likelihood function: that regressors enter the model linearly, that certain variables are Gaussian, etc.
That's not entirely true. For example, the linear-regressor assumption is something you use in both classical _and_ Bayesian analysis. After all, you need to assume a model of some kind. The argument is not about this. It's about whether you can bring in assumptions about what the parameter may be. A lot of statisticians and economists do not feel comfortable bringing prior assumptions about parameters into the analysis. Of course, you can argue that Bayesian analysis still works because you can use some uninformative prior, but at that point people wonder why not just use classical inference.

But I feel more comfortable with the Bayesian formulation of our knowledge about a parameter. The classical approach is complicated, in the sense that sometimes we treat the point estimate as our best knowledge about the parameter (and so we carry that number into further interpretation, such as decomposition or welfare calculation), and sometimes we put a lot of faith in the null hypothesis of a test, such as a zero coefficient, and can only reject that hypothesis (that faith) when the data give strong evidence otherwise. It's not coherent at all.

The point is that viewing assumptions about parameters as 'subjective' while viewing assumptions about the likelihood (linearity, etc.) as 'non-subjective' is both silly and incoherent. When you perform a statistical analysis, you have knowledge about the process being studied. That knowledge gets embedded into both the form of the likelihood and the prior for the parameters. Neither is any more 'subjective' or 'objective' than the other; they are two sides of the same coin. Objecting to subjective priors is like objecting to parametric likelihood functions and insisting that all analysis should be nonparametric to avoid specification errors. There is merit in that position, but it is extreme.

To be fair to the OP, I have never seen a textbook that clearly explains why having subjective prior beliefs is supposedly an advantage. [...]

All texts I have seen start the analysis by assuming some kind of an "uninformative prior". But then, if you use the uninformative prior, the results often end up being the same as in classical analysis.
It depends what you mean by 'classical analysis'. If you use a uniform prior (which is not necessarily uninformative), then the posterior is just the normalized likelihood. So your inference will be very similar to what a member of the likelihood school of statistical inference would get. However, this is NOT frequentism, and the difference cannot be overstressed. Frequentism is not equivalent to likelihood inference, and these two schools of philosophy clashed numerous times throughout the 20th century. Frequentists tend to reject the likelihood principle, for example (pretty much all frequentist hypothesis testing violates it).
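The uniform-prior remark is easy to check numerically. In this sketch (my own toy example, with assumed counts of 7 successes and 3 failures), a Bernoulli likelihood combined with a Uniform(0,1) prior gives a posterior that is exactly the normalized likelihood:

```python
import numpy as np

s, f = 7, 3                          # assumed data: successes, failures
p = np.linspace(0.001, 0.999, 999)   # grid over the parameter

lik = p**s * (1.0 - p)**f            # Bernoulli likelihood on the grid
post = lik * 1.0                     # Uniform(0,1) prior density is 1

# Normalizing both on the grid shows they are the same curve.
post_n = post / post.sum()
lik_n = lik / lik.sum()
```

The posterior mode sits at s/(s+f) = 0.7, the MLE, which is why 'uniform-prior Bayes' and likelihood inference produce numerically similar answers even though their interpretations differ.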
To take an obvious example, frequentists tend to struggle deeply with confidence intervals when nuisance parameters are present, since there usually are no pivotal quantities, and replacing the nuisance parameters with point-estimate MLEs means you don't get proper coverage (and is also completely unjustifiable from any reasonable practical or philosophical viewpoint).

Simple example: suppose X_1, ..., X_10 are observations from a location-scale Student-t family with unknown degrees of freedom v. There are hence three parameters: v, mu, and sigma. You want a) a confidence interval for mu and sigma, and b) the predictive distribution for the future observation X_11.
This is completely trivial for a Bayesian. Good luck doing it as a frequentist (I'm not saying it can't be done [I have no idea], but it would probably be a publishable research question rather than something you could do in 10 minutes by just putting sensible noninformative priors on the model parameters and running MCMC).
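Here is a rough sketch of the 10-minute Bayesian route the poster describes — my own toy implementation, not anything from the thread. It runs a random-walk Metropolis sampler for the location-scale Student-t model, with flat priors on (mu, log sigma, log nu) as a crude stand-in for 'sensible noninformative priors', and the data simulated for illustration:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
x = 1.0 + 2.0 * rng.standard_t(df=5, size=10)   # simulated X_1..X_10

def log_post(mu, log_sigma, log_nu):
    """Log-posterior: location-scale Student-t log-likelihood plus
    flat priors on (mu, log sigma, log nu)."""
    sigma, nu = np.exp(log_sigma), np.exp(log_nu)
    z = (x - mu) / sigma
    ll = len(x) * (lgamma((nu + 1) / 2) - lgamma(nu / 2)
                   - 0.5 * np.log(nu * np.pi) - np.log(sigma))
    return ll - (nu + 1) / 2 * np.sum(np.log1p(z ** 2 / nu))

# Random-walk Metropolis over theta = (mu, log sigma, log nu).
theta = np.array([0.0, 0.0, np.log(5.0)])
lp = log_post(*theta)
draws = []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal(3)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept step
        theta, lp = prop, lp_prop
    draws.append(theta.copy())
draws = np.array(draws[5000:])                  # discard burn-in

# a) 95% credible interval for mu; b) posterior predictive for X_11.
mu_lo, mu_hi = np.percentile(draws[:, 0], [2.5, 97.5])
idx = rng.integers(0, len(draws), size=2000)
x11 = (draws[idx, 0]
       + np.exp(draws[idx, 1]) * rng.standard_t(np.exp(draws[idx, 2])))
```

Nuisance parameters (sigma, nu) are 'eliminated' simply by drawing them along with mu and then marginalizing, which is the principled mechanism the earlier reply was pointing at. (Strictly speaking the poster asked for intervals, not this exact sampler; treat it as one possible sketch.)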

The Bayesian propaganda is in full force, I see. So, a reality check: every Bayesian model requires full specification of the likelihood function. Assuming a linear functional form is not the same as having a prior over parameters: both frequentist and Bayesian methods usually work with the former, but only Bayesians require the latter. Finally, given a likelihood, one could always just maximize it and obtain frequentist point estimates, asymptotic standard errors, etc. That Bayesians have developed some nice tools to deal with complicated technical problems is good, but one doesn't have to be wedded to a Bayesian philosophy of inference to use those methods (cf. Chernozhukov & Hong, 2003).
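The 'just maximize it' step can be sketched in a few lines. This toy example (assumed normal data; not from the thread) maximizes the log-likelihood numerically and reads a rough asymptotic standard error off the inverse-Hessian approximation that BFGS maintains:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=200)    # simulated data

def neg_loglik(theta):
    """Negative Gaussian log-likelihood in (mu, log sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.sum(0.5 * ((x - mu) / sigma) ** 2 + log_sigma
                  + 0.5 * np.log(2 * np.pi))

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]))  # BFGS by default
mu_hat = res.x[0]                     # MLE of mu (= sample mean here)
se_mu = np.sqrt(res.hess_inv[0, 0])   # rough asymptotic standard error
```

No prior appears anywhere: the same likelihood a Bayesian would write down drives a purely frequentist analysis, which is the point about not being wedded to the philosophy.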