If you read a Bayesian paper, the argument often made in favour of Bayesian methods over Maximum Likelihood is that the former have built-in protection against overfitting through prior specification (i.e. an inbuilt form of regularization).

This makes sense. But two things don't make sense to me:

(a) Surely an incorrect prior specification can lead to underfitting the data? Why should we be less worried about this problem than about overfitting?

(b) The extent of regularization is determined by the variance term in the prior (a large variance means the likelihood dominates and the solution approaches the MLE). But surely we have no way of knowing whether our prior specification underfits or overfits the data unless we hold out part of the data and check after the fact? In the absence of a holdout set, we have no idea whether our chosen prior variance is too informative or not informative enough, i.e. we don't know whether our choice prevents overfitting or causes underfitting.
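To make (b) concrete, here is a toy sketch of what I mean (my own example, not from any paper): for linear regression with a zero-mean Gaussian prior on the weights, the MAP estimate reduces to ridge regression with penalty lambda = sigma²/tau², where sigma² is the noise variance and tau² is the prior variance. As tau² grows, the solution approaches the MLE; as it shrinks, the weights are pulled toward zero. The data and the specific variance values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary toy data: y = 2x + noise
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)
X = np.column_stack([np.ones(50), x])  # design matrix with intercept

sigma2 = 0.25  # noise variance, assumed known here for simplicity


def map_estimate(X, y, tau2):
    """MAP weights under a N(0, tau2 * I) prior on the coefficients.

    Equivalent to ridge regression with lambda = sigma2 / tau2.
    """
    lam = sigma2 / tau2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


w_mle = np.linalg.lstsq(X, y, rcond=None)[0]  # MLE / OLS solution
w_tight = map_estimate(X, y, tau2=0.01)       # informative prior: shrunk toward 0
w_vague = map_estimate(X, y, tau2=1e6)        # vague prior: essentially the MLE
```

With the vague prior the MAP estimate is numerically indistinguishable from the MLE, while the tight prior visibly shrinks the slope below its MLE value; but without held-out data, nothing in this fit tells me which tau² is appropriate.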

Thanks.