My friend, not everyone dwells in the asymptotic world. If an estimator's sampling distribution is not symmetric about the mean in finite samples, why should we pretend that N tends to infinity so that our estimator becomes symmetric about the mean?
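A quick Monte Carlo sketch of this point, under assumed toy choices (exponential data with hypothetical rate 2, the MLE of the rate being the reciprocal of the sample mean): the finite-sample distribution of the MLE is visibly right-skewed for small n and only approaches symmetry as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0        # true exponential rate (hypothetical value for illustration)
reps = 5_000     # Monte Carlo replicates of the whole experiment

def sample_skewness(v):
    # standardized third central moment of a sample
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean()) ** 3) / v.std() ** 3)

skews = {}
for n in (10, 100, 2_000):
    # MLE of the rate is 1 / sample mean; simulate its sampling distribution
    xbar = rng.exponential(scale=1 / lam, size=(reps, n)).mean(axis=1)
    skews[n] = sample_skewness(1.0 / xbar)
    print(f"n={n:5d}  skewness of MLE sampling distribution = {skews[n]:.3f}")
```

The skewness is large and positive at n = 10 and shrinks toward zero at n = 2000, which is exactly the asymmetry that the asymptotic normal approximation erases.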

Under regularity conditions, if the parameter is finite-dimensional, the Bayesian posterior asymptotically concentrates around the maximum likelihood estimator, and therefore around the parameter of interest. Moreover, any centrality measure of the posterior (posterior mean or posterior mode) has the same asymptotic distribution as the maximum likelihood estimator. In practice there is absolutely no advantage of one over the other; they work the same. Any other statement is just cheap talk from illiterate people.
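The agreement between posterior summaries and the MLE is easy to see in a conjugate toy model. A minimal sketch, assuming Bernoulli data with a hypothetical true probability of 0.3 and an arbitrary Beta(2, 2) prior: the posterior mean and mode differ from the MLE by O(1/n) and become indistinguishable as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3        # true success probability (hypothetical)
a, b = 2.0, 2.0     # Beta(2, 2) prior, an assumption for illustration

gaps = {}
for n in (10, 100, 10_000):
    x = rng.binomial(1, p_true, size=n)
    s = int(x.sum())
    mle = s / n                                  # maximum likelihood estimate
    post_mean = (a + s) / (a + b + n)            # posterior mean
    post_mode = (a + s - 1) / (a + b + n - 2)    # posterior mode (MAP)
    gaps[n] = abs(post_mean - mle)
    print(f"n={n:6d}  MLE={mle:.4f}  post.mean={post_mean:.4f}  "
          f"post.mode={post_mode:.4f}  |mean - MLE|={gaps[n]:.5f}")
```

For a fixed prior the gap is bounded by a constant over n, so by n = 10,000 all three estimates agree to several decimal places, which is the practical content of the asymptotic equivalence.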

However, if the parameter is infinite-dimensional (say, a nonparametric curve modeled by sieves), Bayesian estimators can fail badly (they can be inconsistent); there are well-known examples, and only in specific cases do Bayesian nonparametric estimators work well.