I just want to understand the advantages...
What’s wrong with p-values?
Why aren’t prior assumptions a problem?
Dunno, just a guess, maybe because they need to constantly update some variable (propensity of a user to be interested in X, to click on Y, to buy Z, etc.) instead of re-estimating everything from scratch each time?
Bayesian algorithms (think e.g. the Kalman filter) may be quite good at doing so?
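For what it's worth, the "constantly update a propensity" idea can be sketched with a Beta-Bernoulli conjugate pair for a click-through rate. This is an illustrative toy, not anyone's actual system: the posterior after each click is again a Beta, so only two counters need to be stored rather than the full history.

```python
# Online Bayesian update of a click-through rate with a Beta prior.
# Beta(a, b) is conjugate to Bernoulli clicks: after observing a click
# (1) or a non-click (0), the posterior is again a Beta, so we keep
# two counters instead of re-fitting on all past data.

def update(a, b, clicked):
    """One conjugate update step: returns the new (a, b)."""
    return (a + 1, b) if clicked else (a, b + 1)

a, b = 1.0, 1.0               # uniform prior over the click rate
for clicked in [1, 0, 0, 1, 0]:
    a, b = update(a, b, clicked)

posterior_mean = a / (a + b)  # Beta posterior mean of the click rate
```

After the five observations above the posterior is Beta(3, 4), so the estimated click rate is 3/7 without ever revisiting old data.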
“Dunno”, so just make a random dumfuhk uneducated guess?
Good grief...
Isn’t that the essence of Bayesian statistics?
lol, take that abf0
The Kalman filter is not Bayesian.
wtf is this. You don't even understand Bayesian statistics. Updating the prior??
For something to be Bayesian, you need to compute the posterior distribution of your parameters conditional on the data. The Kalman filter only gives you a point estimate of whatever parameter you are estimating with it.
ok, my bad
No, you're good: the Kalman filter also gives you the posterior variance (at least in Gaussian cases).
The Kalman filter is not Bayesian. If the parameters are fixed, then it gives a recursive formula for the state (at time t) given all observations (up to time t). Under state space assumptions (Gaussian additive errors) it is optimal and the filtering distribution is Gaussian. If the parameters are unknown, then you can stick the KF likelihood into an optimizer and so get the MLE and Hessian. Anyway, I thought Silicon Valley was obsessed with deep learning, which is not generally Bayesian.
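To make the Gaussian case being argued about concrete: a scalar Kalman filter update is exactly the conjugate Gaussian posterior update, and it carries both a mean and a variance through each step. A toy sketch with a random-walk state and made-up noise settings (nothing here comes from a real system):

```python
def kalman_step(m, P, y, q=0.1, r=1.0):
    """One predict/update cycle for a scalar random-walk state.
    m, P: prior mean and variance; y: new observation;
    q: process noise variance; r: observation noise variance.
    Returns the posterior (filtering) mean and variance."""
    # Predict: the state is a random walk, so its variance grows by q.
    P = P + q
    # Update: Kalman gain = prior variance / (prior + observation variance).
    K = P / (P + r)
    m = m + K * (y - m)
    P = (1.0 - K) * P
    return m, P

m, P = 0.0, 10.0                  # diffuse Gaussian prior on the state
for y in [1.2, 0.9, 1.1]:
    m, P = kalman_step(m, P, y)
# m and P are the filtering mean and variance at the final time step
```

Whether you call that "Bayesian" is exactly the terminological dispute in this subthread: it is a posterior mean and variance for the state, conditional on fixed, known parameters q and r.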
answer: the less the people who are investing in your product, the more likely you are to be able to take their money and run. Bayesian stats are less familiar to funding types.
Serious answer: Bayesian stats are somewhat more amenable to continuous addition of data, and I think still more memory efficient. So it's cheaper, quicker, and closer to what a computer is actually doing under the hood, even if the final output isn't always intended for Bayesian inference.
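The "continuous addition of data" point can be checked with any conjugate model: updating the posterior one observation at a time gives exactly the same answer as one batch fit, so a streaming system only has to store the current posterior, not the data. A sketch under a Gaussian known-noise-variance model (all numbers invented for illustration):

```python
# Gaussian mean with known noise variance: the posterior after seeing
# data one point at a time equals the posterior from one batch update,
# so only (mean, variance) needs to be kept, never the raw history.

def posterior(m0, v0, ys, noise_var=1.0):
    """Batch conjugate update of a Normal(m0, v0) prior on the mean."""
    n = len(ys)
    v = 1.0 / (1.0 / v0 + n / noise_var)       # precisions add
    m = v * (m0 / v0 + sum(ys) / noise_var)    # precision-weighted mean
    return m, v

ys = [2.1, 1.8, 2.4, 2.0]
m_batch, v_batch = posterior(0.0, 100.0, ys)   # all data at once

m, v = 0.0, 100.0
for y in ys:
    m, v = posterior(m, v, [y])                # one observation at a time

assert abs(m - m_batch) < 1e-9 and abs(v - v_batch) < 1e-9
```

The sequential and batch answers agree because yesterday's posterior is today's prior; that composability is what makes the incremental workflow natural.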
jeez, I meant "the less people who you are asking for money for know about your product, the more likely you can get away with not doing anything useful with their money"
Only true if the model is conjugate. Beyond that the Bayesian approach is pretty expensive (MCMC methods, etc.). Although I think machine learners use variational methods as an approximation. But real machine learners like deep learning, which is not Bayesian at all. I have no idea how their overparameterized models work so well, but apparently they do.
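To illustrate the expense being pointed at: once the model is non-conjugate there is no closed-form posterior, and you fall back on sampling. A bare-bones random-walk Metropolis sketch for a toy 1-D logistic-likelihood posterior (the data, proposal width, and chain length are all illustrative choices, not a recommendation):

```python
import math
import random

# Random-walk Metropolis for a 1-D non-conjugate posterior:
# Bernoulli likelihood with a logistic link and a N(0, 1) prior on the
# weight. No closed form exists, so we draw correlated samples instead.

data = [(0.5, 1), (1.0, 1), (-0.3, 0), (2.0, 1), (-1.0, 0)]

def log_post(w):
    lp = -0.5 * w * w                      # Gaussian prior term
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-w * x)) # logistic link
        lp += math.log(p if y else 1.0 - p)
    return lp

random.seed(0)
w, samples = 0.0, []
for _ in range(5000):
    prop = w + random.gauss(0.0, 0.5)      # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(w):
        w = prop                           # accept the move
    samples.append(w)

post_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
```

Even this toy needs thousands of likelihood evaluations for one scalar parameter, which is the cost gap versus the two-counter conjugate update discussed earlier in the thread.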