
For anyone interested in seeing this in the context of statistical decision theory, see the final page of these lecture notes (PDF):

http://www.stat.cmu.edu/~siva/705/lec16.pdf

In particular, in the parametric estimation setting, it can be shown that the Bayes estimator under L_0 (0-1) loss corresponds to finding the posterior distribution of the parameter given the data and taking its mode (the MAP estimate). Similarly, under L_1 (absolute error) loss, all we need to do is find the median of the posterior, and under L_2 (squared error) loss, it's just the posterior mean. CMU's 705 course is a great intro to statistical decision theory, and to stats more broadly, for anyone interested!
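You can check the L_1 and L_2 cases numerically: draw samples from a (hypothetical, skewed) posterior, then grid-search for the point estimate that minimizes the Monte Carlo estimate of the posterior expected loss. A minimal sketch, where the Gamma(2, 1) "posterior" and the grid bounds are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical skewed "posterior": 100k draws from a Gamma(2, 1) distribution,
# standing in for samples from a real posterior (e.g. from MCMC).
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=1.0, size=100_000)

# Candidate point estimates on a grid covering the bulk of the posterior mass.
grid = np.linspace(0.0, 8.0, 2001)

# Monte Carlo estimate of the posterior expected loss for each candidate.
l2_risk = [np.mean((samples - a) ** 2) for a in grid]   # squared-error loss
l1_risk = [np.mean(np.abs(samples - a)) for a in grid]  # absolute-error loss

best_l2 = grid[np.argmin(l2_risk)]
best_l1 = grid[np.argmin(l1_risk)]

# The L2 minimizer lands on the posterior mean, the L1 minimizer on the median.
print(f"L2 minimizer: {best_l2:.3f}  vs posterior mean:   {samples.mean():.3f}")
print(f"L1 minimizer: {best_l1:.3f}  vs posterior median: {np.median(samples):.3f}")
```

Because the posterior is skewed, the two minimizers visibly disagree (the mean of Gamma(2, 1) is 2, the median is about 1.68), which is exactly why the choice of loss matters for the point estimate you report.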

(Disclaimer: I am a CMU PhD student in the machine learning department, and I took this course myself, so I am somewhat biased toward thinking these notes are good.)



