In particular, for the parametric estimation setting, it can be shown that the Bayes estimator under L_0 loss is just the mode of the posterior distribution of the parameter given the data (i.e., the MAP estimate). Similarly, under L_1 loss it is the median of the posterior, and under L_2 loss it is the mean of the posterior. CMU’s 705 course is a great intro to statistical decision theory, and to stats more broadly, for anyone interested!
(Disclaimer: I am a CMU PhD student in the machine learning department, so I am somewhat biased toward thinking these notes are good, having taken this course myself.)
http://www.stat.cmu.edu/~siva/705/lec16.pdf
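
To make this concrete, here is a minimal numerical sketch (my own illustration, not taken from the linked notes): for a toy Beta-Bernoulli posterior, the estimate that minimizes posterior expected loss over a grid should land on the posterior mode, median, and mean under a (windowed) L_0 loss, L_1 loss, and L_2 loss respectively. The data, prior, window width, and grid-search setup are all assumptions made just for this example.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 7 successes in 10 Bernoulli trials, with a Beta(2, 2) prior.
# The posterior is then Beta(9, 5) by conjugacy.
a, b = 2 + 7, 2 + 3
posterior = stats.beta(a, b)

grid = np.linspace(0.001, 0.999, 2001)   # candidate estimates / parameter values
pdf = posterior.pdf(grid)
pdf /= pdf.sum()                          # discrete approximation of the posterior

def bayes_estimate(loss):
    """Return the grid point minimizing posterior expected loss."""
    risks = [np.sum(loss(est, grid) * pdf) for est in grid]
    return grid[np.argmin(risks)]

# L_0 is approximated by a 0-1 loss with a small tolerance window;
# its exact form is a limit, so this is only a numerical stand-in.
eps = 0.01
l0 = lambda est, theta: (np.abs(est - theta) > eps).astype(float)
l1 = lambda est, theta: np.abs(est - theta)
l2 = lambda est, theta: (est - theta) ** 2

print("L0 minimizer:", bayes_estimate(l0), "posterior mode:", (a - 1) / (a + b - 2))
print("L1 minimizer:", bayes_estimate(l1), "posterior median:", posterior.median())
print("L2 minimizer:", bayes_estimate(l2), "posterior mean:", posterior.mean())
```

Up to grid resolution, the three minimizers should agree with the closed-form mode (~0.667), median (~0.649), and mean (~0.643) of the Beta(9, 5) posterior.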