
Well, in reality tools like TensorFlow Probability can help you model both aleatoric and epistemic uncertainty with probabilistic layers that have learnable priors and posteriors. The issue there is that the average ML person might not have the required math skills to model the problem in these terms.

For instance, if you look at https://blog.tensorflow.org/2019/03/regression-with-probabil... , everything up to case 4 is easy to follow and digest, but once you reach the _Tabula rasa_ section I am pretty sure the content isn't understandable to many readers. Where you get stuck, because the ideas become too complex, depends on your math skills.
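Roughly, the combination described there looks something like the sketch below: a DenseVariational layer with a learnable prior and posterior for epistemic uncertainty, feeding a DistributionLambda output that learns the noise scale for aleatoric uncertainty. This is only a minimal sketch; layer names and signatures may differ across TFP versions, and the kl_weight assumes about 100 training points.

    import numpy as np
    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
        # Variational posterior over the weights: independent Normals
        # with learnable means and scales.
        n = kernel_size + bias_size
        c = np.log(np.expm1(1.0))
        return tf.keras.Sequential([
            tfp.layers.VariableLayer(2 * n, dtype=dtype),
            tfp.layers.DistributionLambda(lambda t: tfd.Independent(
                tfd.Normal(loc=t[..., :n],
                           scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
                reinterpreted_batch_ndims=1)),
        ])

    def prior_trainable(kernel_size, bias_size=0, dtype=None):
        # Prior over the weights with a learnable location.
        n = kernel_size + bias_size
        return tf.keras.Sequential([
            tfp.layers.VariableLayer(n, dtype=dtype),
            tfp.layers.DistributionLambda(lambda t: tfd.Independent(
                tfd.Normal(loc=t, scale=1.0),
                reinterpreted_batch_ndims=1)),
        ])

    # The likelihood you have to specify: training minimizes the
    # negative log-probability of the labels under the output distribution.
    negloglik = lambda y, rv_y: -rv_y.log_prob(y)

    model = tf.keras.Sequential([
        tfp.layers.DenseVariational(
            2, posterior_mean_field, prior_trainable,
            kl_weight=1 / 100),  # assumes ~100 training points
        tfp.layers.DistributionLambda(lambda t: tfd.Normal(
            loc=t[..., :1],
            scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:]))),
    ])
    model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01),
                  loss=negloglik)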




Yeah, I've used those methods and am a fan, though they are far from perfect. For one thing, they're somewhat invasive to implement, and they still require you to formulate a likelihood function to varying degrees, which isn't always possible up front. I've also had issues getting them to converge during training. And they sometimes don't estimate uncertainty correctly, particularly if you make a mistake modeling the likelihood.
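To make "formulate a likelihood" concrete: the choice of output distribution is itself a modeling decision, and if it's wrong the uncertainty estimates come out wrong too. A hypothetical sketch, swapping the Normal head above for a heavier-tailed Student's t when the noise has outliers (the df=3 value is just an illustrative assumption):

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # Heavier-tailed alternative output head: Student's t instead of Normal.
    student_t_head = tfp.layers.DistributionLambda(
        lambda t: tfd.StudentT(df=3.0,
                               loc=t[..., :1],
                               scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:])))

    # Same negative log-likelihood loss; only the assumed likelihood changed.
    negloglik = lambda y, rv_y: -rv_y.log_prob(y)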

I guess my point is, there is no silver bullet. Adding defensible uncertainty is complicated and problem-specific, and it comes with downsides (often steep ones).



