
Maybe you've already gotten it, but I don't know. Here's a possibly overtired coin-flipping example:

Say you have a coin that might be unfair, and you want to estimate its bias. You flip it a bunch of times, and it mostly lands heads.

An estimate of the coin's bias based on this observed data is usually going to say it's biased some % towards heads. (unless maybe you have a strong prior, but that's a different topic)

But there is also a chance that really the coin is fair, or even biased towards tails and you just got unlucky in your flips.

There's a mismatch between the true model (the coin's actual bias) and the observed data (the result of your flips) because of this chance of "unluckiness".
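To put a number on that "unluckiness": a minimal sketch (the function name and the 7-of-10 scenario are just illustrative) computing how often a genuinely fair coin produces a heads-heavy sample, in which case the naive estimate of its bias would be wrong:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of getting at least k heads in n flips of a coin with bias p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A fair coin still lands 7 or more heads out of 10 roughly 17% of the time.
# In that case the maximum-likelihood estimate of the bias (7/10 = 0.7)
# is well off the true value of 0.5 -- the "unlucky flips" case above.
p_unlucky = prob_at_least(7, 10)
print(f"P(>=7 heads in 10 fair flips) = {p_unlucky:.3f}")
```

So with only 10 flips, observing "mostly heads" is entirely consistent with a fair coin; more flips shrink this probability, which is why the estimate tightens with more data.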



