
We have enough events, because in this case it is appropriate to model the underlying random event as a 'million miles driven', with some chance of a fire happening or not in each one. Gasoline cars have a mean of 0.05 fires per million miles, and given the current Tesla data, the mean is 0.01 fires per million miles. I'm not taking out a calculator, but the probability that the 'true' Tesla fire rate is at the 0.05 gas-car rate or higher would come out extremely low (something like 0.0001%); the 95% confidence interval should be 0.01 +- 0.02 or tighter, so even the upper end of the interval is still well below the gas-car rate.
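
Here's a rough sketch of that calculation in Python, for anyone who wants to poke at it. The mileage and fire count below are placeholders I made up to match the 0.01-per-million point estimate, not actual Tesla figures; scipy's Poisson CDF and chi-square quantiles do the work.

    # Sketch of the Poisson calculation described above. The Tesla mileage
    # and fire count are placeholders chosen to give a 0.01 fires-per-
    # million-mile point estimate, not actual figures.
    from scipy import stats

    gas_rate    = 0.05   # gasoline-car fires per million miles
    tesla_miles = 200.0  # assumed exposure, in millions of miles
    tesla_fires = 2      # assumed observed fires (2 / 200 = 0.01 per million)

    # If the true Tesla rate equaled the gas-car rate, how likely is an
    # outcome this good or better?  Poisson P(X <= observed), mean = rate * miles
    p = stats.poisson.cdf(tesla_fires, gas_rate * tesla_miles)
    print("P(data this good | gas-car rate) =", p)

    # Exact (Garwood) 95% confidence interval for the Tesla rate,
    # built from chi-square quantiles scaled by the exposure.
    lo = stats.chi2.ppf(0.025, 2 * tesla_fires) / 2 / tesla_miles
    hi = stats.chi2.ppf(0.975, 2 * (tesla_fires + 1)) / 2 / tesla_miles
    print("95%% CI: %.4f to %.4f fires per million miles" % (lo, hi))

With those placeholder inputs the upper end of the interval still comes out under the 0.05 gas-car rate; the exact numbers obviously depend on the real fleet mileage and fire count.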

For an exaggerated example, if Teslas had driven a billion miles with zero fires, you shouldn't say there's not enough data - you would definitely have enough data to say that the chance of fire is below the gas-car rate of 5 fires per 100 million miles (0.05 per million).
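
To put a number on that: at the gas-car rate, a billion miles would average 50 fires, so seeing zero would be an event with probability around 2e-22. A quick back-of-the-envelope check, assuming a simple Poisson model of fires over miles driven:

    # Back-of-the-envelope check of the exaggerated example, assuming a
    # simple Poisson model of fires over miles driven.
    import math

    gas_rate = 0.05                 # fires per million miles
    miles    = 1000.0               # a billion miles = 1000 million miles
    expected = gas_rate * miles     # 50 expected fires at the gas-car rate
    p_zero   = math.exp(-expected)  # Poisson P(X = 0) = e^(-lambda)
    print(p_zero)                   # ~1.9e-22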




Why is million miles driven a better way to model it than billion miles driven, for example? If you happen to choose the latter, there clearly isn't anywhere near enough data.

I'm honestly curious how one models this type of thing statistically, and it isn't obvious enough to me that I'm willing to just accept numbers someone throws around.




