Heh. You're very right about the generation of 1. Even a probability of around 1 in 2^54 is not rare enough to ignore. Put it in GPU code and you could find it breaking in seconds.
But would you be comfortable ignoring a double-precision 0, with a proper algorithm that makes the probability somewhere around 2^-1070 or 2^-1020?
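For concreteness, here's a rough Python sketch of the kind of algorithm I mean (one possible construction, not the only one): pick the binade with a geometric draw, then fill the significand with uniform bits. It can never return 1, and it only returns exactly 0 after roughly a thousand consecutive zero bits:

    import random

    def uniform_open_01():
        # Sketch of a "full-range" uniform double in (0, 1): choose the binade
        # [2^e, 2^(e+1)) with a geometric draw (each binade is half as likely
        # as the one above it, matching its share of (0, 1)), then fill the
        # significand with 52 uniform bits.
        exponent = -1
        while random.getrandbits(1) == 0:
            exponent -= 1
            if exponent < -1074:
                # ~1074 tails in a row: we've fallen below the smallest
                # subnormal, probability on the order of 2^-1074.
                return 0.0
        significand = 1.0 + random.getrandbits(52) / 2**52
        return significand * 2.0**exponent

The significand is in [1, 2) and the exponent is at most -1, so the result is strictly below 1; and exact 0 needs the coin to come up tails about 1074 times in a row, which is where figures like 2^-1074 (or 2^-1022 if you stop at the normal range) come from.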
I'd always prefer an algorithm that's correct 100% of the time over one that's correct at any rate less than 100%. This is the type of thing for which there's no fundamental reason you can't simply do it right. The more things you do right, the less you have to worry about! It's much simpler.
That said, I'd certainly listen to the tradeoffs of being correct 100% of the time vs being wrong a mere 1 in 2^1020 times. I wonder if 2^-1020 is greater than or less than the chance of a cosmic ray flipping a bit...
Since I was curious, it appears that IBM did research in the 90s and came up with 1 bit flip per 256 MB of RAM per month. That's 3.7 × 10^-9 per byte per month, or 1.4 × 10^-15 per byte per second.
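Quick back-of-the-envelope in Python to sanity-check those figures and put 2^-1020 next to them (assuming a 30-day month and the 1-per-256 MB-per-month figure above):

    errors_per_month = 1.0
    bytes_of_ram = 256 * 2**20          # 256 MB
    seconds_per_month = 30 * 24 * 3600

    per_byte_per_month = errors_per_month / bytes_of_ram
    per_byte_per_second = per_byte_per_month / seconds_per_month

    print(per_byte_per_month)    # ~3.7e-09
    print(per_byte_per_second)   # ~1.4e-15
    print(2.0**-1020)            # ~8.9e-308

So the cosmic ray wins by roughly 290 orders of magnitude: a single byte sitting in RAM for one second is astronomically more likely to take a bit flip than a 2^-1020 event is to ever happen.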