
I did recognize it, ars. I think it's hilarious that if 100000000000000 (1e14) isn't enough reliability then 1000000000000000 (1e15) should do it.

As mentioned, if you look at, for example, the rate of bit flips (from cosmic rays) on normal RAM versus on ECC RAM, you see a difference of many orders of magnitude. Given that we're talking about errors (which have a huge standard deviation anyway), it is quite normal to see orders-of-magnitude differences. A factor of 10x is just not that big of an ask, if the error levels are causing you problems!

I also don't think that multiplying by the size of the drive is relevant to anything. It doesn't increase the chance you have an error in a specific application, etc.

What can you fill 4 TB with, that you wouldn't use software error correction on, but for which a single bit flip is unacceptable?

I can think of things like movies, raw sensor data, and other media. I don't see how multiplying by the size of the drive gives you an interesting result. You've just fallen for marketing :)




> should do it.

They actually sell 10^15 disks, so there is no excuse for 10^14 ones to exist anymore. They aren't even using more error correction to get it - they are binning drives and the better ones get sold for more.

This would be fine if the lower-spec'd ones were still reasonable, but for a 4TB drive 10^14 is NOT OK, and they should not be selling them anymore.
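
To put numbers on it (a rough back-of-the-envelope in Python; it assumes the spec'd rate applies uniformly per bit read, which is itself a simplification):

    # Expected unrecoverable bit errors from one full read of a 4 TB drive
    bits = 4e12 * 8                # 4 TB expressed in bits
    for rate in (1e-14, 1e-15):    # spec'd errors per bit read
        print(rate, bits * rate)   # -> 0.32 and 0.032 expected errors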


I just disagree with you and think that your perspective on the matter and use of the numbers is funny. Again you repeat "For a 4TB drive 10^14 is NOT OK"... but 10^15 is. Well, if you say so.

Note: you shouldn't downvote just because you disagree with someone's perspective. I'm not going to delete these comments because I stand by them. If one bit flip per ~12 TB read is a problem for you, then don't just move to one per ~120 TB - get some kind of ECC etc.
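
By "some kind of ECC" I mean even something as simple as checksumming your own data - a minimal sketch in Python (hashlib is in the standard library; the path argument is just a placeholder):

    import hashlib

    def sha256sum(path):
        # Stream in 1 MiB chunks so a multi-TB file needn't fit in RAM
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Record the digest at write time; a mismatch on re-read means a silent flip.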


He's not saying that 10^15 is good enough for all purposes. He's saying that it's closer to good enough without significantly increasing the cost. This should not be that hard for you to grasp.


Your tone with me is completely unwarranted, wtallis. This is what he wrote:

"

Note: This article is from 2007 and is quite prescient.

It's completely shameful how bad the specified read error rates are now.

It's to the point that if you read an entire 4TB disk you have a 32% chance of one bit being wrong!

That means hard disks can no longer be considered reliable devices that return the data written to them, you now need a second layer in software checking checksums.

http://www.wolframalpha.com/input/?i=4TB+%2F+10^14+bit++in+%...

For extra money they sell hard disks with 10^15 reliability instead of 10^14 - this should be standard!

"

Even after his (and your) clarification, it makes me chuckle to read that :) He started with "It's completely shameful how bad the specified read error rates are now" and then ended with "For extra money they sell hard disks with 10^15 reliability instead of 10^14 - this should be standard!"

This is hilarious. I guess some people here have no sense of humor though.

Error rates are not nearly as exact as that - more like marketing figures.
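
And even taking the spec at face value, the quoted 32% is really an expected-error count, not a probability. Modeling flips as independent, the chance of at least one bad bit in a full 4 TB read comes out a bit lower (quick sanity check in Python):

    import math

    p, n = 1e-14, int(4e12 * 8)    # per-bit error rate, bits in 4 TB
    expected = n * p               # 0.32 expected errors per full read
    at_least_one = -math.expm1(n * math.log1p(-p))   # = 1 - (1-p)**n
    print(expected, at_least_one)  # 0.32 vs ~0.27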



