At my previous employer we had a 24-disk ZFS storage server built using these disks (thanks to an external consultant who swapped out the enterprise-grade disks because "they're basically the same").

While they were still under warranty we replaced 50% of them, and after the warranty ran out we replaced all of them with another brand.


This model was really bad, but he wasn’t wrong in general: while consumer drives have a higher failure rate, you can’t pretend the enterprise versions never fail. And if you’ve built your system to withstand failure, whether 2 or 5 drives fail every year makes little difference if you have competent IT.

I do make an effort to source them from as many different batches as possible, though, based on a DeskStar (“DeathStar”) experience a couple of decades ago. There have been occasional bad batches for many models.

(The DeathStar (500MB, IIRC) and this 3TB Seagate are special in having mostly bad batches.)


> I do make an effort to source them from as many different batches as possible, though

If you can, also stagger the initial power-ons. There's a history of disk firmware bugs that are triggered by runtime; if it makes sense, you want enough of a difference in power-on time to do a lossless replacement of your first disk before your second disk hits the magic number of power-on hours.
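
A quick way to sanity-check this on an existing pool is to compare Power_On_Hours across the members. A minimal sketch in Python (assuming smartmontools is installed, ATA-style SMART output, and a hand-maintained device list; the paths here are placeholders, not anything from the parent's setup):

  # Report Power_On_Hours per drive so you can see how closely bunched
  # their runtimes are. Assumes smartctl is on PATH and that the listed
  # paths are the pool members (placeholders, adjust for your setup).
  import re
  import subprocess

  DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # placeholder device list

  def power_on_hours(dev):
      out = subprocess.run(["smartctl", "-A", dev],
                           capture_output=True, text=True, check=False).stdout
      for line in out.splitlines():
          if "Power_On_Hours" in line:
              # RAW_VALUE is the last field; some firmwares report "12345",
              # others "12345h+07m+30.000s", so take the leading digits.
              m = re.match(r"\d+", line.split()[-1])
              if m:
                  return int(m.group(0))
      return None

  hours = {dev: power_on_hours(dev) for dev in DEVICES}
  for dev, h in sorted(hours.items()):
      print(f"{dev}: {h} hours" if h is not None else f"{dev}: no data")

  known = [h for h in hours.values() if h is not None]
  if len(known) > 1:
      print(f"spread between oldest and newest: {max(known) - min(known)} hours")

If the spread comes out near zero, a runtime-triggered firmware bug would hit several disks in the same window, which is exactly the situation staggering is meant to avoid.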


Indeed.

Synology, which I’ve been using for the last 10 years or so when the project cannot afford NetApp/EMC-class storage, does this out of the box. (I’m sure NetApp and EMC do too, but it’s not my problem when the disks inside them fail.)
