
Love ZFS, hate having to use it in FreeBSD or Illumos, so I like seeing someone successfully using it in Linux even if licensing concerns keep me from doing it. When I saw that he was using "WD Caviar Green" hard drives, though, I cringed.

These "green" hard drives tend to be very aggressive about parking heads and spinning down the platters. While this is fine if a disk is going to sit idle for a long period of time, in cases like an OS partition and memory buffering, these drives start destroying themselves spinning down and cranking up several times a minute.

We had 4 out of 16 fail one week after burn-in in a raidz configuration. We had plenty of hot spares for various reasons of paranoia, and they didn't go all at once, so we recovered and replaced the entire batch with Constellations.

The only positive note is that we are now firmly in love with zpools and ZFS. It made egregious hardware failure a manageable problem.
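For anyone curious, the recovery itself was just the standard zpool dance. A rough sketch (pool and device names here are placeholders, not our actual config):

    # see which disk faulted
    zpool status tank
    # swap the dead drive for a spare
    zpool replace tank sdd sdq
    # watch the resilver progress
    zpool status -v tank

The array kept serving while the spares resilvered in, which is exactly what made the failures manageable.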




Interesting.

Can you please point me to a more authoritative source for this? I also had a "green" drive fail surprisingly soon. How did you confirm that they spin up and down "several times a minute" by themselves? Are you sure it wasn't your misconfigured OS telling them to do that?

Since green drives are also slower, I just assumed they traded off energy for data transfer speed.


You can look for yourself using smartctl -- they still respond after the mechanical failure. Aside from pasting internal emails (not happening :)), you'll have to google for yourself on this one. We didn't do anything novel with these drives at a hardware level to make them fail, and we've got dozens of Constellations in an identical configuration that have had no failures after months of load.
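Something like this is enough to see it happening (device path is a placeholder; attribute 193 is Load_Cycle_Count on these drives):

    # dump SMART attributes and check the head load cycle counter
    smartctl -A /dev/sda | grep -i load_cycle
    # run it again a minute later; if the count jumped by more
    # than a couple, the drive is parking heads on its own

If the counter climbs while the OS isn't issuing any power management commands, it's the drive's firmware doing it.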

It's not transfer speed that they seem to trade off for, it's access times. That may have been part of the problem -- our reads are extremely random over a very broad range of sectors. This isn't exactly a log or video server usage model. :)


http://www.xlr8yourmac.com/tips/Disable_WDGreen_HeadParking....

The default head parking timeout is 10s. In a ZFS configuration, the other drives time out while waiting for a drive to wake up from a parked state. This causes a cascade of IO waves back and forth as the drives in a large array park and then unpark to respond, causing IO to plummet.

Works fine with 1 drive, or even 2 drives. Past that, the probability of a collision and retry on head parking escalates exponentially.
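If you want to check or change the parking timer yourself, idle3-tools on Linux (or WD's DOS wdidle3 utility, per the link above) can do it. A rough sketch, with the caveats that the device path is a placeholder and the raw timer encoding is from memory (values above 128 count in 30-second units, so 138 is roughly 300s):

    # read the current idle3 (head parking) timer
    idle3ctl -g /dev/sda
    # raise it to roughly 300 seconds
    idle3ctl -s 138 /dev/sda
    # or turn head parking off entirely
    idle3ctl -d /dev/sda
    # power cycle the drive for the new value to stick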


> Love ZFS, hate having to use it in FreeBSD or Illumos

Why?


The most honest answer is that we've got a ton of Linux servers with well understood and managed configurations, and then these brahmin storage servers that fall outside of that discipline because they have very different update cycles and toolchains. It would be more convenient for us if we could extend our internal update cycle and maintenance to the storage system as well.

There are a number of the usual installation and adoption issues with FreeBSD and Illumos / OpenIndiana. I love the FreeBSD community, and using dtrace again is a joy. I just don't like having to use them for things I already have solved in Linux or can quickly find documented on either Ubuntu or Arch's wikis.

It's nice, after the OS wars, to be able to say "well, it's easier for us to use Linux because it has a broad user base." :)



