How I Store My 1's and 0's: ZFS + Bargain HP Microserver = Joy (mocko.org.uk)
274 points by mocko on June 17, 2012 | 113 comments



I have this same hardware. A few notes:

- For best performance with ZFS, you want a lot of RAM, and this unit will take 8GB of ECC RAM. You want ECC for data integrity in memory, as ZFS does nothing to prevent in-memory data corruption (there's an article on this here [pdf]: http://research.cs.wisc.edu/wind/Publications/zfs-corruption... )

- You probably want a few mirrors, not RAID-Z, if performance is an issue (see the sketch at the end of this list).

- You're better off with FreeBSD or Illumos-kerneled distros (which have run ZFS for years and have it in their mainline kernels) rather than Linux (which will never have ZFS in mainline for licensing reasons), for stability alone.

- You can get an IPMI card for this unit if you want remote manageability.

- There's an internal USB port if you want to boot off of that. It's kind of handy.
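
A rough sketch of the mirrors-vs-RAID-Z point above, assuming four hypothetical disks ada0-ada3:

    # striped mirrors (RAID10-style): better random IO, grows a pair at a time
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

    # single RAID-Z vdev: more usable space, but slower random IO and slower resilvers
    zpool create tank raidz ada0 ada1 ada2 ada3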


ZFS has an interesting caveat -- when you're adding disks to an existing pool, you can't simply add an arbitrary amount of new disks and rebalance the data across the existing disks and the new disk(s).
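
Roughly, with a hypothetical pool named "tank": you can add a whole new vdev, but existing data stays where it was written (only new writes spread across the addition), and there's no way to grow an existing raidz vdev by a single disk:

    # works, but nothing gets rebalanced - a second raidz vdev sits alongside the first
    zpool add tank raidz ada5 ada6 ada7

    # not possible: adding one disk to an existing raidz vdev
    # (zpool attach only works on mirrors and single-disk vdevs)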

I'm using FreeNAS and 5x2TB raidz1 in the N36L and if I need more space, it's going to require some very careful planning.

That said, the hardware is a STEAL for the price, and FreeNAS + transmissiond + remote-gui rock my socks off. I'm strongly considering just buying a brand new one when I'm out of space on the N36L.


Good points - people do need to remember that ZFS was designed for going big. It does nothing to prevent memory corruption by design... it was assumed (required?) that your server would use ECC RAM.

You also need to ensure you have adequate RAM for various caching, and CPU power as well for checksum calculation, among other things. It's not a lightweight FS. (In-memory corruption is an issue with all filesystems - just somewhat more so with ZFS, because it was designed specifically to assume you had reliable RAM. You also want RAM to store the dedup/hash tables and so on.)

One could chuck an SSD in there for cache; if memory is a limit, that should speed things up drastically.

And as with all raid-like systems - you want an appropriate number of hot-spares, cold spares, and a system that monitors it and acts appropriately, especially if you're going with huge drives on slow busses.

You want regularly scheduled scrubs, not too many snapshots, probably disable atime (noatime) to speed up those scrubs, and compression probably off....

Dedup I'm still on the fence about - I leave it off; I can only see specific situations where it would be truly useful (dedup+verify).
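
For what it's worth, all of those knobs are one-liners; a sketch assuming a pool named "tank":

    zfs set atime=off tank        # skip access-time updates
    zfs set compression=off tank  # or lzjb/gzip if you'd rather trade CPU for space
    zfs set dedup=verify tank     # dedup with byte-for-byte verification, if you go there

    # weekly scrub from a crontab entry, e.g. Sunday at 03:00
    0 3 * * 0  /sbin/zpool scrub tank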


Unless you're using enterprise-class SSDs with SLC flash and ultracaps to guarantee that writes make it to disk in case of power loss, you're risking your data by using this as a ZIL.

Cheap consumer-class SSDs are generally fine for L2ARC.
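
The distinction in zpool terms, with hypothetical device names:

    # ZIL/SLOG: synchronous write log - wants power-loss protection, ideally mirrored
    zpool add tank log mirror ada4 ada5

    # L2ARC: read cache only - losing it just costs you a warm cache, so cheap SSDs are fine
    zpool add tank cache ada6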


I have its predecessor (the N36L, if you want to google for prices). Yes, it's very nice, but it's not that quiet. I have 4 x 2TB in raid1, but might switch to btrfs sometime; I'm a little leery of both zfs and btrfs at the moment. I'm on Fedora because it's most similar to what I use at work (RedHat), not really for any other reason.


ZFS has been in production in Solaris 10 since June 2006 and is quite stable and production tested.

btrfs is still considered experimental, even though it's been in the mainline Linux kernel since 2009.

You might look into the KVM support in SmartOS if you want to run other OS's on top of Solaris/ZFS.


SmartOS doesn't support AMD processors (or at least it didn't when I got my Microserver).


This is something we've been working to fix in the community! The good news is that most of the development and testing so far has been on an N36L-based Microserver, where the experimental AMD KVM support seems to work pretty well. There's a github repo -- https://github.com/jclulow/illumos-kvm -- and hopefully I'll get time soon to respin a current SmartOS ISO with the AMD-enabled bits.


Please be sure to submit to HN when you have a new respin ready. Pretty sure there are a lot of potential new SmartOS users on here who'd like to use KVM with AMD. Got my eye on a N40L-based Microserver myself.


Great to see that SmartOS developers are active on HN.

I'm interested in SmartOS but I've heard some things about it not being production ready. What's your view?

Also, being able to boot from an internal volume (not USB) is crucial to me. The "best" case, using a USB drive, is not an option, as our server is in a shared rack and you run the risk of someone unplugging it.


Oh, that is fantastic news. I may have a server rebuild in my near future.


I've been hearing awesome things about ZFS for years now. Unfortunately, it can never be part of the Linux kernel due to licensing issues, so we're stuck with "your 1's and 0's are being held by a pre-1.0 version of a filesystem invented by a dead company". How far has btrfs come in its support of ZFS-like features? Still need a few more years? I'll be switching as soon as it's marked stable.

I wonder if the performance might be better if a good 16GB USB stick was used for the OS drive instead of an old laptop drive? The OS needs a lot of random access, but doesn't take up much space.

I also wonder why the author went with Ubuntu 10.04 LTS instead of 12.04 LTS, which would give him two more years of peace of mind. It's been a few weeks since 12.04 came out, so it's pretty stable. It does get kernel updates more often than I'd prefer, though, and GNOME 2 is gone.


FreeBSD ships with ZFS in their standard kernel, and has for years, if you're willing to consider an OS other than Linux.


I'm using FreeBSD and ZFS in production, for years now. Very happy with it and it's rock stable.


I find your assessment of the Linux filesystem situation to be inaccurate and surprisingly negative. "Stuck with" implies little potential for change, when there are several filesystems which are being developed at a swift pace, solving tough problems with vigor and ingenuity. btrfs is coming along just fine. I can't think of any place with more filesystem development going on than Linux.

I mean, let's just not leave it at idle claims:

btrfs changes in 3.4: http://kernelnewbies.org/LinuxChanges#head-556161b206bf626d6... 3.3: http://kernelnewbies.org/Linux_3.3#head-1f03b4babafb1049bea3... 3.2: http://kernelnewbies.org/Linux_3.2#head-f0a922e9c0ce6f48810d... 3.0: http://kernelnewbies.org/Linux_3.0#head-3e596e03408e1d32a7cc...

RAID5/6 barely missed 3.5, but should be in the pull request for 3.6 (other changes for 3.5: https://lkml.org/lkml/2012/6/1/160).

What's going on in XFS: http://lwn.net/Articles/476263/

Coverage from the 2012 filesystem/storage summit, day 1: http://lwn.net/Articles/490114/ Day 2: http://lwn.net/Articles/490501/

2011 summit, day 1: http://lwn.net/Articles/436871/ Day 2: http://lwn.net/Articles/437066/

Ext4 is still pretty active for a "done" filesystem, too: The last 12 months saw work on online resizing, support for bigger block sizes, a cleanup of mount options, ...


Sorry if I came across as negative. The "stuck with" was a reference to ZFS's status as a pre-1.0 PPA, not a reference to Linux filesystems as a whole.

On the other hand, the only in-kernel Linux filesystem that can match ZFS's feature set (for example, resizing a RAID array while the filesystem is online) seems to be btrfs, which probably won't be marked stable for at least another year or two. The latest developments to XFS and ext4, though interesting, aren't particularly relevant if you're looking to build a server like what the article describes.

Still, thanks for the many interesting links!


I'll just note that it is the implementation of ZFS that is pre-1.0, not the filesystem itself. The ZFS filesystem itself has been production-ready for several years now. That said, I might use the current Linux implementation for my non-critical data, but for anything important I'd stick to Solaris/OpenIndiana. There's also a decent implementation on FreeBSD, but I'm not a fan of that OS.


All the bugs happen in implementation.


Agreed. My point was simply that you shouldn't write off the filesystem entirely just because some of the implementations aren't mature.


Actually, 12.04.1 comes out on August 23rd, and LTS users are encouraged to upgrade to the first point release (which would presumably have some useful fixes). Indeed, update-manager tells you that there is no upgrade available if you tell it to look for LTS releases only.

My dev machines are running 12.04, though, and haven't run into any major issues yet (knock on wood); my daily driver netbook runs 12.10 because I like it like that.


"GNOME 2 is gone."

Who cares? It is a file server not a desktop machine.


I also wonder why he didn't go for CentOS, which would give him 10 years of peace of mind.


Probably because there isn't a handy YUM repository for installing ZFS support.


Since it is a server - not a desktop - there is less need for the odd Linux app. For that reason, I bought a used Ultra 40 for $200 and run Solaris 11. It's fast, rock solid, and free. My current uptime is over 9 months. The only negative with Solaris is the lack of every last odd Linux app. The only negative with ZFS in a home environment is the inability to grow the number of devices in a RAID-Z.


Fedora is the first distro trying to include Btrfs by default. We'll see if they make that happen in their December release of Fedora 18.


Agreed, ZFS rocks.

I started using OpenSolaris fileservers at home in 2007. I went through multiple upgrades since then: 5x500GB, 7x750GB, and currently 7x1.5TB (all raidz).

I am just about to upgrade to my 4th server with 6x3TB in raidz2, which will be running FreeBSD this time.

Due to the sheer number of drives and continuous 24/7 operation, I experienced 4 drive failures over the years. Every time, ZFS has handled the failure gracefully, and I was able to swap the drive and rebuild the array without a hitch.

I take daily rotating snapshots - incredibly useful when you accidentally delete stuff.

I also run weekly scrubs, which have allowed me to witness 2 or 3 silent data corruptions which were automatically self-healed by ZFS (CKSUM column in zpool status).
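
In case anyone wants to copy the setup, both jobs are trivial to cron; a rough sketch, assuming a pool named "tank" (pruning old snapshots still needs a separate zfs destroy):

    # daily recursive snapshot, named by date
    0 1 * * *  /sbin/zfs snapshot -r tank@$(date +\%Y-\%m-\%d)
    # weekly scrub; check the CKSUM column afterwards with zpool status -v
    0 2 * * 0  /sbin/zpool scrub tank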

I mostly use the file server to share videos & photos via NFS, and to store encrypted backups of my laptop. It has become so useful and practical that I started to use it as an iSCSI server as well to boot some diskless Windows test machines for GPU projects.
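
The sharing side is mostly declarative too; a sketch with made-up dataset names (the shareiscsi property is the older OpenSolaris iscsitgt route, not COMSTAR):

    # export a dataset over NFS straight from ZFS (Samba is configured separately)
    zfs set sharenfs=on tank/media

    # carve out a zvol and expose it as an iSCSI target for the diskless machines
    zfs create -V 20G tank/winboot
    zfs set shareiscsi=on tank/winboot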

All in all, ZFS deserves all the praise you hear.


I used to build my own file servers but 2-3 years ago I bought a ReadyNAS ProBusiness at home and I love it. Granted it cost me about $1500 at the time, but it has made my life so much easier. I'm at the stage of my life where I'd rather pay extra and save time.

It supports almost everything out of the box, and there's very little configuration. It has 6 hot-swappable bays, and it allows for automatic expansion using their proprietary system, X-RAID 2. I currently have four 500 GB drives and two 1 TB drives, and if I want to expand it, I just buy another 1 TB drive and swap out a 500 GB drive.

It also supports streaming protocols, including ReadyDLNA, so I can play movies directly off my PS3. It also seamlessly supports Time Machine for my Mac laptops. I really do love this thing.


I've also looked at the ReadyNAS products in the past, but couldn't justify the cost for home usage. X-RAID is pretty neat!


The ReadyNAS boxes are pretty nice.

I got a Pro recently to share/back up a Dropbox share over a network. The sharing, RAID and backup stuff is all standard with the Linux-based OS, which makes it nice and easy. The Pro ReadyNAS boxes also use Intel Atom CPUs, so it's simple to stick the Linux Dropbox client on them.

This HP box looks much better value than the ReadyNAS though, as long as you want to install the OS/RAID etc. yourself.


I love my ReadyNAS. I got an NV (not NV+) back in 2005 that is still going strong. And they have issued regular updates (the last release was earlier this year). So the price premium that you pay over a QNAP or Synology is worth it IMO.

That said, there are some cons too. In my limited testing, ReadyNAS is not the fastest device out there, so if you are looking for the absolute fastest, they are probably not it.

The older Sparc based NAS devices do not get as much firmware love as the newer ones. So probably no IPv6 and GPT. Also, the older devices support only ext3 not ext4.

I found out last year that my particular NAS was due for a recall on the power supply. Unfortunately, this recall had been issued several years earlier and I just had not noticed it. TL;DR: ReadyNAS replaced it gratis even though the recall notice had expired several years prior.

Other than what I listed above, I can wholeheartedly recommend the ReadyNAS. Remember, this is your data we are talking about and it is sufficiently important to me that I'm okay spending a bit more money to make sure it is well protected.


How do you back it up?


Not sure about the ReadyNAS ProBusiness, but my NV+ has a USB port that I can plug an external drive in to, push the backup button, and in just a little bit I'll have a drive suitable for taking off-site. The FrontView web interface has a simple way to configure what the backup button does. Very simple.

It also runs Linux under the hood, and I've been able to configure CrashPlan for off-site backups of some key files too.


I use the USB port as well and connect a USB drive to it. But I am a bit more paranoid, and I do a bit-by-bit comparison of every file that I copy, after I do the backup.


You can also get a second one and have it rsync itself offsite.


I have about the same setup, but I built my server from scratch, have 62TB, and run FreeBSD 9 (plus an SSD for the OS). OS-reliability-wise and in ZFS maturity, FreeBSD > Ubuntu. You'll have virtualisation through VirtualBox if you'd like it, too. But I prefer to separate the storage and virtualisation platforms (do one thing etc). In some ways BSD is easier and more logical in its setup and administration, plus you'll have better documentation on the site and a higher signal-to-noise ratio in the forums if you need help.


Despite being a heavy computer user I'm positively struggling to fill 1TB even after 2 years of hoarding HD rips, so I'd love to know what you use 62TB for. Also wondering about idle power consumption.


Bah, formatting error: 6 times 2TB, not 62TB. Everything runs off an Atom board, with modded cooling for the bridge and CPU (fanless, really awesome copper heat sinks). The disks are WD Caviar Greens, low power, and they are mounted on rubber to silence them. There are two slow-spinning 90 mm fans in front of the disks, one in the quiet Zalman PSU (120 mm) and one 90 mm at the back. I built it in an HTPC chassis.


Serious question.. does no one use SmartOS[1] for this? I wouldn't feel entirely secure running ZFS on Linux when I could just as easily run SmartOS and get the "real" ZFS.

[1] http://smartos.org/


I run OpenIndiana (and OpenSolaris before that) on mine, and I tend to agree - Linux ZFS is unlikely to ever be as mature as either the Illumos kerneled distros (OpenIndiana, Nexenta, SmartOS), or FreeBSD (and derived systems like FreeNAS), both of which ship ZFS in the kernel as a primary filesystem.


How many of the Illumos or FreeBSD developers are working on ZFS? I'm curious if those implementations actually have any more manpower than the ZFS on Linux project.


The companies that are betting on ZFS -- Delphix, Nexenta and Joyent (and a bunch more that are less public about their work) -- are overwhelmingly (indeed, exclusively, to the best of my knowledge) on illumos and FreeBSD. Of these, Delphix in particular is of note because of the original ZFS core team members working there: Matt Ahrens (the co-inventor of ZFS), Eric Schrock and George Wilson -- not to mention important ZFS contributors like Adam Leventhal and Chris Siden.[1]

So illumos remains the repository of record for ZFS -- with a close relationship with those working on ZFS on FreeBSD. While the Linux port is certainly a Good Thing, it does not reflect a shift in the epicenter of ZFS development...

[1] http://www.youtube.com/watch?v=-zRN7XLCRhc#t=43m54s


It seems more valuable to judge them by the fruits of their labor. This applies to more than just file systems.


You don't want manpower for file system stuff.

All my set-up-and-forget servers run OpenBSD just because they have fewer people adding features (and bugs), and the few people there have awesome attention to quality (not just security), to the point of treating documentation bugs as bugs as well.

Granted it will move slower. And you may not have zfs...


SmartOS won't run on AMD chips[1], which the HP MicroServers have.

It's too bad, because I'd planned a SmartOS+Ubuntu build for mine.

[1] http://wiki.smartos.org/display/DOC/SmartOS+Technical+FAQs#S...


It should be noted that this isn't quite true. The default (production-grade) KVM module shipped in SmartOS presently only supports Intel CPUs with both VT-x and EPT. SmartOS itself runs fine on AMD CPUs, and you can use everything except KVM for hardware-accelerated virtual machines.


Had it been available when I built mine, I'd certainly have checked it out. I'm running a Microserver with OpenIndiana (4x2TB + 60GB SSD for booting and read cache), and it's been absolutely fine, except that the video is slow and hot-plugging a USB keyboard or mouse is sometimes a little hit and miss. I'm happy to live with those issues - in the year since I built it I've never needed to touch it.


This is almost identical to my backup server except I have 3x2TB drives in a RAID-Z pool. I agree with all the author's "reasons this is awesome" except my #1 reason is data integrity.

ZFS with RAID-Z does block-level checksumming and automatic healing as you access your data. Combine that with a weekly scrub (touches every block so any silent bit flips are healed) and I can do away with my fears that my precious bits are rotting away.


ZFS uses block-level checksumming for every block. The auto-healing kicks in when there is a second copy or another way to rebuild that block - e.g. RAID-Z parity. So you already get auto-healing when using a mirror.


As long as you have ECC RAM. If not, you're at greater risk than with most other filesystems (which are also at risk) - ZFS was designed with the requirement that RAM is reliable.


I have been using FreeNAS, which is a slimmed down version of FreeBSD meant to run off of a USB stick. It has a web interface to set up and manage ZFS, and can take regular ZFS snapshots, among other things. It also includes Netatalk and other software to share the disks over the local network.

I use it as a Time Capsule for my MacBook. Hooked up to gigabit ethernet when docked, backups are a breeze.


Ditto - FreeNAS FTW. A new RC of 8.2 was just released in the past week and it's awesome. The web interface moved over to Django with the 8.x line (from 7.x). It's minimal, but feature-rich and "just works".

For my home setup I actually run FreeNAS in an ESXi environment. I have 4 different physical disks that I carve up space on and allocate to the FreeNAS VM. This allows me to snapshot upgrades on the base OS and when I'm testing an upgrade I can disconnect the ZFS pool - validate the upgrade went fine, and then reconnect the pool for a ZFS upgrade (if needed). The nice thing about this approach is you can physically move around system very easily if all you need to do is ship out the disk store - and since I have one beefy box for virtualization at home I have my storage system contained within which makes power and space a bit more efficient.

My suggestion to those who are considering this is if you stick about $500-$1000 into a BYOD system you can generally get a high end quad core system with 32GB of RAM and 3-5TB of disk space (with an SSD boot). At that rate I would carve up a few TB for backup and SAN (FreeNAS) and the rest would be for on-box VMs. The FreeNAS VM doesn't need more than 1 proc and about 2-4GB of RAM if you're dealing with a lot of file transfer. You can easily get away with 2GB.

Long story short: ESXi + FreeNAS = 1-box solution for most at home geeks. My motivation was that I was starting to have "box sprawl" and power consumption was getting a bit out of hand. I also run PFSense on my box as well - but in that regard I also have a low power physical system that acts as the primary gateway device in my network. But there's a PFSense running as a VM as well for failover when I do upgrades. Far better than any SOHO gear you can buy for far too much $$$.


I have a couple of the HP ProLiant MicroServers, one of which is set up as a NAS server with a RAID-Z array on 5x 2TB drives (running Oracle Solaris 11 Express in order to remain at the cutting edge of ZFS development).

At ~£150 after rebate (was around £120 when I bought mine) the MicroServer is an absolute steal, and ZFS is a dream to administer. Truly a match made in heaven.


The ability to add new, differently sized disks with RAID-Z is the killer feature of ZFS. I wonder how Linux ZFS performance & stability compare with the FreeBSD ports? In the past, I've considered using something like FreeNAS for my home storage needs but the ZFS support wasn't ready last time I looked (1+ years ago?).


I also wonder the same. I was considering using BTRFS because it's better integrated, but if ZFS is more stable/mature, I'd go with that without even thinking. Has anyone used it for some amount of time?


I have used zfs-native on Ubuntu and Fedora a year or so ago for less than a month and found it to be unusable at best - I couldn't even copy my data from source to the backup ZFS disks - just went into a loop with high CPU. That may have changed a bit with later releases but I just don't think getting ZFS to scale and run reliably on different OS is going to happen anytime soon given how much effort and skills it would take.

What I am looking at doing is getting/building Solaris compatible box for my backup needs - that is a daunting task. But if I could do that I can run one of the OSS variants of Solaris - Joyent SmartOS, Nexenta etc..


Ah, there go my hopes, dashed. Thanks for the input, I'll try it on a disk I don't need and see if it's unusable, thanks again.


I'm currently running a backup box nearly identical to the OP, and haven't had a problem yet (built it over a year ago). ZFS runs like a dream on Ubuntu, even with a slew of oddly sized disks (1TB + 2x2TB + 3TB) at 90% capacity. I've had my SATA card come loose, and ZFS just locks down the FS to r/o so no damage is done. And unlike many other file systems, ZFS's checking utility actually gives you human readable results if there is an error (eg. "/foo/bar is corrupt", not just cryptic messages), and outputs exactly what you should do to repair data.

I recommend at least trying ZFS out in a VM, I guarantee you'll be impressed by the versatility.


Definitely worth checking out on some VMs.....

But know that to really use it, you want decent CPU speed, as much ECC RAM as you can put in there, an SSD for cache (just one - it doesn't have to match the RAID size, I don't think - and it'll help a lot), and you still need to take into account all the normal RAID cautions everyone ignores, like rebuild times for 2TB SATA2 drives vs. failure rates, etc.... And if you want to use dedup without verify, that's your gamble (smarter people than me say it's safe, but it just smells wrong).


I'll definitely give it a shot, thanks. Does anyone know what btrfs lacks compared to zfs, other than stability?


For a file system that is a big 'other than'.


Sure, but I already know ZFS is stable and btrfs is less so.


From the article: "Why not Debian or CentOS? Cool, go that way if you prefer them. But personally I am in luuuuuurve with the Ubuntu ZFS PPA."

With Debian, you can use the PPA as-is. This requires adding that to your /etc/apt/sources.list and manually adding the signing key with apt-key.
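
Roughly what that looks like, assuming the zfs-native PPA the article uses (URL, release pocket and package name are taken from that PPA, so double-check them against the Launchpad page, which also shows the signing key ID):

    # /etc/apt/sources.list.d/zfs.list
    deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main

    # import the PPA's signing key, then install
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <key id from the PPA page>
    apt-get update && apt-get install ubuntu-zfs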

Something else the author doesn't directly address is that ZFS on Linux is really only usable on 64-bit systems. Funny things may happen if you use the 32-bit version, such as an OOPS when doing something as simple as ls -a.

I've had nothing but great experiences with running this on my home NAS.


Note that the 32-bit thing isn't completely a Linux issue. ZFS seems pretty much designed with 64-bit in mind.

On the FreeBSD side, there is a whole section on tuning [0] for i386 users. Some of it might transpose to Linux (at least the concepts and things to watch).

[0] http://wiki.freebsd.org/ZFSTuningGuide


The last few non-server HP products I have purchased have all died: a desktop for my mother, a laptop for the in-laws, and a small PC for myself.

I am terrified by the quality of anything they make outside their high-end server gear.


Great to know that someone else shares this apprehension. I cannot believe how callous HP is in developing their consumer products.

It first started with our HP laptop, which died multiple times over the warranty period. And then some friends' machines as well... the central issue in almost all cases: bad thermal management.


Mine runs hot, but it's going strong after almost three years, even though it has an AMD CPU with a particularly high TDP compared to others in the same segment (Atoms).


Just a quick note on those HP microservers - I have one, and I'm fond of it, but the e-SATA port doesn't support hotplug and the inbuilt ethernet doesn't support jumbo frames (at least, under versions of Linux that I've tried).

The e-SATA thing probably doesn't matter too much, but for anyone looking to run iSCSI or even higher-throughput NFS, the absence of jumbo frames may be a more important consideration.


As you scale up your storage, I'd recommend you switch away from RAID-Z to a pool of mirrors (essentially RAID10). It becomes easier to add or upgrade pairs of disks with differing capacities (eg, a pair of 1TB, a pair of 2TB...), and in the event of failure you have more than a snowball's chance of being able to rebuild the array before you have another disk go.
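
A sketch of why mirror pools grow so painlessly (device names are made up):

    # add another mirrored pair - it can be a different size from the existing pairs
    zpool add tank mirror ada4 ada5

    # or upgrade a pair in place: replace each half with a bigger disk, resilver, repeat
    zpool set autoexpand=on tank
    zpool replace tank ada0 ada6   # then ada1 -> ada7; capacity grows once both are done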


I bet a snowball would have difficulty rebuilding the array before another disk went. I like your version better.


Love ZFS, hate having to use it in FreeBSD or Illumos, so I like seeing someone successfully using it in Linux even if licensing concerns keep me from doing it. When I saw that he was using "WD Caviar Green" hard drives, though, I cringed.

These "green" hard drives tend to be very aggressive about parking heads and spinning down the platters. While this is fine if a disk is going to sit idle for a long period of time, in cases like an OS partition and memory buffering, these drives start destroying themselves spinning down and cranking up several times a minute.

We had 4 out of 16 fail one week after burn-in in a raidz configuration. We had plenty of hot spares for various reasons of paranoia, and they didn't go all at once, so we recovered and replaced the entire batch with Constellations.

The only positive note was that we are now firmly in love with zpools and zfs.. It made egregious hardware failure a manageable problem.


Interesting.

Can you please point me to a more authoritative source for this? I also had a "green" drive fail surprisingly soon. How did you confirm that they spin up and down "several times a minute" by themselves? Are you sure it wasn't your misconfigured OS telling them to do that?

Since green drives are also slower, I just assumed they traded off energy for data transfer speed.


You can look for yourself using smartctl -- they still respond after the mechanical failure. Aside from pasting internal emails (not happening :)), you'll have to google for yourself on this one. We didn't do anything novel with these drives at a hardware level to make them fail, and we've got dozens of Constellations in an identical configuration that have had no failures after months of load.

It's not transfer speed that they seem to trade off for, it's access times. That may have been part of the problem -- our reads are extremely random over a very broad range of sectors. This isn't exactly a log or video server usage model. :)


http://www.xlr8yourmac.com/tips/Disable_WDGreen_HeadParking....

The default head parking is 10s. In a ZFS configuration, the other drives time out while waiting for the drive to wake up from a parked state. This causes a cascade of IO waves back and forth as the drives in a large array park and then unpark to respond, causing IO to plummet.

Works fine with 1 drive, or even 2 drives. Past that, the probability of a collision and retry on head parking escalates exponentially.
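
If you want to check whether your own Greens are doing this, the park count shows up in SMART; a sketch (idle3ctl comes from the third-party idle3-tools package, and /dev/sdb is just a placeholder):

    # a rapidly climbing Load_Cycle_Count means the heads are parking constantly
    smartctl -A /dev/sdb | grep Load_Cycle_Count

    # idle3-tools can read or disable the idle timer on many Green models
    idle3ctl -g /dev/sdb   # show the current timer
    idle3ctl -d /dev/sdb   # disable it (takes effect after a power cycle)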


> Love ZFS, hate having to use it in FreeBSD or Illumos

Why?


The most honest answer is that we've got a ton of Linux servers with well understood and managed configurations, and then these brahmin storage servers that fall outside of that discipline because they have very different update cycles and toolchains. It would be more convenient for us if we could extend our internal update cycle and maintenance to the storage system as well.

There are a number of the usual installation and adoption issues with FreeBSD and Illumos / OpenIndiana. I love the FreeBSD community, and using dtrace again is a joy. I just don't like having to use them for things I already have solved in Linux or can quickly find documented on either Ubuntu or Arch's wikis.

It's nice after the OS wars to be able to say "well, it's easier for us to use linux because it has a broad user base." :)


We have almost the exact same setup except we use FreeBSD instead of Ubuntu -- mostly because when I setup this server, it was before ZFS support was reliable in Linux.

I can't remember the exact specs on our box, but it was a similar microserver that we've got 4 drives RAIDed in. It started at 1TB per drive but we've since upgraded to 2TB drives.

This replaced a FreeNAS setup that I ran out of the closet in my home office for years when I lived in Atlanta. That server was great but was loud, ran so hot the closet was seriously 20 degrees hotter than the office, and an electricity glutton. When we moved to New York City last year, we decided to consolidate to a small unit for size/heat/power.

Highly recommended to anyone who needs a media server, general file server and fast-access VM/local environment.

ZFS is absolutely the way to go.


I've been using an unRAID setup (on an old workstation with plenty of bays) for a couple years now and I've been happy so far. No failures yet though so I haven't really put it to the test. The HP microserver definitely looks nice.

I just use it for storage though. No VMs etc. And I'm not overly concerned about access speeds so long as I can play HD video off it (which I can).

http://lime-technology.com/


Be extremely cautious when using WD Green drives in a RAID configuration.

They may be quiet/inexpensive, but they also don't ship with Time-Limited Error Recovery enabled (http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery). This makes it much more likely they'll drop out of an array. Some models can have this option flipped on; some cannot.
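
On models that still accept it, the timeout can be inspected and set (non-persistently) through SCT Error Recovery Control; a sketch, since many later Greens simply reject the command:

    # read the current error-recovery timeouts
    smartctl -l scterc /dev/sdb

    # set read/write recovery to 7.0 seconds (values are in tenths of a second)
    smartctl -l scterc,70,70 /dev/sdb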


Thanks a lot! This is the most useful thing I've read in this thread. I own two green drives.

I installed two Caviar Black (b/c of the 5-yr warranty) drives in a RAID array a few months ago. The RAID edition drives aren't even significantly more expensive. I wish I'd known this earlier.

This is a bummer. http://wdc.custhelp.com/app/answers/detail/a_id/1397/p/227,2...

Edit: But on another page, WD says that Caviar drives can be used in consumer RAID configs. Hmm. http://wdc.custhelp.com/app/answers/detail/a_id/996/related/...


I've got a similar setup, but a custom build: 2 IDE hard drives in a ZFS mirror for the OS install - I am running OpenIndiana (I absolutely love the stability of the OS) - and then 5 hard drives in a raidz, with a 6th drive sitting on standby (when I set this up raidz2 wasn't available yet) to automatically take over in case of a failure.

This machine has now been chugging along for a long time. It stores about 4 TB of personal backups (all my machines back up to it over the network), and various other things such as projects, media files and photos. ZFS is rock solid. I've had drives fail and the standby drive take over without my noticing a thing.

I've got 4 GB of memory in this machine and I can get write speeds over the network of 80 MB/sec using consumer grade drives, and read speeds over the network of around 120 MB/sec (I easily saturate my Gbit network).

I wouldn't store my backup bits on any other file system. I've had failures with various Linux-based RAIDs/file systems that were unrecoverable, and I've used UFS on FreeBSD in the past and had data silently corrupted. End-to-end checksumming is absolutely fantastic!


What is the network file system to make ZFS available to your other computers? NFS and Samba?


I grabbed a HP Microserver a few months ago after my old slow NAS died. One of the best purchases I've made this year.

Initially I tried using Ubuntu Server on it, but there were a few problems with it. I also tried FreeNAS, but beyond the basic "share files and use ZFS" it didn't really offer much in the way of customisation.

So instead I decided to just put Windows Home Server on it, since all I wanted to do was share files, use basic RAID, and run virtual machines for testing with the minimum of fuss. Windows RDC works fantastically out of the box. For VMs I just used VirtualBox. I stuck FreeNAS in a VM and left it running for Time Machine - perfect.

I also stuck an SSD I had lying around in the optical bay, so I have 4 drives available in the bays dedicated to storing data.

Despite not using Linux or ZFS, I'm quite happy with this current setup.


> Normally we fear expanding a home server because resizing a RAID always means copying terabytes of data off somewhere else, wiping the lot and constructing a new one including the original disks.

This is just wrong. Only last week I resized my RAID5 array from 2 drives (basically RAID1) to 4 drives. Reshaping and resizing the ext4 partition was all done online with minimal performance impact. The only bit that took a long time was the reshaping, and even then it managed to add a new 2TB drive in only a few hours.

Even upgrading the entire array to use a larger size of disk is pretty easy. You do need all the disks to be the same size though :(
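
For reference, the whole online reshape is only a handful of mdadm commands; a sketch with made-up device names:

    mdadm --add /dev/md0 /dev/sdd1 /dev/sde1    # add the new disks as spares
    mdadm --grow /dev/md0 --raid-devices=4      # reshape the RAID5 from 2 to 4 members
    # once the reshape finishes, grow the filesystem to fill the array, still online
    resize2fs /dev/md0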


It's also wrong on the opposite front: ZFS cannot change geometries on the fly. I am a fairly big fan of ZFS, but the ability to mix disk sizes and change numbers of disks in a RAIDZ set is not why--it effectively cannot do either.


Best takeaway for me from this: RAID-Z on Linux is now ready. Good to know!


Cool. I plan on doing something similar in future. One thing though:

Running the FS off a USB stick is a bad idea. No wear leveling, so it's only a question of time before it bombs out.


Mount it read-only, put temporary files into ramdisks.
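
Roughly what that looks like in /etc/fstab (a sketch - device names, sizes and mount points are illustrative):

    # root on the USB stick, mounted read-only
    /dev/sdg1   /          ext4    ro,noatime          0 1
    # writable scratch areas in RAM so nothing hammers the flash
    tmpfs       /tmp       tmpfs   defaults,size=256m  0 0
    tmpfs       /var/log   tmpfs   defaults,size=64m   0 0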


Sadly, flash also suffers from read-disturbs. USB sticks can die just from being read a lot.

Plus, sometimes you want logs for troubleshooting... small SSDs are cheap.


Well, being able to send incremental backups of ZFS tanks between systems is IMHO one of the best features of the system. Also, snapshots are very handy.


Does anyone have recommendations for similar hardware with 5 or 6 drives?

On a different note, the number of possible distributions is a tad confusing. Can someone recommend the distribution (say, FreeNAS or Illumos) with the most up-to-date ZFS support and ongoing updates? I'm primarily looking for something that is hassle-free to maintain and update. FreeBSD- or Solaris-based is okay.


Not sure if money's a concern for you, but I don't think you'll find anything with 5-6 drives at this price. At AUD $280, it's cheaper than a lot of 2-drive NASes out there.


The N40L does have 5 internal SATA ports. However, the optical disk bay port doesn't support AHCI. There's a hacked BIOS around online to enable it.


FreeBSD it is.


I'm biased (I started Amahi), but try the Amahi server http://www.amahi.org

We looked into using ZFS, as there has been some demand for it, but all the licensing and the people around it were hard to deal with (non-responsive, to be precise). It would be cool, though.


Expand the first mention of the HDA acronym on the homepage.

You could have based your product on FreeBSD to use ZFS. There would have been no licensing issue or people to deal with since the BSD license allows modification and redistribution.


Oh, and BTW, there is also Greyhole, which we integrated in Amahi. It makes a large redundant store out of a JBOD, with replication on multiple selectable spindles, etc. etc. http://greyhole.net


I am looking to replace my Acer Windows Home Server box with something else, so this article is very timely. Thanks for posting it.

Is the HP Microserver the best computer in its class, or are there other good competitors?


I've got 12TB in my unRAID box, which has been working perfectly for me for a couple of years now.

http://lime-technology.com/


That's interesting - I've got two QNAP TS-419P boxes running Debian on armel. I had no idea that similar, AND x86-64-based, hardware was available for much cheaper!


nas4free, a FreeBSD 9-based fork.


I shall stick to XFS until BTRFS has made enough improvements. That's right, everyone: XFS has been here for a long time and is doing great in production.


Am I the only one having trouble reading the font of the post? On a 24 inch monitor running at 1920x1080 in W7 Chrome, it's not exactly easy to read.


What surprised me when I first started messing with home file servers was that even with the server plugged into the WiFi router's switch I still couldn't directly play high-resolution video on my laptops. You end up having to try to get some clunky streaming transcoding solution which only works for some file formats and requires a heavy duty CPU.


That's because your laptop's wireless connection quality is sub-par. The fileserver might be on the LAN, but the laptop's wireless is going to be your bottleneck.


Sounds like a problem with your wifi being slow -- Mine will do >20MB/sec over 802.11n.

GigE can easily be saturated with a server like this.


So, just how stable is ZFS on Linux these days? Anyone else with experience care to share?


I've heard that the performance of Linux ZFS is terrible, but I think it was a phoronix report...

Also, does anyone know how well dtrace was ported to linux? Last thing I remember reading said it was half-assed


The old FUSE driver used to perform pretty poorly, but I haven't used it in a while.

I've been using the native Linux kernel driver (obv not in-tree, but easy to install) and it's fantastic; I see no slowdowns at all. Whatever performance hit there is, if any, is worth the benefits :)


Wow. This looks like an incredible deal. After planning to buy a new ultrabook soon, I'm thinking of ditching my monstrous desktop and going back to a laptop + server setup. This looks perfect!

... No international shipping? I'm gonna cry.


No redundant power supply?



Incredibly annoying blue lines...


Please stop using fonts with serifs for the web. It's a strain on the eye. Great article, though.



