Revisiting Solid State Hard Drives (codinghorror.com)
93 points by alexandros on Sept 15, 2010 | 59 comments



There's not much actual revisiting here, mostly repetition. What's new: (1) Atwood recommends the Crucial RealSSD C300 (citing a couple of reviews). (2) Seagate have a reasonably priced hybrid SSD/spinning-rust drive, the Momentus XT, which according to AnandTech has good performance. (It uses 4GB of flash as a read cache; performance is intermediate between a typical HD and a typical SSD.)


It's more of a revisited tax on a gullible readership; there are affiliate links covering every budget point.

My PC has both a standard, non-hybrid magnetic disk for volume, and a very, very fast SSD for speed. I use the SSD for development and, humorously, Steam games. It's nice, but given that I have 8GB of RAM and most everything I touch regularly is cached...and I virtually never turn my PC entirely off (S3 sleep is your friend)...outside of database tests it just doesn't make that much of a difference.


Interesting. I didn't notice the affiliate links at first, and then when I did look, I saw domain names like kqzyfj.com. What purpose does it serve to hide the fact that it's an affiliate link behind a disposable domain name? Are there things like Firefox extensions that would search for "commissionjunction.com" and replace it with non-affiliate links?


Those weird domains are Commission Junction. That's how they hand out the links, nothing Jeff did on his own. They moved all their affiliate links to a set of gibberish domains many years ago... I can only speculate why, but software that removes tracking cookies, and malware that rewrites affiliate links to credit the sales to someone else, are the first things that come to mind.


I always look for affiliate links when bloggers suddenly start shilling merchandise outside of their competency.

I don't mind when there's an obvious "best prices" DIV at the bottom or wherever, where its purpose and commercial motivation are obvious, but embedded affiliate links just feel... scummy.

It's like getting invited to dinner and the host starts an Amway pitch. It colours the entire presentation.


> It's like getting invited to dinner and the host starts an Amway pitch.

I respond in the same way: I leave, never to return.


Maybe try not attending dinners titled "revisiting Amway products". Just a suggestion.


I realize this is a bit off topic, but how do you feel about disclosure right there in the blog post? Something like "if you follow that link, I'll get some money"? I wrestle with this every time I link to something in a blog post. I've mostly settled on the affiliate link disclosure followed by an affiliate-free one.


Decent article, but out of date by a few months. None of the drives listed use the newer SandForce controller, which pushes sequential writes into the 275 MB/s range and random writes to around 50K IOPS. These drives are where it's at (for home/commodity use, etc.).


The Crucial RealSSD C300 drive he cites has a theoretical max of 355MB/s. I happen to have this drive. Writes come close to 150MB/s at times, after a month and a half of use on OS X (which apparently lacks TRIM). That's not too shabby.

Here's an Xbench result from about a month and a half ago (when I first got it) and one for today (as of 10:36am EST): http://db.xbench.com/merge.xhtml?doc1=459679&doc2=469412...


It only hits that max if you happen to have a SATA 3 compatible host; otherwise you'll fall back to SATA 2 and your speed will be comparable to everyone else's.
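
For reference, a rough back-of-the-envelope of what each link can actually carry. This is just illustrative arithmetic, assuming 8b/10b line encoding and ignoring protocol overhead:

    # Rough SATA link payload bandwidth, assuming 8b/10b line encoding
    # (10 line bits carry 8 data bits) and ignoring protocol overhead.
    def sata_payload_mb_per_s(line_rate_gbit):
        data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10  # strip encoding overhead
        return data_bits_per_s / 8 / 1e6                 # bits -> bytes -> MB

    print(sata_payload_mb_per_s(3.0))  # SATA 2: ~300 MB/s
    print(sata_payload_mb_per_s(6.0))  # SATA 3: ~600 MB/s

So a drive rated at 355MB/s is bottlenecked on a SATA 2 host, but has headroom on SATA 3.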


I upgraded my MBP to the Corsair Force 240GB and it's been blazing. Boots in seconds and I'm never waiting for anything anymore.


I bought a 24GB ExpressCard SSD to use as a boot drive and to launch certain applications from. Gave my MacBook Pro new life, even though it's not a high end SSD.


Was surprised to hear OS X doesn't support TRIM. Does anyone know why? I was getting all excited about getting a new SSD for my MBP, but now I'm not so sure...


As a commenter pointed out, it seems that TRIM is not needed, in practice, on OS X. http://www.bit-tech.net/hardware/apple/2010/07/01/mac-ssd-pe.... As a MacBook Pro user who has had an SSD for a few months now (with a very full drive, frequent media file rotation, git repos, etc.), I have not noticed any slowdown. And yes, it was worth it.


That link is a bad test. They wrote 0s to the drive, which is flawed because 1) empty NAND flash contains all 1s, and 2) writing 1s or 0s doesn't mark the blocks as cleared; you have to use Secure Erase.



Steve Jobs: "Thanks to everyone for coming today. Today is a big day. Today we release an update for Mac OS X that quadruples SSD performance and furthermore, all Macs will come standard with a SSD hybrid... etc..."


He wouldn’t use the word “today” three times in his first three sentences. Now, “amazing,” on the other hand… ;-)


AnandTech and other sites have mentioned that newer drives (SandForce in particular) do background garbage collection, which eliminates some of the need for TRIM, but does require idle time.

I'd say go for the SSD. I just upgraded to a 240GB OCZ Vertex 2 and the speed boost is almost unbelievable.


Background garbage collection doesn't help so much if your drive doesn't know which logical blocks no longer contain active data - which is what TRIM is all about.

(Filesystems historically never worried about informing floppy disks or hard disks about deleted sectors/blocks, because there was nothing useful those storage media could do with the information.)


SSDs have spare blocks for extra life/wear leveling. SSD garbage collection can use LBA information in combination with these spare blocks to always have cleared blocks on hand.


Sure, but it still has to waste time (and cell life) consolidating and moving about logical blocks that the filesystem no longer cares about.


GC only works while the drive is idle and should improve cell life drastically, because you do a whole lot fewer read/modify/write cycles for small writes in the long run. Assuming a 256 GB drive with a write endurance of 1,000,000 cycles and a write speed of 200 MB/s, it would take on average 40 years to burn out the drive by sequentially overwriting its contents.

The problem with read/modify/write is that if you are doing random writes of 4 KB with an erase block size of 512 KB at 50 MB/s, the effective internal write speed is over 6 GB/s and you will burn out your drive in 1-2 years instead of 40.

A full GC cycle is essentially a sequential rewrite of the drive. Let's say a full cycle frees up 16 GB, as 4 GB of overprovisioning per 64 GB is standard. You can write 16 GB without any read/modify/writes, and for every 16 GB you write, you will have to do an additional 256 GB of writes for the GC cycle. That sounds awful, but compared to a bunch of read/modify/writes, drive life goes up by a factor of 8, and the average write speed is higher too. To avoid long pauses the GC only runs when the drive is idle, so for things like database servers TRIM is essential, but on a laptop/desktop GC is plenty good enough.
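
The numbers above can be reproduced with a few lines of arithmetic. The inputs (1,000,000-cycle endurance, 512 KB erase blocks, 4 GB of overprovisioning per 64 GB) are the illustrative assumptions from this comment, not the specs of any particular drive:

    # Reproduces the back-of-the-envelope figures above; all inputs are the
    # comment's illustrative assumptions, not real drive specs.
    GB = 1e9
    YEAR = 3600 * 24 * 365
    write_budget = 256 * GB * 1_000_000          # capacity * endurance cycles

    # Sequential overwrites at 200 MB/s
    print(write_budget / 200e6 / YEAR)           # ~40 years to wear out

    # 4 KB random writes with 512 KB erase blocks: every small write triggers
    # a full read/modify/write, so amplification is 512/4 = 128x.
    amplification = 512 / 4
    internal_rate = 50e6 * amplification         # ~6.4 GB/s of internal writes
    print(write_budget / internal_rate / YEAR)   # ~1.3 years to wear out

    # With GC: writing 16 GB of user data costs one ~256 GB background rewrite,
    # i.e. (16 + 256) / 16 = 17x amplification instead of 128x.
    print(amplification / ((16 + 256) / 16))     # ~7.5x longer drive life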


I may have been unclear - I'm not denying that GC is a good thing, but GC with TRIM is substantially better again than GC without. TRIM is just a way of putting more blocks into the G, to be C'd.
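
A toy illustration of that point, with made-up numbers; the only thing it shows is that without TRIM the drive has to relocate data the filesystem has already abandoned:

    # One hypothetical 512 KB erase block holding 128 pages of 4 KB each.
    block_pages = 128
    live_pages = 40     # data the filesystem still references
    stale_pages = 60    # data the filesystem deleted but never told the drive about
    free_pages = block_pages - live_pages - stale_pages

    # Without TRIM the drive can't tell stale pages from live ones, so GC must
    # copy both out of the block before erasing it; with TRIM it relocates only
    # the genuinely live pages.
    print(live_pages + stale_pages)  # pages copied per GC pass without TRIM: 100
    print(live_pages)                # pages copied per GC pass with TRIM: 40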



Just as a followup, this OWC Mercury isn't the only one that does built-in garbage collection. From what I can tell, most newer-generation SSDs do it. While it's not as good as TRIM support, it's a reasonable approximation.

As a side note for anyone else considering buying an SSD, there are rumors flying around that Intel is getting set to re-release their SSD line on a smaller process next month. This should cut costs and increase speeds in time for the Christmas season.


I used to know OCZ as a bit of a shady, benchmark-gaming outfit "back in the day". Today, however, the SandForce-based Vertex II Pro SSDs are amazing: 10K+ IOPS sustained under a genuine database load, 100GB for just over $600. Compare that with STEC Mach8 devices at ten times the price, without super-capacitor backing, and with lower throughput.

Sure, a couple of JBODs stacked with 15K RPM disks could do the same for $50K... Now imagine a small single-parity array of four SandForce devices. ;-)

For most things, meh. But for databases, it's a real game changer. 2010 is definitely the year SSDs changed everything.


Did anyone else notice that the performance of the Seagate Momentus XT is awful compared to all the solid state drives? It's only about 20% faster than a regular hard disk drive (the SSDs are about 1000% faster, according to Jeff's own chart). It's an awful, awful recommendation. If you want the performance of a solid state drive, buy a solid state drive.


It's apparently not much faster than a physical drive for random data traffic, but much faster for loading frequently opened files and booting the PC.


It's basically like the cache hierarchy in your CPU now, except it's a bit smarter about what goes into the flash memory (roughly sketched in code below).

L1 = the 32MB RAM

L2 = 4GB Flash

finally the 500GB HD
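
A rough, hypothetical model of that read path. The class names and tier sizes here are stand-ins for illustration, not how the Momentus XT firmware actually works:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()          # LBA -> data

        def get(self, lba):
            if lba in self.blocks:
                self.blocks.move_to_end(lba)     # mark as most recently used
                return self.blocks[lba]
            return None

        def put(self, lba, data):
            self.blocks[lba] = data
            self.blocks.move_to_end(lba)
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used

    class HybridDrive:
        def __init__(self, platter):
            self.ram = LRUCache(8)        # stands in for the 32MB RAM buffer
            self.flash = LRUCache(1024)   # stands in for the 4GB flash cache
            self.platter = platter        # stands in for the 500GB spinning disk

        def read(self, lba):
            data = self.ram.get(lba)
            if data is None:
                data = self.flash.get(lba)
            if data is None:
                data = self.platter[lba]   # slow path: seek the platters
                self.flash.put(lba, data)  # promote blocks that missed both caches
            self.ram.put(lba, data)        # hot data always lands in RAM
            return data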


It'd be nice if there existed a thing that acted like the SSD part of the Seagate Momentus XT. That way I could hybrid-ize my existing drives and have greater flexibility for future upgrades.


Check out ARC; ZFS has an implementation of it.

http://en.wikipedia.org/wiki/Adaptive_Replacement_Cache


There's the HDDBOOST, from SilverStone, though it almost sounds like you were just begging someone to link to it ;)

Some benchmarks: http://www.phoronix.com/scan.php?page=article&item=silve...


Hrm. I was hoping that such a device would perform better.

Yeah, I was hoping someone would link something. In this case, I didn't really know what I was looking for. What do you call a doohickey that does that sort of thing?


I'm not personally aware of any specific name for it, but I used search terms in the neighborhood of "SSD hybrid adapter" to find the Phoronix article.


I would disagree about not needing >2GB of ram, but other than that - SSDs rock.


Running VMs for testing during development (i.e. an IE6 VM, an IE7 VM, etc.) is the major driver for us.

Outside of that, my work machine has Chrome, Mail.app, Terminal and Textmate open generally so 2GB is fine without the VMs.

My home machine is the previous-generation, top-of-the-line iMac (3GHz, 2GB) and is often noticeably sluggish. It looks like iTunes is responsible for a bit under 200MB. To be honest, I really don't know what the issue is, but it feels like a 4GB upgrade would get it over the hump.


On my Ubuntu machine, I rarely use more than 1GB (apart from the disk cache, of course, which can fill all remaining memory. Note: Windows only has an 8MB cache).


On my Kubuntu machine with 4GB, I rarely use more than 1.5GB of RAM, and that's counting the disk cache which will grow to fill all available space. Unless I run a VM, the system literally has no use for half my RAM, except for dual-channel access.


Can you qualify the "Windows only has an 8MB cache" bit?

Of course Microsoft Windows uses all available memory for caching, quite effectively. If you have copious RAM you get far-better-than-SSD "read speeds" when reopening applications (a typical behavior, cycling between a small universe of applications), presuming your computer is never entirely turned off, e.g. you put it to S3 sleep during off time.


I have the 2010 Mac Pro @ 3.33GHz. Going from the WD Caviar Black that it ships with to two SATA SSDs (60GB each, one for Boot and one for Apps) and one PCI SSD (120GB, for scratch work) has been unbelievable. This was a fast computer to begin with, but now I literally never find an application waiting on the disk. Xcode builds that take several minutes on my laptop (copying lots of resources into the bundle, for example) complete in seconds on the new machine. Highly recommended.


Does anyone know how to get Linux to support TRIM? I just reformatted my SSD and would hate for it to become slower just because of deallocated blocks.


According to the linked wikipedia page on the TRIM command, Linux has supported it "since Feb 2010", so I would imagine that anything running the 2.6.32 kernel would correctly utilize that feature if your drive supports it. Please correct me if I'm wrong.



It also requires filesystem support. ext4 was the first to support it.

I believe btrfs, nilfs2, and xfs support it now as well.
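
If you want to check whether the kernel actually sees discard/TRIM support on your drive, one way (assuming a 2.6.33+ kernel, which is roughly when the block layer's discard attributes showed up in sysfs; "sda" is just an example device) is:

    # Check whether the kernel reports discard/TRIM support for a block device.
    # Assumes a 2.6.33+ kernel that exposes discard attributes in sysfs;
    # "sda" is a placeholder, substitute your own device.
    def supports_discard(device="sda"):
        path = "/sys/block/%s/queue/discard_max_bytes" % device
        try:
            with open(path) as f:
                return int(f.read()) > 0     # 0 means the device rejects TRIM
        except IOError:
            return False                     # kernel too old to report discard

    print(supports_discard("sda"))

Even with kernel and drive support, the filesystem still has to issue the discards, e.g. ext4 mounted with the discard option.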


The article says that the "latest, greatest Linux kernels should beware", so I don't know what to think :/


Parse it once more: "Operating systems earlier than Windows 7 and the latest, greatest Linux kernel should beware"

Translation: latest, greatest Linux kernels support TRIM.


Ah, "earlier than (Win 7 and the latest kernel)", rather than "(earlier than Win 7) and the latest kernel". Thanks, looks like I'm covered, then.

EDIT: Apparently, Ubuntu is one version behind TRIM support.


This guy lists some good starting points: http://cptl.org/wp/index.php/2010/03/30/tuning-solid-state-d...


The C300 is definitely NOT the fastest SSD. There are a few that are much, much faster.

1) FusionIO ioXtreme = 670/280 MB/s

2) FusionIO ioDrive = 770/750 MB/s, 140,000 IOPS

3) FusionIO ioDrive Duo = 1500/1500 MB/s, 261,000 IOPS

4) RamSan-440 = 4 GB/s, 600,000 IOPS

5) NextIO vSTOR S100 = 5.5/6 GB/s, 2,200,000 IOPS

There are many more listed here: http://www.storagesearch.com/ssd-fastest.html


These are either rack-mounted or PCIe drives. If I look at the chart on that page, the C300 is the fastest 2.5-inch SATA drive, i.e. it's the fastest drive that can practically be added to a laptop.


The RamSan and NextIO aren't internal storage devices, and while the FusionIO drives are non-volatile, they use the PCIe bus. A quick glance does not indicate whether they can be booted from.

(edit: all that aside from the cost of using any of those solutions as a desktop's storage system)

(edit2: not condoning the article in any way)


Tangential: Does anyone have experience with using SSDs in production web/DB servers? Is it viable (yet)?


MySpace replaced all their fast HDs with SSDs about a year ago, with good results: http://www.computerworld.com/s/article/9139280/MySpace_repla...


We've got a client with a huge ($15,000) array of SSDs that he's using to serve about 9 gigabits of traffic per second from his webserver with one RAID enclosure. Duplicating the performance with normal drives would be cheaper for the drives themselves, but would take several more enclosures.


I don't understand the reasoning about not wanting to put a relatively expensive SSD in a relatively cheap laptop.


I've got an expensive SSD in my cheap laptop (free, actually, it was the one they gave me at Microsoft PDC 2009) and it really makes a huge difference. This cheap little laptop, with the SSD, hooked up to a huge monitor, is one of the best dev environments I've ever used.


Laptops get stolen more frequently than desktops?



