
That would be a death knell for hard drives if they could actually get it that cheap (~$25/TB). In addition to the obvious benefits for consumers, more and more in the enterprise space (which is the dominant buyer of HDDs by unit volume) HDDs are being relegated to write-few, read-lots if not outright WORM workloads, and judged on sequential speeds. You can knock an order of magnitude of speed off modern NAND flash and still compete with spinning disks on sequential throughput, with a few more orders of magnitude to spare on random I/O. It wouldn't kill HDDs overnight, but it would give them a definitive terminal diagnosis.
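
A rough back-of-the-envelope check on that margin (all figures below are assumed ballpark numbers for a current NVMe drive and a 7200 RPM disk, not from the article):

    # Assumed ballpark figures, for illustration only.
    nvme_seq_mb_s = 3500       # typical NVMe sequential throughput
    hdd_seq_mb_s = 250         # typical 7200 RPM sequential throughput
    nvme_rand_iops = 500_000   # typical NVMe 4K random IOPS
    hdd_rand_iops = 200        # typical HDD 4K random IOPS

    print(nvme_seq_mb_s / hdd_seq_mb_s)    # ~14x headroom on sequential
    print(nvme_rand_iops / hdd_rand_iops)  # ~2500x headroom on random I/O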

I would love that. I doubt that's what they're doing, but I would love that.




> wouldn't kill HDDs overnight

What's killing HDDs for me - looking at desktop use - is noise. Seems it got weirder lately, with wear-levelling ringing in every five seconds, etc.

Today I just "rebuilt" a brand new HP box by adding RAM and swapping the original 512 GB M2 SSD for a 2 TB one (Crucial P5 Plus, nice specs). Thinking of adding a 6 or 8 TB HDD for data landfill, but most any customer review mentions aggravating disk noise.

So, holding off the landfill for a while, peering at SATA SSD prices ...


When I built a quiet computer, I was surprised that even at max load the HDD was by far the loudest component.


That depends on how much those supposed 50TB hard drives cost in four years.

Though even if hard drive prices stay completely stagnant at $14/TB during sales, I'd still get a hard drive for videos and backups rather than pay $25.


> I'd still get a hard drive for videos and backups rather than pay $25.

So would everyone whose only motivation for buying storage is $/TB and nothing else. The vast majority of those people think 8 or even 80TB is a lot and are perfectly happy with subpar drives packaged in external USB enclosures. And there's nothing wrong with that, but that demographic of consumers is not the driving economic force in what gets developed, invested in, and brought to market.

HDDs' ~$15/TB bargain prices won't be sustainable if enterprise drops HDDs for WORM/WFRM workloads because their TCO is too high (and many are dropping them). Disk shelves chew through kilowatts like candy while capacity-focused flash hosts seem comparatively allergic to electricity, and repairing/rebuilding arrays on spinning rust compared to flash arrays is alone impetus enough for a lot of shops to dump rust and embrace solid state.
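
To put a rough number on that power gap, a minimal sketch; the shelf size and per-drive wattages are assumptions for illustration, not measured figures:

    # Yearly energy for one 60-bay shelf, HDD vs capacity-focused flash.
    # Wattages are assumed averages, for illustration only.
    drives = 60
    hdd_watts = 8.0
    ssd_watts = 3.0
    hours_per_year = 24 * 365

    hdd_kwh = drives * hdd_watts * hours_per_year / 1000
    ssd_kwh = drives * ssd_watts * hours_per_year / 1000
    print(hdd_kwh, ssd_kwh)   # ~4200 kWh vs ~1580 kWh per shelf per year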

> That depends on how much those supposed 50TB hard drives cost in four years.

For sure. I'm highly skeptical they can pull that off in that time frame though. The pace of innovation in the HDD space is woefully lacking. They need a win like that, with competitive pricing, or the industry's days are numbered. If they can pull it off, though, that's great. I'm all for cheaper/better storage options, even if it's mechanical (or optical or whatever).


Not really sure about that. Isn't the write endurance of hard drives still much higher than that of flash?


Not really.

If I go look at some 20TB hard drives, I see them promising 1.5 to 2.5PB of endurance.

On the other hand there are 4TB SSDs promising 5PB of endurance.

In full drive writes that's 75-125 vs. 1250.

Even if you ignore the hard drive warranty, I'd say the maximum reasonable workload is a constant write at 50% of the minimum transfer speed. At that speed you might get over 1000 writes, depending on drive size. It's a struggle to even beat the worst case of a mainstream SSD chosen for endurance.
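
Rough arithmetic behind those figures (the endurance ratings are the ones quoted above; the HDD transfer speed is an assumption):

    # Full drive writes = rated endurance / capacity.
    print(1.5e15 / 20e12, 2.5e15 / 20e12)   # 75.0, 125.0 for the 20TB HDDs
    print(5e15 / 4e12)                      # 1250.0 for the 4TB SSD

    # Sustained-write ceiling: writing at half of an assumed ~130 MB/s
    # minimum (inner-track) speed, around the clock, for 5 years.
    bytes_written = 0.5 * 130e6 * 60 * 60 * 24 * 365 * 5
    # ~512 full drive writes over 5 years; a smaller drive or a longer
    # service life pushes this past 1000.
    print(bytes_written / 20e12)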

A QLC SSD will only withstand a couple hundred writes, but even that's moderately competitive.


Samsung rates their QVO (QLC flash) at about 0.36 DWPD for 5 years, which works out to over 600 full drive writes before failure. Even if we halve that, it's still easily 3x high-end HDDs, with superior drive health monitoring and far superior recovery/rebuild performance. TLC flash is 2-3 times higher than that, and exact endurance depends on the specific drive in question, as there's more to endurance than single/double/triple/quadruple bit layering.
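
Quick check on that figure, assuming the rating is spread evenly over the warranty period:

    # Full drive writes implied by a DWPD rating over the warranty.
    dwpd = 0.36
    warranty_years = 5
    print(dwpd * 365 * warranty_years)   # ~657 full drive writes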


It seems to me like initially people were worried about SSD endurance but now nobody seems to care anymore, because endurance hasn't turned out to be a significant issue. In fact, I suspect developers care a lot less about I/O patterns now because 1) the performance impact of bad I/O patterns is still high, but the performance floor is way higher on SSDs, so it basically "doesn't matter" for most, and 2) for end-user applications, users don't hear your I/O any more, so how are they going to notice?

For example, I have this C:\ SSD on a Windows 10 machine, which has accrued more than 100 TBW (>400 drive writes) in 2-3 years [1], which is pretty much only possible if the system is writing all the time to it (and it is). That's not something you would have done in the spinning rust days, simply because the constant crunchy hard drive noises would have annoyed people.

[1] And whaddaya know, that SSD still says it's got 70 % more life in it, so it's probably good for some five years of this, at which point the average PC will go into the dumpster anyway. Success achieved.
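
For what it's worth, a minimal sketch of that projection, assuming the drive's wear indicator scales linearly with TBW:

    # Figures from the comment above; linear-wear assumption.
    tbw_so_far = 100          # TB written in roughly 2.5 years
    life_remaining = 0.70     # as reported by the drive
    rated_tbw = tbw_so_far / (1 - life_remaining)   # ~333 TB implied rating
    write_rate = tbw_so_far / 2.5                   # ~40 TB/year
    print((rated_tbw - tbw_so_far) / write_rate)    # ~5.8 more years at this pace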


> now nobody seems to care anymore,

I presume you're talking about layman consumers here.

> because endurance hasn't turned out to be a significant issue

It used to be a big issue. Keep in mind SSD capacities were pretty small back in the day, with 80-120GB being typical. Combined with less mature flash technology overall, per-cell endurance was not great, and drives lacked the controllers and capacities to do proper wear leveling, so the massive speed boost meant you could burn through even "good" SSDs of the day pretty quick, especially in the days before TRIM and the like, when the OS was doing unnecessary excess writes to disk because it just didn't know how to handle flash.

At first, the tech kinda brute-forced its way through this problem with extra capacity (a 100GB SSD, for example, will often have more than 100GB of raw NAND to account for cell degradation, though to what extent depends on the flash, the controller, whether there's DRAM, and so on), and then through improved overall endurance, controllers, cache, etc., in combination with improved software and OS policies that reduce unnecessary wear on drives. So it hasn't really been an issue for a while, but it absolutely was one back in the day, even for the layman with a healthy budget.

> That's not something you would have done in the spinning rust days,

Yes, people did. They didn't really have a choice (unless they were having some fun with a RAM disk or something), and HDD I/O was so much slower that it took orders of magnitude longer. Even when you weren't actively doing something drive intensive, the OS was consistently doing work in the background (remember defragging?).

> simply because the constant crunchy hard drive noises would have annoyed people

It's wild to me to see so many people share this sentiment. Then again I come from a background of having 8x10k 300GB Velociraptors in my personal workstation back in the day. Those were... Loud. lol.


> endurance hasn't turned out to be a significant issue

To be fair, I think it's a bit like y2k. It's not an issue because steps have been taken to mitigate it.


I don't know. Simple error correction and wear leveling were required to get off the ground, and that by itself was enough to give early drives sufficient endurance.

The real mitigation work was done in the service of storing more bits per cell, with endurance allowed to slip.

So sure effort has been put in, but the alternative wasn't disaster, it was using SLC more.


>0.36 DWPD for 5 years

so a mechanical drive running for ~40K hours with ~5% spent writing.

>easily 3x high-end HDDs

how? high-end HDDs offer 2M hours MTBF, and with something like the HGST 7K6000 you end up with pallet loads being retired at 50K hours run time, 100% defect free.

>superior drive health monitoring

how? SSDs usually just die without any warning

>and far superior recovery/rebuild performance

you mean recovery/rebuild at the storage box level, because data on a dead SSD is unrecoverable, gone forever


> how? high-end HDDs offer 2M hours MTBF, and with something like the HGST 7K6000 you end up with pallet loads being retired at 50K hours run time, 100% defect free.

The warranty says you only get 75 or 125 full drive writes on the two higher end 20TB drives I looked at.

I can't tell you why it's that low, but that's what it says.

> how? SSDs usually just die without any warning

Sometimes SSDs just die, sometimes they go read-only. Sometimes HDDs just die too. Do you have any numbers?


Hmm. Thanks for sharing. Definitely food for thought.


No. Premium flash (write-endurance-optimized flash) is far, far superior to HDDs for endurance and reliability. And modern flash controllers are far better at providing granular, detailed health statistics for cells/devices and at letting you recover data from failing devices, as well as far faster at rebuilding or recovering data for array rebuilds/resilvers.

In the context of flash replacing HDDs, HDDs have already been relegated to Write Few, Read Many (WFRM) or even Write Once, Read Many (WORM) use cases, and flash has largely replaced spinners where write endurance is an important feature. This is true for consumers and enterprise. As such, a low write endurance flash drive isn't necessarily an inherent flaw. You have to be mindful that you can't just drop it into your databases as a hot spare (not if you expect good results anyway), but similar special-use considerations are made for SMR spinners, so that's not exactly an atypical restriction, and it's a flaw that still heavily favors flash.

Technically you can probably do more writes to an HDD than you could to a 100 P/E cycle endurance flash drive (that's 1/10th the endurance of a typical QLC NAND flash cell), but the HDD will be orders of magnitude slower and cost you several times as much in electricity just to idle, not to mention under load (again, we're talking capacious flash, not NVMe speed demons that suck down power). Given the use case in question is specifically one where writes are already severely limited, the reduced write endurance of such NAND cells isn't really a drawback, especially given how much better they are at providing granular drive and cell health data to monitor device and array health, and how much faster/easier it is to do rebuilds/recovery with flash than with spinning rust.

EDIT: To quantify how good flash is these days: nearly all consumer flash and most non-high-endurance enterprise flash drives typically have 0.5-1 DWPD of endurance. Entry-level QLC flash (e.g. Samsung 870 QVO) can be as low as 0.3 DWPD [0], but that's still pretty good compared to HDDs. DWPD means that if you have a 1TB drive, you can write 500GB to 1TB to it every day for 5 years before the drive fails. This is a rated endurance figure, not a hard failure point, so YMMV, but that's still very good. [1]
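
A minimal conversion from DWPD to total bytes written, using an example 1TB drive and 5-year warranty (the capacity here is just an assumption for illustration):

    # DWPD -> total TB written over the warranty for an example 1TB drive.
    capacity_tb = 1
    dwpd = 0.3
    warranty_years = 5
    print(capacity_tb * dwpd * 365 * warranty_years)   # ~548 TB written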

Meanwhile, HDDs are too slow to even do 1 DWPD at 20TB-and-up capacities, period (keep in mind throughput for spinning disks degrades as data is allocated closer to the inner tracks of the platter, so those >200MBps sequential figures are unrealistic for full-disk writes unless you plan to short-stroke your drive, which massively reduces capacity). And if it's SMR? lol. Even if you do short-stroke them to give them the best possible chance, you're looking at roughly 15TB written per day as a theoretical maximum for the best 3.5" spinners out there.

Additionally, the main driver of spinning disk failure is mechanical, not degradation of the write medium as in flash, so while it's hard to do apples-to-apples testing, if your real-world write workload works out to less than half a full disk per day, even bargain-bin flash that Sabrent scraped off Samsung's fab floor will last you the better part of a decade in 24/7 use. Let's not forget that HDDs continue to degrade at idle by virtue of being mechanical, whereas an SSD suffers very limited if any degradation in the same idle case.
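
Rough math behind that daily-write ceiling; the sustained throughput figure is an assumed full-platter average:

    # Theoretical max TB/day for an HDD writing nonstop, assuming ~175 MB/s
    # sustained across the whole platter (outer to inner tracks averaged).
    sustained_mb_s = 175
    tb_per_day = sustained_mb_s * 60 * 60 * 24 / 1e6
    print(tb_per_day)        # ~15.1 TB/day

    # A 20TB drive therefore can't physically reach 1 DWPD (20 TB/day).
    print(tb_per_day / 20)   # ~0.76 DWPD ceiling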

High End Consumer/Entry level enterprise flash is 1-3DWPD. [2]

High Endurance enterprise flash is 5-10DWPD. [3]

The Endurance champion however is 100DWPD [4]

TL;DR: So setting money aside, HDDs really only beat SSDs (even cheap QLC ones) in small-capacity (<10TB) use cases. For a lot of consumers, that makes HDDs an easy pick, especially since HDDs are typically still cheaper per terabyte. Otherwise, SSDs are better in pretty much every way except price, and SSDs are improving far faster than HDDs on that front as well.

[0] Samsung 870 QVO: https://semiconductor.samsung.com/consumer-storage/internal-...

[1] Micron/Crucial MX500: https://www.crucial.com/ssd/mx500/CT2000MX500SSD1

[2] Samsung 980 Pro: https://semiconductor.samsung.com/consumer-storage/internal-...

[3] HGST SS530: https://documents.westerndigital.com/content/dam/doc-library...

[4] Intel Optane P5800X: https://ark.intel.com/content/www/us/en/ark/products/201840/...


Yes. Flash can’t be stored unused for long periods without losing data.


That's what tape is for, and it's not the use case in question.


isn't tape better here?


Taking into account that they don't say anything about data retention and reliability, I wouldn't hold my breath.



