
Also: modern storage is plenty fast, but it's not reliable for long-term use.

That is why I buy a new SSD every year and clone my current (worn out) SSD to the new one. I have several old SSDs that started to get unhealthy, at least according to the S.M.A.R.T. utility I use to check them. I could probably get away with using an SSD for another year, but I won't risk the data loss. Anyone else do this?




No. Hardly anyone does this, because it's just conspicuous consumption, not actually sensible. Have any of your SSDs ever used even half of their warrantied write endurance in a single year?


The solution for this problem is RAID. It's way cheaper and far superior to your approach in terms of reliability.

Or if that isn't an option (laptop?), a good backup solution that runs daily or more often is also a better and cheaper alternative.

Drives may fail at any time, but they don't age the way your post would suggest.


I've looked into RAID. It seems a bit complicated to use. Is it trivial to create a RAID array in Linux with zero fuss, with the whole thing 'just working' and requiring very little knowledge of the filesystem itself, other than that it keeps your data 'safe' with redundancy baked in?


Do you mean “use” or “setup”? RAID is trivial to use. Mount the volume to a directory and use it like normal.

The setup is a bit more involved, but really not that bad. It’s a couple commands to join a few disks in an array and then you make a file system and mount it.

https://www.digitalocean.com/community/tutorials/how-to-crea...
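
For the curious, a hedged sketch of that "couple commands" path on Linux, driven from Python purely for illustration (not taken from the linked tutorial; the device names /dev/sdb and /dev/sdc are hypothetical, and it needs root):

    import subprocess

    def run(cmd):
        # Echo each step before executing it.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Join two disks into a mirrored (RAID 1) array.
    run(["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sdb", "/dev/sdc"])
    # 2. Make a filesystem on the array.
    run(["mkfs.ext4", "/dev/md0"])
    # 3. Mount it; from here on it behaves like any other directory.
    run(["mkdir", "-p", "/mnt/raid"])
    run(["mount", "/dev/md0", "/mnt/raid"])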


On the one hand, a new SSD a year sounds extreme.

On the other hand, how many years does each of us have left? Ten? twenty? Thirty? Forty? Few of us can easily imagine ourselves still alive and productive in forty years. So much of what we do rests on an implicit assumption that we are going to live for eternity, and starts to seem pointless when we consider how short our existence is.


Very well said. There are times we lose the bigger picture of our lives and instead start wasting time with pointless stuff just to escape the reality of our lives.


Materialism (the dominant underlying philosophy of our culture) keeps us away from that higher level consciousness. It poisons our mental models and worldview.


What do you do that wears them out so fast? I've been running the same NVMe disk as my daily driver since 2015 and it's not showing any signs of degradation.


> What do you do that wears them out so fast?

I forgot to mention I do a lot of heavy writes to it. It is common to see me creating a huge 20GB virtual machine disk image, using it for a few hours, then deleting it, before creating a new one in its place. I'm a huge virtualization freak.


In a lot of these systems (at least VMWare back when I used that, and Docker) you can clone an existing image with copy-on-write. This is a lot faster and would avoid 20GB of writes to spin up a new VM.
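
As a rough illustration of the idea (using qemu-img as one concrete tool rather than VMware or Docker, with a hypothetical base.qcow2 image), a copy-on-write overlay only stores the blocks that diverge from the base:

    import subprocess

    # Create a copy-on-write overlay on top of an existing base image.
    # Only blocks the new VM actually changes get written to clone.qcow2,
    # so spinning up a fresh VM costs megabytes of writes, not 20GB.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", "base.qcow2", "-F", "qcow2", "clone.qcow2"],
        check=True,
    )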


Eh, still that is not that much data. I've had much longer life out of SSDs that are used as cache drives and perform a huge amount of writes per day, with Intel and Samsung drives that is.

If you're losing drives that fast check your thermal management instead. Don't run your drives hot.


I work with bioinformatics data and tend to switch out an NVMe within 3-4 months. I'm usually maxing out read or write for 12 out of 24 hours a day. The slowdown is rapid and very noticeable.


> The slowdown is rapid and very noticeable.

That probably doesn't have anything to do with write endurance of the flash memory. When your drive's flash is mostly worn-out, you will see latency affected as the drive has to retry reads and use more complex error correction schemes to recover your data. But there are several other mechanisms by which a SSD's performance will degrade early in its lifetime depending on the workload. Those performance degradations are avoidable to some extent, and are not permanent.


So I can potentially recycle my used SSDs?


Almost certainly.

Assuming these are consumer SSD, the most important way to maintain good performance is to ensure that it gets some idle time. Consumer SSDs are optimized for burst performance rather than sustained performance, and almost all use SLC write caching. Depending on the drive and how full it is, the SLC cache will be somewhere between a few GB up to about a fourth of the advertised capacity. You may be filling up the cache if you write 20GB in one shot, but the drive will flush that cache in the background over the span of a minute or two at most if you don't keep it too busy.

The other good strategy to maintain SSD performance in the face of a heavy write workload is to not let the drive get full. Reserving an extra 10-15% of the drive's capacity and simply not touching it will significantly improve sustained write speeds. (Most enterprise SSD product lines have versions that already do this; a 3.2TB drive and a 3.84TB drive are usually identical hardware but configured with different amounts of spare area.)

If a drive has already been pushed into a degraded performance state, then you can either erase the whole drive or, if your OS makes proper use of TRIM commands, you can simply delete files to free up space. Then let the drive have a few minutes to clean things up behind the scenes.
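
On Linux you can also nudge that cleanup along explicitly; a minimal sketch, assuming the filesystem and drive both support TRIM (most distros already run this from a weekly timer):

    import subprocess

    # Tell the SSD which blocks the filesystem considers free (needs root).
    # -a walks all mounted filesystems that support TRIM, -v prints how
    # much was discarded on each.
    subprocess.run(["fstrim", "-av"], check=True)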


I think you could give it a shot with ATA Secure Erasing one of them and seeing if it performs faster. Although 4 months at 50% utilization at (say) 2GB/s is some ~10PB of I/O, so I'm not sure if I would expect what you're seeing to be a temporary slowdown...


Consumer SSDs have endurance ratings in TBW, which is terabytes written over the lifespan. They're often in the hundreds, with some drives over 1000. The faster drives also use MLC or TLC, which has lower latency, better endurance, and higher performance than the higher-capacity QLC.

For example the Samsung 1TB 970 PRO (not the 980 PRO) has a 1200TBW rating with a 5 year warranty. That's 1.2M gigabytes written or more than 600GB every day, and will usually handle far more.
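
A quick back-of-the-envelope check of that figure (plain arithmetic, nothing drive-specific):

    # 1200 TBW spread over a 5-year warranty, expressed in GB per day.
    tbw_gb = 1200 * 1000            # 1200 TB in GB
    warranty_days = 5 * 365.25
    print(round(tbw_gb / warranty_days))   # ~657, i.e. "more than 600GB every day"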


I'm still on the SSD I bought 6 or 7 years ago as my OS drive.

Haven't noticed a single issue on it.


It will highly vary depending on use case. I have been using the same SSD (Samsung 850 evo) since 2015. First used on my gaming desktop, then on my college laptop, now in my gaming desktop again. I just make sure to keep it at ~25% to ~50% capacity to give the controller an easy time and I try to stick to mostly read only workloads (gaming). SMART report from that drive: https://pastebin.com/raw/HyPE6aHm

For my disk and my exact use case: ~4 years of operation, 88% of lifespan remaining.

Your mileage will almost definitely vary.
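
If you want to pull that number yourself on Linux, something along these lines works for many drives. This is a rough sketch: smartctl must be installed, /dev/sda is a hypothetical device path, and the wear attribute's name varies by vendor (Wear_Leveling_Count on the 850 EVO, "Percentage Used" on NVMe drives):

    import subprocess

    # Dump SMART attributes (needs root) and pick out the wear indicator.
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Wear_Leveling_Count" in line or "Percentage Used" in line:
            print(line)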


I've had one (very early and cheap) SSD fail on me. Other than that I don't think I've seen or heard of any issues across a large range of more modern SSDs. The reliability and endurance issues which occurred on earlier SSDs no longer seem to be a problem (this is in part because flash density has skyrocketed: because each flash chip can operate more or less independently, the more storage an SSD has, the faster it can run and the more write endurance it has).


I would add a new drive with zfs mirroring and enable simple compression. For most use cases it gets better read performance, ok write performance, and can tolerate both of the drives being a bit flaky so you can run it for a lot longer than the new drive alone.
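
A minimal sketch of that setup with OpenZFS on Linux (hypothetical pool name and device paths; run as root):

    import subprocess

    # Mirror the two drives into one pool, then enable lightweight compression.
    subprocess.run(["zpool", "create", "tank", "mirror",
                    "/dev/sdb", "/dev/sdc"], check=True)
    subprocess.run(["zfs", "set", "compression=lz4", "tank"], check=True)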


Every year seems like a very short lifespan, but I guess every use case is different. I definitely replace drives when SMART is starting to look bleak, but that is far more infrequent in my use case, I guess.


> Every year seems like a very short lifespan

Yes but I forgot to mention I do a lot of heavy writes to it. It is common to see me creating a huge 20GB virtual machine disk image, using it for a few hours, then deleting it, before creating a new one in its place. I'm a huge virtualization freak.


> It is common to see me creating a huge 20GB virtual machine disk image, using it for a few hours, then deleting it

The SSD in my current desktop, a Samsung 960 Pro 1TB, has a warranty for 800 TBW or 5 years. So that's 800/5/365.25*1000 ~= 438 GB per day, every single day.

And it's been documented the Samsung drives can do a lot more than the warranty is good for.

Either you're doing something else weird, or you're not really wearing them out.

[1]: https://www.samsung.com/semiconductor/minisite/ssd/product/c...


That's still nothing even if you do that 4x/day.

Also just because you create a 20GB virtual disk does not necessarily mean you're actually writing out 20GB to the disk.

Many SSDs and NVMe drives have total drive writes per day (DWPD) in their specs.

What is the wear method you're measuring by and what's the threshold where you're replacing your drives?
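
To see the "20GB that isn't really 20GB" effect for yourself, here is a small self-contained demo (hypothetical file name; assumes a filesystem with sparse-file support, e.g. ext4/XFS/APFS, and a Unix-like OS for st_blocks): the file reports a 20GB size, but almost nothing hits the flash until real data is written into it.

    import os

    SIZE = 20 * 1024**3  # a "20GB" disk image

    # Create a sparse file: set the logical size without writing any data.
    with open("disk.img", "wb") as f:
        f.truncate(SIZE)

    st = os.stat("disk.img")
    print("apparent size:   ", st.st_size, "bytes")          # ~21.5 billion
    print("actually on disk:", st.st_blocks * 512, "bytes")  # close to zero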


> does not necessarily mean you're actually writing out 20GB to the disk.

You mean like preallocation? I think VirtualBox now does that. In the past it didn't, though; it just kept writing a bunch of zeroes to the drive until it reached 20GB.


Or maybe the filesystem decides to do it? ReFS will just eat runs of adjacent zeroes, assuming you want a fallocate there.


That is absolutely nothing in terms of the write endurance of modern drives.


I think having a backup solution is the better choice here. You can use your SSDs until they die or become too slow, and you won't lose your data if one breaks before its yearly replacement.


> I think having a backup solution is the better choice here

Any particular provider you would recommend? I've looked into Backblaze but it seems a bit pricey. Also: I am aware that cloud-based backup solutions rarely lose data to drive failures, since they're probably using RAID.


I use Backblaze's B2 (think S3) style storage for backup, and I'm paying about ~$4.50 USD for ~1TB of storage per month. I don't use a ton of bandwidth though, so if you have a lot of churn in the files you're backing up you could see higher costs, but looking over the numbers, Backblaze was the cheapest solution by far compared to the others I looked at.
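
For a rough sanity check of that bill (assuming B2's roughly half-a-cent per GB-month storage rate at the time, and ignoring download/transaction fees):

    # Storage-only estimate; real bills add download and transaction fees.
    rate_per_gb_month = 0.005   # assumed B2 rate in USD
    stored_gb = 900             # roughly what ~$4.50/month implies
    print(f"${rate_per_gb_month * stored_gb:.2f}/month")   # $4.50/month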


$6/mo?


Can somebody please write up the state of the world for modern SSDs regarding data retention, failure modes, and their applicability for replacing spinning rust as "on the shelf" offline storage...


SSDs make no sense for offline archival. They're more expensive than hard drives and will be for the foreseeable future. You don't need the improved random IO performance or power efficiency for a drive that's mostly sitting on a shelf.


If what you have is an SSD, you need to know how well suited it is to being unplugged, removed, and held offline. What I've read is that this is not a good fit for how the data retention of an SSD works; it expects to be plugged into power more frequently. It would be good to know whether the storage is safe for 6 months, a year, or 10 years. I believe it may not be.

I entirely agree that it's not the medium of choice yet, if ever. LTO still exists for many people; perhaps 25+ TB HDDs will make that moot.


When SSDs fail they don't lose your data, they just become unwritable. What you're doing is unnecessary and wasteful.


I've only had two SSDs fail on me, and in both cases they died without any warning. Didn't get discovered during boot or anything. Two different brands, very different uses.

So while they _can_ fail in a graceful way, that's not been my experience.



