
At what point will the wear resistance be so low from increasing the layers (and bits) per cell that flash becomes effectively write-once, read many (WORM)?



There was a paper a while back that promised a healing process for flash:

https://www.zdnet.com/article/self-healing-flash-for-infinit...

I have no idea what happened to that though, because it sounded extremely promising.


Probably they're more interested in selling you new flash.


Keep in mind, these are layers of cells, not more levels per cell. Each layer is itself made of individual multi-level cells.

That said, Samsung quad-level cells only have 1k write lifetime, so likely not far off.


Remember, even 1,000 write cycles means roughly three years of endurance at one full drive write per day, even with naive wear leveling and the most brutal write load. For most workloads, drives made of this kind of NAND can easily be warranted for three years.
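To put numbers on that (a back-of-the-envelope sketch; the 1,000-cycle figure and one-drive-write-per-day workload are the assumptions from the comment above):

```python
# Back-of-the-envelope NAND endurance estimate.
# Assumes ideal wear leveling (every cell erased equally often)
# and a write-amplification factor of 1 -- both optimistic.
PE_CYCLES = 1000          # rated program/erase cycles per cell
DRIVE_WRITES_PER_DAY = 1  # one full-drive write per day (DWPD)

lifetime_days = PE_CYCLES / DRIVE_WRITES_PER_DAY
lifetime_years = lifetime_days / 365

print(f"{lifetime_years:.1f} years")  # -> 2.7 years
```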


This doesn’t account for the write amplification factor, which is quite unlikely to be 1 with a naive wear leveling implementation and the most brutal write workload.
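A sketch of how write amplification shrinks the estimate (the WAF values here are illustrative, not measured — real WAF depends on the controller and workload):

```python
# Effective endurance once write amplification is considered:
# the NAND absorbs WAF bytes for every byte the host writes.
PE_CYCLES = 1000          # rated program/erase cycles per cell
DRIVE_WRITES_PER_DAY = 1  # host-side full-drive writes per day

for waf in (1.0, 2.0, 4.0):  # illustrative WAF values
    lifetime_years = PE_CYCLES / (DRIVE_WRITES_PER_DAY * waf) / 365
    print(f"WAF {waf:.0f}: {lifetime_years:.2f} years")
```

With a WAF of 4, the three-year estimate drops to well under one year.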


Note that there are data structures (e.g. B^ε trees) that can guarantee a write amplification factor of log_b(disk size) for very large b (over 1000). This means that if you are willing to sacrifice a small amount of sequential read speed, you can guarantee a write amplification of no more than 4 (with a few MB of RAM).


I've added [1] to my reading list. Do you have suggestions for others that relate specifically to WAF?

1. http://supertech.csail.mit.edu/papers/BenderFaJa15.pdf


The TL;DR is that the design of B^ε trees means that (other than the root node, which you can keep in RAM) you never need to write in chunks smaller than b^ε: smaller writes get batched and pushed down the tree only once you have that much data destined for a child node. With the typical ε = 1/2, you can set b = (SSD block size)^2, so that b^ε equals the block size. Any small write then turns into one write per level of the tree, each of them a full block of data (so the maximal WAF is the depth of the tree). Bigger writes can go to the bottom directly, so large sequential writes end up with a WAF of 1, and small/random writes get a WAF of d.
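A rough sketch of those bounds (the block size and disk size are illustrative assumptions; see the Bender et al. paper linked above for the real analysis):

```python
import math

# Rough B^epsilon-tree write-amplification sketch (illustrative).
# With epsilon = 1/2 and fanout b, updates flush down the tree in
# batches of ~sqrt(b); choosing b = block_size**2 makes each flush
# a full SSD block, so a small write costs one block write per level.
block_size = 4096                        # assumed SSD block size, bytes
b = block_size ** 2                      # fanout: sqrt(b) == block_size
disk_blocks = (4 * 2**40) // block_size  # a 4 TiB disk, in blocks

depth = math.ceil(math.log(disk_blocks, b))  # tree depth d
print("small/random write WAF ~", depth)     # -> 2 for these numbers
print("large sequential write WAF ~", 1)     # bottom-level writes only
```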


1k writes sounds good enough for consumer hardware I suppose. As long as there's some type of S.M.A.R.T. system to warn the user before it wears out.
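NVMe drives do report a "Percentage Used" wear estimate in their SMART/Health log. A minimal sketch of a warning check, parsing smartctl-style output (the sample text and the 80% threshold are illustrative assumptions, not a standard API):

```python
import re

# Warn when an NVMe drive's wear estimate crosses a threshold.
# Input mimics `smartctl -A /dev/nvme0`; the sample is illustrative.
SAMPLE_SMART_OUTPUT = """\
Critical Warning:                   0x00
Temperature:                        38 Celsius
Percentage Used:                    87%
"""

def wear_warning(smart_text: str, threshold: int = 80) -> bool:
    """Return True if the reported wear percentage meets `threshold`."""
    m = re.search(r"Percentage Used:\s+(\d+)%", smart_text)
    if m is None:
        return False  # attribute not reported; can't judge wear
    return int(m.group(1)) >= threshold

print(wear_warning(SAMPLE_SMART_OUTPUT))  # -> True (87% >= 80%)
```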


Usually it goes into read-only mode when it wears out, so there's no data loss.


That's fantastic, actually! One problem might be that programs don't anticipate this behavior, e.g. if it wears out while a filesystem is still syncing.


All modern filesystems are either journalled or always consistent on disk, so it shouldn't be a problem if the drive dies mid-write.

worst case you have to copy it to another disk and replay the journal


Yeah, 1K is still in the range of reasonable (if the wear leveling isn't terrible); another order of magnitude decrease, though, and we're getting to the edge of not.


Yeah they are going to stacked cells because the endurance is so bad at smaller process nodes.

SSD prices have stalled for quite a while. By now they should be getting within shouting distance of hard drives, but unfortunately memory makers are much more of a colluding trust than hard drive manufacturers.

What I really want is a high-lifetime write medium to go with SSDs. Hard disks aren't really it. It's too bad the DVD/CD form factor/drive died with high speed internet, holographic storage would have been great. But there was no investment/demand to push it.


I love how someone said there's a septuple-level cell out there... but you have to immerse it in liquid nitrogen for it to work.


PLC NAND is already on the way to commercial hardware.


> write-once, read many

But not too many; read disturb is also a NAND problem.


Levels per cell and wear I understand. What's the link between circuit layer depth and wear?


Increasing layers should cause roughly zero problems.

As far as bits per cell, 4 already gives you very slow writes and not a lot of them. Let's say that maxes out at 5 or 6.


Thermals? SSDs have slowly but surely increased in TDP.


Flash likes to run pretty hot and we still rarely attach heat sinks, even less often proper heat sinks with fins. There's a lot of headroom.

And if your main goal is capacity you could decide to just not run it faster, which will avoid basically all the extra heat.


Indeed, with modern flash the write endurance is actually significantly higher at higher temperatures (I presume because of greater charge mobility). But data retention, naturally, gets shorter at high temperatures and quite a bit longer at low temperatures; it can vary by orders of magnitude between temperature extremes. Again, I think that's due to the charge mobility effect.
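That temperature sensitivity is usually modeled with an Arrhenius acceleration factor (a sketch; the activation energy here is an assumed ballpark value, not a datasheet number):

```python
import math

# Arrhenius acceleration factor for charge loss (data retention).
# AF = exp(Ea/k * (1/T_ref - 1/T)); higher T -> faster charge loss.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K
EA_EV = 1.1                 # assumed activation energy, eV (illustrative)

def acceleration(t_ref_c: float, t_c: float) -> float:
    """How much faster retention degrades at t_c vs. the reference temp."""
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return math.exp(EA_EV / K_BOLTZMANN_EV * (1 / t_ref - 1 / t))

# Retention degrades orders of magnitude faster at 70 C than at 25 C:
print(f"{acceleration(25, 70):.0f}x faster charge loss")
```

With these assumed numbers the factor comes out in the hundreds, which is where the "orders of magnitude" intuition comes from.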


That's from controller chips.


I would actually love that. I need fast random access to data that never changes. SSDs are overkill and too expensive; HDDs are too slow.


Even the reads aren't truly nondestructive --- look up "read disturb".


This is TLC flash, don't worry.



