It's really not the users' fault. Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.
It shouldn't make sense to us either; it's a ridiculous kludge resulting from the fact that we've never figured out how to make memory fast, dense, cheap, and nonvolatile at the same time.
Actually Intel did figure it out with Optane. Then they killed it because computer designers couldn't figure out how to build computers without the multilevel memory kludge. IMHO this is the single dumbest thing that happened in computer science in the last ten years.
My understanding is that the problems with Optane were a lot more complicated than that. @bcantrill and others talked about this on an episode of their Oxide and Friends Twitter space a few weeks ago. A written summary would be nice.
Thanks for the tip. I learned a lot listening to this [0].
My takeaway: Optane was good technology that was killed by Intel's thoroughly wrong-headed marketing. It's cheaper than DRAM but more expensive than flash. And much faster than flash, so it makes the fastest SSDs. But those SSDs are too expensive for everyone except niche server farms that need the fastest possible storage. And that's not a big enough market to get the kinks worked out of a new physics technology and achieve economies of scale.
Intel thought they knew how to fix that: They sold Optane DIMMs to replace RAM. But they also refused to talk about how it worked, be truthful about the fact that it probably had a limited number of write cycles, or describe even one killer use case. So nobody wanted to trust it with mission-critical data.
Worst of all, Intel only made DIMMs for Intel processors that had the controller on the CPU die. ARM? AMD? RISC-V? No Optane DIMMs for you. This was the dealbreaker that made every designer say "Thanks Intel, but I'm good with DRAM." As they said on the podcast, Intel wanted Optane to be both proprietary and ubiquitous, and those two things almost never go together. (Well, obviously they sometimes do. See Apple, for example. But the hosts were discussing fundamental enabling technology, not integrated systems.)
> Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.
Well, of course that makes no sense. It isn't true.
We use volatile memory because we do need low latency, and volatile memory is a cheap way to accomplish that. But the forgetting isn't a feature that we would miss if it went away. It's an antifeature that we work around because volatile memory is cheap.
I agree we would be fine if all memory were nonvolatile, as long as all the other properties, like latency, were preserved.
In terms of software robustness though, starting from scratch occasionally is a useful thing. Sure, ideally all our software would be correct and would have no way of getting into strange, broken, or semi-broken states. In practice, I doubt we'll ever get there. Heck, even biology does the same thing: the birth of a child is basically a reboot that throws away the parent's accumulated semi-broken state.
We have built systems in software that try to be fully persistent, but they never caught on. I believe that's for a good reason.
It would take serious software changes before that became a benefit. If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.
> If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.
This is a catastrophic misunderstanding. I have no idea how you think a computer works, but if memory leaks are written to permanent storage, that will have no effect on anything. The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.
A memory leak has nothing at all to do with that question. A leak occurs when software at some level believes that it has released memory while software at another level believes that that memory is still allocated to the first software. If your Electron app leaks a bunch of memory, that memory will be reclaimed (or more literally, "recognized as unused") when the app is closed. That's true regardless of whether the memory in question could persist through a power outage. Leaks don't stay leaked because they touch the hard drive -- touching the hard drive is something memory leaks have done forever! They stay leaked because the software leaking them stays alive.
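To make that concrete, here's a minimal C sketch (my own illustration, not anything from the thread): the program drops its only pointer to a heap allocation, which is exactly what a leak is, and the leak lasts only as long as the process does, regardless of what kind of memory backs it.

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Allocate 1 MiB and touch it so the pages are really mapped. */
        char *buf = malloc(1 << 20);
        if (buf != NULL)
            memset(buf, 0xAB, 1 << 20);

        /* The "leak": drop the only reference without calling free().
           The allocator still considers the block in use, so it can't
           be reused while this process is alive. */
        buf = NULL;

        /* On exit the OS tears down the entire address space, so the
           leaked pages are reclaimed whether they lived in volatile
           DRAM, in swap on disk, or in some hypothetical persistent RAM. */
        return 0;
    }

Whether those pages ever got swapped to disk along the way doesn't change the outcome. What frees them is the process dying, not the power going out.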
> The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.
I'm aware. This is a feature for me - I disable suspend/hibernate/resume functionality. I don't want hiberfile.sys taking up space (irrelevant in this scenario, I guess) and I certainly don't want programs to reopen themselves after a restart, especially if it was a crash. If all storage were nonvolatile, OSes would behave as though resuming from hibernate (S4) all the time.
> that memory will be reclaimed [. . .] when the app is closed.
Again, I'm aware. I'm glad you've never had any sort of crash or freeze that would prevent closing a program, but it does happen.
OSes would need to implement a sort of virtual cold boot to clear the right areas of memory, even after a BSOD or kernel panic. Probably wouldn't be that hard, but it would have to happen.
You could still have a restart "from scratch" feature in the OS. But persistent RAM could mean that the power dropping for a few seconds doesn't cost you your session.
I used to explain it to clients as the difference between having your files nicely organized in cabinets or on shelves, and having the contents of several different files strewn over the desk. For people who like to keep a tidy desk this metaphor made immediate sense.
It explains some aspects and obscures some other ones.
It explains how the documents at arm's reach on the desk are faster to access than the things in the cabinets. It obscures the fact that stuff on the desk disappears if the power supply glitches even just a little.
In fact we invented batteries with electronics that sense when the electricity is about to go out, so the computer has time to carry the documents from the desk to the cabinets before they disappear. And we think this is normal and even necessary in pro settings. (I'm talking about uninterruptible power supplies, of course.)
A lot of people clear their desk before leaving the office, either because they're tidy and well organized or because they don't want to leave confidential info out where the cleaning staff could browse through it. It's not hard to extend the metaphor to a reboot of the computer being like the end of the workday.