My dad had a 256-bit memory module on his desk. It was made from a sandwich of two pieces of glass with 16 vertical and 12 horizontal wires, and each intersection had a small ferrous ring.
Around that time he was programming an early computer, and you could actually hear the memory bits changing. He was excited listening to it, figuring each click was a loop. Then he heard a different sound and realized he was hearing instructions, not loops.
Some really early computers stored bits as waves in mercury. A series of waves would be started at one end of a tube full of mercury and read back when they arrived at the other end, so the memory of course had to be continuously refreshed. Core memory was, however, much more successful.
A teacher who worked for the French ministry in the '80s told us that. It makes sense, but I never found data on it. I'm not 100% sure it was done, because he was a bit of a big mouth, but at the same time he was in the field for decades, so...
Is Optane really 'all that' compared to other PCIe storage?
I know that Intel are positioning it as some sort of system accelerator, and I know it does tend to score well in comparisons... but not "This is in a completely different category" well, AFAICT.
What's special about it compared to, for instance, Samsung's NVMe offerings?
Optane is persistent and bit-addressable, with mean latency under 1 microsecond, which is an order of magnitude faster than other SSDs.
I don't think the market has figured out the right use case for Optane yet. The majority of desktop applications won't benefit from lower-latency IOPS, and it's too expensive to use for general-purpose storage on servers. It does make sense for applications with constant writes or heavy seeking, like database journals, but most databases are optimized for bulk sequential reads/writes and won't take advantage of Optane's bit-addressable storage.
Intel's recently started shipping Optane DIMM modules that act like slow, cheap, high density RAM. This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.
> This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.
It isn't quite that simple. To start with, you need either a cache line flush or a cache line writeback instruction to get your data out to persistent storage. And the Optane DIMMs are currently used in one of two modes: as a transparent expansion of DRAM, or as separately addressed and managed storage. In the DRAM-like mode, the system's actual DRAM is used as a cache, and the Optane storage does not persist across reboots. In the storage mode, you need extra code to access it, though you can treat it as simply a block device instead of using special-purpose persistent memory programming interfaces.
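To make the flush requirement concrete, here is a minimal sketch of persisting a single store on the storage side, assuming the region is already DAX-mapped and the CPU supports CLWB; `persist_u64` is just an illustrative helper, not an API from any library:

```c
#include <immintrin.h>   /* _mm_clwb, _mm_sfence; build with e.g. -mclwb */
#include <stdint.h>

/* Illustrative helper: store a value and push it out of the CPU caches
 * so that it actually becomes durable on the persistent DIMM. */
static void persist_u64(uint64_t *p, uint64_t value)
{
    *p = value;      /* the store initially lands in a (volatile) cache line */
    _mm_clwb(p);     /* write that cache line back toward the DIMM           */
    _mm_sfence();    /* don't proceed until the writeback is ordered         */
}
```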
You do need code changes for Optane DIMMs if you treat them as what they are: non-volatile memory. That simply requires a new programming model and also a new performance model, since Optane is "almost as fast as RAM" but has other quirks, like the fact that read/write performance is asymmetric and the bandwidth specs are different from both DRAM and NVMe.
Intel's PMDK for example gives you the ability to have transactions for committing changes to "memory", "fat pointers" that can be persisted and reloaded (even at different addresses) later on, etc. Under the hood the library is "just" writing the bits out and managing the cache line flushes or whatever, but you can now build these other abstractions on top, which changes how you think about your design.
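As a rough sketch of what that PMDK style looks like (the pool path, layout name, and counter struct here are invented for illustration; the calls are libpmemobj's C API):

```c
#include <libpmemobj.h>
#include <stdint.h>

struct counter { uint64_t hits; };

int bump(const char *path)
{
    /* Open (or create) a persistent memory pool backed by a DAX file. */
    PMEMobjpool *pop = pmemobj_open(path, "demo_layout");
    if (pop == NULL)
        pop = pmemobj_create(path, "demo_layout", PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL)
        return -1;

    PMEMoid root = pmemobj_root(pop, sizeof(struct counter));
    struct counter *c = pmemobj_direct(root);

    /* The transaction logs the old contents before the write, so the
     * update is atomic with respect to crashes; the library takes care
     * of the cache line flushes. */
    TX_BEGIN(pop) {
        pmemobj_tx_add_range(root, 0, sizeof(struct counter));
        c->hits += 1;
    } TX_END

    pmemobj_close(pop);
    return 0;
}
```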
You can also treat the DIMMs as a DRAM cache, or "just" as an ordinary block device, with possible filesystem-specific optimizations to greatly boost the performance (DAX, or systems like NOVA which are geared directly for persistent memory). Those modes need no code changes at all, and I imagine will be useful for many systems, even after people start developing and designing systems directly around Optane.
It is actually significantly faster than traditional flash-based (NVMe/SATA) SSDs for consumer workloads because flash performance falls off drastically at low queue-depths, and almost all consumer workloads have low queue-depths.
A lot of consumer applications are not really optimized around super-fast SSD storage in general. You end up with CPU bottlenecks initializing stuff and so on. So in many cases Optane is not noticeably faster than NVMe because the bottleneck moves to the CPU instead of the SSD.
The real problem is price, of course. I'd gladly replace all my flash with Optane at $100/TB, but it's also an order of magnitude more expensive. Not sure if that's just an issue of economies of scale not being there or what, but I got a good deal on a 280 GB 900P (about $200) and that's enough super-fast storage for the things that really matter to me. I use 1 TB EX920s for the rest.
What do you use the 280GB 900P for in front of your 1TB drives? You're right that the bottleneck is often somewhere else in the system once you have modern SSDs. The 280GB 900P is ~$0.89/GB [1] and the 1TB EX920 is ~$0.15/GB [2] these days, so the cost delta is roughly 6x, not an order of magnitude (i.e. >10x).
What's funny to me is that the first consumer flash SSDs Intel released were just over 10 years ago, and they were ~$7.44/GB with only 80GB capacities [3]. Things change quickly, eh?
We now know that it isn't. Optane NVMe drives have a granularity of 512B or 4K (not sure) and Optane DIMMs are 256B with a large read-modify-write penalty.
Optane tends to crush the competition in latency, random I/O, and mixed-workload benchmarks. Samsung Z-NAND performs just slightly better on sequential I/O.
They have a big marketing team. "Now with XYZ Technology" is standard marketing for jazzing up something either not actually all that radical, or too technical to explain.
I use LVM, and I have enough storage that going all-SSD would be a little expensive, so for the last few years I've been using LVM's block-caching feature with mirrored NVMe drives as a writeback cache in front of my large HDD array. It greatly increases the performance of the storage with little hassle.
I haven't played with gluster for a few years, but it has a tiered storage feature so the most recently used files actually get moved onto the high-speed storage medium, which depending on your workloads might be even better, assuming gluster is now performant and stable enough.
I recently upgraded my laptop to a ThinkPad T470s, which supports both M.2 NVMe and M.2 SATA.
I upgraded from a 500GB NVMe drive to a 1TB M.2 SATA drive.
Why do I say upgraded if it is slower?
Well, in a laptop, power usage is hugely important, as is heat.
To prevent throttling on the CPU, less heat is better.
I did a lot of research and found that the SATA-based M.2 drives consume less power and generate less heat. For my purposes the SATA speed is sufficient. :)
From the brochure, it had a few million 64-bit words of RAM that was used as regular RAM ("central memory") and a lot more ECC DRAM that was used as fast storage ("SSD"). What was the API for the SSD? Was it mapped into virtual memory like RAM, did the ISA have in/out opcodes for it taking a word address, did it use the same API as disk storage, or what?
Even the Cray-1S supported it! I believe it was connected via a 128-bit-wide bus operating at the system clock speed (80 MHz in the case of the Cray-1), which made it a ~10 Gbit/s interface. From a low-level software perspective, it was connected to a pair of DMA "channels" (1 input / 1 output), and there were instructions to do bulk copies from SRAM->CHANNEL or CHANNEL->SRAM. It may have used another, slower channel pair as a command interface, but I'm not sure (nor how COS or UNICOS exposed it from a higher-level software perspective).
Register latency is usually around 4 ns (2.5-3.5 GHz with a pipeline depth of 12-16). The article claims main memory latency is 7x more, which is around 24 ns to 45 ns. I tried to come up with plausible numbers that fit the 7x ratio they mention.
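A quick back-of-the-envelope check of those ranges (my assumption: "register latency" here means pipeline depth divided by clock frequency):

```c
#include <stdio.h>

int main(void)
{
    double fast_ns = 12 / 3.5;  /* ~3.4 ns: shallow pipeline, high clock */
    double slow_ns = 16 / 2.5;  /*  6.4 ns: deep pipeline, low clock     */
    printf("7x register latency: %.0f-%.0f ns\n",
           7 * fast_ns, 7 * slow_ns);   /* prints "24-45 ns" */
    return 0;
}
```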
These days main memory latencies are more around 70-80 ns (single socket) and 80-100 ns (dual socket).
Being off by so much does make me wonder about the accuracy of the rest of the article.
Language is symbolic, so there doesn't have to be a literal connection between the reference and the referent. For example, you still "boot" a computer, which comes from the concept of "bootstrapping", i.e. pulling oneself up by the straps of one's boots. No boots are involved.
Contrary to what others said, it will likely fall out of favor the way the floppy icon faded for "Save" functionality in GUIs. It's everywhere until one day you realize you can't find it in any of the apps you use every day.
Latest Word: it's there; paint brush: check; paint.net: check; editpad: floppy. Even IntelliJ IDEA uses a floppy. Actually, from a quick glance I did not find a single application that did not use the floppy icon as its save pictogram.
Maybe it's the Mac world? Apple is eager to forget the old ways.
Doing a quick survey of the apps on my machine, it seems like most have just forgone the button altogether. For those that still have the button, the floppy icon seems to still be there.
Yeah, purely from memory here, but it's not that the save icon went out of fashion; it's that the concept of an icon bar or ribbon has gone. Microsoft's current UI standard for their native apps still has a ribbon IIRC, but I feel like that's one of the last vestiges. Using real-world analogies also went out the window once skeuomorphic design was no longer a thing.
Oddly enough, I had trouble adjusting to the 'folder' icon (and to referring to folders as folders); we grew up with the term "directory", and I never associated that with anything physical.