Optane is persistent and byte-addressable, with mean latency under 1 microsecond, roughly an order of magnitude faster than flash-based SSDs.
I don't think the market has figured out the right use case for Optane yet. The majority of desktop applications won't benefit from lower-latency I/O, and it's too expensive to use for general-purpose storage on servers. It does make sense for write-heavy or seek-heavy workloads like database journals, but most databases are optimized for bulk sequential reads/writes and won't take advantage of Optane's byte-addressable storage.
Intel has recently started shipping Optane DIMM modules that act like slow, cheap, high-density RAM. This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.
> This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.
It isn't quite that simple. To start with, you need either a cache line flush or a cache line writeback instruction to get your data out to persistent storage. And Optane DIMMs currently operate in one of two modes: as a transparent expansion of DRAM, or as separately addressed and managed storage. In the DRAM-like mode, the system's actual DRAM is used as a cache, and the Optane storage does not persist across reboots. In the storage mode, you need extra code to access it, though you can treat it as simply a block device instead of using special-purpose persistent memory programming interfaces.
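Concretely, in the storage mode "making a write durable" looks something like the sketch below (a minimal illustration only; the persist_record helper is made up, and the destination is assumed to be mapped from persistent memory, e.g. via a DAX mmap):

```c
#include <immintrin.h>  /* _mm_clwb, _mm_sfence (compile with -mclwb) */
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: copy a record into persistent memory and make it
 * durable. `dst` is assumed to point into a DAX-mapped persistent region. */
static void persist_record(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);

    /* Write back every cache line covering the range. CLWB leaves the line
     * in the cache (unlike CLFLUSH), so subsequent reads stay fast. */
    uintptr_t p = (uintptr_t)dst & ~(uintptr_t)63;
    for (; p < (uintptr_t)dst + len; p += 64)
        _mm_clwb((void *)p);

    /* Fence so the writebacks are ordered before anything that depends
     * on the data being durable. */
    _mm_sfence();
}
```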
You do need code changes for Optane DIMMs if you treat them as what they are, which is non-volatile memory. Treating them that way requires a new programming model and also a new performance model, since Optane is "almost as fast as RAM" but has its own quirks: read/write performance is asymmetric, and its bandwidth characteristics differ from both DRAM and NVMe.
Intel's PMDK, for example, gives you transactions for committing changes to "memory", "fat pointers" that can be persisted and reloaded later (even at different addresses), and so on. Under the hood the library is "just" writing the bits out and managing the cache line flushes, but you can now build these other abstractions on top, which changes how you think about your design.
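For a flavor of what that looks like, here's a rough libpmemobj sketch (the pool path and layout name are made up, error handling is minimal, and the PMDK docs have the authoritative examples):

```c
#include <libpmemobj.h>
#include <stdint.h>

struct counter {
    uint64_t value;
};

int main(void)
{
    /* "/pmem/counter.pool" and "example" are illustrative names. */
    PMEMobjpool *pop = pmemobj_create("/pmem/counter.pool", "example",
                                      PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL)
        pop = pmemobj_open("/pmem/counter.pool", "example");
    if (pop == NULL)
        return 1;

    /* The root object is referenced by a persistent "fat pointer" (PMEMoid):
     * it survives reboots and stays valid even if the pool is mapped at a
     * different address next time. */
    PMEMoid root = pmemobj_root(pop, sizeof(struct counter));
    struct counter *c = pmemobj_direct(root);

    /* Stores made inside the transaction become durable atomically. */
    TX_BEGIN(pop) {
        pmemobj_tx_add_range(root, 0, sizeof(struct counter));
        c->value += 1;   /* logged and flushed by the library */
    } TX_END

    pmemobj_close(pop);
    return 0;
}
```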
You can also treat the DIMMs as a DRAM cache, or "just" as an ordinary block device, with possible filesystem-specific optimizations to greatly boost performance (DAX, or filesystems like NOVA that are designed specifically for persistent memory). Those modes need no code changes at all, and I imagine they will be useful for many systems, even after people start designing systems directly around Optane.
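To illustrate the "no code changes" point: the same plain mmap/msync code works whether the file sits on a normal SSD or on a DAX-mounted Optane namespace (the path below is made up):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* "/mnt/pmem/data.bin" is an illustrative path; with DAX it would live
     * on an Optane-backed filesystem, but the code is identical on flash. */
    int fd = open("/mnt/pmem/data.bin", O_RDWR | O_CREAT, 0666);
    if (fd < 0 || ftruncate(fd, 4096) != 0)
        return 1;

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "hello, persistent world");

    /* msync is the portable way to make the write durable; on a DAX mount
     * the kernel bypasses the page cache and writes go to the media. */
    msync(p, 4096, MS_SYNC);

    munmap(p, 4096);
    close(fd);
    return 0;
}
```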
It is actually significantly faster than traditional flash-based (NVMe/SATA) SSDs for consumer workloads because flash performance falls off drastically at low queue-depths, and almost all consumer workloads have low queue-depths.
A lot of consumer applications are not really optimized around super-fast SSD storage in general. You end up with CPU bottlenecks initializing stuff and so on. So in many cases Optane is not noticeably faster than flash-based NVMe drives because the bottleneck moves to the CPU instead of the SSD.
The real problem is price, of course. I'd gladly replace all my flash with Optane at $100/TB, but it's also an order of magnitude more expensive. Not sure if that's just an issue of economies of scale not being there or what, but I got a good deal on a 280 GB 900P (about $200) and that's enough super-fast storage for the things that really matter to me. I use 1 TB EX920s for the rest.
What do you use the 280GB 900P for in front of your 1TB drives? You're right that the bottleneck is often somewhere else in the system once you have modern SSDs. The 280GB 900P is ~$0.89/GB [1] and the 1TB EX920 is ~$0.15/GB [2] these days, so the cost delta is roughly 6x, not an order of magnitude (i.e. >10x).
What's funny to me is that the first consumer flash SSDs Intel released were just over 10 years ago, and they were ~$7.44/GB with only 80GB capacities [3]. Things change quickly, eh?
We now know that it isn't truly byte-addressable at the media level. Optane NVMe drives have a granularity of 512B or 4K (not sure), and Optane DIMMs are 256B with a large read-modify-write penalty.