Why SATA is now obsolete (zdnet.com)
163 points by mrkd on Feb 26, 2015 | 98 comments



"why maintain two logical address spaces? Why not have the file system directly manage the flash?"

Because managing flash is hard. For this to work, the number of special cases in the filesystem itself would make it unmanageable.

An example: a Fusion-io card until recently used low-quality flash, yet managed to get stellar performance - much higher performance than the equivalent Intel SSD (and Intel has access to much higher-quality flash). Some of that is down to layout, the rest to firmware.

Then you have the flash that lives on a device like your phone, where the firmware of the SoC is tweaked to run that particular flash controller.

At its heart, a flash/HDD is just a key:value database (presented with a 4k value size). Why would we want to complicate that? (Sure, we need to indicate to the hardware that we've deleted a page, but to be honest not much else.)
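To make that concrete, here's a minimal sketch of that key:value view (the names here are hypothetical, not any real driver API):

    /* Hypothetical sketch of the block-device "key:value" view:
       keys are logical block addresses, values are 4 KiB blocks. */
    #include <stdint.h>

    #define BLOCK_SIZE 4096

    struct block_dev;  /* opaque device handle */

    struct block_dev_ops {
        int (*read_block)(struct block_dev *d, uint64_t lba, void *buf);
        int (*write_block)(struct block_dev *d, uint64_t lba, const void *buf);
        int (*trim_block)(struct block_dev *d, uint64_t lba); /* "we deleted this page" hint */
    };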


> Because managing flash is hard. For this to work, the number of special cases in the filesystem itself would make it unmanageable.

The obvious way to implement this is to expose the raw flash to the OS and put an abstraction layer into the OS that does what the drive firmware currently does so it can be used with existing filesystems. Just that would have its own benefits because it would remove the black box from around that code and let people improve it. It would also make the drives less complicated/expensive and remove the burden from smaller manufacturers to maintain solid firmware (which they've often failed to do), and make it a lot easier and more reliable to secure erase drives.

Once you have that you can start looking at improving particular filesystems by taking advantage of the additional information now available to the OS.
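As a rough illustration (hypothetical names, not an existing driver), the state such a host-side translation layer would keep is essentially what the drive firmware keeps today:

    /* Hypothetical host-side flash translation layer state, i.e. the
       bookkeeping moved out of the drive firmware and into the OS. */
    #include <stdint.h>

    struct host_ftl {
        uint32_t *l2p;           /* logical page -> physical page map     */
        uint32_t *erase_counts;  /* per-erase-block wear counters         */
        uint64_t  pages;         /* total physical pages on the raw flash */
        uint64_t  erase_blocks;  /* total erase blocks                    */
        uint64_t  next_free;     /* next physical page to program         */
    };

    /* On a write: pick a free physical page (wear-aware), program it,
       update l2p[logical], and mark the old physical page stale for GC. */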


"The obvious way to implement this is to expose the raw flash to the OS and put an abstraction layer into the OS that does what the drive firmware currently does so it can be used with existing filesystems."

It may or may not work better than what we have now. Doing so would require a lot more knowledge from the developers involved. Actually, "developer" is a fancy term for a job that could not be called "engineering", being almost completely insulated from the class of problems that an engineering job has to consider as a norm. Expose the raw flash (or other raw access to electrical components or whatnot) and you find yourself with real engineering problems on your hands. No more (software) "development", but engineering! And engineering is hard.

And there's one more thing - if we think the current state of software fragmentation is bad, wait until this unified physical computing interface (which the firmware more or less adheres to and which we're taking for granted today) is taken out.


> And there's one more thing - if we think the current state of software fragmentation is bad, wait until this unified physical computing interface (which the firmware more or less adheres to and which we're taking for granted today) is taken out.

It isn't about removing abstraction layers, only moving them. It should be part of the OS rather than the hardware. The people who write the abstraction layer have to deal with hard problems, but they do that now. What does it matter if they work for Microsoft and RedHat instead of Samsung and Intel?

The point is to publish and standardize how that abstraction layer works so the people who write filesystems and filesystem tools have better information and can suggest or provide improvements. And to stop forcing every SSD manufacturer to duplicate the software engineering efforts of the others instead of focusing on hardware.


It seems like the article maybe misrepresents what the thesis that inspired it says[1]. From the abstract:

"... the device still has its freedom to control block-allocation decisions, enabling it to execute critical tasks such as garbage collection and wear leveling ... Next, we present Nameless Writes, a new device interface that removes the need for indirection in flash-based SSDs. Nameless writes allow the device to choose the location of a write; only then is the client informed of the name (i.e., address) where the block now resides."

So it sounds like this approach is actually removing responsibility from the filesystem, not the firmware.

[1] http://research.cs.wisc.edu/adsl/Publications/yiying-thesis1...
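Roughly, the interface described there could look like this (a hypothetical signature based on the abstract, not the actual OpenSSD prototype API):

    /* Nameless write, as sketched in the thesis abstract: the host hands
       the device data without an address; the device picks the physical
       location and returns it so the filesystem can record it in its own
       metadata. Hypothetical signature, for illustration only. */
    #include <stdint.h>
    #include <stddef.h>

    struct ssd;  /* opaque device handle */

    int nameless_write(struct ssd *dev, const void *buf, size_t len,
                       uint64_t *physical_addr_out);

    /* Later reads use the returned physical address directly, with no
       device-side mapping table in between. */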


I have now read the thesis lightly, and I cannot, for the life of my children, find out exactly WHY you would want to remove the simplest of abstractions on the device, namely the logical-to-physical translation. It's a well-known abstraction, and the benefits are clear as day - the alternative, with migration callbacks and now two different kinds of data that you must write differently, just screams bad idea, and goes against everything I've learned in CS and in my career. It introduces unneeded complexity for the OS. The main benefit seems to be lower-cost devices (1GiB RAM per 1TiB), which is negligible. The performance isn't really better, the thesis doesn't exactly go into detail about the CPU overhead of this implementation during heavy IO, and we now face an entirely different problem with crashes. Today we can build SSDs to ensure that confirmed writes are guaranteed persisted.

This abstraction is what has allowed us to transition from a traditional spinning data store to SSDs without much effort (save for delete flags to help the device perform GC and improve performance).


Nameless writes save an indirection layer -- but quite a cheap one!

They add significant latency to write operations, though, which is a high price to pay for such a small gain.


SSDs are so fast let's make them slow again!


Indeed; it sounds like it's creating a clean abstraction layer for the kinds of time-space guarantees flash memory wants to give, where what we have now is very muddled due to being based on the kinds of time-space guarantees spinning media (or even tape drives) wanted to give.


The Linux kernel already supports directly managing Flash without a controller. The JFFS2 filesystem is designed to run directly on top of the Flash device. This is used in most routers and other small Linux devices in order to keep costs down.


There's also f2fs [1][2], which has a similar design but runs on flash devices with an FTL and thus takes a more middle road. It still has a log structure and tries to make things easy for the FTL by doing large, sequential, out-of-place writes whenever possible, but it takes advantage of the FTL when it makes things simpler, like for certain metadata that's easier to update as a small random write.

[1]: http://lwn.net/Articles/518988/ [2]: http://lwn.net/Articles/518718/
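For illustration, the out-of-place part boils down to something like this (a hypothetical sketch, not f2fs code):

    /* Hypothetical sketch of out-of-place, log-structured allocation:
       updated data is never overwritten in place; it is appended at the
       log head, which keeps writes large and sequential for the FTL.
       Stale copies are reclaimed later by a cleaner. */
    #include <stdint.h>

    struct log_head { uint64_t next_lba; };

    static uint64_t log_alloc(struct log_head *log, uint64_t nblocks)
    {
        uint64_t lba = log->next_lba;  /* new version of the data goes here */
        log->next_lba += nblocks;
        return lba;
    }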


Does YAFFS follow the same principle?


Yes.

It is interesting to note the design trend in mobile devices like phones.

Years ago, it was typical to have a NAND flash controller built into the SoC (like a TI OMAP, Freescale i.MX series or similar).

This is raw Flash memory, and it is up to the SoC plus OS to manage error recovery, remapping bad sectors, etc.

However, in recent years, most mobile devices just use one of their SD interfaces (often 8-bit these days) to access an eMMC chip. This looks just like an SD card, because it has an FTL in it which takes care of a lot of the low-level details needed for flash management.

Some SoCs these days don't even include a NAND Flash controller anymore.


Exactly!

The actual memory contents behind the controller are dependent on physical characteristics. Using the FTL firmware, the flash+controller (such as eMMC) vendor is free to do all kinds of tricks, for instance, depending on the quality of a particular batch of NAND. Bad batch? Use more of the spare for error correcting code. Particular memory pattern that generates interference? Tweak the scrambler. Slow? Interleave between a couple of NAND chips. (These examples are hypothetical)

Tying filesystems to the physical layer makes no sense. It would mean I wouldn't be able to use 'dd' to copy a partition to some other device, since it would have different physical characteristics which the filesystem would need to take into account. It would mean that I wouldn't be able to take an iSCSI volume and write it out to disk to 'de-virtualize' virtualized storage.


>Then you have the flash that lives on a device like your phone, where the firmware of the SoC is tweaked to run

Nowadays all of the things (ha) use flash behind a higher level of abstraction, be it eMMC, UFS, or an SD controller. There are no phone SoC firmware tweaks.


"At its heart, a flash/HDD is just a key:value database (presented with a 4k value size). Why would we want to complicate that?"

To avoid an inherent bottleneck? Perhaps we'd be better served by a larger number of key:value stores with greater parallelism?


SSDs are already parallel and I don't think updating the single FTL mapping table is a bottleneck.


The same was said about cylinders and sectors a couple of decades ago.

To use your example: imagine if you could use the Fusion-io FS on the Intel flash.

Also remember that several SSD companies that even had their own flash fabs are now gone because their firmware was crap.



"De-indirection for Flash-based Solid State Drives", Zhang 2013 http://research.cs.wisc.edu/adsl/Publications/yiying-thesis1...

"Flash-based solid-state drives (SSDs) have revolutionized storage with their high performance. Modern flash-based SSDs virtualize their physical resources with indirection to provide the traditional block interface and hide their internal operations and structures. When using a file system on top of a flash-based SSD, the device indirection layer becomes redundant. Moreover, such indirection comes with a cost both in memory space and in performance. Given that flash-based devices are likely to continue to grow in their sizes and in their markets, we are faced with a terrific challenge: How can we remove the excess indirection and its cost in flash-based SSDs?

We propose the technique of de-indirection to remove the indirection in flash-based SSDs. With de-indirection, the need for device address mappings is removed and physical addresses are stored directly in file system metadata. By doing so the need for large and costly indirect tables is removed, while the device still has its freedom to control block-allocation decisions, enabling it to execute critical tasks such as garbage collection and wear leveling.

In this dissertation, we first discuss our efforts to build an accurate SSD emulator. The emulator works as a Linux pseudo block device and can be used to run real system workloads. The major challenge we found in building the SSD emulator is to accurately model SSDs with parallel planes. We leveraged several techniques to reduce the computational overhead of the emulator. Our evaluation results show that the emulator can accurately model important metrics for common types of SSDs, which is sufficient for the evaluation of various designs in this dissertation and in SSD-related research.

Next, we present Nameless Writes, a new device interface that removes the need for indirection in flash-based SSDs. Nameless writes allow the device to choose the location of a write; only then is the client informed of the name (i.e., address) where the block now resides. We demonstrate the effectiveness of nameless writes by porting the Linux ext3 file system to use an emulated nameless-writing device and show that doing so both reduces space and time overheads, thus making for simpler, less costly, and higher-performance SSD-based storage.

We then describe our efforts to implement nameless writes on real hardware. Most research on flash-based SSDs including our initial evaluation of nameless writes rely on simulation or emulation. However, nameless writes require fundamental changes in the internal workings of the device, its interface to the host operating system, and the host OS. Without implementation in real devices, it can be difficult to judge the true benefit of the nameless writes design. Using the OpenSSD Jasmine board, we develop a prototype of the Nameless Write SSD. While the flash-translation layer changes were straightforward, we discovered unexpected complexities in implementing extensions to the storage interface.

Finally, we discuss a new solution to perform de-indirection, the File System De-Virtualizer (FSDV), which can dynamically remove the cost of indirection in flash-based SSDs. FSDV is a light-weight tool that de-virtualizes data by changing file system pointers to use device physical addresses. Our evaluation results show that FSDV can dynamically reduce indirection mapping table space with only small performance overhead. We also demonstrate that with our design of FSDV, the changes needed in file system, flash devices, and device interface are small."


The author is on to something. Flash memory devices can emulate disks, but that's an inefficient way to access them, especially for reading. You have all the overhead of Linux system calls and file systems to, basically, read a RAM-like device that's orders of magnitude faster in access time than a rotating device.

The question is, what non-file abstraction should a flash device offer? Slow persistent RAM? A key/value store? An SQL database? Computing has had "RAM" and "disks" as its storage models for so long that those concepts are nailed into software at a very low level.

The NoSQL crowd could probably find uses for devices that implemented big, fast key/value stores.


The problem with NAND flash is that it is not a "RAM-like device" even for reads (the smallest addressable entity is a block, even for reads).

It can essentially do three operations: read a block, write a block, and erase a bunch of blocks at once. For the purposes of a filesystem, this is actually a workable storage interface that does not need any additional abstraction layers.
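A sketch of that interface (hypothetical names); the important asymmetry is that the erase unit is much larger than the read/program unit:

    /* Hypothetical raw NAND interface: exactly the three operations above.
       Pages are read and programmed individually, but can only be erased
       in much larger erase blocks. */
    #include <stdint.h>

    #define PAGE_SIZE        4096
    #define PAGES_PER_EBLOCK 256   /* ~1 MiB erase block in this sketch */

    struct nand;  /* opaque chip handle */

    int nand_read_page(struct nand *chip, uint64_t page, void *buf);
    int nand_program_page(struct nand *chip, uint64_t page, const void *buf);
    int nand_erase_block(struct nand *chip, uint64_t erase_block);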


You are both right. It's a RAM-like key-value store with large (block-sized) values.


It's not RAM-like, because you can read small parts of values, but have to write large ones.


You can't read or write individual bits or bytes in RAM directly either. Word size is very small compared to disk blocks, but it's not atomic.

I'm not an architecture expert, so please correct me if I'm wrong about this.


It depends on what kind of RAM and what kind of bus you are using. Some do let you write bytes at a time, others only allow N byte words at a time.

Though I suppose a byte is still not a bit! I'm not sure I've ever seen a bus interface that addressed individual bits…


But the reads and writes are symmetric, of the same sizes.


No: some buses let you write different sized words to the memory controller or RAM chip. For example, picture a bus that's 32 bits wide, but there are control lines that let you specify that you're writing (or reading) 1 byte, 2 bytes, or 4 bytes. During a single byte wide bus cycle you are not sending 4 bytes to the part, even though there are 32 bits connecting to it.

Certainly not everything out there is byte addressable, but neither can one say unequivocally that all RAMs have large atomic word sizes.


Again, in your scenario read&write are symmetric. In Flash, they're not.


I don't understand what you mean by "symmetric". If I can write a 32 bit word out and read 1 byte of it back, I don't consider that "symmetric".


Some of my coworkers bypassed the OS for Redis, although the FTL is still there. http://public.dhe.ibm.com/common/ssi/ecm/po/en/pos03135usen/...


Funny how your comment told me more about what that offering is supposed to do than the whole Solution Brief did! All the same, very neat stuff.


Are they _really_ emulating "disks", though?

Storage presents block devices to the operating system. Spinning rust and all manner of SSDs use lots of clever engineering to appear as a block device.

A block device is the most basic abstraction of all storage. You really want MySQL to manage NAND? ;)


It sounds like a great idea to "get rid of abstraction" and have the file system handle the NAND directly if you don't think about it.

But if you think about it for even a little bit you realize that you can't just get rid of the abstraction, you're just moving it out of the drive's firmware and into the file system. And that gets you nothing because now the file system has to be compatible with every manufacturer's NAND drives. And the file system has to be updated any time that system needs to be changed. You can't just pop a new NAND drive in and have it work, you have to download new NAND drivers for the file system, etc.

I'm also skeptical of just how much performance gain you'd see. The article doesn't include these figures, which makes me skeptical that they are significant.

Delegation of duties is important for good system design. Drives necessarily need to present themselves in a standard way to the system. Having SSDs appear like normal drives is fine. They aren't memory and they shouldn't be used as memory.


Flash Translation Layer is critical to performance. Googling "flash translation layer performance" should be enough to convince you of this.

Moving it from firmware to file system does not improve performance by itself, but it enables optimizations. And FTLs in firmware are far from optimal.


> The obvious question is: why maintain two logical address spaces?

There are perhaps 10 more address spaces and translations. It's the way the OS and hardware are built.

> Why not have the file system directly manage the flash?

Too slow innovation - it takes a decade to replace an operating system. Also, the main CPU is very power-hungry and slow...

More likely the SSD controller will become part of the CPU and SoC.


"now that non-volatile memory technology - flash today, plus RRAM tomorrow - has been widely accepted, it is time to build systems that use flash directly instead of through our antique storage stacks."

Hear hear! The basic storage model that arose in the 50's and 60's being the best fit for recently developed hardware seems as likely as the security models from the 50's and 60's being the best fit for the Internet.

(If the obnoxious popup adverts from ZDNet strike me as archaic, they are not doing well.)


If NAND continues to function the way it does today, we would still need to standardize on some kind of interface to get traction. The advantage of SATA, SAS, PCIe, et al. is their pervasiveness in the PC world. His proposal basically boils down to "NAND on SATA is kinda slow because of all the latency from stuff. We should just hook the NAND up to one of the upstream buses." It's unclear to me how this is different from what is already being done.


If you're interested in this subject from a Linux/storage expert, here's Ric Wheeler's talk from FOSDEM last year: https://archive.fosdem.org/2014/schedule/event/persistent_me...


One word. Separation of concerns. Okay, three. Why would you want to mix managing files and transistors?


I like "news" articles like this. Harris takes a scientific publication, adds his own opinion while making the topic easier to understand, and cites his source.


Personally, I believe his armchair dismissal of the FTL requires far more justification than he gives. This blog post is pure postulation that doesn't even attempt to take up the argument of the dissertation, let alone take that argument far enough to justify that title.

In fact in the thesis' abstract, the proposed solution is still a layer of indirection. Instead of the FTL giving an abstracted address space, the "device" is given the responsibility of controlling wear, gc, and parallelism by dynamically reallocating blocks itself, and the filesystem picking up some more of the management stuff.

I am certainly not qualified enough to read this dissertation critically enough to know if it is fundamentally sound theoretically, but as a scientist I know that it is close to meaningless until it has been experimentally tested and reviewed.


As a related note, if you upgrade to an SSD be sure to change the SATA type in the BIOS from IDE to AHCI. Not only did I see a doubling in write speed [1], but after making the change, my system felt much more responsive under heavy load.

[1] http://forum.crucial.com/t5/Crucial-SSDs/Why-do-i-need-AHCI-...


Basically the author is calling for a return to the Multics file system, where files were just a way of referring to capability-managed subsets of an enormous pool of pages.

I liked the idea the first time around so I like it again. But it's harder than it sounds.

I find something lovely and poetic in the fact that Lisp is over 50 years old and Multics is almost that old, and both may be having a bit of a renaissance.


Yes, the SATA interface to flash storage is obsolete. Just like internal combustion engines! And it will be almost as hard to get rid of.


SATA and AHCI are already on their way out, being replaced by PCIe and NVMe. That removes most of the abstraction that is actually obsolete, but it doesn't pretend that SSDs aren't fundamentally block devices. About the only thing NVMe does that is perpetuating a fiction about the underlying flash is that the read block size and write block size are the same thing. All the rest is abstraction that's absolutely necessary until we completely change our conventions for computer architecture and operating systems.


Linkbait title is linkbait, but content is sound.


I'm not a hardware expert, but my intuition tells me that the general thesis -- that any given hardware module could be implemented in some way that's light years ahead of where we are now -- is often true. But there is more to widespread adoption than being the best technology: marketing, timing, industrial realities, legacy software limitations... Legacy enterprise software and OSs in particular seem likely to make significant architectural changes to the very core of every computer difficult to effect.


This isn't an entirely novel idea. I remember in 2009 Steve Wozniak giving a talk in Waterloo about becoming the Chief Scientist for Fusion-io (https://fusionio.com/), a company building PCI-E SSDs, bypassing the SATA bus.


This is somewhat tangential, but my professor recruited Steve to come to Fusion-io and always speaks incredibly highly of Steve. When he was leaving the CEO position to give it back to the founder of Fusion-io, he offered Steve some additional shares and Steve told him to give the shares to the engineers who actually produce the technology. Incredible humility there.


All he means behind the sensationalistic crap is that the SATA controller is not actually needed; it's there for compatibility.

"hard disk" will not be the ram. its just is made of the same chips, but still logically separated. and there is no need for a sata controller to address it really.

That's all. You still need a file system if you want to address the files, just like you address blocks of RAM or anything else. If you give free rein to the app, there is no security, reliability, etc. Even RAM is actually indexed as well.

On top of that you need a representation for the human behind it - no matter how hard they try to remove that (because it's much easier to lock you in when there is no generic way to browse a memory device), it's not very convenient compared to a file system.


That sounds fine in principle, but please don't tell me the future is jffs2!


I think the proposal is something like ZFS/Btrfs but with even more complexity to handle wear leveling and such.


Is this going to have a net benefit? To my mind it makes more sense for specialized functionality to be built into the drive's firmware rather than letting the CPU deal with it.


My SSD has a three-core CPU dedicated to it. Removing the layer of indirection isn't going to make it faster. At best it's going to make the SSD cheaper by a minuscule amount due to a simpler controller.


One core of your main processor is faster than those three embedded cores; that's why Fusion-io mooches off the main processor.


It's probably fast enough for its purpose. If it isn't, making it faster is way easier than changing existing abstraction layers. In the extreme, it can even be an ASIC.


the basic premise is that remapping is necessarily expensive. unfortunately, neither the thesis nor its citations really support that. and we have plenty of examples where remapping is pervasive and apparently efficient (MMUs, for instance). it's hard to imagine how a filesystem could reasonably handle wear-levelling, not to mention the now-common practice of using MLC cells to hold SLC data for faster writes, etc.

the "nameless writes" idea is OK - a FS person would just call it even lazier allocation, not anything really different.


I've always wondered how much performance gain you could get if the very large blocks required for erasure were known to the file system, rather than being hidden behind an emulation layer.


I think the author misunderstands the word "obsolete." Something is only obsolete when there is something better to replace it with.


Anyone else think it's a little ironic to read about something being obsolete on ZDNet?


The title is odd. Should be "Why SATA is now obsolete."

The article is of course correct, and some manufacturers are already offering SSD-like storage which is connected like a stick of RAM.

I'm sure the gap between storage and the main system pipeline (CPU-GPU-RAM-etc) will only shrink as time moves forward. However as an interim solution things that acted like hard drives were convenient.


> The title is odd. Should be "Why SATA is now obsolete."

A pity we didn't see this until now, but we've adopted your suggestion.

All: you can get this kind of thing taken care of sooner by notifying us at hn@ycombinator.com.


Yes, but if the title was honest then not as many people would click, and we can’t have that.


and that's exactly why I read the comments first


But there are also SAS (Serial Attached SCSI) SSDs ... the major point is in the aphorism in his third paragraph:

"All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection."

Hence this level of indirection, the discrete "drive", is what he thinks should go poof.


> "All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection."

First time I've heard that quote, but I love it. Reminds me of this one:

"There are only two hard problems in Computer Science: cache invalidation, naming things, and off-by-one errors."


I was convinced both of those quotes were from Dijkstra (as quotes go, the Mark Twain of CS).

Turns out, the second one is originally from one Phil Karlton (as reported by Tim Bray [1] , that's enough authority for me), without the "off-by-one" part.

The first one, which I was goddamned sure was Dijkstra, is actually by David (not A.) Wheeler (via Butler Lampson) [2], inventor of the subroutine.

That rabbit-hole took me longer than expected. Hopefully I'm saving some time for someone else ;)

[1] https://twitter.com/timbray/status/506146595650699264

[2] http://www.dmst.aueb.gr/dds/pubs/inbook/beautiful_code/html/...


My quote list has the second half as a later elaboration:

  "All problems in computer science can be solved by another level of indirection"
  "But that usually will create another problem."
  -- David Wheeler

  "...except for the problem of too many layers of indirection."
  -- Kevlin Henney


What's the best thing about telling UDP jokes? You don't care if they get it.


Or:

What's the best thing about telling UDP jokes? if don't You they get care it.


> if don't You they get care if.

Don't forget that UDP allows repeats and drops!


and repeats.


I was going to post a reply here with an HTTP joke but I am feeling slightly insecure.


"There are two hard problems in computer science: we only have one joke and it's not funny."

(I think it's funny though.)


Plenty of other jokes abound:

- What goes "Pieces of seven! Pieces of seven!"

- Parity error

Look up "recursion" in the dictionary and it says: See Recursion.


Why do programmers confuse Halloween and Christmas? Because 25 DEC == 31 OCT


You know what's a C + iJ? A Complex Joke.

You know why Complex Jokes aren't funny? Because the joke part is imaginary.


Q: How many prolog programmers does it take to change a lightbulb?

A: Yes.


These are way better than that bloody binary joke in every computer department.


the better version of the binary joke is

Binary: as easy as 1, 10, 11


As easy as 3, 2, 1? I realise you read binary from right to left, not left to right, but the sentence is in English so you read that left to right.

Also the commas and binary mixed together are a little odd. Not least because you didn't make all three two binary digits (i.e. 11, 10, 01).

Regardless that joke is going to cause more arguments than laughs.


Base 2 is not special.

You do not "read" binary from right to left. At least in English, the most significant digit is on the left, just like any other base (e.g. base 10, base 8, or base 89432890432).

Also, for your leading 0, I would write: Bob has 23 apples, and I have 5. I wouldn't write: Bob has 23 apples, and I have 05. Base 2 is not special, so I wouldn't write 01.

You can also turn your order argument on its head: since the order of digits in base 10 is exactly the same as base 2 (most significant on the left), you could say that "As easy as 1, 2, 3" should actually be read as "As easy as 3, 2, 1", which matches your binary reading "As easy as 3, 2, 1".


The two can be melded together: There are 10 kinds of people in this world, those who understand binary, those who don't, and those who suffer off-by-one errors.


I like it, but how about:

There are only 10 hard problems in Computer Science: cache invalidation, naming things, and off-by-one errors.


I thought it was: There are 10 kinds of people in this world, those who understand binary, those who don't, and those who were expecting a base 3 joke.


It really is:

There are 10 kind of people in the world: Those who know binary, those who don't, and those who start counting from zero.


There are 10 kinds of people in the world. Those who know binary, those who know ternary, those who know base four, ..., and those who know that every base is base 10.


It really is:

There are 10 kinds of people in the world: Those who know hexadecimal, and F the rest...


>and those who start counting from zero.

That kills the joke. 'off-by-one' is subtler.


The introduction of M.2 on desktop motherboards has already spread to the laptop realm, but your last point is correct. Storage has shifted away from physical devices toward the "cloud". Cost/GB for SSDs has dropped drastically over the past year.


Not only laptops. Any number of newer embedded boards are being developed with M.2 in place of mSATA.


I'm actually curious whether putting the storage on the bus is a win. Sure, for speed it's a win, but the ability for a bad program to crap all over memory is scary, whereas with an HD it's a little harder for a poorly written program (vs. a malicious program) to randomly write data to the HD, virtual or not.

I'm sure someone will figure something out but pretending it's an HD does kind of solve that issue for now.


Presumably you'd use the MMU, and leave unmapped any persistent storage pages not being currently accessed.


Those RAM sticks currently have SATA controllers embedded. It's another layer of abstraction. The promise is that eventually they won't.


I'm not sure which ones you're referring to, but the NVDIMMs I'm familiar with [0] are normal DRAMs with an additional hold-up supercap, controller, and flash. When power goes away, the controller streams DRAM contents to flash. Linux block device drivers [1] exist, as does some filesystem support [2] for ext4fs.

[0] http://en.wikipedia.org/wiki/NVDIMM

[1] https://lkml.org/lkml/2014/8/27/674

[2] https://lkml.org/lkml/2014/3/23/121


> I'm sure the gap between storage and the main system pipeline (CPU-GPU-RAM-etc) will only shrink as time moves forward. However as an interim solution things that acted like hard drives were convenient.

We move ever closer to "Computronium".


SATA isn't obsolete - it's fine for mechanical disks. But the abstraction that's used for SSDs doesn't represent the true capabilities of the NAND, and it prevents a database, for example, from utilizing the true power of the flash.



