Even the fastest compression algorithms, like LZ4, only manage about 385 MB/s on a Core i5-4300U @ 1.9 GHz [1]. You'd need a pretty powerful setup for it to keep up with SSD speeds, and you'd also need to be mindful of the heat generated. It would also be pretty useless if the volume is encrypted.

[1] https://github.com/lz4/lz4
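As a rough sanity check on figures like that, here's a minimal throughput benchmark sketch using the python-lz4 bindings (my choice of library, not from the thread; install with pip install lz4). The sample data is arbitrary and results vary a lot with CPU and input:

    # Minimal LZ4 throughput sketch. Assumes the python-lz4 package
    # (pip install lz4); numbers depend heavily on CPU and data.
    import os
    import time
    import lz4.frame

    # Sample input: one random 4 KiB page repeated to ~100 MiB. The
    # repetition makes it very compressible (matches fall well within
    # LZ4's 64 KiB window); real data will compress far less.
    data = os.urandom(4096) * 25600

    start = time.perf_counter()
    compressed = lz4.frame.compress(data)
    c_time = time.perf_counter() - start

    start = time.perf_counter()
    lz4.frame.decompress(compressed)
    d_time = time.perf_counter() - start

    mb = len(data) / 1e6
    print(f"compress:   {mb / c_time:.0f} MB/s")
    print(f"decompress: {mb / d_time:.0f} MB/s")
    print(f"ratio:      {len(data) / len(compressed):.1f}:1")

Compare the compress figure against your SSD's sequential write speed to see whether the CPU or the disk is the bottleneck.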
Wait. Aren't the numbers for SSDs, particularly the bus speeds for the SATA connection, measured in gigabits (not bytes)?
Looking at the numbers you linked to (which seem to be in megabytes), it seems to me that decompression speeds could keep up. And I know write speeds on SSDs are often a lot slower than the spec'd numbers, so the compression write speeds look plausible to me too.
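For reference, the unit conversion is easy to sanity-check: SATA III is 6 Gbit/s on the wire, and 8b/10b encoding spends 10 line bits per 8 payload bits, so the usable rate works out to about 600 MB/s. A quick check in Python:

    # SATA III line rate -> usable payload bytes per second.
    # 8b/10b encoding: 10 bits on the wire per 8 bits of payload.
    line_rate_bits = 6e9
    payload_bytes = line_rate_bits * (8 / 10) / 8
    print(f"{payload_bytes / 1e6:.0f} MB/s")  # -> 600 MB/s

So the megabyte figures in the LZ4 benchmark and the gigabit bus speeds really are in the same ballpark.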
The general idea is that modern CPUs tend to be instruction-starved and sit idle because they are waiting on the slow memory buses that connect everything.
There was a window where HDD performance was marginal enough that compressing data helped fetch it from disk faster: CPU decompression was quicker than waiting for the full-size data to be read, especially for things like text, where 10:1 compression isn't hard.
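To make that window concrete, here's a toy model (all numbers illustrative, not measurements): reading compressed data delivers output at disk speed times compression ratio, capped by how fast the CPU can decompress.

    # Toy model: effective read speed when fetching compressed data.
    # The disk delivers compressed bytes; output expands by `ratio`,
    # but the CPU caps how fast it can be decompressed.
    def effective_read_mb_s(disk_mb_s, ratio, decompress_mb_s):
        return min(disk_mb_s * ratio, decompress_mb_s)

    # HDD era: 100 MB/s disk, 10:1 text, CPU decompressing at 800 MB/s
    print(effective_read_mb_s(100, 10, 800))   # 800 MB/s vs. 100 raw

    # SSD era: 2000 MB/s disk, same CPU
    print(effective_read_mb_s(2000, 10, 800))  # 800 MB/s vs. 2000 raw

With an HDD the compressed read wins by 8x; with a fast SSD the raw read is faster.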
Now we're living with SSDs that can do 2 GB/s, and no CPU can decompress that quickly.
> Now we're living with SSDs that can do 2 GB/s, and no CPU can decompress that quickly.
I'd totally buy a machine with a state-of-the-art CPU paired with one or two FPGAs that can be programmed as accelerators for crypto, compression, etc.
Intel's working on bundling FPGAs with its Xeon systems, so maybe that will happen, but it's probably better addressed with a dedicated hardware decoder, as is done for H.264.