
So one can look at the chip beside the storage as a CPU offload built into the drive, rather than a coprocessor on the motherboard. I'm not seeing a huge use case here except the narrowest of uses like decryption or compression.



> except the narrowest of uses like decryption, compression

These seem like broad use cases to me. Also consider ETL and database applications: time series, finance, machine learning, search engines. The primary benefit seems to be latency and minimizing the data bussed to the main CPU.
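The bandwidth argument here can be made concrete with a toy sketch of predicate pushdown, the pattern behind the ETL/database use cases: instead of shipping every record to the host CPU, the drive-side processor evaluates the filter and returns only the matches. All names and numbers below are illustrative, not any real computational-storage API.

```python
import struct

# Pretend this byte blob lives on the SSD: 1000 little-endian uint32s.
stored_blob = b"".join(struct.pack("<I", i) for i in range(1000))

def host_side_filter(blob, predicate):
    """Baseline: bus the whole blob to the CPU, then filter there."""
    values = [struct.unpack_from("<I", blob, off)[0]
              for off in range(0, len(blob), 4)]
    return [v for v in values if predicate(v)], len(blob)  # bytes moved

def drive_side_filter(blob, predicate):
    """Pushdown: filter 'inside the drive', bus only the matches."""
    matches = [struct.unpack_from("<I", blob, off)[0]
               for off in range(0, len(blob), 4)
               if predicate(struct.unpack_from("<I", blob, off)[0])]
    return matches, len(matches) * 4  # bytes moved

pred = lambda v: v % 100 == 0
host_result, bytes_host = host_side_filter(stored_blob, pred)
drive_result, bytes_drive = drive_side_filter(stored_blob, pred)
assert host_result == drive_result
print(bytes_host, bytes_drive)  # 4000 vs 40: 100x less data on the bus
```

Same answer either way; the difference is how many bytes cross the bus to the CPU.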


> I’m not seeing a huge use case here except the narrowest of uses like decryption, compression.

SSDs already have all the provisions for both, and already do them. Workloads like that genuinely benefit more from a highly optimised ASIC than from anything else.

The use case is obviously huge, and you're not seeing the elephant in the room: money.

Attaching all those drives to even the cheapest Xeon around increases the price n-fold over the price of the flash, unless you're talking about multi-terabyte SSDs.


Convolution on stored data. This is huge and really, really simple to implement. You could work with TBs of data without a GPU.

Or am I missing something obvious?


But one potential issue here is that your FPGA now needs to modify the filesystem to write any new results.

Maybe the use case here is more like transforming the data on the fly: say, storing the data compressed but reading it back uncompressed. This could be effectively transparent to the host CPU, with the FPGA handling it.

The more I think about it, this data flow sounds significantly more reasonable than asynchronously processing data. You could still read / transform / write the new data to the SSD, but the main CPU would only issue the read/write IO, while the transformation stays transparent.
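
A minimal sketch of that transparent-transform flow, with zlib on the host standing in for the drive-side FPGA purely for illustration: the host issues plain reads and writes, while the storage layer compresses on the way in and decompresses on the way out. The class and method names are made up for this example.

```python
import zlib

class TransparentlyCompressedStore:
    """Stands in for an SSD whose FPGA compresses data at rest."""

    def __init__(self):
        self._blocks = {}        # logical block address -> compressed bytes

    def write(self, lba, data):  # host sends raw data...
        self._blocks[lba] = zlib.compress(data)  # ...drive stores it packed

    def read(self, lba):         # host gets raw data back, unchanged
        return zlib.decompress(self._blocks[lba])

    def stored_bytes(self):      # how much flash is actually used
        return sum(len(b) for b in self._blocks.values())

store = TransparentlyCompressedStore()
payload = b"highly compressible " * 512      # 10240 bytes of raw data
store.write(0, payload)
assert store.read(0) == payload              # host never sees the compression
assert store.stored_bytes() < len(payload)   # but the flash holds far less
```

From the host's point of view this is just read/write IO; the compression exists only on the far side of the interface.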


I see, thanks for the explanation.



