It's cool to know it's an area of active research. I wonder if there are also power consumption ramifications, though. While something like AES-NI is unmatched performance-wise, my novice (perhaps incorrect) understanding is that ARM beats x86 on power consumption by having a drastically simpler instruction set.
Could a simple ARM-like instruction set plus a generic "synthesize and send this loopy junk to the FPGA" capability have power benefits without a major performance impact on cloud servers? (Yeah, I know this is likely a topic for hundreds of PhD theses, but is that something being investigated too?)
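To make the idea concrete, here's the kind of "loopy junk" I have in mind: a purely illustrative C sketch (the function name `mac_kernel` and the whole scenario are my own invention, not any real offload API). A tight, branch-free loop with a regular access pattern like this seems like the sort of thing a toolchain could plausibly synthesize to FPGA fabric instead of burning CPU cycles on it.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical offload candidate: a fixed-point multiply-accumulate
 * over a buffer. No data-dependent control flow, regular memory
 * access -- the kind of kernel high-level synthesis tools like. */
uint64_t mac_kernel(const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += (uint64_t)a[i] * b[i];
    }
    return acc;
}
```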