I know about bandwidth issues, but big vector sizes also help mitigate them: they give you more opportunities for simple schemes for real-time (de)compression of data (usually called packing/unpacking). Where you can work around the bandwidth issues, you can get up to twice as much work done.
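A minimal sketch of the packing/unpacking idea, assuming AVX-512F and an fp16-in-memory layout (the function name and the scale-by-a-factor kernel are just placeholders for whatever the real computation is): keep the data packed as 16-bit halves in memory, expand to fp32 only in registers, compute, and pack back, so each load/store moves half the bytes.

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    void scale_fp16_buffer(uint16_t *data, size_t n, float factor)
    {
        __m512 vfactor = _mm512_set1_ps(factor);

        /* 16 elements per 512-bit register; n is assumed to be a multiple
         * of 16 to keep the sketch short (a real version needs a tail loop). */
        for (size_t i = 0; i + 16 <= n; i += 16) {
            /* Load 16 packed fp16 values (256 bits of memory traffic). */
            __m256i packed = _mm256_loadu_si256((const __m256i *)(data + i));

            /* Unpack to fp32 in registers -- no extra memory traffic. */
            __m512 unpacked = _mm512_cvtph_ps(packed);

            /* The actual work happens at full fp32 width. */
            __m512 result = _mm512_mul_ps(unpacked, vfactor);

            /* Pack back to fp16 and store 256 bits instead of 512. */
            __m256i repacked = _mm512_cvtps_ph(result,
                _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
            _mm256_storeu_si256((__m256i *)(data + i), repacked);
        }
    }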
I've also been looking for such a chip to test my code. Of course, it's possible to use SDE: https://software.intel.com/en-us/articles/intel-software-dev...