
Cool. A few questions (don't feel obliged to answer all of them): Is it a custom instruction set? What are the similarities/differences to desktop vector instructions like SSE/AVX (or TPUs and other "neural processors")? What's the software/compiler stack like? How easy is it to port software, or is software more commonly custom-written for the platform?



All good questions.

1) It is a custom instruction set; you can read the ISA guide over at https://www.hpc.nec/documentation

2) The main difference, in simple terms, is that AVX instructions have a fixed vector length (4, 8, or 16 elements, depending on the extension and element width). On the SX the vector length is flexible: it can be 10, 4, anything up to max_vlen (up to 256 on the latest ones). Essentially the idea is that a single instruction can replace a whole for loop. Without a good compiler, though, that means you have to rewrite your nested loops.
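
To make that concrete, here's a minimal C sketch (my own simplified illustration, assuming AVX2 and single-precision floats, not code from the ISA guide). With fixed-width intrinsics you strip-mine by 8 elements and need a scalar tail; the flexible-length version is just the plain loop, which the SX compiler can map to vector instructions whose length is set at runtime:

    #include <immintrin.h>
    #include <stddef.h>

    /* Fixed-width SIMD: AVX2 processes exactly 8 floats per
     * instruction, so the loop advances by 8 and needs a scalar
     * tail for the remainder. */
    void add_avx(float *a, const float *b, size_t n) {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(a + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; i++)  /* scalar remainder */
            a[i] += b[i];
    }

    /* Flexible vector length: on the SX, the compiler can turn this
     * whole loop into a few vector instructions, setting the vector
     * length register to min(remaining, max_vlen) per strip -- no
     * hand-written tail loop. */
    void add_ve(float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            a[i] += b[i];
    }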

3) There are currently two options when it comes to compilers: the proprietary NCC, or the open source LLVM fork NEC maintains. NCC is less compatible than GCC/Clang (modern C++17 in particular is problematic) but has a lot of advanced algorithms for taking your loops, rewriting them, and vectorizing them automatically. The LLVM fork currently supports assembly-instruction intrinsics, but they are still working on contributing better loop auto-vectorization to LLVM.
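
To give a flavor of the NCC side (a sketch from memory of NEC's docs; double-check the pragma spelling against the NCC manual), you typically leave the loop alone and give the auto-vectorizer a dependence hint:

    #include <stddef.h>

    /* "#pragma _NEC ivdep" (as I recall it from the NEC docs) asserts
     * there are no loop-carried dependencies, letting NCC vectorize
     * the loop. GCC/Clang just warn about the unknown pragma and
     * compile the loop as ordinary scalar code. */
    void scale(double *restrict y, const double *restrict x,
               double a, size_t n) {
    #pragma _NEC ivdep
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i];
    }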

4) Getting ported software working is not terribly difficult; getting it performing well is quite a bit harder, depending on the type of workload. Since the scalar core is pretty standard, you can almost always take regular CPU code and get it running (unlike GPU code in general). If you don't leverage the vector processor, though, the performance you get will be nothing special, especially at 1.6 GHz. Most of the software made for it starts off as CPU code and is then modified with pragmas or some refactoring until it performs well on the VE (see the sketch below). In almost all cases the resulting code still runs on a CPU just fine. One example of a project that supports both in a single codebase is the Frovedis framework [1].
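
As a toy illustration of that single-source pattern (the file name and build lines are my assumptions, not taken from Frovedis), the same C file builds for the host and for the VE, and only the compiler driver changes:

    /* saxpy.c -- one source file for both targets.
     * Host build (assumed): gcc -O3 -c saxpy.c
     * VE build   (assumed): ncc -O3 -c saxpy.c
     * On the VE, NCC's auto-vectorizer should turn the loop into
     * vector instructions; on the host it compiles as ordinary
     * scalar/SIMD code. */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x,
               float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }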

I think the chip deserves more interest than it gets. It's one of the few accelerators that you can 1) buy today, right now, 2) use with open source drivers [2], and 3) run TensorFlow on [3]. The lack of fp16 support really hurt it for deep learning, but it's like having a 1080 with 48 GB of RAM; there are still lots of interesting things you can do with that.

[1]: https://github.com/frovedis/frovedis
[2]: https://github.com/veos-sxarr-NEC/ve_drv-kmod
[3]: https://github.com/sx-aurora-dev/tensorflow


Fascinating stuff, thanks for the details! I had no idea that they made PCIe accelerator-based configurations now.




