Is ISA lock-in really an issue today?

Porting most software to ARM64, Power, or RISC-V involves typing some variation of "make." Only a small percentage of software written in C/C++ or assembly is problematic. Anything in a higher-level language like Go or a newer language like Rust is generally 100% portable.

Switching from X86_64 to ARM64 (M1) for my desktop dev system was trivial.

Endianness used to bite, but today virtually everything above the embedded space is little-endian. Power and some ARM cores support both modes but almost always run little-endian (e.g. ppc64le).
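
When it does still bite, it's in code that reinterprets raw byte buffers (wire or file formats). A minimal C sketch of the portable idiom; nothing in it is ISA-specific:

    /* Endianness only matters when reinterpreting raw bytes, e.g. a
       big-endian wire format. Assembling the value with shifts is
       correct on any host, little- or big-endian: */
    #include <stdint.h>

    uint32_t read_be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }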




- Have you ever, e.g., computed the sine of a floating-point number in C (sinf)?

- Have you ever multiplied a matrix by a vector (GEMV), or a matrix by a matrix (GEMM), using BLAS?

- Have you ever done an FFT?

- Have you used C++ barriers? Or pthreads? Or mutexes?

An optimized implementation achieves ~100% of a CPU's theoretical peak performance on all of those, and such implementations are tailored to each CPU model.

There is software on any running system doing those things all the time. Running at ~0% of peak just means increased power consumption, latency, time to finish, etc.

Generic versions perform well below that, often at ~0% (0.1%, 0.001%, etc.) of theoretical peak.
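
To make that concrete, here is a minimal sketch, assuming a CBLAS implementation such as OpenBLAS is installed: the naive triple loop is what plain recompiled code looks like, the library call is the tuned path.

    /* C = A*B for n x n row-major matrices. The triple loop is portable
       and correct on every ISA after "make"; a tuned BLAS (blocked,
       vectorized, tuned per core) is typically orders of magnitude
       faster for large n. */
    #include <cblas.h>

    void naive_dgemm(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < n; k++)
                    s += A[i*n + k] * B[k*n + j];
                C[i*n + j] = s;
            }
    }

    void blas_dgemm(int n, const double *A, const double *B, double *C) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    }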

Somebody has to write the software that does these things for the actual hardware, so that you can then call it from Python.

IBM has dozens of "open source" bounties open for PowerPC, and they pay real $$$, but nobody implements them.

---

Porting software to PowerPC is only as simple as typing make if the libraries your software uses (the C standard library, libm, BLAS, etc.) all have optimized implementations, which isn't the case.

So when considering PowerPC, you have to divide the paper numbers by 100 if you want the actual numbers that normal code recompiled with make gets in practice. And then you have to invest extra $$$ into improving that software, because nobody will do it for you.


Er, no. I do that stuff (well, I'm not clever enough for C++ generally, and it would be OpenMP rather than plain pthreads) on the sort of nodes that Sierra uses. However, they mostly use the GPUs, for which POWER9 has particular support. And I can tell there isn't currently any GEMV or FFT running on this system, and not "all the time" even on our HPC nodes.

While it isn't necessarily clear what peak performance means, MKL or OpenBLAS, for instance, is only ~100% of serial peak on large DGEMM for a value of 100 equal to 90; ESSL is similar. I haven't measured GEMV (ultimately memory-bound), but I got ~75% of hand-optimized DGEMM performance on Haswell with pure C, and I'd expect similar on POWER if I measured. So those orders of magnitude are orders off, even for, say, reference BLAS. I don't know why I need Python, but the software clearly exists -- all those things and more (like vectorized libm). You can even compile assorted x86 intrinsics on POWER, though I don't know how well they perform relative to equivalent x86; I think you're typically better off with an optimizing compiler anyway.
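
For anyone reproducing numbers like that, the arithmetic is just achieved FLOP rate against a machine-specific peak; a sketch, with the peak figure taken as an input rather than derived:

    /* DGEMM on an m x k by k x n problem does 2*m*n*k floating-point
       operations. Fraction of peak = achieved GFLOP/s / peak GFLOP/s,
       where peak ~ cores * SIMD lanes * FMA ops/cycle * clock (GHz). */
    double fraction_of_peak(long m, long n, long k,
                            double seconds, double peak_gflops) {
        double achieved = 2.0 * m * n * k / seconds / 1e9;
        return achieved / peak_gflops;
    }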

I've packaged a lot of HPC/research software, which is almost all available for ppc64le; the only things missing are dmtcp, proot, and libxsmm (if libsmm isn't good enough).


You start with BLAS being a factor of 2 off, then go to PETSc and you're another couple of factors off, and then there's the actual app the scientist wrote, which may use all of the above and the kitchen sink, where every piece (and the pieces it uses) is a couple of factors off, and then your scientist's app is at 0.01% of peak. The losses compound multiplicatively: a handful of layers each an order of magnitude off is 10^-4 overall.

If you have used Sierra since the beginning, you have seen significant performance increases over the years, because the people using it have actually been discovering problems and then either getting IBM to fix them or fixing them themselves, across most of the software.

Compared with Power 10, I'd say that Power 9 is "mainstream" (many clusters available), and of the Power 9 CPUs in existence, IBM's are the most mainstream of them all.

Take the Power 10 ISA, build your own CPU that significantly differs from IBM's, and good luck optimizing all the software above. It can be done, and dumping it on a couple of HPC sites where scientists and staff will then have no choice but to use it for 4-6 years is a good way to get that done.

But for a private company that just wants to deliver value, ARM is just a much better deal, because it saves them from having to do any of this.


Endianness is not the only problem. You can have issues with a different cache coherency/memory model, different alignment requirements, and different syscalls (which are partly arch-dependent, at least on Linux). The fact that the switch from x86 to ARM was trivial just proves the point that ARM has matured really well.
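
For instance, a minimal sketch of the alignment hazard; the cast variant is undefined behavior in C that x86 happens to tolerate, while stricter ISAs may trap:

    #include <stdint.h>
    #include <string.h>

    uint64_t load_u64(const unsigned char *p) {
        /* Hazard: return *(const uint64_t *)p;
           unaligned access, tolerated on x86, may fault elsewhere. */
        uint64_t v;
        memcpy(&v, p, sizeof v);  /* portable; one load where legal */
        return v;
    }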


OpenBSD and Linux already sorted that.


How? For example, a different memory model isn't something you can just flip a switch to fix: someone needs to review and test application code to see whether it has latent bugs which are masked (or simply harder to reproduce) on x86. Apple went to the trouble of implementing support in their silicon to avoid that, but if you don't run on similar hardware it's likely that you'll hit some issue in this regard in any multithreaded program, and those are exactly the kinds of bugs which people are going to miss in a simple cross-compile with limited testing.
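
The classic shape of such a bug, sketched below: a plain-integer flag handshake between threads. It's a data race (undefined behavior) everywhere, but x86's strong ordering usually hides it, while a weaker model like ARM64's exposes it; the commented atomics are the fix.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    int data;
    int ready;  /* bug: plain int; should be _Atomic with release/acquire */

    void *producer(void *arg) {
        (void)arg;
        data = 42;
        ready = 1;  /* fix: atomic_store_explicit(&ready, 1, memory_order_release) */
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        while (!ready)  /* fix: atomic_load_explicit(&ready, memory_order_acquire) */
            ;
        printf("%d\n", data);  /* may print 0 (or hang) as written;
                                  usually "works" on x86 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }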


You think Firefox ported their JIT to POWER simply by typing make?


JITs are ASM code generators. They are not typical software.
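
The ISA dependence is visible even in a toy JIT. A minimal sketch (Linux/x86-64 assumed): the emitted bytes are raw x86-64 machine code, so a POWER or ARM64 port needs a whole new code generator, not a recompile.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42 ; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        void *buf = mmap(NULL, sizeof code,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;
        memcpy(buf, code, sizeof code);

        int (*fn)(void) = (int (*)(void))buf;
        printf("%d\n", fn());  /* prints 42, on x86-64 only */
        return 0;
    }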



