Hacker News

This is treading a fine line, since you don't mention Numba, which takes the same approach as Julia: translating the language to LLVM IR and generating PTX from it.

The same applies to those OpenACC pragmas that can offload a butt-ugly Fortran loop to the GPU: no one says Fortran is running on the GPU; rather, the compiler is doing code generation and runtime calls to make the user's life easy.

It thus smells like marketing rhetoric.

Numba compiling Python to PTX is absolutely "Python running on GPUs", and OpenACC is likewise "Fortran running on GPUs". If user code written in language X is compiled to target hardware Y, then that is "X running on Y". This is fairly standard compiler terminology, not marketing speak. You specifically mentioned TensorFlow, which is NOT Python running on G/TPUs, since what actually executes there has neither Python's semantics nor its runtime.
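The TensorFlow distinction can be illustrated without TensorFlow itself. In graph-building APIs, Python operators merely record dataflow nodes; nothing with Python semantics reaches the accelerator. A toy sketch (the `Node` class is invented for illustration, not TensorFlow's API):

```python
class Node:
    """A graph node: Python operators build the graph, not compute."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs
    def __add__(self, other):
        return Node("add", self, other)
    def __mul__(self, other):
        return Node("mul", self, other)

def f(x):
    return x * x + x

# "Calling" f on a placeholder records a graph; no Python arithmetic
# runs, and Python control flow would be traced away entirely.
graph = f(Node("placeholder"))
```

Here `graph` is an `add` node over a `mul` node: a dataflow description that a backend compiles and executes with its own semantics, which is exactly why graph-mode TensorFlow is not "Python running on the TPU" the way Numba-compiled kernels are "Python running on the GPU".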