
I don't think Rust is very suitable for GPU programming, "first-class" or otherwise. And that has little to do with Rust itself: we have seen, time and time again, people write deeply inefficient, overengineered kernels in C for quite basic tasks, and I would expect the same to happen with Rust. To tackle this properly, it seems, we really should treat GPUs and similar coprocessors for what they are: deeply data-parallel computation accelerators. You must design your pipelines with that in mind; this stuff is also very sensitive to data locality, as you might expect. Now, take a look at this Haskell-inspired language [1], which I think is much more viable in the long term as far as data parallelism and composition go. Admittedly it's not itself designed for graphics, but that's okay. From their website: "Futhark is a small programming language designed to be compiled to efficient parallel code. It is a statically typed, data-parallel, and purely functional array language in the ML family, and comes with a heavily optimising ahead-of-time compiler that presently generates either GPU code via CUDA and OpenCL, or multi-threaded CPU code."
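To make the data-locality point concrete, here is a small sketch of my own (not from Futhark or Rust-GPU): GPUs and vectorizing compilers strongly prefer structure-of-arrays layouts, where each field is contiguous in memory, over array-of-structs layouts, where fields interleave. The `ParticlesSoA` type and `step` method below are hypothetical names for illustration.

```rust
// Structure-of-arrays layout: each field lives in its own contiguous
// buffer, so a kernel that only reads positions and velocities streams
// through memory linearly and never touches `mass` at all. An
// array-of-structs layout (Vec<(pos, vel, mass)>) would drag the unused
// mass field through the cache on every element.
struct ParticlesSoA {
    pos: Vec<f32>,
    vel: Vec<f32>,
    mass: Vec<f32>,
}

impl ParticlesSoA {
    // One Euler integration step; each "lane" is independent, which is
    // exactly the shape a data-parallel compiler can map onto a GPU.
    fn step(&mut self, dt: f32) {
        for (p, v) in self.pos.iter_mut().zip(&self.vel) {
            *p += v * dt;
        }
    }
}

fn main() {
    let mut particles = ParticlesSoA {
        pos: vec![0.0, 1.0],
        vel: vec![1.0, 2.0],
        mass: vec![1.0, 1.0],
    };
    particles.step(0.5);
    println!("{:?}", particles.pos); // [0.5, 2.0]
}
```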

I can see that, very much unlike Futhark, Rust-GPU is deeply graphics-focused, but I would argue that a very similar language could be made to accommodate most, if not all, shader-specific features. In which case, Futhark would be a very good place to start.

[1]: https://futhark-lang.org/




Having a macro that converts a Rust-like DSL (with a focus on iterators in place of Futhark's higher-order functions) to GPU code via a Futhark-like compiler is doable, and something I would love to see done.
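As a rough sketch of what the surface of such a DSL might look like (entirely hypothetical; the `saxpy` function and its shape are my own example), ordinary iterator combinators already express the fusable map/zip pipelines a Futhark-like compiler wants. A macro could capture exactly this shape and lower it to a kernel; here it simply runs on the CPU:

```rust
// Hypothetical DSL surface: a plain iterator pipeline computing
// a * x + y elementwise (SAXPY). A procedural macro could capture this
// zip/map structure and hand it to a fusing, Futhark-style GPU backend;
// as written, it is just idiomatic CPU Rust.
fn saxpy(a: f32, xs: &[f32], ys: &[f32]) -> Vec<f32> {
    xs.iter()
        .zip(ys)
        .map(|(x, y)| a * x + y)
        .collect()
}

fn main() {
    let xs = vec![1.0, 2.0, 3.0];
    let ys = vec![10.0, 20.0, 30.0];
    println!("{:?}", saxpy(2.0, &xs, &ys)); // [12.0, 24.0, 36.0]
}
```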

It could be the rayon of GPU computing in Rust.
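For readers unfamiliar with the analogy: rayon parallelizes iterator pipelines across CPU cores with a near-identical API to sequential iterators, and a GPU backend would do the same across thousands of lanes. A minimal std-only sketch of the underlying idea (split the data, reduce chunks in parallel, combine partials; the `par_sum` name and thread count are my own choices):

```rust
use std::thread;

// A hand-rolled stand-in for rayon's par_iter().sum(): split the slice
// into chunks, reduce each chunk on its own thread, then combine the
// partial sums. rayon automates exactly this, and a GPU backend would
// run the same reduction tree across many more lanes.
fn par_sum(data: &[u64]) -> u64 {
    let n_threads = 4;
    let chunk = ((data.len() + n_threads - 1) / n_threads).max(1);
    thread::scope(|s| {
        data.chunks(chunk)
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all threads before joining
            .into_iter()
            .map(|h| h.join().unwrap())
            .sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    println!("{}", par_sum(&data)); // 5050
}
```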




