Vortex: OpenCL Compatible RISC-V GPGPU (arxiv.org)
115 points by pjmlp on March 5, 2020 | 27 comments

Anyone here use OpenCL for their day job? Practically speaking, what are the use cases for it in a corporate environment?


It's like CUDA, but not owned by NVIDIA. AMD's entire ROCm stack is based on OpenCL. That means that if you run TensorFlow on AMD GPUs, you are using OpenCL. If by corporate environment you mean day-to-day IT, probably none. It's useful for high-performance and parallel code.
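For a flavor of the programming model, here's a minimal vector-add sketch (C API in a C++ file, error handling omitted, illustrative only):

    // Minimal OpenCL vector add -- a sketch, no error checking.
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    // The kernel is plain OpenCL C, compiled by the driver at runtime.
    static const char* kSrc = R"(
    __kernel void vadd(__global const float* a,
                       __global const float* b,
                       __global float* c) {
        size_t i = get_global_id(0);   // one work-item per element
        c[i] = a[i] + b[i];
    })";

    int main() {
        const size_t n = 1 << 20, bytes = n * sizeof(float);
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        cl_platform_id plat; clGetPlatformIDs(1, &plat, nullptr);
        cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a.data(), nullptr);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b.data(), nullptr);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, nullptr);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
        clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
        cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

        clSetKernelArg(k, 0, sizeof(da), &da);
        clSetKernelArg(k, 1, sizeof(db), &db);
        clSetKernelArg(k, 2, sizeof(dc), &dc);
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c.data(), 0, nullptr, nullptr);
        printf("c[0] = %f\n", c[0]);   // expect 3.0
    }

The same host code runs against any vendor's implementation; the kernel string is recompiled for whatever device is found.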


Pretty sure the ROCm backend for TensorFlow uses HIP, not OpenCL, and HIP is basically just AMD's implementation of CUDA.

I don't think it's accurate to say that the "entire ROCm stack is based on OpenCL." The ROCm stack supports OpenCL, but it also supports HIP, which is what AMD chose to use to implement many of the new libraries in the ROCm platform (rocBLAS, rocFFT, RCCL; replacing cuBLAS, cuFFT, NCCL).

FWIW, ROCm also doesn't support OpenCL 2.0.
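To illustrate the HIP point: the runtime API is a near call-for-call rename of CUDA's (cuda* -> hip*), which is why AMD can ship automatic translators like hipify. A rough, untested sketch:

    // HIP mirrors the CUDA runtime API almost one-to-one.
    #include <hip/hip_runtime.h>

    __global__ void scale(float* x, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // same index math as CUDA
        if (i < n) x[i] *= s;
    }

    int main() {
        const int n = 1024;
        float* d = nullptr;
        hipMalloc((void**)&d, n * sizeof(float));   // cf. cudaMalloc
        hipMemset(d, 0, n * sizeof(float));         // cf. cudaMemset
        hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0, d, 2.0f, n);
        hipDeviceSynchronize();                     // cf. cudaDeviceSynchronize
        hipFree(d);                                 // cf. cudaFree
    }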


You got it wrong: it is like CUDA, but it only supports C, has lousier debugging tools, and has buggy drivers.

Yes, it does now support C++ and finally has its own bytecode format for heterogeneous programming, but unless the card supports OpenCL 2.2, it is back to my first paragraph.
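Concretely, the C-only restriction means no templates or overloading in kernels before the 2.1/2.2 C++ kernel language, so generic code gets written once per element type. A sketch:

    // Plain OpenCL C (pre-2.1/2.2): no templates, so a "generic" saxpy
    // must be duplicated for every element type you care about.
    __kernel void saxpy_f32(float a, __global const float* x, __global float* y) {
        size_t i = get_global_id(0);
        y[i] += a * x[i];
    }

    #pragma OPENCL EXTENSION cl_khr_fp64 : enable
    __kernel void saxpy_f64(double a, __global const double* x, __global double* y) {
        size_t i = get_global_id(0);
        y[i] += a * x[i];
    }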


ROCm isn't OpenCL-based.


We abandoned it due to lack of ecosystem support. We originally picked it for general ETL / analytics / preprocessing / data viz when OpenCL vs. NVIDIA for commodity data-parallel hardware (= GPUs) was unclear, and as a long option in case FPGAs would ultimately make sense. But the CUDA software stack's community + NVIDIA's investments eventually converted us.

Interestingly, we now exclusively use high-level data-parallel abstractions like RAPIDS dataframes, so if there were, say, a Modin -> RISC-V path, it wouldn't be too heavy a lift to port most of our stack. (Though, as is, there's little reason to.)


The ecosystem seems to be moving to Vulkan-based compute these days. That's more closely aligned with the hardware than plain old OpenCL, so performance should be better.


Not sure what HW support adds here. While I prefer stacks that a megacorp doesn't own at the bottom, my point is that it's the software that matters. Current languages + frameworks are relegated to a tiny fragment of the coding population, so it's more about making it as easy to reach for GPU / data-parallel stuff as folks do for, say, Spark or Pandas.

We tried working with AMD & Intel early on here, but they didn't really get it (architects, investors, and a few other decision-maker types). It's easy to put $10-20M into the ecosystem and just build it -- a16z finally did, several years after we pitched it to them, and so did NVIDIA with rapids.ai after we pushed them for a couple of years ("what's a dataframe?"). Nothing is stopping AMD/Intel and other OpenCL ecosystem players. But they still aren't doing it.


I don't think this is true, or at least the story isn't as simple as "Vulkan is faster". OpenCL is already a pretty good and modern API, and compute doesn't seem like Vulkan's main point. My impression is that Vulkan is mostly fast compared to OpenGL, particularly by removing driver overhead, but driver overhead has never really been a big problem for OpenCL.

Last year, I had a student add a Vulkan backend to a GPU-targeting compiler[0]. Compared to the OpenCL backend, many programs ran substantially slower, and very few ran faster. It's certainly possible that this backend is imperfect, but it's one more data point matching what I have seen elsewhere.

[0]: https://futhark-lang.org/student-projects/steffen-msc-projec...


Blender and Darktable use it, at least. So do commercial Mac apps like Photoshop, Premiere, etc.

It's emitted by a bunch of compiler backends (e.g. Futhark).

ML and computer vision toolkits support it (e.g. PyTorch, OpenCV, etc.).

Not sure what you mean by corporate environments, but the suits probably mostly use Excel and calc.exe :)


Tbf, Excel making use of OpenCL kind of makes sense :)


Yes, we use it for Indigo Renderer (https://www.indigorenderer.com/) and Chaotica Fractals (https://www.chaoticafractals.com/). It allows cross-platform (Windows, Mac, Linux) execution on GPUs.


Vlado mentioned at SIGGRAPH that they basically gave up and have gone from OpenCL => CUDA => "let OptiX handle it" => "OptiX 7 w/ RTX". Do you have a CUDA and/or RTX comparison?

P.S. It's awesome you're still working on Indigo!


OpenCL seemed to have comparable performance to CUDA when we stopped using CUDA. I don't have comparisons with RTX on hand. The main problem with OpenCL now is not performance but vendor/driver support. For example, the Apple OpenCL drivers are pretty buggy.


I’ve been exploring it for financial calculations that are ‘embarrassingly parallel’. It’s much faster, all right, but people here aren’t trained in data-oriented programming. I might end up writing some generic accelerated building blocks for use from VBA.
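Those workloads map nicely onto the model: one work-item per instrument, no communication between threads. As a hypothetical sketch (Black-Scholes call pricing in single precision, illustrative only, not validated for production use):

    // One work-item per option: the classic embarrassingly parallel shape.
    __kernel void bs_call(__global const float* S,   // spot price
                          __global const float* K,   // strike
                          __global const float* T,   // time to expiry (years)
                          const float r,             // risk-free rate
                          const float v,             // volatility
                          __global float* price)
    {
        size_t i = get_global_id(0);
        float sqrtT = sqrt(T[i]);
        float d1 = (log(S[i] / K[i]) + (r + 0.5f * v * v) * T[i]) / (v * sqrtT);
        float d2 = d1 - v * sqrtT;
        // N(x) via erf, which OpenCL C provides as a built-in
        float Nd1 = 0.5f * (1.0f + erf(d1 / M_SQRT2_F));
        float Nd2 = 0.5f * (1.0f + erf(d2 / M_SQRT2_F));
        price[i] = S[i] * Nd1 - K[i] * exp(-r * T[i]) * Nd2;
    }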


If you have an HPC workload, it can make sense. Lots and lots of software work is I/O-bound and doesn't do a whole heckuva lot of computation.

I used to use OpenCL at work a lot.


Currently none.

Android uses its own dialect, RenderScript.

iOS had OpenCL, but since Metal got introduced, Metal Shaders are much better.

On Windows I used C++AMP for a while, now I just make use of Java/.NET libraries that plug into CUDA.

OpenCL's biggest mistake was focusing on C only instead of opening the programming model to other languages.

SPIR and SYCL came too late into the game.


Sadly, OpenCL (alongside OpenGL) is already deprecated on the Mac...


Isn't OpenCL just a C-like language that can run on / be implemented on most things? What about the RISC-V architecture prevented running, e.g., POCL?

edit: seems the paper is mostly about adding SIMT support to RISC-V; probably the answer to the above is that RISC-V can run OpenCL even without the extension.
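For illustration: the host API is implementation-neutral, so on a RISC-V Linux box with POCL installed, an unmodified OpenCL program just discovers it as another platform (POCL typically reports itself as "Portable Computing Language"). A sketch of the discovery step:

    // Enumerate whatever OpenCL implementations are installed.
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_uint n = 0;
        clGetPlatformIDs(0, nullptr, &n);           // count available platforms
        cl_platform_id plats[16];
        clGetPlatformIDs(n < 16 ? n : 16, plats, nullptr);
        for (cl_uint i = 0; i < n && i < 16; ++i) {
            char name[256];
            clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
            printf("platform %u: %s\n", i, name);   // e.g. "Portable Computing Language"
        }
    }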


Deep learning could in theory run on this using Intel PlaidML's Keras implementation in OpenCL: https://github.com/plaidml/plaidml


I am curious how RISC-V GPGPU development would affect NVIDIA's support for the platform. So far they have seemed supportive: they've contributed NVDLA, and they're a member of the foundation. It's not far-fetched to think a Jetson with high-performance RISC-V cores is already in development. I hope they won't back out if they decide a strong edge-platform competitor is not in their best interest.


NVIDIA is not using RISC-V for its "high-performance" GPGPU cores. Its interest in RISC-V is only for support cores, so GPGPU support is neither here nor there.


What NVIDIA brings to the table with both the Jetson line and the Tegra line is precisely those GPGPUs. The ARM parts are pretty standard, I believe. What I meant was that if there were enough interest and development around an open-source architecture for GPGPUs and neural processing, it would have the potential to become a competitor to NVIDIA's offering. I am confident it would take years to become a worthy competitor in the desktop and server markets; edge and portable devices, I am not so sure about.


> The ARM parts are pretty standard, I believe.

Not sure exactly what you're saying here, but NVIDIA's cores are custom from the bottom up, and they are very unusual.


The older Jetson units featured standard Cortex-A57 processors, and the newer units have NVIDIA's custom processors (the Denver and Carmel architectures).



