"It's always been 10-20% slower than CUDA"

This is an untrue, yet often repeated, statement. For example, Hashcat migrated its CUDA code to OpenCL some time ago with no performance hit. What is true is that Nvidia's OpenCL stack is less mature than CUDA's, but you can write OpenCL code that performs just as well as CUDA code.




It has historically been slower for neural networks, not least because there was no cuDNN equivalent.


The opposite can also be true (OpenCL more than 2x slower than CUDA); e.g., try relying heavily on shuffle.
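To make the shuffle point concrete, here is a minimal CUDA sketch (not from the thread; the function name warp_reduce_sum is made up for illustration) of a warp-level sum reduction built on CUDA's shuffle intrinsic. Nvidia's OpenCL stack long exposed only OpenCL 1.x, which has no direct equivalent, so the same reduction typically has to round-trip through local memory and barriers, which is one way a >2x gap can appear.

    // Sketch (assumed names): warp-wide sum using CUDA's shuffle
    // intrinsic, which moves data register-to-register between the
    // 32 lanes of a warp without touching shared memory.
    __device__ float warp_reduce_sum(float val) {
        // 0xffffffff: all 32 lanes of the warp participate.
        for (int offset = 16; offset > 0; offset /= 2)
            val += __shfl_down_sync(0xffffffff, val, offset);
        return val;  // lane 0 ends up holding the full warp sum
    }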


What is Hashcat and why should we care?


A password-cracking utility, and we should care because it was put forth as an example of a real-world application that supposedly performs just as well under OpenCL as under CUDA. If true, it provides evidence against the claim that "[OpenCL]'s always been 10-20% slower than CUDA".


Because it's a performance-critical application that has made the switch, which makes it a good comparison.



