Anyone know how Blender fares on Linux with AMD GPUs? I'm in the market for a GPU and am torn between AMD and Nvidia. Not only is that a tough choice on Linux, but it sounds like Blender is simply much, much better on Nvidia, at least for rendering.
Thoughts? My goal is of course to use Blender to build stuff, but also to render it out performantly.
I was in the same position as you a few months ago. I believe that currently Blender provides a much better workflow on Nvidia cards. The reason is simple: OptiX. The speed advantage over CUDA (and OpenCL, of course) is really, really impressive. With OptiX, I think I've seen the greatest performance improvement I've ever witnessed in Blender. Here is a Linux + Nvidia + OptiX benchmark from 2019: https://www.phoronix.com/scan.php?page=article&item=blender-...
And with 2.92 it's even faster, and it now supports CPU+GPU rendering with OptiX. Unfortunately, AMD doesn't have such good Blender support: you're stuck with OpenCL and slower render times. You could try their ProRender renderer, but to be fair, it's closed source and not native to Blender; I'd rather stick with Cycles + Eevee. I hope AMD will one day work with Blender to support it better, as I much prefer them as a company and I like their open-source drivers. But really, Nvidia RTX cards with Blender are fantastic at the moment.
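If you want to script the switch, here's a minimal sketch of selecting OptiX via Blender's Python API (`bpy`), pasted into the built-in Python console. Property names are as of the 2.9x series, so treat the details as assumptions for other versions:

```python
import bpy

# The Cycles add-on preferences hold the compute backend choice
# ('NONE', 'CUDA', 'OPTIX' or 'OPENCL' in the 2.9x series).
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'

# Refresh the device list and enable everything detected
# (2.92 can combine CPU+GPU under OptiX, so the CPU entry
# is worth enabling too).
prefs.get_devices()
for device in prefs.devices:
    device.use = True

# Tell the scene to render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = 'GPU'
```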
Blender runs like a champ with my NVIDIA Quadro 3000M, though I've had no success getting Ubuntu or Blender to avail of the CUDA cores. Will check out OptiX based on your comment.
Blender's System Preferences for CUDA, OpenCL, and OptiX all report "No compatible GPUs found for path tracing. Cycles will render on CPU." I'm using NVIDIA's proprietary driver, and NVIDIA X Server Settings reports 240 CUDA cores. No joy with NVIDIA's CUDA packages, either. Blender's been running well on this workstation since ~2.7x for video-editing work, so further system tweaks have been low-priority. Will revisit to look at OptiX.
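For anyone debugging the same thing, this is roughly what I've been pasting into Blender's Python console to see what Cycles actually detects (same `bpy` preferences API as above; property names as of the 2.9x series):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

# Try each backend in turn and print what Cycles can see.
for backend in ('CUDA', 'OPTIX', 'OPENCL'):
    try:
        prefs.compute_device_type = backend
    except TypeError:
        # Backend not compiled into this Blender build.
        print(backend, 'not available in this build')
        continue
    prefs.get_devices()
    gpus = [d.name for d in prefs.devices if d.type == backend]
    print(backend, '->', gpus or 'no compatible devices')
```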
Your card is based on Fermi, an architecture NVIDIA stopped supporting a long while ago (last drivers in 2018; you're on a legacy branch).
That means that apps using newer versions of CUDA will not work, and your hardware is also far too old to use OptiX.
Put another way, your current GPU is slower than the one in a Nintendo Switch...
> Fermi ... (last drivers in 2018, you're on a legacy branch).
IYO, is there any value to using legacy nvidia-driver-390 over X.Org?
> apps using newer versions of CUDA will not work, and your hardware is also far too old to use OptiX.
Will look to see if apps I use avail of my CUDA cores, else I'd prefer to remove this sole proprietary driver from my old-and-busted workstation.
Thanks again for saving me time. I’ll re-de-prioritize further GPU config hacking, though if you have suggestions for a Nintendo Switch-to-FireWire 800 interface, please share.
> IYO, is there any value to using legacy nvidia-driver-390 over X.Org?
nouveau doesn't support compute at all on those... (as in OpenCL).
Also, as far as I remember, reclocking didn't seem to be implemented for Fermi the last time I played with one (it is implemented for the later generation, Kepler, though).
> Will look to see if apps I use avail of my CUDA cores
If your app is compiled against CUDA 8.0 or earlier, targeting compute capability 2.1, it'll work on Fermi. That rules out pretty much any modern binary.
(the newest compiler that CUDA 8 supported was GCC 5.3...)
More recent apps can either be built with an old compiler + C++ library... or use OpenCL instead.
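If you want to confirm the compute capability yourself, here's a quick sketch with PyCUDA. This is a third-party package, so it's an assumption that you have `pycuda` installed and working against your old driver; the calls themselves are standard PyCUDA:

```python
import pycuda.driver as cuda

cuda.init()
for index in range(cuda.Device.count()):
    dev = cuda.Device(index)
    major, minor = dev.compute_capability()
    # Fermi is compute capability 2.x; binaries built for newer
    # targets (sm_30 and up) won't load on it.
    print(dev.name(), f'compute capability {major}.{minor}')
```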
Thanks again - I appreciate these details. Blender's not seeing cores available for OpenCL, either.
>> "No compatible GPUs found for path tracing. Cycles will render on CPU."
I'll research Fermi support for OpenCL and Cycles render devices. Blender's working well, so I'll resist the urge to tweak until that's no longer the case, and hope to spend more time exploring 2.92.
ProRender's render quality was sadly sometimes pretty close to garbage. (I heard that ProRender 2.0 changed that, but I haven't had the chance to test it yet.)
When does Nvidia become a hassle? I've been using my 1060 on NixOS for a while with no issues so far. Everything has just worked. Are there edge cases I should consider?
Blender on Linux with AMD GPUs works great. The only difference is that NVIDIA cards can use OptiX AI denoising, both in Cycles (ray traced) and in Eevee (real-time rendering), which is much faster than the naive Cycles denoiser (which often produces sub-par results) and the Intel denoiser post-processing 'filter', although the latter produces much better results since it processes the whole frame at once instead of discrete tiles.
tl;dr: (sadly) NVIDIA provides (at times _significantly_) better rendering performance than AMD thanks to OptiX denoising, which needs fewer samples per frame. CUDA vs OpenCL is, as far as I know, on par performance-wise.
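For reference, a sketch of enabling the OptiX denoiser from Blender's Python console. Property names are as of the 2.9x series; swap in 'OPENIMAGEDENOISE' to use the Intel denoiser instead:

```python
import bpy

scene = bpy.context.scene

# Enable denoising for final renders and pick the backend:
# 'OPTIX' (NVIDIA AI denoiser), 'OPENIMAGEDENOISE' (Intel),
# or 'NLM' (the older built-in denoiser) in the 2.9x series.
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPTIX'

# Viewport denoising is a separate toggle.
scene.cycles.use_preview_denoising = True
scene.cycles.preview_denoiser = 'AUTO'
```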