Hi, Author here. Thanks for the feedback. I'm happy to correct any mistakes.
Sorry you don't think this is useful. However, as the post clearly points out, it's not intended to make you a GPGPU programmer. That really isn't possible in a single blog post. Rather, it tries to give a general overview of how the GP part of GPUs works for the general reader, and to relate the terminology used by AMD and Nvidia. I'm not aware of another source that tries to do this succinctly.
The end of the post gives lots of links for further reading for readers who want more.
And FWIW I have done quite a bit of GPGPU programming.
Thanks for responding, and sorry for the snide remark; I see a lot of blogspam on this topic and my bullshit detectors are sensitive.
My main gripe is that the blog post doesn't seem to explain any of the terms it's introducing. The text keeps promising to explain concepts that are never brought up again, and the main differences between AMD and NVIDIA (shared memory, warp size) are not discussed. I did appreciate the history in the introduction; that part was well-written.
If you want to explain the GP part of GPGPU, I would suggest starting with the thread grid and moving on to thread-block register footprints and occupancy. It's been a successful recipe for me when teaching, and it keeps the mental load low.