
I dunno, it still sounds to me like Nvidia is taking their (admittedly inaccurate) concept of a thread, putting a bunch of them in parallel, and calling that a warp to be cute.

I think the analogy still makes a kind of sense if you take it at face value and don't worry about the exact definitions. Which is really all it needs to do, IMO.

Again, I don't really know anything about GPUs, just speculating on the analogy.

Agreed that warp is a marketing term, but it is definitely not something that should be called "threads" except in the very loosest sense of the term.

A bunch of threads in parallel implies MIMD parallelism: multiple instruction, multiple data.

A warp implies SIMD parallelism: single instruction, multiple data (although technically it's SIMT, single instruction, multiple threads: https://en.wikipedia.org/wiki/Single_instruction,_multiple_t...).
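
To make that concrete, here's a minimal CUDA sketch (my own illustration; the kernel name and values are made up, not from anything upthread). All 32 lanes of a warp share one instruction stream, so when a branch splits the warp, the two sides don't run concurrently the way genuinely independent threads would:

  // Minimal sketch of SIMT: every "thread" in a 32-lane warp receives
  // the same instruction stream. When lanes take different branches,
  // the hardware (at least pre-Volta) serializes both paths and masks
  // off the inactive lanes, rather than scheduling them independently.
  #include <cstdio>

  __global__ void divergent(int *out) {
      int lane = threadIdx.x % 32;          // lane index within the warp
      if (lane < 16) {
          out[threadIdx.x] = lane * 2;      // lanes 0-15 execute this...
      } else {
          out[threadIdx.x] = lane + 100;    // ...then lanes 16-31 execute this
      }
  }

  int main() {
      int *d_out;
      cudaMalloc(&d_out, 32 * sizeof(int));
      divergent<<<1, 32>>>(d_out);          // one block = exactly one warp
      int h_out[32];
      cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
      for (int i = 0; i < 32; ++i) printf("%d ", h_out[i]);
      printf("\n");
      cudaFree(d_out);
      return 0;
  }

With real MIMD threads, the two branches could genuinely execute at the same time; within a warp, divergence roughly doubles the cost, which is why the "thread" framing only holds in the loosest sense.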

From both a hardware and a software perspective, those are very different types of parallelism, ones that Nvidia's architects and the architects of its predecessors at Sun/SGI/Cray/elsewhere were intimately familiar with. See: https://en.wikipedia.org/wiki/Flynn%27s_taxonomy
