I dunno, it still sounds to me like Nvidia is taking their (admittedly inaccurate) concept of a thread, putting a bunch of them in parallel, and calling that a warp to be cute.
I think the analogy still makes a kind of sense if you take it at face value and don't worry about the exact definitions. Which is really all it needs to do, IMO.
Again, I don't really know anything about GPUs, just speculating on the analogy.
From both a hardware and a software perspective, those are very different types of parallelism, and Nvidia's architects, like the architects of its predecessors at Sun/SGI/Cray/elsewhere, were intimately familiar with the distinction. See: https://en.wikipedia.org/wiki/Flynn%27s_taxonomy
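For what it's worth, here's a minimal CUDA sketch of the distinction (my example, not anything from the article): the 32 lanes of a warp share one instruction stream, which is why warp-wide primitives like __ballot_sync can gather a predicate from every lane in a single step, something independent threads couldn't do.

    #include <cstdio>

    // One block of exactly 32 threads = one warp. All lanes execute the
    // same instruction stream in lockstep (SIMT), so a warp-wide vote is
    // a single hardware operation rather than 32 independent ones.
    __global__ void warp_demo() {
        int lane = threadIdx.x % warpSize;  // this thread's position in the warp

        // Build a 32-bit mask with one bit per lane whose predicate is true.
        // Possible in one step only because the lanes move together.
        unsigned mask = __ballot_sync(0xffffffffu, lane < 16);

        if (lane == 0)
            printf("lanes with lane < 16, seen from lane 0: 0x%08x\n", mask);
    }

    int main() {
        warp_demo<<<1, 32>>>();   // launch a single warp
        cudaDeviceSynchronize();
        return 0;
    }

If the lanes were truly independent threads in the CPU sense, that ballot would need explicit synchronization and shared memory; in the SIMT model it falls out of the lockstep execution for free.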