PTX is certainly more suitable as an intermediate representation than OpenCL's dialect of C, but PTX obviously doesn't, and never will, fulfill the requirements of a low-level portable target for high-level languages: it only targets NVIDIA hardware.
NVIDIA's early-mover advantage is significant, but software tied to only their hardware will never achieve the kind of status that the Netlib libraries (BLAS, LAPACK, and friends) have. The only question is whether the gold-standard numerical libraries a decade from now will have multiple backends, or a single non-CUDA backend.
I never said they weren't. But they can't expect to automatically continue with that success while staying as closed and proprietary as CUDA has been so far. Their hardware is not drastically better than that of their competitors, and they are subject to serious competition (unlike Intel with their CPUs). OpenCL is here to stay with a market far broader than just AMD GPUs, so it's pretty much inevitable that it will take over as the dominant standard unless it's developed as badly as OpenGL was in the early 2000s. If Microsoft ever ships a CPU-backed OpenCL runtime as part of Windows, it'll be all over for CUDA.
The NVIDIA hardware is not necessarily better, but the toolchain is unmatched. A race checker, a memory checker, a top-notch debugger and profiler, and a visual disassembly graph make for a very smooth experience.
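For instance, the race checker ships as a cuda-memcheck tool. Here's a sketch of the kind of bug it flags (file and kernel names are made up; the bug is a shared-memory race):

    // race.cu: two warps read and write shared memory with no barrier
    __global__ void racy(int *out) {
        __shared__ int s;
        s = threadIdx.x;        // all 64 threads write s, no __syncthreads()
        out[threadIdx.x] = s;   // and read back a contested value
    }

    int main() {
        int *d;
        cudaMalloc(&d, 64 * sizeof(int));
        racy<<<1, 64>>>(d);     // 64 threads = 2 warps, so a real race
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }

    // nvcc -o race race.cu
    // cuda-memcheck --tool racecheck ./race
    // reports the write-after-write and read-after-write hazards on s.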
AMD can only compete through HSA and non-standard OpenCL extensions, and the only companies in HSA are ones whose hardware originates from AMD GPUs.
OpenCL suffers from fragmentation and a very fuzzy mapping to real hardware. Even OpenGL compute shaders look more interesting to me, given their vast texture format access and the fact that OpenCL's multiple command queues don't deliver in practice.
With both SPIR and HSAIL in competition with PTX, NVIDIA can rejoice.
This is one of the reasons OpenCL is still playing catch-up with CUDA.
Thanks to PTX, you can target CUDA directly from C++ and Fortran, as well as a few other languages.
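For instance, a toy CUDA C++ kernel (a minimal sketch; the file and kernel names are made up) and the nvcc invocation that emits its PTX:

    // saxpy.cu: y = a*x + y, written directly in CUDA C++
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // nvcc -ptx saxpy.cu
    // emits saxpy.ptx, which the driver JIT-compiles for whatever
    // NVIDIA GPU is present at run time.

Any front end that can emit PTX gets the same treatment; LLVM's NVPTX backend is one example.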
OpenCL only got SPIR this year.