The issue is that the available info is limited to what Nvidia chooses to expose from on-chip execution. Most of what we can observe lives in kernel-driver space, not on-chip or even in the low-level transit to the chip. Another commenter pointed out that you can get big power wins by avoiding busy-waiting on data returned from the chip, which makes total sense, but it also adds latency, and that didn't work for my near-realtime use case when I was investigating. Beyond that kind of low-hanging fruit, where you accept a little latency for better power-state management, it's hard to find low-level optimizations specific to Nvidia: the closed-source parts of the CUDA stack and the driver-to-chip transit are intentionally hidden.
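For concreteness, here's a minimal sketch of that spin-vs-block trade-off using the CUDA runtime's scheduling flags (my illustration, not from the discussion above): cudaDeviceScheduleSpin busy-waits the CPU thread for the lowest wakeup latency, while cudaDeviceScheduleBlockingSync parks it on an OS primitive, saving power at the cost of latency.

    // Sketch: how the host thread waits on the GPU is set per-context.
    // The flag must be set before the context is created, i.e. before
    // the first runtime call that touches the device.
    #include <cuda_runtime.h>

    __global__ void work(float *x) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        x[i] = x[i] * 2.0f + 1.0f;
    }

    int main() {
        // Low-latency path: CPU spins while waiting (near-realtime loops).
        // cudaSetDeviceFlags(cudaDeviceScheduleSpin);

        // Power-friendly path: CPU thread blocks until the GPU signals.
        cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

        float *d;
        cudaMalloc(&d, 1024 * sizeof(float));
        work<<<4, 256>>>(d);
        cudaDeviceSynchronize();  // how this wait behaves is governed by the flag
        cudaFree(d);
        return 0;
    }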
A while ago, I read a paper that dissected an Nvidia architecture using very carefully tuned microbenchmarks to work out things like the on-chip cache structure [0]. Unfortunately, no one has done the same for the recent architectures in serious use today, so that info is hard to apply now. Similarly, there's no eBPF VM running on the chip to summarize any of this, and Nvidia's tools aren't intended to make this kind of info easy to get, probably in part because of this paper...
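The core trick in that style of microbenchmarking is a pointer chase: one thread walks a dependent chain of loads, so each access's latency is fully serialized and measurable with the on-chip cycle counter. Sweeping the array size and stride exposes cache level sizes and line sizes. A rough sketch of the idea (the sizes and stride here are illustrative, not values from the paper):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void chase(const unsigned *chain, int iters,
                          unsigned *sink, long long *cycles) {
        unsigned p = 0;
        long long t0 = clock64();
        for (int i = 0; i < iters; ++i)
            p = chain[p];             // each load depends on the previous one
        long long t1 = clock64();
        *sink = p;                    // keep the chain from being optimized away
        *cycles = t1 - t0;
    }

    int main() {
        const int n = 1 << 16;        // sweep this (and the stride) in a real run
        const int stride = 32;        // in elements; 32 * 4B = 128B
        std::vector<unsigned> h(n);
        for (int i = 0; i < n; ++i)
            h[i] = (i + stride) % n;  // circular strided access pattern

        unsigned *d_chain, *d_sink;
        long long *d_cycles;
        cudaMalloc(&d_chain, n * sizeof(unsigned));
        cudaMalloc(&d_sink, sizeof(unsigned));
        cudaMalloc(&d_cycles, sizeof(long long));
        cudaMemcpy(d_chain, h.data(), n * sizeof(unsigned),
                   cudaMemcpyHostToDevice);

        const int iters = 1 << 14;
        chase<<<1, 1>>>(d_chain, iters, d_sink, d_cycles);  // single thread

        long long cycles;
        cudaMemcpy(&cycles, d_cycles, sizeof(long long), cudaMemcpyDeviceToHost);
        printf("avg cycles per load: %.2f\n", (double)cycles / iters);
        return 0;
    }

Plotting average cycles per load while growing n shows step jumps at each cache boundary, which is how the paper recovers structure Nvidia doesn't document.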
[0] https://arxiv.org/pdf/1804.06826