You can use OpenXLA, but it's not the default. The main use case for OpenXLA is running PyTorch on Google TPUs. OpenXLA also supports GPUs, but I'm not sure how many people use it that way. AFAIK, JAX uses OpenXLA as its backend to run on GPUs.
If you use torch.compile() in PyTorch, you use TorchInductor and OpenAI's Triton by default.
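A minimal sketch of that default path, assuming PyTorch 2.x is installed: `torch.compile` uses the `"inductor"` backend unless told otherwise, and on GPU TorchInductor generates Triton kernels (on CPU it emits C++/OpenMP code instead).

```python
import torch

# Tiny model to compile (assumes PyTorch 2.x for torch.compile).
model = torch.nn.Linear(4, 2)

# backend="inductor" is the default, written out here for clarity.
# On a CUDA device, Inductor lowers ops to Triton kernels.
compiled = torch.compile(model, backend="inductor")

x = torch.randn(8, 4)
out = compiled(x)  # first call triggers compilation, then runs
print(out.shape)
```

The compiled module is a drop-in replacement for the original: same inputs, same outputs, with compilation happening lazily on the first call.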
Thank you for saying something useful here. I was vaguely under the impression that PyTorch 2.0 had fully flipped to defaulting to OpenXLA. That seems not to be the case.
Good to hear more than a cheap snub. OpenAI Triton as the reason other GPUs work is a real non-shit answer, it seems. And interesting to hear JAX too. Thank you for being robustly useful & informative.