I'm pointing out that Google Cloud primarily hosts and bills your ML code on TPUs, and that today a full TPU v4 pod runs at roughly exascale peak (bf16) compute.
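Rough back-of-envelope for that claim, using commonly cited figures I'm assuming here (about 275 TFLOPS peak bf16 per TPU v4 chip, 4096 chips per full pod):

```python
# Assumed figures, not official numbers: ~275 TFLOPS peak bf16 per TPU v4 chip,
# 4096 chips in a full pod.
peak_per_chip_tflops = 275
chips_per_pod = 4096

pod_peak_exaflops = peak_per_chip_tflops * 1e12 * chips_per_pod / 1e18
print(f"~{pod_peak_exaflops:.2f} EFLOPS peak bf16 per pod")  # ≈ 1.13
```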
So the equivalent "scared" behavior already exists for both Intel and Nvidia with respect to Google.
Instead, $ per ML training job, or time to complete training given a fixed budget, is likely a better measure.
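Something like the sketch below is what I have in mind. The platform names, hourly rates, and wall-clock times are made up purely for illustration, not real benchmark results:

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    hourly_rate: float      # $ per hour for the whole slice/cluster (assumed)
    hours_to_train: float   # measured wall-clock hours for the target model

def cost_per_job(p: Platform) -> float:
    """Total $ to finish one training run end to end."""
    return p.hourly_rate * p.hours_to_train

def runs_within_budget(p: Platform, budget: float) -> int:
    """How many complete training runs a fixed budget buys."""
    return int(budget // cost_per_job(p))

# Hypothetical numbers -- swap in your own quotes and measured runtimes.
candidates = [
    Platform("TPU pod slice", hourly_rate=120.0, hours_to_train=30.0),
    Platform("GPU cluster",   hourly_rate=200.0, hours_to_train=22.0),
]

for p in candidates:
    print(f"{p.name}: ${cost_per_job(p):,.0f} per training job, "
          f"{runs_within_budget(p, 50_000)} runs on a $50k budget")
```

The point is that peak FLOPS drops out entirely: what you compare is the bill for finishing the job, or how many jobs you can finish for the money you have.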