A big part of the Kubernetes value proposition is “autoscaling”, in the loosest sense: for many systems, pods coming and going over time in response to events is part of normal operations.
If I still had to deploy to Kubernetes, I’d consider this an outage.
This isn't preventing pods from moving from node to node, and for a lot of workloads it wouldn't even prevent creating new pods. The issue is that if you make a new storage claim (a PVC), the underlying storage backing that claim isn't being provisioned at the moment, on one particular version of Kubernetes. Existing volumes are not affected, and neither is any other version.
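For context, the operation that's broken is dynamic provisioning: a claim like the sketch below (the name, storage class, and size are illustrative, not from the bug report) would normally prompt the provisioner to create a backing volume, and on the affected version it would instead sit in Pending.

```yaml
# Illustrative PVC; storageClassName must match a class with a
# dynamic provisioner in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```

Pods that don't mount a claim like this never touch the broken code path at all.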
Loads of pods don't even use PVCs. Of the several hundred deployments I routinely manage, only a handful use PVCs, and they aren't exactly dynamically created and destroyed. I've got many GKE clusters and none of the workloads I'm running are affected, and I imagine most existing workloads in GKE clusters aren't either. I'm doubly unaffected, because I'm also not running 1.25.