Short of having an army of testers whose job it is to continually retest the same UX sequences with every release, how is that possible? Rapid release to me implies releasing changes faster than they could possibly be manually tested. And automated tests are going to do a rotten job of detecting most UX regressions.
So you're saying that the way to maintain a stable UX...is to release changes directly to customers (or at least a subset of) to test, and if they react badly, roll it back?
It is one part of a more holistic UX/PM practice. Yes, you should have mechanisms to validate the impact of your changes on your customers. You should be having regular conversations with various subsets of your customers for qualitative feedback, both on specific changes and on the roadmap. You should be doing things like heuristic reviews and user testing. You should have UX designers with experience designing interfaces, with plans to gather feedback _before_ deployment and then validate the change _after_ deployment with further analysis. During development you also need all the regular QA-type testing as well.
But yes, if you are regularly changing your UI in order to improve your system and roll out changes, you need to validate the impact. When I was last a PM, I spent a bunch of time monitoring the rollout of changes via Pendo to understand both whether I was getting the adoption of features I expected from certain cohorts, and whether there were any shifts in user behavior (either organic, or due to intentional or unintentional changes). And we did this for "beta" groups as well, to really validate that features were working with early adopters before broader rollout. I guess you might not consider the latter "production" in that sense, but these were real customers with real workloads.
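To make the cohort monitoring above concrete, here's a minimal sketch of the underlying logic: compare a feature's adoption rate per cohort against an expected baseline and flag cohorts whose behavior shifted beyond a tolerance. All names, numbers, and thresholds are hypothetical; a tool like Pendo surfaces this through dashboards, this just illustrates the kind of check involved.

```python
# Hypothetical sketch: function names, cohort data, and thresholds are
# illustrative, not any analytics tool's actual API.

def adoption_rate(users_with_event: int, cohort_size: int) -> float:
    """Fraction of a cohort that used the feature at least once."""
    return users_with_event / cohort_size if cohort_size else 0.0

def flag_shifts(cohorts: dict, expected: float, tolerance: float = 0.10) -> dict:
    """For each cohort (users_with_event, cohort_size), report its adoption
    rate and whether it deviates from the expected rate by more than the
    tolerance -- a shift worth investigating before broader rollout."""
    report = {}
    for name, (used, size) in cohorts.items():
        rate = adoption_rate(used, size)
        report[name] = {"rate": rate, "shifted": abs(rate - expected) > tolerance}
    return report

# Made-up numbers: a small beta cohort vs. the general population.
cohorts = {"beta": (42, 50), "general": (250, 400)}
print(flag_shifts(cohorts, expected=0.60))
```

With these numbers, the beta cohort's 84% adoption would be flagged as a shift against a 60% expectation, while the general cohort's 62.5% would not; in practice you'd then dig into whether the shift was the intentional change working or an unintended side effect.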