People, depending on their expectations, will perceive your app as one of these categories:
- A: not slow
- B: slow, but still using it
- C: intolerably slow, no longer using it
Indeed, you can't measure C with a survey. But for most apps it's probably reasonable to assume a distribution of tolerance thresholds under which, if B/(A+B) (i.e., the fraction of surveyed users who answer "it's slow") is under 5%, there probably aren't many users in C.
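The claim above can be sketched as a toy simulation. Everything here is an assumption for illustration, not data: each user gets a perceived slowness, an "annoyance" threshold above which they'd answer "slow" on a survey, and a higher "quit" threshold above which they churn into the invisible C group.

```python
import random

random.seed(1)

def simulate(mean_slowness, n=50_000):
    """Toy model (all parameters are assumptions): classify n users
    into A (fine), B (slow but staying), C (churned, unsurveyable)."""
    a = b = c = 0
    for _ in range(n):
        slowness = random.gauss(mean_slowness, 0.15)
        annoy = random.uniform(0.6, 1.0)            # "it's slow" threshold
        quit_at = annoy + random.uniform(0.2, 0.5)  # churn threshold, above annoyance
        if slowness >= quit_at:
            c += 1   # C: gone, invisible to a survey
        elif slowness >= annoy:
            b += 1   # B: slow but still using
        else:
            a += 1   # A: not slow
    # survey "slow" rate among survivors, and the hidden churn rate
    return b / (a + b), c / n

for m in (0.3, 0.5, 0.7):
    survey, churn = simulate(m)
    print(f"mean slowness {m}: survey slow-rate {survey:.1%}, churned {churn:.1%}")
```

Under this (assumed) model the two quantities move together: when the survey "slow" rate is down in the low single digits, the churned-and-invisible fraction is smaller still, which is the intuition behind using B/(A+B) < 5% as a rough proxy.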
User surveys are never going to be good science. They're confounded by massive variation in response rates. For instance, I haven't filled out a user survey in many years because I have better things to do. At best they give a vague indication of whether something is a common issue.
The scientific approach to finding out whether app slowness is a problem is to build a much faster version, give it to some fraction of users in an RCT, and see whether their usage goes up. But that makes no sense for a business: if you've made the effort to develop a fast version, just ship it to everyone and move on to the next thing that might get you more users.
What are you not getting here? I was willing to believe that you were familiar with survivorship bias before and had just made a mistake, but now it's seeming as if you're not just completely unaware of it, but unwilling to even look it up. And doubling down like this comes across as incredibly obnoxious.
Why don't you write to Jake, Hilary, and Maria and see if they'll explain to you why the existence of their paper doesn't mean what you're now trying to argue?
No, it isn't. That's "Recognizing Survivorship Bias 201".