You're missing the point. The goal is to maximize a small set of metrics. Engagement, New User Experience, etc.
They make these changes and roll them out. They look at the numbers. They see that 5% of their users (those on older hardware) spend 20% less time on the site. They see that 20% of their users are spending 50% more time on the site. They file a ticket about the drop in engagement for older devices. It goes into the backlog. Next sprint/quarter rolls around, and they see a couple of options: one is to speed up the site on old devices, the other is to add a new feature that they estimate will increase engagement by another 20%. The second option seems to increase their bottom line more, so it gets funded and the old-device support stays on the backlog. Repeat cycle.
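The figures above already tell you how that ticket ends up in the backlog. A back-of-envelope calculation (assuming, as a simplification, that every user segment starts from the same average time-on-site) shows why the dashboard reads the rollout as a win:

```python
# Net engagement change from the numbers in the comment above,
# assuming each segment starts from the same average time-on-site.
# (share of users, relative change in time spent)
segments = [
    (0.05, -0.20),  # 5% of users on older hardware: 20% less time
    (0.20, +0.50),  # 20% of users: 50% more time
    (0.75,  0.00),  # everyone else: unchanged
]

net_change = sum(share * delta for share, delta in segments)
print(f"Net time-on-site change: {net_change:+.0%}")  # +9%
```

The 1% of aggregate time lost on old devices is swamped by the 10% gained elsewhere, so in aggregate the metric goes up and the regression never looks urgent.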
I can guarantee that at any sufficiently high-traffic site they don't use developer hardware as a benchmark. They see the numbers, they know it's slow for you, and they make a conscious decision that you aren't worth the opportunity cost of new features.
This, but with one minor correction: the developers usually want to fix the experience. It's the management / project owners / etc. that use the aforementioned analytics to make their judgement.
Perversely, I've also often observed that those who spend the most time judging a site's performance by its analytics are usually the ones who actually use the site the least. Or at least that has been my experience on past projects I've worked on.
> those who spend the most time judging a site's performance by its analytics are usually the ones who actually use the site the least
It's a weird part of human tribal/social dynamics. People who already generally like a thing are open-minded toward new information that presents the thing in a positive light, and generally ignore new information that presents it in a negative light. Likewise, people who already generally dislike a thing filter out the proselytizing of people who like it, but pay attention when they notice reasons to dislike it.
Basically, our brains' belief-evaluation machinery is really just a wrapper around a core "generate excuses to keep thinking what I'm thinking" algorithm.
We can exploit this: adversarial justice systems work much better than non-adversarial ones, because you've got two sides who have each paid attention to half the evidence, brought together in the same room to present it all. But if we aren't exploiting it, or aren't even aware of it, it can become a real problem.
One further (possible) correction: a "discussion" was had in the past about whether to fix this (dev wanted to fix it and the PM didn't, or vice versa), and whoever is the most politically powerful wins, regardless of metrics impact. Not all the relevant facts get reported to senior management, so sanity can't prevail.
One thing is always clear -- a big Co's interest is not always your interest.
Which is why I love what Sindre is doing -- letting us customise our experience with products that won't do it themselves. I wish there were more projects doing that. The demand is definitely there.