> The new mobile web version of Twitter is much faster and better looking than the desktop one.
It's certainly much faster. I'm using it on my netbook, which used to struggle with the desktop version.
Why can't big development teams think about accessibility as it applies to older hardware? It's clearly becoming a big problem for Twitter, to say nothing of other companies (see previous discussions about bloat).
The only thing I can think of is that the developers use the shiny new hardware and it runs okay for them. Or, if the devs want to change, the management and board run the fast hardware and it's "working for them".
You're missing the point. The goal is to maximize a small set of metrics. Engagement, New User Experience, etc.
They make these changes and roll them out. They look at the numbers. They see that 5% of their users (those on older hardware) spend 20% less time on the site, while 20% of their users spend 50% more time on the site. They file a ticket about the drop in engagement for older devices. It goes into the backlog. Next sprint/quarter rolls around and they see a couple of options: one is to speed up the site on old devices; another is to add a new feature that they estimate will increase engagement by another 20%. The second option seems to increase their bottom line more, so it gets funded and the old-device support stays on the backlog. Repeat cycle.
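To make the arithmetic concrete, here's a toy sketch (TypeScript; the `Segment` shape and variable names are invented, the percentages are the ones from above) of why the aggregate number can look great while the old-hardware segment quietly suffers:

```typescript
// Toy model of the trade-off described above; structure invented for illustration.
interface Segment {
  shareOfUsers: number;    // fraction of the total user base
  engagementDelta: number; // relative change in time-on-site
}

const afterRedesign: Segment[] = [
  { shareOfUsers: 0.05, engagementDelta: -0.2 }, // older hardware, now slower
  { shareOfUsers: 0.2, engagementDelta: 0.5 },   // users who benefited
];

// Net change in total time-on-site, weighted by segment size.
const netChange = afterRedesign.reduce(
  (sum, s) => sum + s.shareOfUsers * s.engagementDelta,
  0,
);

console.log(`Net engagement change: ${(netChange * 100).toFixed(1)}%`); // +9.0%
```

A headline +9% is what the dashboard shows; the -20% for the small segment only surfaces if someone goes looking for it.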
I can guarantee that at any sufficiently high-traffic site they don't use developer hardware as a benchmark. They see the numbers, they know it's slow for you, and they make a conscious decision that you aren't worth the opportunity cost of new features.
This, but with one minor correction: the developers usually want to fix the experience. It's the management/project owners/etc. that use the aforementioned analytics to make their judgement.
Perversely, I've also often observed that those who spend the most time judging a site's performance on its analytics are usually the ones who actually use the site the least. Or at least that's what I've seen on past projects I've worked on.
> those who spend the most time judging a site's performance on its analytics are usually the ones who actually use the site the least
It's a weird part of human tribal/social dynamics. People who already generally like a thing are open-minded about new information that presents the thing in a positive light, and just generally ignore new information that presents it in a negative light. Likewise, people who already generally dislike a thing filter out the proselytizing of those who like it, but pay attention when they notice reasons to dislike it.
Basically, our brains' belief-evaluation machinery is really just a wrapper around a core "generate excuses to keep thinking what I'm thinking" algorithm.
We can exploit this—adversarial justice systems work much better than non-adversarial ones, because you've got two sides who each have paid attention to half the evidence, brought together in the same room to present it all. But if we aren't exploiting it, aren't even aware of it, it can become a real problem.
One further (possible) correction: at some point a "discussion" was had about whether to fix this; the dev wanted to fix it and the PM didn't, or vice versa. Whoever is the most politically powerful wins, regardless of the metrics impact (not all the relevant facts get reported to senior management, where sanity might otherwise prevail).
One thing is always clear -- a big Co's interest is not always your interest.
Which is why I love what Sindre is doing -- letting us customise our experience with products that won't do it themselves. I wish there were more projects doing that. Demand is definitely there.
Runtime speed has not been a priority for most apps/websites in a long time. Even relatively new phones struggle with things like modals and swipe effects on mobile websites. So much effort is spent on making things feel "native". Native, in my opinion, should mean fast before anything else.
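For what it's worth, the fast version of a swipe effect isn't exotic. A minimal sketch, assuming a `.swipe-panel` element (the selector is made up): passive touch listeners so scrolling is never blocked, and a compositor-only transform batched through requestAnimationFrame:

```typescript
// Swipe handler that stays fast on weak hardware: passive listeners plus
// a transform (no layout/paint work) applied once per animation frame.
const panel = document.querySelector<HTMLElement>('.swipe-panel')!;

let startX = 0;
let currentX = 0;
let frameRequested = false;

panel.addEventListener('touchstart', (e) => {
  startX = e.touches[0].clientX;
}, { passive: true });

panel.addEventListener('touchmove', (e) => {
  currentX = e.touches[0].clientX;
  if (!frameRequested) {
    frameRequested = true;
    requestAnimationFrame(() => {
      // transform is compositor-friendly; top/left would force layout
      panel.style.transform = `translateX(${currentX - startX}px)`;
      frameRequested = false;
    });
  }
}, { passive: true });
```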
I used to work at a place that kept a ~5-year-old PC on a <1 Mb internet connection to test their software. If it ran fine on that, it would run fine on most of their users' hardware.
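You can approximate that test bench in CI these days. A rough sketch with Puppeteer (emulateCPUThrottling and emulateNetworkConditions are real Puppeteer APIs; the throttling factor and URL are placeholders):

```typescript
import puppeteer, { PredefinedNetworkConditions } from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Roughly stand in for a ~5-year-old PC on a slow line.
  await page.emulateCPUThrottling(4); // CPU runs 4x slower
  await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'load' });
  console.log(`Page load under throttling: ${Date.now() - start} ms`);

  await browser.close();
})();
```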
For the Android app development we do for clients, I have a $10 TracFone I got from Walmart.com. That was the full phone price, without subsidy or any extra cost.
Android can present a bunch of hurdles, but we apply the same mentality to our app designs.
I hear a lot of people in tech idolizing "good design". But is a design really good if it doesn't work everywhere?
Performance should be a key factor in judging any design: both perceived user performance and actual, measurable software speed.
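On the "actual measurable" side, the standard PerformanceObserver API can report real-user paint timings. A minimal sketch (the largest-contentful-paint entry type is currently Chromium-only):

```typescript
// Capture Largest Contentful Paint from real users; `buffered: true`
// replays entries that fired before the observer was registered.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // startTime is ms since navigation start; report it to your analytics
    console.log(`LCP: ${entry.startTime.toFixed(0)} ms`);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```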
> The only thing I can think of is that the developers use the shiny new hardware and it runs okay for them.
I think that was one of the problems with Google+ in the early days. They launched a social network that assumed a huge screen resolution (because that's what they used), so the interface was too big and clunky for a lot of people.
There's no real way to measure hardware as it pertains to power and speed. The best we have is browser and OS detection. If there were a way to determine the hardware, that'd be pretty useful.
navigator.hardwareConcurrency provides the number of logical CPU cores, and GPU detection is possible with WebGL extensions. That said, this is clearly incomplete compared to the data that would be ideal.
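A sketch of pulling both signals together (WEBGL_debug_renderer_info isn't available in every browser, hence the null checks; the function name is mine):

```typescript
function probeHardware(): { cores: number; gpu: string | null } {
  // Logical core count; fall back to 1 where the API is missing.
  const cores = navigator.hardwareConcurrency ?? 1;

  // GPU renderer string via the WEBGL_debug_renderer_info extension.
  let gpu: string | null = null;
  const gl = document.createElement('canvas').getContext('webgl');
  const ext = gl?.getExtension('WEBGL_debug_renderer_info');
  if (gl && ext) {
    gpu = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) as string;
  }
  return { cores, gpu };
}

console.log(probeHardware()); // e.g. { cores: 4, gpu: "ANGLE (Intel(R) HD Graphics ...)" }
```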