How often do superior alternatives fail to catch on? (lemire.me)
111 points by deafcalculus on Nov 25, 2017 | 97 comments



Multiple dimensions.

The common theme when people get confused about "superior" products not winning boils down to ignoring the multiple dimensions of quality. A product's overall "superiority" is a single score that compresses multiple scores for the various attributes of the product. If one is ignorant of (or discounts) the other dimensions, he will be perplexed that the supposedly "inferior" solution won.

E.g. I never understood why bicycles didn't beat out cars. Bicycles are obviously superior because:

+ narrow profile can squeeze through tight alleys or even heavily treed forests that cars can't reach

+ requires no fossil fuel that emits pollution

+ costs less than 1/100th the price of a car

+ easily repaired by homeowners in the garage because there are no computers

+ etc, etc

That fixation on those attributes causes the confused person to totally miss the other positive attributes of the car:

+ travel faster than 25 mph with minimum physical exertion

+ typical size car can carry 5 adults which is ~1000 pounds of weight

+ carry entire week's worth of groceries (~10 bags)

+ occupants don't get wet when it rains

If one doesn't understand all the dimensions of quality and weigh them in an objective manner that's detached from personal preferences, he'll always be confused why "superior" USENET lost to "inferior" Reddit, or why "superior" Lisp lost the popularity contest to "inferior" C++/Java/Python/Go/etc, or why Mac OS 9 lost to Windows NT.
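As a sketch, the "single score compressing multiple dimensions" idea is just a weighted sum. Here's a minimal Python illustration using the bicycle/car example; all the attribute scores and weights below are made up, and the point is only that the winner flips depending on which dimensions you weigh:

```python
# Hypothetical attribute scores (0-10) for a bicycle and a car.
# Both the scores and the weights below are invented for illustration.
scores = {
    "bicycle": {"cost": 10, "maneuverability": 9, "speed": 2, "capacity": 1},
    "car":     {"cost": 2,  "maneuverability": 3, "speed": 9, "capacity": 9},
}

def overall(product, weights):
    """Compress multiple quality dimensions into one weighted score."""
    return sum(weights[d] * scores[product][d] for d in weights)

# A buyer who only values cost and maneuverability sees the bicycle win...
narrow = {"cost": 1.0, "maneuverability": 1.0, "speed": 0.0, "capacity": 0.0}
# ...while a buyer who weighs all four dimensions equally sees the car win.
broad = {"cost": 0.25, "maneuverability": 0.25, "speed": 0.25, "capacity": 0.25}

for name, weights in [("narrow", narrow), ("broad", broad)]:
    winner = max(scores, key=lambda p: overall(p, weights))
    print(name, "->", winner)  # narrow -> bicycle, broad -> car
```

The "confused person" in the comment above is effectively using the `narrow` weights; the market is using something closer to `broad`.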

Likewise, there were multidimensional factors to Betamax vs VHS and "picture quality" was only one of them.


I agree with everything you said, but you can also go too far in that direction of analysis. If you let the empirical gain in traction sneak in as a factor in your analysis, you effectively have a tautology, and you lose all predictive power, similar to the misuse of the phrase “survival of the fittest,” where you conclude that variant A was “clearly” more “fit” than variant B because A survived and B didn’t.

It’s still important to recognize that a variant might “win” because of factors completely unrelated to its “fitness,” like an organization using its market power in an unrelated sector to promote its variant, or even just good old fashioned dumb luck.


>, but you can also go too far in that direction of analysis. If you let the empirical gain in traction sneak in as a factor in your analysis, you effectively have a tautology, and you lose all predictive power

That's a good point. I'd have to give more thought to how much predictive power we're supposed to ascribe to multiple dimensions.

Maybe multiple dimensions is a better thinking tool for experimentation than prediction. This frees one from being paralyzed by the status quo. For example, a 1990s entrepreneur sees that the internet is becoming popular and maybe he thinks he can sell books on it. But then he goes to Barnes & Noble and sees that people are pulling real books off the shelves, taking them to the lounge chairs, and then leisurely sitting down and flipping through the pages before buying. If the dimension of "in-store experience" constrains the worldview, it will not occur to the entrepreneur that other dimensions like near-infinite selection in a "virtual bookshelf" and convenient delivery to the home will be prioritized over the store's lounge chairs. One doesn't know if those other dimensions for "book buying" matter more to buyers until he tries it.

Another well-known example of constrained single-dimensional thinking is analyzing the Blackberry vs the iPhone in 2007. Both the CEO of Research In Motion and Steve Ballmer of Microsoft were focused on the physical tactile keyboard. They saw that the iPhone obviously lacked real QWERTY buttons, so they both predicted it would flop in the marketplace. Steve Jobs' Apple team ignored the dimension that RIM & Microsoft used for criticism. Instead, they looked at the dimension of "screen size" and made it larger. They also bet on the dimension of "capacitive touch screen", which enables finger gestures. These were the dimensions that a billion smartphone buyers ultimately preferred. The Blackberry was cheaper, and buyers still preferred the iPhone's touchscreen.


I was going to make the same comment about the tautology in pure survival, although I do acknowledge your parent comment’s point that there could be a more accurate predictive metric. We don’t want to deny that there is an element of “irrationality” at play, just because there is a winner.


So how do we decide if qwerty or dvorak is fitter?


Well, you try to design a test of typing speed and fatigue, but it's very challenging because all your subjects are probably already familiar with qwerty keyboards even if they don't touch-type.

And then you conclude abstract fitness is irrelevant anyway because everyone uses qwerty and there are:

- Advantages in everyone using more or less the same thing

- No way you're going to make everyone incur huge costs and switch


> So how do we decide if qwerty or dvorak is fitter?

Ideally, stay out of it and let others decide for themselves.

We don't need to decide that bikes are better than cars. We ought to just support both (paved roads, traffic laws, places to keep your vehicle while you shop, etc.).

We don't need to decide that SSDs are better than disc drives. We just need to support the latest SATA standard.

So, for qwerty vs dvorak, we just need to support appropriate USB standards. And at the software level, we need to let the user customize their keys from time to time (remap hjkl in vim, remap wasd in video games, etc.).


So do we just invest the same amount of public funds into automobile infrastructure and bicycle infrastructure? That seems overly simplistic.


Also impossible, because they're mostly built within the same context.

It's pretty rare to get dedicated cyclist lanes which were created separately from the rest of the road.

They're generally created simultaneously.


There is still always some balance of priorities, even if it’s not made explicit in official budgets. That balance needs to be chosen somehow.


I realize now that my first response only addresses how an individual might ideally decide whether to switch to Dvorak. Deciding which layout is “fitter” across the globe (or whichever language regions currently use QWERTY and can feasibly use Dvorak) is a much more difficult cost/benefit analysis in practice. To calculate cost, you would need to look at the costs of mass manufacturing new keyboards and/or patching and distributing the software that drives keyboard input. To calculate benefits, you would need to do large scale studies on the speed/productivity benefits of Dvorak over QWERTY, both for first-time learners and switchers. Heck, you’d even need to estimate the political costs (for governments, large corporations, standards bodies, etc.) of pushing the change through, as well as the unavoidable costs of having both layouts exist in the wild during the transition period.

This is so daunting that it feels nearly impossible, and indeed, I suspect any rigorous cost/benefit analysis would show that the benefits of any new layout (even the technically best layout) are not worth the costs of switching away from any single ubiquitous standard (even the technically worst layout imaginable), at least not in any reasonable time frame.

My guess is that the best approach is to maintain one ubiquitous standard, to change it very gradually, and to gradually reduce the costs of switching, e.g. by implementing easy layout-switching software in major operating systems, or even bolder approaches like supporting virtualization/containerization of personal software settings and development environments in major operating systems.

What if I could just sit down at any modern networked PC, type in a URL and some authentication, and immediately be running my own exact customized computing/development environment, complete with my keyboard layout, text editor, development dependencies, etc.? That would be awesome. As long as the hardware keyboard layout is roughly compatible with my software settings (and I don’t rely on printed labels on the physical keyboard), most of the costs of choosing a keyboard layout (other than the fixed costs of learning it) go away.


Ideally, by recording (as best you can, and estimating after that) the cost per unit time of using a non-standard keyboard layout (including the cost of learning it and the cost of occasionally needing to use computers that don’t use it) and the benefit per unit time of using that layout.

Either the benefit per unit time eventually surpasses the cost per unit time (since some costs are fixed, namely the initial learning), or it doesn’t. Only switch if benefit surpasses cost in a time window you’re comfortable with.
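That break-even logic reduces to a small cumulative comparison. Here's a minimal Python sketch; every number in it is invented for illustration, not measured:

```python
import math

# All numbers below are hypothetical: learning Dvorak costs ~50 hours
# up front, occasionally needing a QWERTY-only machine costs ~0.02
# hours/day, and the claimed speed benefit saves ~0.05 hours/day.
LEARNING_COST_HOURS = 50.0
ONGOING_COST_PER_DAY = 0.02
BENEFIT_PER_DAY = 0.05

def days_to_break_even():
    """First day on which cumulative benefit exceeds cumulative cost
    (the fixed learning cost plus ongoing daily costs), or None if
    the net daily benefit is non-positive and you never break even."""
    net_per_day = BENEFIT_PER_DAY - ONGOING_COST_PER_DAY
    if net_per_day <= 0:
        return None
    return math.ceil(LEARNING_COST_HOURS / net_per_day)

print(days_to_break_even())  # 1667 days -- over four and a half years
```

With these made-up numbers the payback window is long, which matches the intuition above: whether switching is rational depends almost entirely on the time horizon you're comfortable with.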


I've often wondered why alphabetical order isn't the default on phones, given that most of the population that uses phones never learned to type.

It really bothers me actually.


People know the alphabet as a one-dimensional object. Unless your keyboard has 26 keys all in order, the multiple rows don't correspond to an existing mental model. At that point, you can either choose qwerty, which pleases anyone who already knows it, or you can make a sort-of alphabetical layout, which pleases nobody.

I'd be curious about your statement that most people who use phones didn't learn to type. Is that for a particular age group? Alternatively, for the developing world, where phones are more common than computers? I'm having difficulty seeing the justification for the statement.


If you know the alphabet, you can easily guess where the keys will be (on an alphabetically ordered keyboard).

Do you believe that most people who use phones know how to type?


If the keys are laid out in a 1x26 keyboard, yes. If the keys are laid out in a more usable way, then no. If I can see the key 'j', I cannot guess whether 'm' will be to the right or left of it, without knowing the length of the row, and considering it at all times.

I would describe today's society as one in which typing is necessary for basic tasks, and is universal, so I don't see why smartphone users would be different from the norm in terms of typing ability. Are you using "typing" to refer only to "touch typing"?


That is a good point, although one could easily point out that “alphabetical order” is also a completely arbitrary convention, and we could just as easily argue whether we ought to change how we teach that order. Perhaps a different ordering of the alphabet, like sorting by usage frequency or grouping the vowels and consonants together, would help everyone learn to read and write. The same type of argument could apply to nearly anything. For example, in the English language, perhaps we should standardize spelling. This isn’t a novel suggestion. Mark Twain famously and perhaps apocryphally made such a suggestion. Why not just teach everyone in the world Esperanto or Lojban?


But my argument didn't involve teaching anything to anyone. It was simply accepting the fact that most people are more familiar with alphabetical order.

What benefit is there to having everyone hunt and peck on a phone to keep with a convention that only really makes sense to typists?


The hunt-and-peckers will still be hunting and pecking on an ABC keyboard, at least the typists don't have to. Also, the first adopters of smartphones probably did have overwhelming keyboard experience, so I can totally see how we got here. Fortunately in phone land the default is just that and easily changed by anyone who thinks another layout would work better.


The one that catches on is fitter, and the fitter one catches on. The circular logic is complete.

Surely the game theory literature has addressed this problem?


I saw this recently when a small but vocal portion of early federated social network users complained that the "inferior" Mastodon was gaining so much traction. They only saw it in terms of technical features and seniority, oblivious to its main draw: a welcoming user and development community for marginalized people.

Most of the early users and developers were (and continue to be) LGBTQ+ people, and the project leader (Eugen) was 100% welcoming. In a world where we don't get much that's for us, it's nice to have a thing where we're well-represented.

When I tried the previous attempts at federated social networks, they seemed to match the general demographics of tech. The predominantly white, male, straight, cisgender population wasn't exactly unwelcoming, but I didn't feel comfortable talking about some topics. It wasn't any better than Twitter in that sense.

And then we got content warnings and per-toot privacy levels. The possibly technically superior predecessors didn't stand a chance.


A perfect example is how the OP goes so far as to say bat lungs are better (because they are more efficient, having more surface area per volume), but they are also seemingly more fragile, suffering barotrauma from extreme changes in pressure: https://www.newscientist.com/article/dn14593-wind-turbines-m....

> Baerwald and her colleagues believe that birds do not suffer the same fate as bats – the majority of birds are killed by direct contact with the blades – because their lungs are more rigid than those of bats and therefore more resistant to sudden changes in pressure.


It's true that in most cases if one thing beats another, you can find some dimension on which the winner was superior. But it isn't necessarily true in all cases. If it is the case that sometimes things win only due to entrenchment or random network effects, then there may be some cases where the winner is strictly worse than the loser, though these cases will be rare. If you found one, though, it would be the strongest possible evidence that sometimes markets/evolution/whatever don't pick the best thing. The OP is asking for those examples.


There's a much better framework for understanding this stuff in the concept of the Whole Product.

The Whole Product is not just the core product. You have to add on all the things like reliability, service and support (the expected product), its expansion capabilities (the augmented product), and its potential for future development (the potential product) to get "the whole product".

If you just look at the core product, you are very likely to go wrong, because having a technologically superior core product does not mean you have a better Whole Product. (See cassette tape vs DAT, Windows vs OS/2 and many more.)

The Whole Product concept was popularized by Geoffrey Moore in Crossing the Chasm and Inside the Tornado, which were seminal books at places like Netscape. Before that, the key idea was TAM, the Technology Adoption Model, pioneered by Professor Everett Rogers in Diffusion of Innovations in 1962.


>The Whole Product concept was popularized by Geoffrey Moore in Crossing the Chasm

I see the value in Geoffrey Moore's framework. There are similar analysis frameworks. One is MCDA (multi-criteria decision analysis)[1] and another is SWOT (Strengths, Weaknesses, Opportunities, Threats)[2]. There must be a dozen others.

But I would still generalize them down to "multiple dimensions" because one can use it for non-commercial products distributed for free, such as comparing GCC vs Clang/LLVM -- or to compare entire categories of products, such as the multiple dimensions of "live theater" vs "movies".

Even without GM's book and his framework, if we see someone say that "GCC is superior to Clang because it targets more CPUs than Clang and the resultant binaries of LAME MP3 and FFmpeg compiled by GCC have better performance", it means the commenter is leaving out other dimensions to support his conclusion of GCC superiority. With those dimensions omitted, the observation that Clang/LLVM is winning over GCC in the "marketplace of ideas" seems totally illogical. But once one adds in additional dimensions, such as Clang not forcing a GNU license and/or Clang exposing the AST parse tree, its adoption makes more sense.

Same for categories like "theater vs movies". When movies first came out, some experts believed that films would be a fad because "audiences want to see _real_ actors in person and not some _artificial_ projection on a screen". The experts were focused on only one dimension: the appeal of the audience being in the same physical space as the actors. They ignored all the other positive dimensions that films allow, such as closeups on actors' faces. (E.g. a live stage play's script can't rely on very subtle facial responses such as an eyebrow raise or a tiny smirk of the lips, because anybody beyond the 3rd row won't even see it. Stage dialogue with whispers can't use real whispers; they have to be very loud and very fake stage whispers. Films also allow cutting across time and location for storytelling. But live Broadway theater is also still around -- because it has quality dimensions some people prefer.)

A lot of arguments on the forums about X being superior to Y are really people favoring different dimensions and then talking past each other. Since people can't agree on all the dimensions/criteria to consider and how to weigh them, the debates remain unresolved.

[1] https://en.wikipedia.org/wiki/Multiple-criteria_decision_ana...

[2] https://en.wikipedia.org/wiki/SWOT_analysis
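A minimal MCDA-style sketch in Python, using the GCC vs Clang example. Every raw number and weight here is invented purely for illustration; the point is that after normalizing the criteria, two commenters with different weights reach opposite conclusions about which compiler is "superior":

```python
# Invented raw measurements for two compilers on three criteria.
# Higher is better after normalization; all numbers are illustrative only.
raw = {
    "gcc":   {"cpu_targets": 60, "binary_speed": 1.05, "tooling_api": 2},
    "clang": {"cpu_targets": 40, "binary_speed": 1.00, "tooling_api": 9},
}

def normalize(raw):
    """Scale each criterion to [0, 1] by dividing by its maximum value
    (a simple MCDA normalization step so criteria are comparable)."""
    criteria = next(iter(raw.values())).keys()
    maxima = {c: max(r[c] for r in raw.values()) for c in criteria}
    return {name: {c: r[c] / maxima[c] for c in criteria}
            for name, r in raw.items()}

def rank(weights):
    """Return the alternative with the highest weighted sum of
    normalized criterion scores."""
    norm = normalize(raw)
    return max(norm, key=lambda n: sum(weights[c] * norm[n][c]
                                       for c in weights))

# A commenter who only weighs targets and speed concludes GCC is "superior"...
assert rank({"cpu_targets": 0.5, "binary_speed": 0.5, "tooling_api": 0.0}) == "gcc"
# ...while one who mostly values the tooling API concludes Clang is.
assert rank({"cpu_targets": 0.2, "binary_speed": 0.2, "tooling_api": 0.6}) == "clang"
```

Both commenters are applying the same weighted-sum procedure correctly; they just disagree on the weights, which is exactly the "talking past each other" described above.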


Every factor can be captured, but can we agree on what isn't superiority? E.g., buying an iPad (vs a Samsung tablet or whatever) to be the same as everyone else is not a reflection of the virtues of the iPad. I would say the (simpler) winning product in https://news.ycombinator.com/item?id=15776047 has true superiority because its merits can be perceived by anyone in a vacuum.


This is a terrifically bad example to choose given the trillions of dollars invested in building roads and general infrastructure that make cars viable in the first place.

It is pretty clear now that cars are decidedly not the best solution to transport. It just so happened that they scaled the best, and now everything is designed around them.

It's just macro vs micro and hardly has anything to do with the product.


Actually, if you research the story of the bicycle, it was a failure for over a century before people started to realise how it could be useful without killing themselves in the process.


Mostly marketing, more often than not. Money and greed get what they want at the expense of others.


Well put, as far as your point about the positive aspects of multidimensional valuation goes. There is another aspect of these multiple dimensions that has little to do with positive product quality as most people perceive it, unfortunately. Capital (fiscal or social) spent on advertising, free samples, dumping, bribery of reviewers, lobbying about legislation, questionable lawsuits about patents and copyrights (e.g. SCO's zombie FUD lawsuit), hiring competitors' employees, establishing strategic partnerships, loss leaders, outright theft, and so on can distort outcomes so that the lower-quality product wins in the marketplace -- and thus the rich get richer.

I have seen that in the computing field with, say, how Java won over Smalltalk (by engaging IBM's marketing muscle), or how Windows won over CP/M and QNX (again, IBM's marketing muscle and more), or how EFI won over Open Firmware (Intel's marketing muscle).

Such "successes" can then lead to establishing monopolies with social networking effects which become self sustaining -- even if the origins may be questionable: http://www.thecrimson.com/article/2004/9/13/lawsuit-threaten... "According to the complaint in the United States District Court for the District of Massachusetts, ConnectU LLC, formed by Divya K. Narendra ’04, Cameron S. H. Winklevoss ’04 and Tyler O. H. Winklevoss ’04, is seeking damages for Zuckerberg’s alleged theft of their idea — then called Harvard Connection — and his subsequent deception."

See also for Bill Gates' success based on other's source code: https://patch.com/california/losaltos/microsoft-co-founder-p... "That phase of Allen's life involved taking the bus–sports coat, tie, leather briefcase and all—down to the offices of local computer gurus. "I would boost Bill into dumpsters and we'd get these coffee-stained texts (of computer code)" from behind the offices, grinned Allen."

And: "MS-DOS paternity suit settled: Computer pioneer Kildall vindicated, from beyond the grave" http://www.theregister.co.uk/2007/07/30/msdos_paternity_suit...

And it helps to start rich, of course: http://philip.greenspun.com/bg/ "William Henry Gates III made his best decision on October 28, 1955, the night he was born. He chose J.W. Maxwell as his great-grandfather. Maxwell founded Seattle's National City Bank in 1906. His son, James Willard Maxwell was also a banker and established a million-dollar trust fund for William (Bill) Henry Gates III. In some of the later lessons, you will be encouraged to take entrepreneurial risks. You may find it comforting to remember that at any time you can fall back on a trust fund worth many millions of 1998 dollars."

Contrast all that with the hypocritical: https://en.wikipedia.org/wiki/Open_Letter_to_Hobbyists "As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid? Is this fair? One thing you don't do by stealing software is get back at MITS for some problem you may have had. MITS doesn't make money selling software. The royalty paid to us, the manual, the tape and the overhead make it a break-even operation. One thing you do do is prevent good software from being written. Who can afford to do professional work for nothing? What hobbyist can put 3-man years into programming, finding all bugs, documenting his product and distribute for free?"

So says the millionaire dumpster diver who could afford to write free software his whole life. Hypocritical to anyone who knows more of the story, but brilliant marketing! Including the rhetorical deception about how much time free software developers might have (including ones at universities or government labs or students like, then, Linus Torvalds).

The extension of copyright duration and scope by Disney and others uses the courts to suppress possibly better free alternatives to paid media. https://en.wikipedia.org/wiki/Copyright_Term_Extension_Act http://www.bbc.co.uk/newsbeat/article/35811012/why-star-trek...

Even for cars, the purchase and destruction of light rail across the country by companies who stood to benefit from selling more cars was a factor (but not the only one) in the car's ascendance. https://en.wikipedia.org/wiki/General_Motors_streetcar_consp...

Leaded gas was known from the start to be harmful, yet money talked on that too including to misleadingly point the finger at lead paint instead of leaded gas for so much criminal behavior -- leading to a century of suffering and all sorts of external costs to society. http://www.motherjones.com/environment/2016/02/lead-exposure...

Further, the infrastructure for bicycling in the USA was never developed in a way like, say, the Netherlands -- and that is a political choice affecting the desirability of product use.

Marijuana and hemp (easily grown almost anywhere) became criminalized in part because of the profits to be made selling the competing alternatives (tree-based paper and oil-based pharmaceuticals), and remained criminalized because of campaign donations tied to profits from maintaining prisons (e.g. why NY's clearly harmful Rockefeller drug laws were supported for so long by so many legislators with prisons in their districts).

Heart procedures like stents are, in most cases, basically a scam at $50K+ a pop compared to dietary changes (where nutritional advice and support offer little profit to hospitals and doctors): https://www.nytimes.com/2017/11/02/health/heart-disease-sten... https://www.drfuhrman.com/learn/library/articles/53/for-stab...

The entire medical profession was distorted in many ways by this application of big money a century ago to disrupt a focus on lifestyle and nutrition and shift the focus to drugs prescriptions and procedures: https://en.wikipedia.org/wiki/Flexner_Report

There are quite a few financially successful drugs and procedures which are based on questionable science and ethics -- but tremendously successful marketing operations.

The same goes for food.

The truth is increasingly coming out about refined sugar as poison: https://www.ucsf.edu/news/2017/11/409116/sugar-industry-supp... "If Project 259's findings had been disclosed, sucrose would likely have been scrutinized as a potential carcinogen, the authors said."

But rather than make such findings public, the Sugar industry paid money to shift the blame to fat: https://www.nytimes.com/2016/09/13/well/eat/how-the-sugar-in... "The sugar industry paid scientists in the 1960s to play down the link between sugar and heart disease and promote saturated fat as the culprit instead, newly released historical documents show. The internal sugar industry documents, recently discovered by a researcher at the University of California, San Francisco, and published Monday in JAMA Internal Medicine, suggest that five decades of research into the role of nutrition and heart disease, including many of today’s dietary recommendations, may have been largely shaped by the sugar industry. “They were able to derail the discussion about sugar for decades,” said Stanton Glantz, a professor of medicine at U.C.S.F. and an author of the JAMA Internal Medicine paper."

Or in general: http://www.timescolonist.com/opinion/columnists/monique-keir... "One popular television journalist quipped that the four food groups should be renamed “the four lobby groups.”"

The food lobbyists (especially the dairy industry) were very good at getting propaganda into schools by supplying free "educational" literature.

The schooling industry itself also delivers a monopolistic product with many bad qualities: "The Underground History of American Education: A School Teacher's Intimate Investigation Into the Problem of Modern Schooling" https://archive.org/details/TheUndergroundHistoryOfAmericanE...

An even deeper question regarding foods, games, online, and other products and services is designing them to have an addictive nature as part of how companies get people hooked on stuff that is harmful for them. Examples: https://www.naturalnews.com/021949_tobacco_smoking_health.ht... https://www.npr.org/sections/monkeysee/2012/11/27/165989232/... https://www.psychologytoday.com/blog/ethics-everyone/201202/... https://www.polygon.com/2014/7/28/5930685/love-child-intervi... https://en.wikipedia.org/wiki/Supernormal_Stimuli

Unregulated or misregulated businesses (including non-profits and government-run businesses) will privatize gains and socialize costs and risks whenever possible (including costs and risks to consumers and societies of using inferior products). So, while you are 100% right to look at multiple dimensions in saying why some products succeed, some of those dimensions can be pretty ugly.


There is a drug used to treat a form of eye disease that causes blindness. The product, eylea, has dominated the market despite (until recently) having no evidence that it improves vision vs the competition, providing no significant improvement on other relevant clinical endpoints, having no major safety benefit, and being priced at a comparable or higher level than all competitors.

A development stage product had shown better potential for vision improvement without major safety concerns, but when I interviewed physicians to ask whether they'd still prescribe eylea if this "better" drug was approved, they unanimously, without hesitation, said they'd prescribe eylea.

Why would doctors treat eye disease with a product that does a worse job of treating eye disease? Why would patients accept a treatment that does not optimally restore their vision? There was nothing in the clinical literature to suggest this made sense. This must be a case of a large, established pharma company playing dirty, maybe bribing doctors or brainwashing patients with TV ads.

The reality is that eylea required fewer doses than the competing products. For this drug, dosage means injection in the eye with a big needle. Further, physicians don't get paid more for doing an additional dose, so for them it's lost revenue. Turns out the incremental vision improvement does not offset the benefits of less frequent dosing.

Getting this insight is not a feat of data analysis or scientific brilliance, but simply talking to customers. And getting this right, in this case, means winning a $4-5 billion market.


As people have said already, you need to define what criteria you are using when you say something is "superior". On HN, there seems to be a divide between engineers who measure a product by its engineering quality vs. business folk who define product quality by its market success. They are not always the same thing. The reason "inferior" products succeed is because they may have some killer features desired by the market, or better tech support, or better sales and marketing -- in other words, the business quality beats out technical problems. Ideally, you'd have high quality in both areas, but that isn't reality in most organizations.


But you’re dangerously close there to having no predictive power. Business folk may rightly judge a product by its market success, but some business folk need to decide where to invest resources in building new products. Hindsight is 20/20, and of course products that are widely viewed as having poor engineering quality can succeed and even dominate a market, for a variety of reasons.

But that doesn’t provide much evidence to support the claim that, when developing a new product, engineering quality doesn’t matter. Sure, we can take the fatalistic view that new product development is an unguided process like biological evolution, where we will only know which random mutations were successful when we observe their prominence many generations down the line. But clearly most people act as if they believe they can do better than uniform random (i.e. investors generally don’t invest the exact same amount in every new company or product they can find).


You typically don't try to "predict"; you just learn about customers' wants and needs and solve for those. Why try to guess what attributes are important in a product when you can just ask and test?


As far as I can tell, product development is still guided, at the bare minimum, by the ability of humans to predict other humans’ preferences based on an assumption of similarities between humans’ minds. I’m not aware of any product development that is intentionally truly random, other than perhaps some details like the color of a button.


Are you saying that talking to your customers, and solving for their needs is "random"?


If quality is defined by success, the question whether superior alternatives ever lose out becomes trivial. They can't, by definition.

And now you've made it harder to discuss market failures and similar ideas, because you've eliminated some words that could intuitively be used to describe a useful distinction.


The problem is that people want market success to be a consequence of engineering quality. But a lot of people are interested in a lot more than engineering quality. So the products with the best code don’t necessarily make the most money.

You could ask “why do some attractive people marry ugly people?”. It’s because your reasons for choosing something are not necessarily the same as somebody else’s.


> The reason "inferior" products succeed is because they may have some killer features desired by the market, or better tech support, or better sales and marketing

Or they have a monopoly of some sort: a patent, only product carried by vendors, only service available in the area, only product with a critical mass of users, owns your email domain, owns your data, etc.

That's not technical or business, at least not in the popular-in-the-market sense.


People usually do confuse popularity with merit. They are, in fact, two completely different, and often fairly unrelated, characteristics. The keyboard that happens to have the most market share is the most popular one. It is not the superior one.

Start with the example of the top ten most popular pieces of music at the time. Do we generally conclude that these are the most meritorious musical compositions of the moment? Or are they popular because they happen to be catchy, or just raunchy enough to play on our base desires but not enough to be censored, or because the distributor has a deal to continuously play the song on the radio?

QWERTY is still popular mainly just because of momentum. At the time it came out many decades ago, it had some nice utility. But clearly the core idea of slowing down typing became outdated long ago. But when it comes down to it, learning a new keyboard is not easy.

To switch to a new way of doing things does not depend on having a better way. I believe it depends on some type of social networking effect and chance. For example, if several celebrities suddenly decided to create a Twitter campaign about the evils of QWERTY and the incredible qualities of DVORAK, and DVORAK then for some reason came standard in a hot new electronic device that happened to be trendy among teenagers, perhaps we would see a significant market switch.

This is one thing that I have noticed about some posts on HN. People will come on and say that their startup failed, and then come up with a list of rationalizations blaming various material qualities of their product or service. In most cases I believe this is quite an incorrect interpretation of events. They usually had a perfectly good or even superior product, which simply did not catch on. So I think that things like marketing are quite key to becoming popular, but again, being popular or not doesn't prove or disprove the quality of the product.


The quantifiable improvement and cost benefit is also important. Dvorak shows on average a 1-2% benefit to typing speed, which is basically not worth the time and effort lost to learning a new layout.
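To put rough numbers on that trade-off (the 1-2% figure is from above; the learning time and daily typing hours below are made-up assumptions, not measurements):

```python
# Back-of-envelope break-even for switching keyboard layouts.
# All inputs are guesses for illustration only.
speed_gain = 0.015         # assume ~1.5% faster typing on the new layout
learning_hours = 100.0     # assumed practice needed to regain full speed
typing_hours_per_day = 2.0 # assumed daily typing time

hours_saved_per_day = typing_hours_per_day * speed_gain
break_even_days = learning_hours / hours_saved_per_day

print(round(break_even_days))  # 3333 days, i.e. roughly 9 years
```

Under those (debatable) assumptions, the switch takes close to a decade to pay for itself in speed alone, which is why ergonomics rather than speed tends to be the stronger argument.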


I don’t even think that learning a new layout is the dealbreaker. I think the dealbreaker is that, no matter how you work, you will occasionally need to use a computer with a standard keyboard layout. It may not be very often at all (the last time I used a random computer was an on-site programming interview with the company’s PC, and that was a text editing disaster despite my familiarity with QWERTY, though I still got an offer), but it’s probably often enough that it’s not worth the small potential gains of another keyboard layout.

Of course, now that I present that argument, I realize it could be applied to text editors shortcuts and plugins, which I have customized extensively on all my main development machines. That said, I almost never do any significant coding on random machines, so the cost/benefit analysis might work out in my favor (I certainly behave as if it does). I also remap all my Caps Lock physical keys to Control/Command, which is pretty common among programmers but very rare among non-programmers. That’s another case where my gut says that the small rare inconveniences of using random computers are outweighed by my regular productivity increases.

However, I don’t have any actual data on this, so it turns out my argument in the first paragraph of this comment is really just a guess. I do intuitively feel like QWERTY to Dvorak is a much more significant alteration than Caps Lock to Ctrl or some customized hotkeys in my text editor of choice.


People can also cope with switching between different setups if trained well enough in both, which should not be too surprising. E.g. shortcuts in vim vs. native editor shortcuts, scroll direction, a one-key swap... It’s even possible for full-blown keyboard layouts. I type using both QWERTY and Colemak on a regular basis. I haven’t tested how long I can go without using a layout and still retain it, but forgetting is harder than relearning in that case.


But people don't do a cost-benefit analysis or anything rational like that when they select a keyboard, or when they select most things. It's a subconscious decision mainly factoring in what other people do (what is most popular), and the rationalizations come after.


But also a fair amount less finger motion (I did test this to my own satisfaction some years ago) for a given set of text. And qualitatively, I do think that has translated into less wrist strain.

I’m not a particularly fast keyboardist, and I don’t particularly care about the speed differences of dvorak vs. qwerty. But, I do quite appreciate what I believe to be real ergonomic benefits.

My wife switched a few years ago and claims also improved wrist comfort.

But nobody would ever choose a dvorak keymap if they were solely interested in that 1-2% speed benefit (I don’t know where this number came from, but I have no reason to doubt it).


I believe there is a lack of proof that the ergonomic benefits are significant outside of typing tests, which are very unrealistic compared to day-to-day tasks (unless you have a pure typist job). A bias also exists because many people who switch layouts never touch-typed well or properly in QWERTY to begin with, or lose awareness of the comparison as they forget how to type well in QWERTY. I speak as someone who touch-types equally well in QWERTY and Colemak, and I think the time would be better spent sticking with QWERTY or learning it better.


After how long usage was this measured? Do you have sources? How about Colemak and its variants?


> But clearly the core idea of slowing down typing became outdated long ago.

The qwerty layout was designed to minimise jams, not to "slow down typing".


> QWERTY [...] the core idea of slowing down typing became outdated long ago.

And even back then it wasn't true. The QWERTY keyboard was not designed to slow typists down; it was designed so that letters which most often follow each other were as far apart on the hammer row (and thus on the keyboard) as possible.


It depends what you mean by superior.

If you mean superior in a technical sense (which unfortunately seems to be all a lot of startups and engineers focus on), then obviously there are tons of instances where 'superior' alternatives fail to catch on, simply because the 'superior' alternatives lack the non technical benefits of the older products or services.

Like how a new social network might be more decentralised and censorship-resilient than Facebook, but lack the community/userbase that makes Facebook valuable to begin with. Or how a CMS might be better coded than WordPress, but have a UI that people find more awkward to use (or an install process that's overly tedious/annoying for non-technical folk).

In that sense, a lot of superior alternatives fail to catch on simply because, for all the 'objective' improvements they make, they just don't do what people need them to actually do. A community site or social network doesn't need the best codebase possible, a long laundry list of features, or a censorship-resistant setup; it needs a big enough community/userbase to get people invested in it.

Of course, even when all factors do line up... well, that doesn't exactly guarantee the alternative will succeed either. People aren't robots and don't choose every action to be as rational as possible. The difference between two competitors can just as easily come down to pure luck, some perceived emotional connection, timing, or anything else, not necessarily the quality of the product or organisation behind it.


Often, because one factor never appears in the provider's calculation: training time. If a superior tool comes along that makes it necessary to retrain to do the same work, it's not superior; it's inferior until the gain is so big that the productivity loss is visible to the customer.

That is why even with superior software it is necessary to add a "legacy" user interface that allows for an easy switch from the old software, while at the same time retraining people to use the new interface. This changes the transition costs.


There are lots of examples. For example a base 12 number system is superior to a base 10 one. Esperanto is strictly easier to learn than English and so would make for a better lingua franca than English does.

What doesn't happen very often is that the market chooses a worse technology when we each can individually benefit from picking the better one.

But the definition of "better" has to be right. Often things that are worse on one axis are better on another. See https://www.jwz.org/doc/worse-is-better.html for a famous essay about the difference between what makes software good versus readily adopted.


Is Esperanto really easier to learn than English? I'm basing on https://en.wikipedia.org/wiki/Esperanto#Grammar

English uses 26 un-accented letters, which is easily the lowest common denominator for typewriters and computer text systems. Esperanto has 28 letters with one type of diacritic.

Esperanto has word inflections for case (e.g. subject vs. object) and more verb tenses than English. Are inflections really easier to use than having auxiliary words or relying on word order?

I think English became a lingua franca for a reason, because it has far fewer grammatical and orthographic features than other European languages like French, German, etc. I think Esperanto has more complex features of European languages than English.


For years the lingua franca was French. (In fact, "lingua franca" is an Italian phrase meaning "Frankish language", originally the name of a Mediterranean trade pidgin.) And based on experience, I believe that French is easier to learn.

The transition to English only happened after the British Empire became economically and politically dominant. It retained its importance because as England declined, the USA became one of the two world superpowers.

The big win of English is not that it is easy to learn. We often have parallel words for the same thing (eg earth vs dirt). Words that would be related in any other language aren't (eg cow vs beef). Attempt to read http://www.i18nguy.com/chaos.html aloud if you need further convincing.

The big win is that the people you most want to talk to already speak it.


Straying off topic a bit: If we native English speakers want our birth tongue to continue being the global language, we should make English as easy as possible for non-native speakers to learn and use — mainly by not looking down on intuitive-but-"bad" usages such as "it's" as the gender-neutral singular possessive pronoun.


A related thought based on my recent experience.

One of my teams is currently using a mesh colorization system for our game, which we developed in-house. E.g. we have a game level where each mesh is assigned some color ID, such as "primary accent" or "background1" (but not an actual RGB color); and there are color schemes which we can apply to the level according to these color IDs to get different looks.

The first version of the color scheme system was "clearly superior": it offered a lot more possibilities for assigning color schemes to meshes, but it needed about 80 color values for each color scheme.

The second version of the same system was "clearly inferior" -- just 28 color values, and much less creative control over the resulting look of the level.

Guess which system was adopted? Right, the second, simple one. Despite offering fewer creative possibilities, the second system was smaller and thus could fit more easily into artists' heads, which produced better-looking results overall.
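The core of such a system is just an indirection table; a minimal sketch (the names and values here are invented for illustration, not our actual code):

```python
# Each mesh stores an abstract color ID instead of a concrete color.
meshes = [
    {"name": "wall", "color_id": "background1"},
    {"name": "door", "color_id": "primary_accent"},
]

# A color scheme maps the small set of IDs to concrete RGB values;
# swapping the scheme restyles the whole level at once.
night_scheme = {
    "background1":    (30, 30, 55),
    "primary_accent": (220, 60, 60),
}

def apply_scheme(meshes, scheme):
    """Resolve each mesh's abstract color ID to a concrete RGB triple."""
    return [{**m, "rgb": scheme[m["color_id"]]} for m in meshes]

styled = apply_scheme(meshes, night_scheme)
```

The 80-value and 28-value versions differ only in how many IDs the scheme table contains; the smaller table is what fit in artists' heads.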

To sum up, "superior" is multidimensional. Dvorak can offer better typing speed, but it offers much less "habit compatibility" with existing keyboards in the real world, and thus is a worse time investment. Or, Haskell offers much better maintainability, refactorability and reliability, but dynamic languages offer faster time-to-market and easier-to-hire coders.


If you’re a touch typist physical keyboard layout doesn’t matter too much. Beyond ergonomics, at least.

On that note, I’m loving the increasing number of games that read the OS keyboard layout and display appropriate key mappings instead of assuming every gamer uses QWERTY (some games do even worse and don’t use the key code but the mapped character forcing me to rebind keys or switch layouts).


For programmers, QWERTY is actually superior to DVORAK, in my opinion; the US keyboard variant more specifically, as most language designs have been influenced by how reachable each punctuation mark is in that specific layout. And with auto-complete, the alphabet placement is not that important.

Actually if we were to rethink the keyboard based on the frequency of usage, it might make sense to place all the punctuation a bit closer to the home row for programmer keyboards.
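That frequency argument is easy to check against real code; a quick sketch that tallies punctuation in a source snippet (the sample string is arbitrary; run it over your own codebase for meaningful numbers):

```python
from collections import Counter
import string

# Arbitrary one-line code sample standing in for a real codebase.
sample = 'if (x != null) { printf("%d\\n", arr[i]); } // check'

# Count every punctuation character in the sample.
punct = Counter(c for c in sample if c in string.punctuation)

for char, count in punct.most_common(3):
    print(char, count)
```

Sorting the resulting counts across a whole project would show which punctuation keys deserve home-row placement on a hypothetical programmer layout.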


When there's a new, far superior way of doing things, it always catches on.

But usually there are several different versions of this new way, with slightly different properties, trade-offs and non-product features (like marketing, support, compatibility, price etc) - which of these wins out is a crap-shoot. That's why some companies, investors and customers back different versions... and the pragmatic ones wait for a market leader to emerge.


To add a bit of insight to the conversation, beyond "it's a matter of more dimensions than one", which people have already commented on: this whole situation can also be seen from an economics perspective. Demand for a product, say a can of soda, is called "elastic" when customers will switch from one vendor to another purely based on price. If one vendor has a lower price than another, all customers in this idealized model will head directly to that vendor, leaving the more expensive one in the dust. Demand for keyboard layouts, however, is "inelastic": the factor we'd expect to be the only property of keyboards that matters to customers is not, in fact, all that matters. Another thing that affects whether they will switch is how long it will take to learn the new layout.

TL;DR: demand for quality in keyboard layouts is inelastic. There are other variables in play. One of them is that there is a lot of friction (in terms of setup, having to learn a new layout, etc.) for customers to switch from one layout to another.


It's true that there isn't necessarily much difference in typing speed between QWERTY and Dvorak layouts, for trained typists (even the same typist). However, we might say that we're throwing the baby out with the bathwater when we claim that Dvorak is technically not superior because weak studies showed little to no marginal improvement in a few particular metrics.

Modern work is a marathon of text entry, and in light of this I believe that what is technically important about a keyboard in the long run is not speed or even accuracy, but ergonomics. I won't make any general claims, because it seems like it's never been possible to study, but as an anecdote I can offer my own experience. I used to type on QWERTY until I had to stop due to repetitive strain. In the fifteen years since I switched to Dvorak I have never had a single day when I finished work feeling any pain in my hands, whereas this was almost an every-day occurrence with QWERTY. I can type for hours without stopping, without pain. The design of the keyboard makes it pleasurable to type in English. Words roll off the fingers of both hands in relaxing patterns. My hands move very little. It's nice, and I would suggest it as an option to anyone who types or programs a lot and is having trouble with their hands, not because I have any financial or personal stake in it, but because it really helped me and I care about the health of my fellow tech workers. I have seen a lot of people taken down by their hands, and I have also seen many people try some crazy gimmicks.

In the same space, not considering the placement of the letters on the keyboard, there is an even more absurd technical anachronism embedded in almost every single keyboard on the market, including virtual ones on our phones. The keys are positioned not in clean vertical rows, but offset as if they have mechanical arms behind them. This is pure path dependence, and there is no conceivable reason why we are stuck with it thirty years after mechanical typewriters fell out of use, except that those who learned on a mechanical typewriter couldn't even imagine designing or testing a different positioning of the keys. It's not just the weird zig-zag pattern of the position of the keys that is anachronistic. Why should the backspace and delete keys (which are so essential when we are typing in the flexible medium of digital text) be relegated to the far corner of the keyboard? (TypeMatrix presents an example of a modern reconceptualization of the layout. I'm not affiliated but I do enjoy their products.)

To summarize, I think that this article presents a rather limited (and even ad hominem) attack on the keyboard issue, with acknowledgment but little appreciation of the degree of path dependency in tech development. How can Dvorak be better if the research was flawed? This is not a complete answer.

Of course we are going to end up in suboptimal equilibriums, and together we should appreciate this if we ever want to get out of them.


Reason.com is presenting a political argument more than a scientific one. Their goal is to promote free markets, not to accurately decide the best keyboard. Of course they'll cherry-pick data to avoid the appearance of market failure.


> Why is the keyboard story receiving so much attention from such a variety of sources? The answer is that it is the centerpiece of a theory that argues that market winners will only by the sheerest of coincidences be the best of the available alternatives

> Because first on the scene is not necessarily the best, a logical conclusion would seem to be that market choices aren't necessarily good ones

By the way, the reason.com article also goes into an explanation of path dependence, which the grandparent mentions extensively.


The reason.com article inflates the significance of the QWERTY myth, and then claims that debunking it debunks path dependence, which, of course, it doesn't.


While I agree with the sentiment that the supposedly superior product isn't actually that superior all the time, the given examples are a bit cherry-picked. For example, the Betamax vs. VHS comparison is about image quality, not recording length (and FWIW Betamax could record 2 hours, but it is true that VHS, thanks to its size, could record more than that). Similarly, the mammals vs. birds comparison feels like a bit of a stretch, like calling a tomato salad a "fruit salad" because a tomato is technically a fruit.


I remember OS/2 was allegedly superior to Windows 95. But it failed because there were far fewer applications for it. IBM had to pay Netscape to finally port their browser to it.

I used to say OS/2 was HALF an OS, as it was missing a very important half of the equation; all those annoying applications.

Plus I could never get it to stay installed on an IBM PC. I did it once, but never could do it a second time. So that third half of the equation was pretty weak too.


I recommend that people take a look at “Diffusion of Innovations” (Rogers, 1962) and all of the subsequent work since [1]. It doesn’t directly address usability and design, which I believe are also important, but it addresses them indirectly.

[1] https://en.m.wikipedia.org/wiki/Diffusion_of_innovations


Why? Because context matters.

That is, for example, just because BetaMax had a superior Feature X doesn't mean that Feature X matters to enough people. That feature exists in a broader expectations + wants + needs "eco-system." That momentum can be very difficult to redirect once it gets rolling.

Yes, a unique / superior feature _might_ be what wins you the market but it's not always that simple.


Whether it's a computer or a bar of soap, most things have so many aspects that even the experts can't agree, so A) All the damn time.


I do think that superior alternatives will catch on eventually if they stay in the game long enough... The problem is that sometimes the window of opportunity can be suppressed for very long periods of time; sometimes several generations... Maybe even several centuries.

Great one-time success can create a protective buffer which allows inferior traits to persist through long periods of time.


> It is often said that birds have far superior lungs than mammals. So mammals are failures compared to birds…

I've never heard this argument once in my life, and Google doesn't come up with too much discussion on it beyond a few articles discussing the differences. This point struck me as a bit of a stretch, and the blog post itself is probably stronger without it.


Microsoft became Microsoft because they were better at marketing/business in the technology industry, not because they were better at building and shipping technology.

Amiga workstations and Macs in the late 80s were way ahead of DOS's UX, with its 640k RAM limit and poor CGA graphics capabilities. But DOS PCs became the standard, and they only caught up with the graphics and multimedia capabilities of the other two platforms 20 years later, once they could invest in removing their technical debt.

In contrast, the Amiga died a slow and painful death because of mismanagement and owner squabbles, even though Amigas were used for much more than home gaming (e.g. real-time TV station visuals) even into the early noughties. They were just much better than the alternatives, and ahead of their time as a technical platform and home PC.


I was a diehard Apple user back in the 80s, but PCs were clearly better in a lot of ways: they were cheaper and, because of the number of PC manufacturers, more versatile.

After Windows 95 came out and before Mac OS X 10.1, Windows was better than Mac OS. Except for very brief windows, x86 was faster than the contemporary 680x0 and PPC processors.


The importance of the selection of Microsoft to provide the operating system for the IBM PC cannot be overemphasized. It was not the only thing that made Microsoft what it became (the decision to make Windows for PC-compatible computers, and to make decent business applications for it, were critical), but without the first step, it probably would not have been in a position to do what followed.

There is also path-dependence in IBM's decision to make the PC architecture open (as an attempt to grow it quickly despite being late to the party), which is what made Windows possible.


You touch on something that's a very important point: path dependence. There are a ton of both de jure and de facto technology standards that we are locked into at some level because they gained dominance at one point through some combination of tech, marketing, industry dynamics, and more. Once Ethernet, USB, x86, or whatever gets established, it's hard to kick it out for something else that is locally better.

This has become increasingly the case in much of the tech sector because of the increased importance of interoperability, ecosystems, and network effects. Conversely, you do see fairly rapid switching away from online properties that aren't sticky when something new and shiny comes along.


Also because Microsoft made the right decision for business customers when it came to backwards compatibility.


Also because Microsoft licensed the OS to hundreds of PC clone manufacturers. Amiga was a closed proprietary system.


I agree

"Superior" has to be followed by: "superior to whom?" and "superior in what aspect?"

If "X is better than Y" but X does not do what Y does, then it's not superior, because it won't solve the problem.


UI has the big problem of power users vs. casual/noob users, and everyone starts as a noob. There are good compromises, but usually the power-user tool is harder to get into and more efficient.

Similar for switching away from QWERTZ.


Someone once told me that the porn industry played a part in the VHS/Betamax and Blu-Ray/HD DVD battles. While Betamax and HD DVD were technologically superior, utility drove adoption.


In what way was HD DVD technologically superior? Unlike Betamax, there’s literally not a single tech spec where it has a clear, pronounced advantage [0]. HD DVD couldn’t even really be seen as the VHS of the 2000s; while it had price and some basic utility advantages (it was cheaper to produce), it never had the adoption (by the movie industry or by hardware manufacturers).

[0] https://en.m.wikipedia.org/wiki/Comparison_of_high_definitio...


Agreed, HD DVD was the cheaper and technologically inferior option.


Important topic but strawman examples in many ways.

In most of the important cases, you will have never heard of what could have been.

Also, often the problem is that better ideas weren't developed as much as they could have been. That is, we look back and compare the inferior but dominant product, which was revised and revised because of its problems and dominance, with the superior but abandoned product in its unimproved state, which is some sort of survivorship bias.

The OP has a point but it's exaggerated.


The Tipping Point by Malcolm Gladwell might have an extensive answer on this matter; it describes adoption as an effect similar to epidemics.


Professionalism?

You can basically ship crap if you do it professionally.



Excellent post. It's short and doesn't waffle. It has a coherent idea and at least I learned something new.


Personally, if I may be crude, I found the post to be too short and unsubstantiated. I can honestly not imagine this is a "serious" post by a professor (!). It's so short-sighted I don't even know where to begin.

My personal opinion is that "superior" often means "cheap as shit and I'm willing to compromise a lot of stuff".


I believe that the 90-90 rule [1] proves that many superior alternatives will fail to catch on. If something is not revolutionarily better, and has the potential to compete with a product but has not been developed to the point of actual competitiveness, it will lose out.

Combine this with network effects, and it seems inevitable that many superior techs will fail.

Take email headers as an example. They are objectively inefficient and hard to parse. Everyone agrees on that. But we still use them despite the obvious possibility of a better format. Why? Obvious network effect.
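Python's standard library illustrates both points: parsing headers is doable, but the format's quirks (folding, encoded words, odd whitespace) are exactly why the parser is non-trivial. A minimal example:

```python
from email.parser import Parser

# A tiny RFC-822-style message: headers, blank line, body.
raw = (
    "From: alice@example.com\n"
    "Subject: Hello\n"
    "\n"
    "Message body here.\n"
)

msg = Parser().parsestr(raw)
print(msg["Subject"])  # Hello
print(msg["From"])     # alice@example.com
```

The blank-line separator and "Name: value" lines look trivial, yet real-world folding and encoding rules are why every platform ships (and keeps using) a battle-tested parser rather than switching to a cleaner format.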

The 90-90 examples will be less obvious. You cannot easily tell which tech is better until you have invested the same amount of effort developing all of them. Superior products will be more likely to eventually succeed in markets where such parallel development is justified. Currently, most speakers are made by gluing a permanent magnet to a piece of plastic or paper and using an electromagnet to pull on the permanent one. Through parallel development, we now know that we can make much, much better speakers by making the membrane itself magnetic, by gluing copper traces onto it, or by statically charging it. We can make objectively better sound with these lighter membranes, which don't have a heavy permanent magnet glued to them. Slowly, expensive speakers and headphones are adopting the new tech, but most speakers sold still use the inferior tech. The superior tech was developed in parallel with the inferior one only because speakers are such a large market that the parallel development could be justified financially.

Now take braille displays. A much smaller market. We think that the current tech used in Braille displays is worse than a set of other techs. There isn't enough money for the parallel development, so despite the fact that alternative, probably superior, technologies exist, we haven't developed them to the point of competitiveness.

[1] https://en.wikipedia.org/wiki/Ninety-ninety_rule


Most of the time. Take a look at developer tools for instance:

jenkins, nagios, graphite, jmeter, mysql, postgresql, puppet, vagrant, virtualbox, mongodb, npm, docker, openvpn.

What do all these pieces of software have in common? They are the most popular despite being poor products that often have notably superior competitors.


Your list makes no sense to me. PostgreSQL is a poor product with noticeably superior competitors? This is - how should I put it? - a niche opinion.

And besides, popularity is its own quality sometimes. NPM might not be the "best" but everything's on it so it's the most useful. A lot of devops people already know docker. If you need to hire, there's a deep pool to draw from. That counts for a lot.


Obviously, the list doesn't make sense unless you know the products mentioned and their competitors, preferably with a bit of the history over the last decade(s).

The only thing all the products mentioned have in common is that they are free to use, as-in no upfront money.

There are pools to hire from for any software. First, the hype is cyclical; people who have worked long enough will know multiple products. Second, newcomers won't know anything, so they will need to be trained anyway.


I'm guessing he's referring to Oracle as the competitor.


Well that's unexpected! I'm aware that Oracle has a few features pg doesn't, and replication is supposed to be easier, but at extraordinary cost and with the added bonus of supporting one of the worst companies in tech.

I don't really think you can even compare them as they serve totally different markets. If I was the CTO at a Fortune 500 and needed a DB for my new global logistics system, fine, Oracle would be in the running. Since this is HN though, I was thinking in terms of startups - and a startup choosing Oracle would be pure insanity unless they had an amazingly good reason.


Your list is heavily loaded with free products that are good enough.

The lesson is that if a commercial product wants to be widely adopted it needs to be significantly better than the similar free products to compensate for the hassles and headaches of buying software (and the god damn support contracts). Simply being free is a major advantage right off the bat.


Developers would rather die than spend a dollar on any product, no matter how better it is or how bad they will suffer.

That's only half the lesson though, because some of the alternatives are free or free up to N users.

The other half of the lesson is branding and hype cycle.


> Developers would rather die than spend a dollar on any product

That's just silly. I'm a developer and I have spent plenty of money on good products that help my work. Hell, literally today I dropped $30 on a word processor. And there is a huge market for developer-focussed SAAS and PAAS products.

> no matter how better it is or how bad they will suffer

Example? Citation needed.

> The other half of the lesson is branding and hype cycle

These cryptic pronouncements are kind of useless. Again, give an example. Out of those things on your list I'd give you mongodb as a hype-driven mistake but that's pretty much it.



