I always hoped for something more like a 30%/30%/30%/10% distribution, with two big proprietary players, one FOSS one, and a bunch of other, more experimental or niche systems. I guess that hasn't happened so far. The only place in technology where I remember three big, roughly equally powerful players was the console market, and only for a while.
Would be interesting to find out details about these dynamics.
Developers seem to be willing to support two competitive platforms, but when there are three or more, they throw up their hands and support just the top one, or maybe the top two.
It depends on whether there are strong cross-platform development tools.
Back in the early days of the microcomputer, many games used a virtual machine of sorts: the game logic was implemented in VM bytecode, and the VM engine was the only thing that needed porting between systems.
This was aided, to a greater or lesser degree, by the limited hardware of the time, and by the fact that said computers were more often than not single-tasking.
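To make the trick concrete, here's a toy sketch of the idea (the opcodes and the `run` function are made up for illustration; this isn't any specific historical VM, though real ones like Infocom's Z-machine worked on the same principle):

```python
# Toy bytecode VM: the "game logic" lives in portable bytecode,
# and only this small interpreter loop needs porting per machine.

PUSH, ADD, HALT = range(3)  # hypothetical opcodes

def run(bytecode):
    """Execute a bytecode program; return the top of stack at HALT."""
    stack = []
    pc = 0  # program counter
    while True:
        op = bytecode[pc]
        pc += 1
        if op == PUSH:           # push the next literal onto the stack
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:          # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == HALT:         # stop and hand back the result
            return stack.pop() if stack else None

# One portable "program": compute 2 + 3.
program = [PUSH, 2, PUSH, 3, ADD, HALT]
result = run(program)  # → 5
```

The point is that `program` is identical on every platform; porting the game to a new machine means reimplementing only the interpreter loop (plus I/O), not the game itself.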
This intrigues me, as it clashes with everything I know about how classic game software was designed. Around what decade are we talking about? Can you name one or two examples of games that were designed like this?
Ah, alright. When you said "early days of microcomputers" I was rather thinking about the Atari 2600.
In absolute terms the VM approach was actually pretty unusual in the late 80s/early 90s, AFAIK. Counting only cross-platform games, or only adventures, it was perhaps more common, though.
They're willing to if they can bundle it in something large enough to be, and otherwise more resource-hungry than, a (sane) minimal graphical OS. Electron, say.