We're not "stuck" with it. There have been hundreds if not thousands of OS's written since then. Unix is still around because it's still useful. Just like the Sistine Chapel is still around because it's still beautiful. Experts come and clean it up and keep it nice :)
Among the exceptions, almost all have kept the worst limitations of the Unix family.
Is there some analysis on this issue available?
What limitations have been kept?
The only operating system that was supposed to be an improvement over Unix (that I know about) is Plan 9, but over time many ideas from Plan 9 have been ported to Unix. Unix today is not what it was 40 years ago ... the only legacy that's truly staying is the philosophy, which, for better or worse, works.
Of course, Unix could be better. But let's be serious, what do you suppose we should do? Reinvent everything every 5 years?
The disk/RAM dichotomy is an atrocity, along with everything it entails (i.e., files). There is no reason for the machine to expose multiple address spaces. The act of program installation and loading into memory, for example, should be one and the same. Think "Palm Pilot."
RAM is in practice simply a cache of disk, and for some reason we are stuck "driving stick" and moving things into and out of it by hand. Why can't we treat it the same as L0/L1 cache?
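To make the "driving stick" complaint concrete: mmap is about as close as stock Unix gets to treating RAM as a transparent cache of disk, and it still takes explicit setup. A minimal sketch (the file name and size are made up, most error handling omitted):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* The manual part: name a file, size it, and map it by hand. */
        int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4096) < 0)
            return 1;

        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
            return 1;

        /* From here on, ordinary stores hit disk-backed memory; the kernel
         * decides when pages actually move, much like a hardware cache. */
        strcpy(mem, "this outlives the process");
        msync(mem, 4096, MS_SYNC);  /* another manual gear shift */

        munmap(mem, 4096);
        close(fd);
        return 0;
    }

Once mapped, the kernel really does page the data in and out for you; the objection is to everything around it - the open, the ftruncate, the msync - that a single persistent address space would make unnecessary.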
Tone is hard to convey online. I guess I should have added a smiley. I'm not angry at all; I don't miss anything from Plan 9. I just feel sorry for these guys who went to so much trouble to make a "proper" OS that fixed up the one they'd hacked together (Unix).
[10 days pass...] I wonder whether Unix beat Plan 9 because it was established in the market (certainly a huge factor), whether its hacked-together approach was somehow intrinsically better (as in the "worse is better" essay), or whether there were some specific decisions in Plan 9 that were bad in practice (though they seemed good to the designers).
I guess Unix being established and being adaptable makes it very hard to displace - so the first factor swamps any effect of the other two.
All in favor of porting Gnome, KDE, Firefox, Java, and OpenOffice to Plan 9 and writing device drivers for nasty things like 3D accelerators, wireless network interfaces, and so on, raise your hands.
It depends on what you mean by 'driver'. FUSE has certainly created a portable set of filesystem drivers for several operating systems. There are also things like ndiswrapper and Project Evil to wrap Windows network drivers for various unixy systems.
The real reason that most drivers don't get ported seems fairly obvious though. Different systems expose different amounts of surface area, different programming models, etc. Why is the current standard of tight coupling bad? Gains in portability are generally offset by a loss in performance, which isn't really something you want on your root filesystem or your graphics card (or your server's NIC, as a counterbalance to ndiswrapper).
Probably for the same reason we're still using that 3000-year-old state of the art known as "place-value": we haven't really found anything fundamentally better to replace it... yet.
> we haven't really found anything fundamentally better
This is plainly false if you bother to lift a finger and read about the ideas of Genera, Coyotos, EROS, Smalltalk, etc. The UNIXes (and VMS, and its mutant Microsoft progeny) are pieces of junk which remain popular solely through network effects.
I have read about some of those. Unfortunately I must have missed or misunderstood something since I can't think of anything they offer that makes them worth adopting over Unix. It wouldn't be the first time something like this has happened. The idea of the World Wide Web wasn't all that impressive to me when I first saw it, partially because I'd already been using networks for almost a decade before it and was completely adapted to using arcane file transfer commands. It could be that there are some ideas in those systems that are just as revolutionary and I've missed them.
Could you perhaps choose one or two of the ideas you think make those systems fundamentally better than Unix and elaborate on them?
OK, let's consider this. I'm willing to believe that we could eliminate some of these "gymnastics" by adopting new persistence models, but not without paying a cost somewhere else. Perhaps that cost is only some additional education, but I'm not convinced things are that simple. A decade or two ago many organizations tried proprietary object-oriented databases for various applications, in part because they purported to simplify persistence issues. For some specific applications they did, but more often than not the problems they caused their users were worse than the ones they solved, and most ultimately settled for app servers and ORM solutions.
I've always thought a key barrier to the adoption of systems that support orthogonal persistence is that it's harder to make applications designed around it perform as efficiently as ones designed to use lower-level things like files or databases, partly because many techniques (e.g. efficient working-set management, concurrency control, schema change, replication, data archival) are pretty well understood for applications that use the latter on systems like Unix.
Maybe there are new opportunities for orthogonal persistence now that distributed cloud storage and computing are becoming popular. An OS specifically designed for application providers running on large clusters with S3-style storage could change things. However, none of the systems you mentioned look like good candidates to me for such a change.
The machine gets an address space of 0..N bytes. They stay there, whether or not there is AC power. Internally this could consist of an array of mechanical or flash disks, cached by abundant cheap RAM. How difficult is that? No proprietary anything, no "object-oriented" anything necessary. Nothing to think about. No changes in software at all, in fact, except for all of the various things you no longer have to do. You build a data structure in memory - doesn't matter what kind - and it stays there until you erase it. That's what orthogonal persistence is. Forget the snake oil that tries to co-opt the phrase.
One example of a current machine with actual orthogonal persistence is the Palm Pilot.
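For contrast, this is the kind of copy-out/copy-in dance a Unix program does today just to keep one struct across runs (a sketch; the file name and the save_point/load_point helpers are made up). Under orthogonal persistence neither helper would need to exist - the struct would simply still be there:

    #include <fcntl.h>
    #include <unistd.h>

    struct point { double x, y; };

    /* The in-RAM struct and the on-disk bytes are two separate worlds,
     * and the programmer shuttles data between them by hand. */
    static void save_point(const struct point *p)
    {
        int fd = open("point.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write(fd, p, sizeof *p);
            close(fd);
        }
    }

    static void load_point(struct point *p)
    {
        int fd = open("point.dat", O_RDONLY);
        if (fd >= 0) {
            read(fd, p, sizeof *p);
            close(fd);
        }
    }

    int main(void)
    {
        struct point origin = { 1.0, 2.0 };
        save_point(&origin);   /* step one: copy it out to disk */

        struct point copy = { 0, 0 };
        load_point(&copy);     /* step two: copy it back in before use */
        return copy.x == 1.0 ? 0 : 1;
    }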
> it's harder to make applications designed around it perform as efficiently
Crank the R/W speed up to eleven and hope the issues go away. We're nearly there with striped solid-state disks.
> OS specifically designed for application providers running on large clusters with S3-style storage
Screw that, I just want a desktop which behaves sanely - from both the user and programmer's point of view. This means a single, nonvolatile address space.
If you really believe that a Palm Pilot is a current machine, or that an OS its own manufacturer no longer supports is superior to present-day Unix, then I don't think there's much for us to discuss.
Plenty of people question it; it's just that good. Questioning simplicity is a futile exercise. At best you can question whether it is simple enough (see Minix).
Why do you assume that simplicity necessarily equals Unix? Many of its design decisions were not mathematical inevitabilities, but are merely artifacts of hardware limitations which no longer exist:
"The Jolitzes believed that from its origins as a series of quick, if elegant, hacks, Unix had hardened into a series of unquestioned rituals for getting things done. Many of these rituals were fossils -- work-arounds for hardware that no longer existed. "It's amazing how much we were still tied to the past," Lynne says. "The physical machines had been hauled away, but elements of the operating systems are still being implemented in the same way." She believes today's Linux and BSD developers are carrying on that unhelpful tradition. "It's like they're doing incantations," she says. "They're repeating what they've been taught, and they don't know what it means."
http://www.eng.uwaterloo.ca/~ejones/writing/systemsresearch....