
> Having lived through 20+ “the year of the Linux desktop”s I can tell you the goal was clearly to become the dominant desktop so that everyone uses it

Again, Linux isn't an operating system like Windows, macOS or even FreeBSD. Even under the broader definition, "Linux" is really multiple collections of different people sharing different sets of packages. What one team might aim to do with Linux is very different to what another team might want to do, and "the year of the Linux desktop" was never a universal goal by any means.

Case in point: I've been using Linux (as well as several flavors of BSD, Macs, Windows and a bunch of other niche platforms that aren't household names) for decades now, and I've never once believed Linux should be the dominant desktop. I'm more than happy for it to be 3rd place behind Windows and macOS, just so long as it retains the flexibility that draws me to it. For me, the appeal of Linux is freedom of choice -- which is the polar opposite of the ideals a platform would need to hold if it were aiming for dominance.

So saying "Linux failed" isn't really a fair comment. A more accurate one might be that Canonical failed to overtake Windows, but even then you need to be careful with the term "failed", given that a platform with millions of regular users would be considered a success by most people's standards -- even if it never achieved some impossible ideal of a desktop monopoly.

> About that last part… the op was talking about people, not actual hardware, you seem to have missed that.

No, I got that. It's you who missed the following part:

> Hardware is in fact predictable, unless it’s failing or just that poorly designed.

Don't underestimate just how much hardware out there is buggy, doesn't follow specifications correctly, is old and failing, or simply isn't used correctly (e.g. people yanking USB storage devices without safely unmounting them first). The reality is that hardware can be very unpredictable, yet operating systems still need to handle that gracefully.
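To make that concrete, here's a minimal sketch (hypothetical code, not from any particular project) of what "handling it gracefully" means at the lowest level: a device can vanish between any two syscalls, so every result has to be checked rather than assumed.

    /* Hypothetical sketch: reading from a device that may vanish,
     * e.g. a yanked USB stick. Errors like EIO or ENODEV can show
     * up on any call, and the OS/application must cope, not crash. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    ssize_t read_block(const char *path, char *buf, size_t len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            fprintf(stderr, "open %s: %s\n", path, strerror(errno));
            return -1;
        }

        ssize_t n = read(fd, buf, len);
        if (n < 0)
            /* The device may have failed or been unplugged
             * mid-operation. Report it; don't pretend it worked. */
            fprintf(stderr, "read %s: %s\n", path, strerror(errno));

        close(fd);
        return n;
    }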

The market is flooded with mechanical HDDs from reputable manufacturers that don't follow specifications correctly, because those drives can fake higher throughput by acknowledging writes back to the OS while the data is still sitting in the drive's cache. Or cheap flash storage that fails often. And hot-pluggable devices in the form of USB and Thunderbolt have only exacerbated the problem, because now you have devices that can be connected and disconnected without any warning.
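As an illustration of why those lying drives matter (again a hypothetical sketch, not production code): even the textbook "durable write" pattern below can be silently defeated by a drive that acknowledges writes it hasn't actually flushed.

    /* Hypothetical sketch: the textbook "durable write". write()
     * succeeding only means the data reached the kernel; fsync()
     * asks for it to hit stable storage. A drive that acknowledges
     * writes still sitting in its volatile cache defeats even this. */
    #include <errno.h>
    #include <unistd.h>

    int durable_write(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted: retry */
                return -1;      /* genuine I/O error */
            }
            p += n;
            len -= (size_t)n;
        }
        /* This can still report success while the data sits in the
         * drive's cache, if the firmware lies about completion. */
        return fsync(fd);
    }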

Then you have the problems that power saving introduces. The OS now has to power hardware on and off gracefully, including hardware that was never designed to be connected and disconnected (otherwise your OS is borderline useless on any laptop).
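To give a flavour of what "gracefully" demands from driver authors, this is roughly the shape of suspend/resume support in a Linux driver (the device and function bodies here are hypothetical; the dev_pm_ops interface is real):

    /* Hypothetical Linux driver fragment: every driver needs to know
     * how to quiesce its device for suspend and fully restore it on
     * resume, as if the power had never gone away. */
    #include <linux/device.h>
    #include <linux/pm.h>

    static int mydev_suspend(struct device *dev)
    {
        /* stop DMA, save registers, let the core cut power */
        return 0;
    }

    static int mydev_resume(struct device *dev)
    {
        /* re-initialise the hardware, restore saved registers */
        return 0;
    }

    static SIMPLE_DEV_PM_OPS(mydev_pm_ops, mydev_suspend, mydev_resume);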

...and all of this is without even considering external conditions (i.e. the physical nature of hardware -- the reason it's called "hardware"): mechanical failures, hardware getting old, dusty or dropped, through to unlikely but still real-world problems like cosmic bit-flips.
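That last one is why serious storage stacks checksum every block (ZFS famously does). A toy illustration of the idea, not any real filesystem's code:

    /* Toy sketch: detecting silent corruption (e.g. a cosmic
     * bit-flip) by keeping a checksum next to the data and
     * verifying it on every read. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++) {
                if (crc & 1)
                    crc = (crc >> 1) ^ 0xEDB88320u;  /* reflected poly */
                else
                    crc >>= 1;
            }
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t block[64] = "data we'd like to survive on disk";
        uint32_t stored = crc32(block, sizeof block);

        block[3] ^= 0x04;  /* simulate one flipped bit */

        if (crc32(block, sizeof block) != stored)
            puts("corruption detected: checksum mismatch");
        return 0;
    }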

Now imagine trying to implement all of this via reverse engineering, because device manufacturers are only going to support Windows, maybe macOS and, if you're lucky, Linux. And imagine implementing that for hundreds of different hardware types and getting each one stable. Even just testing across that range of hardware is a difficult enough problem on its own.

There's a reason the BeOS-clone Haiku supports FreeBSD network drivers, SkyOS's userland was eventually ported to Linux, and Linux (for a time at least) supported Windows WiFi drivers via NDISwrapper. It isn't because developers are lazy -- it's because this is a fscking hard problem to solve. And let's be clear, supporting another OS's driver model isn't an easy thing to implement either.

Frankly put: the fact that you think hardware is easy and predictable is itself proof of the success of Linux (and NT, Darwin, BSD, etc).




Today I learned that humans are predictable and consistent, but hardware is not...


I wasn't making any comment about the predictability of humans. However, there have been plenty of studies showing that humans are indeed predictable. If we weren't, dark UI patterns for email sign-ups, cookie consent pop-ups, and so on wouldn't work. The reason UI can be tuned for "evil" is precisely because of our predictability. But this is a psychological point and thus tangential to the discussion :)

To come back on topic: I wasn't saying hardware is unpredictable per se -- just that it often behaves unpredictably. And I say that because there is a set of expectations which, in reality, hardware doesn't always meet.

However, the predictability of humans vs hardware is a somewhat moot point, because that's only part of the story of why writing hardware-interfacing code is harder than writing human interfaces. I did cover a few of those other reasons too, but you seem fixated on the predictability of the interfaces.



