BeOS: The Alternate Universe's Mac OS X (hackaday.com)
672 points by fogus on Jan 9, 2020 | 415 comments



Ah, memories. Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.

That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front. Consequently, I doubt it could have made it far into the 21st century. It would have died unceremoniously in a steaming pile of AYBABTU. Taking Apple with it, presumably.


I used to work for Be in Menlo Park in a previous life and I can confirm that the code base quality would have made for a very bad outcome for Apple. Security was the least of the numerous serious issues. That said, BeOS still exists somewhat in spirit, as a lot of folks from Be went on to build/contribute to Android.


> a lot of folks from Be went to build/contribute to Android.

Does that include the quality and security perspective as well? ;-) j/k

Having never crossed paths with a former Be employee before, __thank you so much__ for your contribution. BeOS was so instrumental to my perspective on computing and operating systems (and potentially the conception of my disdain for what Microsoft did to the world of operating systems around the turn of the century).

From a user perspective, BeOS was nearly perfect. Great UI and utilities, POSIX command line, so fast and responsive. The "install to Windows" option was amazing for trying things out. BeFS was wonderful (it's nice to see Mr. Giampaolo's work continue in macOS).


> a lot of folks from Be went to build/contribute to Android.

That's correct; the IPC in AOSP, Binder, is basically borrowed from BeOS.


I too used to work at Be (Hi!) as well as developed applications for BeOS. I also worked at Apple on various releases of OS X. NextStep was far ahead of BeOS on multiple fronts. BeOS was a lot of fun to work on, but only scratched the surface of what was needed for a truly commercial general-purpose OS. If Apple had acquired Be instead of Next, who knows what the world would be like today. Apple ended up with a large number of former Be employees as well (some directly and others from Eazel).


I can never let a thread about BeOS go by without adding my two cents, because I also worked at Be in Menlo Park, back in the day. (I went down with the ship and got laid off as they went out of business.)

I was sore about it at the time, but I agree that Apple made the right decision by choosing NextStep over BeOS. If for no other reason, because that's what brought Jobs back. It's hard to imagine Apple making their stunning comeback without him.


Thanks a lot! I ran BeOS fulltime for a few years (R3/4/5) and I'm looking at a BeOS "the Media OS" poster on my wall here. Fond memories!


Care to share where you got the poster?


It was not in the box. Back then, it was still quite difficult to get hold of an actual R3 box here in Europe. There was one official reseller here in the Netherlands and I actually bought their official demo machine: the famous first dual-processor Abit BP6 with 2x 400 MHz Celeron processors. When picking it up at their office I spotted the poster and asked if I could have it. Still got a T-shirt and a hat too ;-).


I vaguely remember it being in the box (bought R4, R4.5, and R5).


And apparently a couple Amiga gurus made their way to Be (see: Fred Fish).

I’d always heard that after Amiga (and Be) many decided to opt for Linux for philosophical reasons.


Which is ironic, given that I have yet to see a GNU/Linux based hardware setup that matches the experience, hence why I went back to macOS/Windows, which offer a much closer multimedia experience.


Wow! Thanks so much for working on BeOS. This was a super fun OS to use.


I'm curious what sort of issues you have in mind. I was never very familiar with BeOS but from what I understood the issue with it was more that its responsiveness came from very heavy use of multi-threading, but that also made it very hard to write robust apps for it as, in effect, all app code had to be thread safe. App devs found that condition too hard to handle.

Can I assume that the quality issues were somewhat related to that? BeOS devs found it no easier to write thread safe code in C++ than app devs did?


I’m the guy who left a case of champagne at the office one weekend, to celebrate an early release.

Thanks for the memories.


“That said, I don't think the world would be in a better place had Apple chosen Be over NeXT.”

Yes. Except that it wasn’t acquiring NeXTSTEP that saved Apple’s skin; it was acquiring Steven P Jobs.

True, version 1 had been rough and flakey as hell, and honestly really didn’t work all that well.

But Steve 2.0? Damn, that one could sell.


NeXTSTEP pretty directly evolved into iOS, though, so it was certainly a significant asset in the acquisition, too.


True, but a technology is only a means to an end, not an end itself. What sells is product.

You may have the finest tech on the planet—and that means precisely squat. What counts is putting bums on seats. Your seats. And keeping them there. Lumps of tech are just a vehicle for that; to be used, abused, chewed up, and/or discarded on the road(s) to that end.

Apple could have done better; they certainly did plenty worse (Copland, Taligent, the first Mac OS).

As it turned out, NeXTSTEP proved it was indeed “good enough” to fit a pressing need at the time; and the rest was just hammering till it looked lickable enough for consumers to bite. All that was needed was a salesman to shift it—and Steve 2.0 proved to be one of the greatest salesmen in modern marketing history.

That’s what made the difference between selling a tech to a million dyed-in-the-wirewool nerds, and selling tech to a billion everyday consumers. And then up-selling all of those customers to completely new worlds of products and services invented just for the purpose.

..

Want to create a whole new device? Hire Steve Wozniak.

Want to create a whole new world? Oh, but that is the real trick.

And Steve Jobs gave us the masterclass.

..

Had Steve started Be and Jean-Louis built NeXT, we would still be in the exact same situation today, and the only difference would be chunks of BeOS as the iPhone’s bones instead. Funny old world, eh? :)


I'm not sure I've ever encountered someone so invested in the "great man" theory of history.

Jobs was obviously talented, but assuming no matter where he went he would have had the same level of success is discounting a lot of luck in how everything lined up, and who was available to help bring all the things to market Jobs is famous for. There's no guarantee the hundreds or thousands of people that were also essential to the major successes of Apple would have been around Jobs had he stayed at NeXT. Those people deserve respect and recognition too.


You forget that his family became the largest shareholder of Disney, and not because Steve got Apple. He was VERY successful, to the point that he gave up taking anything but a private jet. That is billions, of course, but that is not success. What is?

And unlike v1, v2 seems better on a human level as well. We do not need a saint; he still parked in spaces reserved for the handicapped, I guess. But let us admit, it is not just one for all. It is all for one.


ISTR a tale of Legal keeping a large slush fund from which to pay off all the ex-Apple-employees that Steve 2.0 would straight tell to their face to fuck off. Just because that is what worked best for him†. :)

“But let us admit, it is not just one for all. But all for one.”

Damn straight. Epically focused leadership.

--

(† For all others who aspire to build their own businesses, there is HR procedure and askamanager.org—and do not for the life of you ever bypass either!)


>Epically focused leadership.

Just to support that, I remember hearing a story told by Larry Ellison (they were apparently neighbours for a while), where he would pop over to see Steve, and would be subjected to the 100th viewing of Toy Story, where Jobs was obsessively pointing out every new tiny improvement they'd made in the story or graphics.

Epically focused indeed.


“Those people deserve respect and recognition too.”

ORLY? Name them.

--

Not “great man”. Great vision.

Geeks tend massively to overrate the importance of technical aptitude, which is what they’re good at, and underrate everything else—business experience, sales skills, market savvy, and other soft skills—which they’re not.

Contrast someone like Jobs, who understood the technical side well enough to be able to surround himself with high-quality technical people and communicate effectively with them, but make no mistake: they were there to deliver his vision, not their own.

Tech-exclusive geeks are a useful resource, but they have to be kept on a zero-length leash lest they start thinking that they should be the ones in charge since they know more about tech than anyone else. The moment they’re allowed to get away with it, you end up with the tail-wagging-the-dog internecine malfunction that plagued Sculley’s Apple in the 90s and has to some extent resurfaced under Cook.

Lots of things happened under Jobs 2.0. That was NEVER one of them.

..

Case in point: Just take the endless gushing geek love for Cook-Apple’s Swift language. And then look at how little the iOS platform itself has moved forward over the 10 years it’s taken to [partly] replace ObjC with the only incrementally improved Swift. When NeXT created what is now AppKit, it was 20 years ahead of its time. Now it’s a good ten behind, and massively devalued to boot by the rotten impedance mismatch between ObjC/Cocoa’s Smalltalk-inspired model and Swift’s C++-like semantics.

Had Jobs not passed, I seriously doubt Lattner’s pet project would ever have advanced to the point of daylight. Steve would’ve looked at it and asked: How can it add to Apple’s existing investments? And then told Lattner to chuck it, and create an “Objective-C 3.0”; that is, the smallest delta between what they already had (ObjC 2.0) and the modern, safe, easy-to-use (type-inferred, no-nonsense) language they so pressingly needed.

..

Look, I don’t doubt eventually Apple will migrate all but the large legacy productivity apps like Office and CC away from AppKit and ObjC and onto Swift and SwiftUI. But whose interest does that really serve? The ten million geeks who get paid for writing and rewriting all that code, and have huge fun squandering millions of development-hours doing so? Or the billion users, who for years see minimal progress or improvement in their iOS app experience?

Not to put too fine a point on it: if Google Android is failing to capitalize on iPhone’s Swift-induced stall-out by charging ahead in that time, it’s only because it has the same geek-serving internal dysfunction undermining its own ability to innovate and advance the USER product experience.

--

TL;DR: I’ve launched a tech startup, [mis]run it, and cratered it. And that was with a genuinely unique, groundbreaking, and already working tech with the product potential to revolutionize a major chunk of a trillion-dollar global industry, saving and generating customers billions of dollars a year.

It’s an experience that has given me a whole new appreciation for what another nobody, starting out of his garage, and with his own false starts and failures, was ultimately able to build.

And I would trade 20 years of programming prowess for just one day of salesmanship from Steve Jobs’ left toe, and know I’d got the best deal by far. Like I say, this is not about a person. It is about having the larger vision and having the skills to deliver it.


Jobs was far more of a "tech guy" than either Sculley or Cook. He understood the technology very well, even if he wasn't writing code.

I would also say, Jobs had a far, far higher regard for technical talent than you do. He was absolutely obsessed with finding the absolute best engineering and technical people to work for him so he could deliver his vision. He recognized the value of Woz's talents more than Woz himself. He gathered the original Mac team. If he had, say, a random group of Microsoft or IBM developers, the Mac never would have happened. Same with Next, many of whom were still around to deliver iOS and the iPhone.

Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.


“Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.”

Postscript: You misread me. I understand where Jobs was coming from better than you think. But maybe I’m not explaining myself well.

..

When my old man retired, he was an executive manager for a national power company, overseeing the distribution network. Senior leadership. But he started out as a junior line engineer freshly qualified from EE school, and over the following three decades worked his way up from that.

(I still remember those early Christmas callouts: all the lights’d go out; and off into the night he would go, like Batman.:)

And as he later always said to engineers under him, his job was to know enough engineering to manage them effectively, and their job was to be the experts at all the details and to always keep him right. And his engineers loved him for it. Not least ’cos that was a job where mistakes don’t just upset business and shut down chunks of the country, they cause closed-coffin funerals and legal inquests too.

--

i.e. My old man was a bloody great manager because he was a damn good engineer to begin with. And while he could’ve been a happy engineer doing happy engineering things all his life he was determined to be far more, and worked his arse off to achieve it too.

And that’s the kind of geek Steve Jobs was. Someone who could’ve easily lived within comfortable geeky limitations, but utterly refused to do so.

’Cos he wanted to shape the world.

I doff my cap at that.


“Jobs was far more of a "tech guy" than either Sculley or Cook.”

Very true. “Renaissance Man” is such a cliché, but Steve Jobs really was one. Having those tech skills and interests under his belt is what made him such a fabulous tech leader and tech salesman; and without that mix he’d have just been one more Swiss Tony bullshit artist in an ocean of bums. (Like many here I’ve worked with that sort, and the old joke about the salesman, the developer, and the bear is frighteningly on the nose.)

But whereas someone like Woz loved and built tech for its own sake, and was perfectly happy doing that and nothing else all his life, Jobs always saw tech as just the means to his own ends: which wasn’t even inventing revolutionary new products so much as inventing revolutionary new markets to sell those products into. The idea that personal computers should be Consumer Devices that “Just Work”; that was absolutely Jobs.

And yeah, Jobs always used the very best tech talent he could find, because the man’s own standards started far above the level that most geeks declare “utterly impossible; can’t be done”, and he had ZERO tolerance for that. And of course, with the very best tools in hand, he wrangled that “impossible” right out of them; and the rest is history.

Woz made tech. Jobs made markets.

As for Sculley, he made a hash. And while Cook may be raking in cash right now, he’s really made a hash of it too: for he’s not made a single new market† in a decade, while Apple’s rivals—Amazon and Google—are stealing the long-term lead that Jobs’s pre-Cook Apple had worked so hard to build up.

--

(† And no, things like EarPods and TV programming do not count, because they’re only add-ons, not standalone products, and so can only sell as well as the iPhone sells. And the moment iPhone sales drop off a cliff, Cook’s whole undiversified house of cards collapses, and they might as well shut up shop and give the money back to the shareholders.)


I hear you, I do, but here's another perspective: Jobs without Wozniak wound up being California's third-best Mercedes salesman.

And neither of them would've mattered a jot if they were born in the Democratic Republic of the Congo, or if they were medieval peasants, or if Jobs hadn't been adopted, or or or ...

Luck is enormously influential. There are thousands of Jobsalikes per Jobs. Necessity isn't sufficiency.


I think Steve Jobs The Marketing and Sales Genius is an incorrect myth.

Jobs was an outstanding product manager who sweated all the details for his products. And in contrast to Tim Cook, Jobs was a passionate user of actual desktop and laptop computers. He sweated the details of the iPhone too, but his daily driver was a mac, not an iPad. Cook is less into the product aspect, and it really really shows. Cook is a numbers and logistics guy, but not really into the product.

That's a thing I think Apple has fixed recently with some reshuffling and putting a product person (Jeff Williams) in the COO role. The COO role is also a signal that he'll be the next CEO when Tim Cook retires.

To be clear, I don't disagree that Jobs was a great marketer. But that stemmed from his own personal involvement with the product design of the mac--and later the iOS devices--rather than some weirdly prodigious knack for marketing.


> You may have the finest tech on the planet—and that means precisely squat.

You shouldn't talk about Sun like that.


NeXTSTEP appears to have first gotten incorporated thoroughly into the OS X codebase. Browse through the Foundation library for the Mac - https://developer.apple.com/documentation/foundation/ . Everything that starts with NS was part of NextStep.


It didn't get 'incorporated'.

OSX/macOS/iOS is the latest evolution of NeXTStep/Mach which originated in the Aleph (and other) academic kernels.

(of course OS's evolve pretty far in a few decades...)

(https://en.wikipedia.org/wiki/Mach_(kernel))


My understanding was always that NeXTSTEP served as the foundation of OS X, and while it certainly got a new desktop environment and compatibility with MacOS's legacy Carbon APIs, it was essentially still NeXTSTEP under the hood.


Yes. That is all the NS... prefix meant.


I always thought that, too.

It's wrong.

Original NeXT classes were prefixed NX_. Then NeXT worked with Sun to make a portable version of the GUI that could run on top of other OSes -- primarily targeting Solaris, of course, but also Windows NT.

That was called OpenStep and it is the source of classes with the prefix NS_ -- standing for Next/Sun.

https://en.wikipedia.org/wiki/OpenStep#History

This is why Sun bought NeXT software vendor Lighthouse, whose CEO Jonathan Schwartz later became Sun's CEO.

Unfortunately for NeXT (and ultimately for Sun), right after this, Sun changed course and backed Java instead.


> Everything that starts with NS was part of NextStep.

Not quite. Everything in Foundation gets the NS prefix because it's in Foundation; only a fraction of it came directly from NeXT.


Yeah, Rhapsody > Mac OS X Server 1.0 > Mac OS X > iOS which was literally described as being "OS X" when it first launched.


Security from what? Do user accounts really provide much benefit in the personal computing space? Where the median user count is 1?

Neither OS had the kind of security that is really useful today for this usecase, which is per-application.


But a bunch of the methods we have for securing, say, mobile phones, grew out of user accounts.

Personally I don't know Android innards deeply, but when I was trying to back up and restore a rooted phone I did notice that every app's files have a different owner uid/gid, and the apps typically won't launch without that set up correctly. So it would seem they implemented per-app separation in this instance by having a uid per app.
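
For the curious, this is easy to spot from a rooted shell; here is a tiny hedged sketch (the package name is hypothetical, and reading /data/data needs root) that just prints the owner of an app's data directory:

    /* Hedged sketch: inspect the owner uid/gid of an app's data directory. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        const char *dir = "/data/data/com.example.app";   /* hypothetical package */
        if (stat(dir, &st) == 0)
            printf("%s is owned by uid=%d gid=%d\n",
                   dir, (int)st.st_uid, (int)st.st_gid);
        return 0;
    }

Each installed package's directory shows a distinct uid/gid pair, which is the per-app separation described above.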

Imagine a world where Google had chosen to build on a kernel that had spent many decades with no filesystem permissions at all. Perhaps they'd have to pay the same app compatibility costs that Microsoft did going from 9x to NT kernel, or changing the default filesystem to ACL'd-down NTFS.


Then you'd maybe get something like iOS, where the POSIX uid practically does not matter at all, and the strong security and separation is provided by other mechanisms like entitlements...

Someone else pointed out that BeOS allegedly had "quality and security" problems in general (I myself have no idea), so that may indeed have led to problems down the line, whereas BSD was pretty solid. But I agree with the OP and don't think POSIX security in particular is much of a factor today.


Yeah. Funny enough, if Apple had skipped OS X and gone directly to iOS, BeOS would have been a superior foundation. No uselessly mismatched security model or crusty legacy API baggage to clog up the new revolution in single-user always-online low-powered mobile devices.

Of course, that was back in the days when an entire platform from hardware to userland could be exclusively optimized to utterly and comprehensively smash it in just one very specific and precisely targeted market. Which is, of course, exactly what the iPhone was.

Just as the first Apple Macintosh a decade earlier eschewed not only multi-user and multi-process but even a kernel; every single bit and cycle of its being was exclusively dedicated to delivering a revolutionary consumer UI experience instead!

In comparison, NeXTSTEP, which ultimately became iOS, is just one great huge glorious bodge. “Worse is Better” indeed!

..

Honestly, poor Be was just really unlucky in timing: a few years too late to usurp SGI; a few too early to take the vast online rich-content-streaming world all for its own. Just imagine… a BeOS-based smartphone hitting the global market in 2000, complete with live streaming AV media and conferencing from launch! And Oh!, how the Mac OS and Windows neckbeards would’ve screamed at that! :)


On a similar note, I've often wondered what Commodore's OS would have turned into. Not out of some misplaced nostalgia, just curiosity about the Could Have Been.

My guess is that by now in 2020, it would have at some point had an OS X moment where Commodore would have had to chuck it out, since both Apple and Microsoft have effectively done exactly that since then. Still, I'd love to peek at an Amiga OS 9 descended from continual usage.


I think AmigaOS 3 could be a nice kernel as it is. And to make it more Unix-y, memory protection could be introduced, but only for new userland processes with more traditional syscalls.

It's a bit like what DragonflyBSD is slowly converging toward.


Amiga OS 9 would have looked very different from the Amiga OS that we know (I am talking from a developer's point of view, not about the GUI).

Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later. As far as I know, even Amiga OS 4 (which runs on PowerPC platforms) is not able to provide full memory protection.

There was also only minimal support for resource tracking (although it was originally planned for the user interface). If a process crashed, its windows etc. would stay open. And nobody prevented a process from passing pointers to allocated system resources (e.g. a window) to other processes.

The API was incomplete and tied to the hardware, especially for everything concerning graphics. This encouraged programmers to directly access the hardware and the internal data structures of the OS. This situation was greatly improved in Amiga OS 3, of course far too late; Amiga OS 3 was basically two or three years too late. As far as I know, Apple provided much cleaner APIs, which later greatly simplified the evolution of their OS without breaking all existing programs.

Finally, the entire OS was designed for single-core CPUs. At several places in the OS, it is assumed that only one process can run at a time. This doesn't sound like a big issue (could be fixed, right?) but so far nobody has managed to port Amiga OS to multi-core CPUs (Amiga OS4 runs on multi-core CPUs, but it can only use one core).

I have been the owner of an Amiga 500 and Amiga 1200, but to be brutally honest, I see Amiga as a one-hit wonder. After the initial design in the mid-1980s, development of the OS and the hardware basically stopped.


> Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later.

Why can't you do shared memory message passing with MMU protection? There is no reason an application in a modern memory protected OS can't voluntarily share pages when the use case is appropriate. This happens today. You can mmap the same pages, you can use posix shm, X has the shm extension...
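
A minimal sketch of what I mean, assuming an ordinary POSIX system (shm_open/mmap; the name is hypothetical and error handling is trimmed): two processes can deliberately share one page for message passing while the MMU keeps every other page private.

    /* Hedged POSIX sketch: share one page between cooperating processes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "/demo_msg";              /* hypothetical name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                         /* one shared page */

        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        strcpy(buf, "hello from a protected process");

        /* A second process that shm_open()s the same name and mmap()s it
           sees this message; everything else in either address space stays
           private, courtesy of the MMU. */
        printf("%s\n", buf);

        munmap(buf, 4096);
        close(fd);
        shm_unlink(name);
        return 0;
    }

(Older glibc may need -lrt to link shm_open.)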


Or just take a Docker-like approach, where each app thinks it is the only user and inter-app communication is where you put the security functionality.


But the predecessor to containers were features like having daemons chroot into somewhere else and drop their uid to something that can't do much. That very much grew out of the Unix solutions. If Unix daemons were written for decades assuming all processes have equal privilege maybe we wouldn't see that.


I think this sort of thing is a capabilities ladder in an arms race.

If you never evolved account based security, you never built the infra for even evaluating application permissions in the first place.


“Security” is a bit of a misnomer in this context: I think what you actually meant was “multi-user architecture” which, as remarked elsewhere, undergirds the whole notion of keeping processes from promiscuously sharing any and all resources.


yeah i think of it more as multi-tenant safety


Yes, in short - users & groups serve as a rudimentary implementation of capabilities. Best example is Android. But there's more to it.

Separating the admin user from non-admin users always has advantages, and I do it even on Windows.


Best counterexample to their point is iOS, though, where POSIX permissions don't play much of a role in securing the system and separating applications.


I do like that you have to “sudo” a program to allow it to access certain files. Even if I am the only user, it stops malicious programs from modifying certain files without me noticing.


Obligatory related xkcd: https://xkcd.com/1200/


Posting links to XKCD like this is generally considered to be a low quality post, hence the downvotes. I’m not one of the downvoters, but thought I’d share the reason as nobody else did.

Edit: gotta love HN! I try to be helpful to someone else that was downvoted to heck with an explanation of why that was the case (based on past replies I’ve seen) and now my post is the one with a negative score. Cheers y’all!


First rule about downvotes is we don't talk about downvotes.


Under the hood, though, there are multiple accounts which different applications use; the user might only log in with one, but applications are isolated from each other and the system because of it.


Security from malicious programs or exploits, accidentally altering system files and other device users.

We used to be able to trust intentionally installed programs not to exfiltrate data. It's sad that we still can't.


Wouldn't the median need to be over 1? I get your point but am feeling pedantic today.


If more than 50% of personal computers have 1 or 0 users, then the median would be 1, assuming 0 users is less common than 1, regardless of how many users the remaining computers had.


If more than (or equal to) half of computers are used by only one person, then the median user count is 1, no?


If you have 3 PCs in the world, one with 0 users, one with 1 user and one with 23 users the median is 1.

Median is literally the middle, like a highway.


They're just stating that having more than half of all computers with just 1 user guarantees that the median is 1.


No.

For example, suppose five computers have 1 user, 1 user, 1 user, 3 users, 300 users. The median is 1 user.

The claim of "median 1 user" just means more than half of computers has a single user.


> Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.

I love everything I've read about BeOS but to be honest I must mention I couldn't understand how to use Haiku (I've never used the original BeOS) once I tried it - it didn't feel intuitive at all. And I'm not really stupid, I've been using different flavors of Linux as a primary OS for over a decade.

> That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front.

Sometimes I miss the days of Windows 95 so much. I wish desktop OSes could be more simple, i.e. without multi-user and file access rights. When it's my own personal computer, all I want of it from the security perspective is to prevent others from unlocking it or recovering data from it, and to prevent any network communication except that which I authorized. Sadly Linux still doesn't even have a decent implementation of the latter (Mac has LittleSnitch).

Windows 9x did pretty well for me - I've never caught a virus, never corrupted a system file and it was easy to fix for others who did.


> I wish desktop OSes could be more simple, i.e. without multi-user and file access rights.

Have a look into Oberon and its successor A2/Bluebottle.

http://ignorethecode.net/blog/2009/04/22/oberon/

https://liam-on-linux.livejournal.com/46523.html


Security, networking, multiuser, i18n, print (don’t even begin to underestimate this, and Quartz and Display PostScript): BeOS was an RTOS with a neat UI. It was still fun, but there was a gigantic pile of work before it could do what System 7 did, let alone what NeXT did.


Additionally, NeXTStep had been in use in production on investment bank trading floors, in scientific institutions, and in military/intelligence agencies. It wasn't widely used, but it was used.

So while it might not have been quite ready for the median System 7 user's expectations, it was pretty solid.


Maybe so. But I have Mac OS 1.0 running on my MacBook. It is so slow and really doesn't work that well. Unlike Mac OS 9. It is not that smooth. Luckily he found the iPod ... even the colour one is very slow.


Also, the familial relations of MacOS and Linux made it possible to share code fairly seamlessly between both (provided we're not talking about hardware integration). In a world where there were 3 separate universes: Windows, BeOS, and Linux, it's possible Linux would've become more isolated.


BeOS had a regular Unix-like (even POSIX, IIRC) dev environment.

I was able to do most of the CS coursework projects normally done on my University's Sun workstations on BeOS instead. Most of these courses were data structures, algorithms, compilers, etc. projects in C, and not things that required platform-specific APIs.

But arguably, BeOS' overall model - a single-user desktop OS built on top of, but hiding, its modern OS underpinnings like memory protection and preemptive multitasking - is far more similar to what eventually became Mac OS X than Linux. Which isn't so surprising, since it was built by ex-Apple folks. Remember that consumer OSs before this point had no memory protection or preemptive multitasking.

Linux, though it had the same modern OS features, was far more closely aligned in spirit with the timeshared, multi-user Unix OS's like what ran the aforementioned Sun workstations (it's "Linus' Unix", after all).


BeOS had a POSIX-compliant layer, but under the hood it was totally different from a UNIX.

Also, let’s keep in mind that Windows 95 (released that same year) featured preemptive multitasking in a desktop user OS (albeit not a strong memory protection model), and Windows NT had been available for a couple of years by then (having first shipped in 1993, if memory serves) and was a fully ‘modern’ OS (indeed it serves as the basis for the later Windows), albeit with a comparatively large footprint.

I was an avid BeOS user (and coincidentally a NeXT user too) and I was enthralled by its capabilities, but in terms of system architecture it was a dead end.


IIRC the Unix compatibility layer had some pretty grotty warts. Porting Unix applications virtually always required fiddling to get them working, especially the network code.

Unfortunately this meant BeOS was perpetually behind the curve on stuff like the World Wide Web. I had a native FreeBSD build of Netscape long before Be managed to get a decent browser.


The Amiga had preemptive multitasking in the 80's. (No memory protection though.)


So did the Lisa even earlier (and Xenix, which was a derivative of Unix Version 7, anecdotally also seen on the Lisa).


Is that true? I see contradictory information about Lisa OS. Some posts claim it was cooperative, like the original Mac System. Example: https://macintoshgarden.org/apps/lisa-os-2-and-3


(A bit of research later:) It's actually a bit of a mixed bag. The "Operating System Reference Manual for the Lisa" [0] reads on pp. 1-3/1-4:

> Several processes can exist at one time, and they appear to run simultaneously because the CPU is multiplexed among them. The scheduler decides what process should use the CPU at any one time. It uses a generally non-preemptive scheduling algorithm. This means that a process will not lose the CPU unless it blocks. (…)

> A process can lose the CPU when one of the following happens:

> • The process calls an Operating System procedure or function.

> • The process references one of its code segments that is not currently in memory.

> If neither of these occur, the process will not lose the CPU.

In other words, non-preemptive, unless the OS becomes the foreground process, in which case it may block the active process in favor of another one currently in ready or blocked state.

[0] https://lisa.sunder.net/LOS_Reference.pdf


BeOS was as UNIX like, as Amiga was.

Surely it had a CLI, UNIX-like directory navigation and a couple of UNIX-like command line utilities.

But good luck porting UNIX CLI software expecting a full POSIX environment.

If I am not mistaken, Haiku has done most of the work regarding POSIX support.


It had a bash shell, and used glibc, and partially implemented POSIX.

I was also able to get most of my CS homework done in BeOS. But I definitely needed to keep FreeBSD around for when I hit a wall.


It was ok. Back when I ran BeOS as my primary OS (2001 or so) I built half a C++ web application on BeOS, and the other half on an HP-UX server logged in through an X terminal, using ftp to sync between the two. Not much support in the wider *nix ecosystem though, so anything big would often fail to build.

I regretted having to move away from BeOS, it was by far the most pleasant OS I’ve used, but the lack of hardware and software support killed it.


In college I wrote a web server in BeOS and ported it back to Linux, learning pthreads along the way. A bonus achievement was making it multithreaded, which I got for free, since BeOS makes you think architecturally as multithreaded-first.


AmigaOS was not UNIX-like in the least. Amiga UNIX, which shipped on a couple models, was directly System V UNIX, though.


That was the point I was trying to convey regarding BeOS.

Having a shell that looks like UNIX, and a couple of command line utilities similar to the UNIX ones, does not make an OS UNIX.


Ah. I gotcha now.


Hmm, possibly. But it could also have been to Linux's benefit. It would be alone among these in having the advantage of Unix heritage.


Yes, I remember so many software developers switching from Linux to OSX in the 2000's because "it's a Unix too, but it's shiny".


bounced between windows and os/2, never really used beos as an os, mostly just as a toy for fun. the one thing I remember is that I could play a video that for the time looked amazing without issue. I want to say I even played Quake on it, in a window!


Funny you should mention Windows 95. The company that sold that ended up doing pretty well.


Sure, but at the time Windows 95 was released, they already had a couple of Windows NT releases (3.1, 3.5, and 3.51). Windows NT was a different, more modern operating system than the Windows 95/98/ME line. So, they did not have to evolve Windows 95 into a modern operating system. After ME, they 'just' switched their user base to another operating system and made this possible through API/ABI compatibility (which is quite a feat by itself).


The company that sold classic Mac OS did, too.

But you have to consider what else was going on at the time: Microsoft was actively moving away from the DOS lineage. OS/2 had been in development since the mid-1980s, and, while that project came to an ugly end, they had also released the first version of Windows NT in the early '90s, and, by the late '90s, they were purposefully moving toward building their next-gen consumer OS on top of it.

Apple needed to be making similarly strong moves toward a multi-user OS with concerns like security baked in deeply. BeOS had the memory protection and the pre-emptive multitasking, which were definitely steps forward, but I don't think they would have taken Apple far enough to allow them to keep up with Microsoft. Which, in turn, would have allowed Microsoft to rest on its laurels, probably to the detriment of the Windows ecosystem.


Really? Most people I talk with these days seem to agree that the proprietary OS is a liability.


I’ve never heard anyone say Windows is a problem because it’s proprietary. I have heard that having to pay to upgrade is a pain because you (the company) have to budget for it. Even then, you would also need to budget for the downtime and time to verify that it works before deploying the update, and both those have to be done on Linux too (it’s why LTS releases are a thing).

Anyways, Windows 10 may have its problems, but Microsoft the company is doing pretty well. Their stock is up about 50% this year (200% over the past 5). And that’s not to mention the fact that they’ve open sourced .NET among many other things.


I interpreted them as saying it was a liability to Microsoft.


Outside HN and Reddit talks, most people I know don't even care about FOSS OSes existence, they just want something that they buy at the shopping mall and can use right away.


In fairness, I don't think most people care about the OS at all, FOSS or otherwise; they care that the UI is something they can use, and that their apps work. If you perfected WINE overnight, I'll bet you could sit 80% of the population down at a lightly-skinned FreeBSD box and they'd never know.


I don't even think you'd need that for most of the population: it's been quite some time since the median user cared about desktop software[1]. I switched my parents over to a Linux Mint install a decade ago when I went away to college, and it lowered my over-the-phone tech support burden to zero overnight.

I also had (non-CS but very smart) friends who switched to (ie dual-booted) Linux on their own after seeing how much better my system was than a Windows box. A decade later, one of them is getting her PhD in veterinary pathology and still dual boots, firing Windows up only when she feels like gaming.

[1] My impression is that committed PC gamers aren't a large portion of the desktop user population, but I may be wrong.


I know a decent number of people who have That One Program that they've been using for 20 years and can't/won't leave. It probably varies by population group.


It didn't kill them, though, which was my only point. I guess HN didn't think it was as funny as I did.


AYBABTU = All Your Base Are Belong to Us, which is a mangled or broken English translation of a Japanese phrase from the Japanese game `Zero Wing` [1]

[1] https://en.wikipedia.org/wiki/All_your_base_are_belong_to_us

Edit: removed the extra A in the acronym


Got an extra A in there


The anti competitive business practices of Apple make it hard to imagine the world could be worse.

Instead of competition, Apple survives off marketing medium quality products at high prices.

I'm not sure how that's good for anyone unless they own Apple stock.


You don't get to where Apple is (large market cap, high customer satisfaction scores, high reviews in the tech press, etc.) because of marketing. If it were that easy, companies would just copy their marketing or load up on marketing and they would be successful.

And a huge part of Apple's current success is based on the tech and expertise they got from NeXT. That work underpins not just laptops and desktops but phones, tablets, set-top boxes, and more.


Perhaps you only get to where Apple is with world-class marketing.

Apple's iPod wasn't the first mp3 player, and it for damn sure wasn't technically superior.

The iPhone was not the first smartphone, nor the first phone with a touchscreen, nor the first phone with a web browser, nor the first phone with an App Store. It arguably had a better UX than incumbents, but better UX doesn't win markets just by dint of existing.

The iMac was a cute computer that couldn't run prevalent Windows software and didn't have a floppy drive.

Recent MacBook Pros have an awful keyboard, not just aesthetically but with known hardware problems. I understand at long last they're reverting to an older, better design.

Tech and expertise don't win just because they exist.


You've left out the part where Apple makes products that have user experiences that are miles ahead of whatever existed at the time.


I'm as reflexively inclined as many technical people to be dismissive of marketing, but I don't think you're right here. You can't "just copy" marketing in the same way you can't "just copy" anything else a company is world-class in, and good marketing can indeed build market dominance (do you think Coca-Cola is really a vastly superior technical innovation over Pepsi?)

The fact that it isn't a net good for users in most cases doesn't mean that it's trivial to do.


> If it were that easy, companies would just copy their marketing or load up on marketing and they would be successful.

Maybe good marketing is really hard and you can't just "copy Apple"?


If people willingly exchange currency for products from a company and are satisfied with the value that they get out of it to the point that they become repeat customers, then how can you judge that no one except stockholders are benefitting?


Because Apple obviously sucks. I don't understand how hard it is for all their happy customers to understand that they suck. /s


Network/lock-in effects and negative externalities can easily have that result.


> negative externalities

This is very true. macOS and the iPhone, for me, went from being "obviously the very best of the best" to "the lesser of all evils".

When my 2015 rMBP finally gives up the ghost and / or when 10.13 loses compatibility with the applications I use, I have no idea what I'm going to do - probably buy another working 2015 rMBP used and pray that the Linux drivers are livable by then.

I know it's ridiculous, but it helps me fall asleep at night sometimes.


You don’t agree on the 16” being the spiritual successor of the mid 2015 15”?


I feel like it's a huge step in the right direction, but for my own personal use:

- I still have mostly USB 2.0 peripherals. I don't see that changing anytime soon.

- I'm still hung up on the MagSafe adapter.

- I love the form factor. The 13" display is the perfect size, for me. I could've switched to a 15" 2015 rMBP with better specs, but I hated how big it was.

- I have no interest in using any version of macOS beyond 10.13, at present.

I'm really glad that they brought the Esc key back, especially as a pretty serious vim user. I don't know, maybe I'm stuck in the past. I'm certain that many, many people are really enjoying the new Macbook Pro 16; I just really, really like this laptop. It's the best computer I've ever owned.


I'm in the same boat as the sibling poster (albeit with a 15" machine) and I'll add this:

- The TouchBar is terrible

I hope they'll bring back a non-TouchBar configuration when they release the "new" keyboard on a 13" MacBook Pro. I could live with both a 13" or 15" laptop, but right now the list of drawbacks is still 1-2 items too long.


Can? Sure. I would commend anyone that can make the case that this is the best explanation for Apple’s success as a whole though.


make it hard to imagine the world could be worse

This seems like a failure of imagination.

I'm not a huge Apple fan, but I lived through the Bad Old Microsoft of the '90s, and grew up on stories of IBM of the '80s.

Apple is nothing like them.


I was another former Be "power user." And I think that was probably accurate -- if you weren't in the "BeOS lifestyle" during the admittedly short window that it was possible, it's hard to understand how much promise it looked like it had. When I tell people I ran it full-time for over a year, they wonder how I managed to get anything done, but...

- Pe was a great GUI text editor, competitive with BBEdit on the Mac

- GoBe Productive was comparable to AppleWorks, but maybe a little better at being compatible with Microsoft Office

- SoundPlay was a great MP3 player that could do crazy things that I still don't see anything doing 20 years later (it had speed control for files, including playing backwards, and could mix files that were queued up for playback; it didn't have any library management, but BeOS's file system let you expose arbitrary metadata -- like MP3 song/artist/etc. tags! -- right in file windows; see the attribute sketch after this list)

- Mail-It was the second-best email client I ever used, behind the now also sadly-defunct Mailsmith

- e-Picture was an object-based bitmapped graphics editor similar in spirit and functionality to Macromedia's Fireworks, and was something I genuinely missed for years after leaving BeOS
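
Those file-window metadata columns were BFS extended attributes. A hedged sketch, written from memory of the BeOS/Haiku fs_attr.h API (the "Audio:Artist" attribute name is an assumption), of how an app could tag a file so Tracker can show the value as a column:

    /* Hedged sketch: attach an artist attribute to a file on BFS. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <fs_attr.h>
    #include <TypeConstants.h>

    int tag_artist(const char *path, const char *artist)
    {
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;
        /* Attribute name assumed; string type constant from the Support Kit. */
        fs_write_attr(fd, "Audio:Artist", B_STRING_TYPE, 0,
                      artist, strlen(artist) + 1);
        close(fd);
        return 0;
    }

Because the attribute lives in the file system itself, any app (or a Tracker query) can see it without a separate library database.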

And there were other programs that were amazing, even though I didn't use them: Adamation's video editor (videoElements? something like that), their audio editor audioElements, Steinberg's Nuendo, objektSynth, and two programs which are incredibly still being sold today: Lost Marble's Moho animation program, now sold by Smith Micro for Mac and PC, and the radio automation package TuneTracker (incredibly now being sold as a turnkey bundle with Haiku). Also, for years, there was a professional-grade theatre light/audio control system, LCS CueStation, that ran on BeOS -- and it actually ran Broadway and Las Vegas productions. I remember seeing it running at the Cirque du Soleil permanent installation at Disney World in Orlando.

At the time Apple bought Next rather than Be, I thought they'd made a horrible mistake. Given Apple's trajectory afterward, of course, it's hard to say that looking back. It's very possible that if they'd bought Be, they'd have gone under, although I think that would have less to do with technology than with the management they'd have ended up with (or more accurately, stayed with). But it's still an interesting "what if."


I actually toyed with the idea of starting a radio station based on the BeOS MP3 player + file system. The thought was to have a system without human DJs that used a simple web interface to gather "votes" for songs/genres, and to use the metadata in the file system to queue up the songs. If I remember correctly, BeOS also had a macro programming interface (like ARexx on the Amiga) that could be used to glue things together.

This made BeOS (and BeBox) a great product in my mind; the ability to use it in unexpected ways.


You may have hinted at it, but I think Apple's subsequent turnaround after acquiring Next was mainly due to their founder, Steve Jobs, coming back to Apple.


Jobs helped enormously, of course, but if Apple was still trying to sell classic MacOS in 2005 I'm not sure even Steve Jobs could have kept them afloat long enough to ship an iPhone.


But the choice wasn't Next vs MacOS, it was BeOS vs Next.


That's true, but most keep forgetting that even before his comeback, Apple was very close to filing for bankruptcy, and who knows what would have happened without the intervention from Gates. Microsoft was the only juggernaut whose fate was never doubted in the 1980s - 1990s.

NeXT's hardware also failed, but NeXT was the rightful choice for Apple over Be, due to getting NeXTSTEP and Jobs again. But even after the war is over, we're now all generals.


When was Sun close to collapsing? I want to learn more about this history.


Absolutely. Even though I honestly think Gil Amelio gets dumped on more than he deserves, I doubt he could have really saved them in the long run.


NetPositive was much better at Internet Explorer compatibility than Netscape, back when that used to matter.


> Pe was a great GUI text editor, competitive with BBEdit on the Mac

Why wasn't it called BeBeEdit?!


wow, this list is bringing back all the memories. You're right, there was a short wave of enthusiasm, and some apps which were actually very innovative for the time. I remembered that it was actually used in some pro audio and lighting stuff, but I'd forgotten most of those apps. I remember playing around with Moho.

what's the name of the 2D illustration software that modelled real wet paint brushes and textured pencils? That was unlike anything I'd seen at the time; I remember putting quite a lot of effort into finding a compatible Wacom.


Gosh. I remember the illustration program you're talking about but can't remember its name, either. :) I was surprised that it seemed to take so long for that concept to show up on other platforms, though -- other than Fractal Design Painter, it didn't seem like anyone on the Mac or Windows was really trying for that same kind of "real ink and paper" approach.


One of my favorite anecdotes about BeOS was that it had a CPU usage meter[1], and on the CPU meter there were on/off switches for each core. If you had two cores and turned one off, your computer would run at half speed. If you turned both off, your computer would crash. Someone once told me that this was filed as a bug against the OS and the response was "Works As Intended" and that it was the expected behavior.

(These are fuzzy memories from ~25 years ago. It would be nice if someone could confirm this story or tell me if it's just my imagination.)

[1]: http://www.birdhouse.org/beos/8way/8way-1.jpg


The CPU monitor program was called Pulse and early versions allowed you to turn all the processors off and crash the machine. I think it was fixed in 3.something or 4.0.

The 8-way PIII Xeon was a Compaq someone tested BeOS on before it went into production. I remember it being posted on some BeOS news site. There should be another screenshot or two with 25 AVI files playing and a crapload of CPU-hungry programs running at once. An impressive feat circa 2000. Edit: browse the screenshot directory for the other two. Amazing they survived time, the internet, bit rot and my memory: http://birdhouse.org/beos/8way/

The BeOS scheduler prioritized the GUI and media programs so you could load the machine down to 100% and the GUI never stuttered and windows could be smoothly moved, maximized and minimized at 100% CPU. Rather, your programs would stutter. And everything was given a fair chance at CPU time.

Very nice design, and the OS was built from the ground up for multimedia and threading for SMP. It was a real nice attempt at building a next-generation desktop OS. It had no security even though it had basic POSIX compatibility and a bash shell. Security bits meant nothing.


I remember circa 2000 being able to simultaneously compile Mozilla, transfer DV video from a camcorder into an editor, check email, and surf the web on a dual Pentium Pro system with no hint of UI stutter or dropped frames in the firewire video transfer. It was at least another decade before SSDs and kernel improvements made that possible on Linux, Windows, or OS X.


The tradeoff was that the throughput of your compilation was terrible. BeOS wasn't magic, it just prioritized the UI over all else. That's not advanced, it's just one possible choice.

MacOS prior to OS X had the same property: literally nothing else could happen at the same time if the user was moving the mouse, which is why you had to take the ball out of the mouse before burning a CD-R on that operating system.


Oh, sure, it was obviously limiting the other tasks. The point was that this is almost always the right choice for a general purpose operating system: no user wants to have their music skip, UI go unresponsive, file transfers to fail, etc. because the OS devoted resources to a batch process.

You’re only partially correct about classic macOS: you could definitely hang the OS by holding down the mouse button but this wasn’t a problem for playing music, burning CD-Rs, etc. in normal usage unless you had the cheapest of low-end setups because a small buffer would usually suffice. I worked with a bunch of graphic designers back then and they didn’t get coasters at a significant rate or more than their Windows-using counterparts, and they burned a lot of them since our clients couldn’t get large files over a network connection faster than weeks.


You can downplay it all you want, but it was a really nice OS for its time. Its smooth GUI was very competitive with the other clunky windowing systems of the time. The advanced part was that threading and SMP support were woven into the system API, making SMP development a first-class programming concept. Other operating systems felt like threading was bolted on and clunky. And thanks to the SMP support, prioritizing the GUI made 100% sense. And I believe there were some soft real-time abilities in the scheduler so processes with high priority ran reliably.


Thanks for this. I remember being at MacWorld and watching a movie play while holding down menu items. On Classic Mac, which I was used to, this would block the entire OS (almost). BeOS seemed space-age.


Oops, probably too late but the memorial day videos were included with BeOS. It was a bunch of the Be employees tossing a few broken monitors off the roof of their office building. https://www.youtube.com/results?search_query=beos+memorial+d...


Reminds me of a game called NieR:Automata. You play as an android and the skill/attribute-system is designed as a couple of slots for chips. There were chips for things like the minimap and various other overlays along with general attributes, so if you decided you want to exchange your experience gauge for another chip with more attack speed, you could totally do that.

Among these chips was one called "OS chip" you had from the very beginning. If you'd try to replace that or simply exchange it for another one you "died" instantly and were greeted by the end-credits.


It also has an `IsComputerOn` system call.


int IsComputerOn() { return 1; }

??


if i recall correctly, if the computer is off the return value is undefined.


When I started University in 2000, I had a quad-boot system: Win98, Win2000, BeOS 5 and Slackware Linux (using the BeOS bootloader as my primary because it had the prettiest colors). I mostly used Slackware and Win98 (for games), but BeOS was really neat. It had support for the old Brooktree video capture cards, could capture video without dropping frames like VirtualDub often did, and it even had support for disabling a CPU on multi-CPU systems (I only saw videos of this; never ran BeOS on an SMP system).

I wish we had more options today. On modern x86 hardware, you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD if you replace your Wi-Fi card with an older one (or MacOS if you're feeling Hackintoshy .. or just buy Apple hardware). I guess three is kinda the limit you're going to hit when it comes to broad support.


I think BeOS was the only OS that allowed smooth playback of videos and work at the same time, something Windows was capable of 5 years later and Linux 10 years later :D


Bluebottle OS (written in Active Oberon, a GC enabled systems programming language) was also capable of it.

http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?...


Ok, so this is the big BeOS thing I've heard.

What technically enabled this on such limited hardware? Was it the lack of security/containerization/sandboxing that made OS calls much faster and context switches better?


Other people mentioned the real preemptive scheduling (and the generally better worst-case latency), but another factor was the clean design. The other operating systems tended to be okay in the absence of I/O contention, but once you hit the capacity of your hard drive you would find out that e.g. clicking a menu in your X11 app did a bunch of small file I/O in the background which would normally be cached but had been pushed out, etc. A common mitigation in that era was having separate drives for the operating system, home directory, and data so you could at least avoid contention for the few hundred IOPS a drive could sustain.


Yes. This always amazed me with BeOS. It would play 6 movies simultaneously, making my PC very slow but still responsive. As if the framerate just went down.


Bear in mind that resolutions back then were much lower than now, and not all computers had 24 bit color frame buffers. Video cards ran one monitor for the most part, with no others attached.

Be had well written multi threading and preemptive multitasking implemented on a clean slate - no compatibility hacks required. That meant it worked well and was quick/responsive. There were still limits, and the OS didn't have many security protections that would get written in today.


I mean maybe. I was running 1600x1200 on my monitor back then.


Some people were, but it wasn't too common. Workstations had far higher resolutions long before this, but home PCs running non 3d accelerated hardware were still mostly 1024x768-ish.

The BeBox itself was vastly different hardware than a standard PC as well, so it could break a lot of rules as far as smooth concurrency and multitasking... kinda like the Amiga did.


Also we had actual 32bit colour, not this 24 + 8 alpha. You could actually look at a rainbow gradient without banding.


2048x1536 was already a thing as well.


Yup, had a 22” Mitsubishi monitor that could do that resolution in ~2002. Everyone would pick on me about the text being so small, but I’d let them sit at my desk squinting and I’d stand ten feet back and read the screen with ease as they struggled. The monitor was a beast though, around 70lbs if memory serves.

Edit:

Pretty sure this is the monitor: https://www.necdisplay.com/documents/ColorBrochures/RDF225WG...


1280x1024, not so far from 1920x1080.


That was more the exception than the rule. Besides, 1080p is about 58% more pixels per frame than 1280x1024, and likely at a higher frame rate. Big difference in hardware load.


I think it was their thread/process scheduler. It had a section of priorities which got hard realtime scheduling, then lower priority stuff got more "traditional" scheduling. (Alas, I don't know too much about thread/process scheduling so the details elude me.) That way the playback threads (and also other UI threads such as the window system) got the timeslices they needed.
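A rough sketch of what that looked like from the application side (spawn_thread and the priority constants are the real Be/Haiku API; the playback loop body is just a placeholder):

    #include <OS.h>

    static int32 playback_loop(void* /*data*/)
    {
        // decode and queue audio/video buffers here
        return 0;
    }

    int main()
    {
        // B_REAL_TIME_PRIORITY threads get the scheduler's hard real-time
        // treatment; B_NORMAL_PRIORITY is ordinary work.
        thread_id tid = spawn_thread(playback_loop, "media playback",
                                     B_REAL_TIME_PRIORITY, NULL);
        resume_thread(tid);

        status_t result;
        wait_for_thread(tid, &result);
        return 0;
    }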


Isn't giving near-real-time priority scheduling to audio/video how Windows handles things these days? I think I read that somewhere last week in a discussion about Linux kernel scheduler responsiveness.


Real pre-emptive multitasking? I seem to recall that was one of the huge differentiating factors of it against Mac / Windows.


Amiga did this in 1985. It's just that for compatibility reasons Apple couldn't do this. Even funnier: the fastest hardware to run old MacOS (68k version) on: an Amiga computer.


A non-Apple computer? Was "Hackintosh" (presumably under a different name) a thing back then?


You didn't need to build a Hackintosh. You could buy legal, officially-licensed Mac clones back then.


Ah yeah, I still have my PowerComputing PowerTower Pro! At the time it was a current model, its 512MB of RAM was insane, and my friends & classmates were jealous! hahah :)


Check out this video[0], basically an Amiga with an accelerator card potentially makes for the fastest environment to run 68k-based Mac OS (System 7) ...

[0] https://www.youtube.com/watch?v=Jph0gxzL3UI


Whoa, the Amiga is faster even though it's running in a VM!


Well, it's more akin to something like Wine where it's not exactly a virtual machine, since the processor instructions are the same. Tho that's about the extent of my understanding.. haha


I sometimes used my Atari ST with an emulator called Aladin. "Cracked" to work without Mac ROMs. But wasn't really useful to me because of lack of applications (at the time).

IIRC there were solutions like this for the Amiga too.


Linux could do that in 2001 just fine, and without crashing like Windows. XV was amazing. So was MPlayer.


> without crashing like Windows

That depended _very_ heavily on your graphics card at the time. In 2001, I could get X to crash on my work computer if I shook my mouse too fast. At home on my matrox card, yes, it was rock stable.


Nvidia TNT2/Geforce2 MX later.

EDIT: Also, Slackware was rock solid and it crashed far less than SuSE/Mandrake.


High definition playback is still not as smooth as it could be in browsers on Linux (or if your CPU is fast enough, it will drain your battery more quickly), because most browsers only have experimental support for video acceleration.

https://wiki.archlinux.org/index.php/Hardware_video_accelera...


Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power). The only exceptions I can think of are early generation Atom processors, which were terribly slow.


> Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power).

The point is that modern GPUs have hardware decoding for common codecs, and will use far less power than CPU decoding. But the major browsers on Linux (Firefox and Chrome) disable hardware decoding on Linux, because $PROBLEMS.

So, you end up with battery draining CPU-based 1080p decoding. And even more battery draining or choppy 4k decoding.


I found Wayland more capable than xwindows in this regard


In 2001 I could still hang X Windows as much as I liked.

And it still happens occasionally, while my Windows 10 userspace drivers just reinitialize instead of locking the OS.


I think most hangs on Linux today are caused by memory pressure when the system runs low on RAM, see this post: https://news.ycombinator.com/item?id=20620545

No swap, some swap, huge swap, swappiness=0, zram - nothing helped me.


Linux could do that only if your system was lightly loaded. Once you started to have I/O contention, none of the available kernel schedulers could reliably avoid stuttering.


I had this experience too, my video card was so shitty that I wasn't able to watch 700mb divx videos in windows, I had to boot into linux and use mplayer.


Don't forget about xanim


Windows 2000 was rock solid stable, with uptime measured in "how long do I keep running before I get bored of bragging about my uptime?"

No comment in regards to the 9x line. ;)


BeOS would also let you do that while playing them backwards; useless, but a nice demo of the multimedia capabilities of the OS.


This would be challenging with modern codecs using delta frames. The only way I can see it working is precomputing all frames from the preceding keyframe. Doable, but a decent amount of effort for a fairly obscure feature.
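Something like this, roughly (the Decoder type here is a made-up stand-in, not any real codec API):

    #include <cstdint>
    #include <deque>
    #include <vector>

    struct Frame { int64_t index; std::vector<uint8_t> pixels; };

    struct Decoder {                        // stand-in for a real decoder
        int64_t pos = 0;
        void SeekToKeyframeBefore(int64_t i) { pos = (i / 250) * 250; }  // assume a keyframe every 250 frames
        Frame DecodeNext() { return Frame{pos++, {}}; }
    };

    // Decode forward from the nearest keyframe up to `target`; reverse
    // playback then just walks the cache from back to front.
    std::deque<Frame> PrecomputeGop(Decoder& dec, int64_t target)
    {
        std::deque<Frame> cache;
        dec.SeekToKeyframeBefore(target);
        while (dec.pos <= target)
            cache.push_back(dec.DecodeNext());
        return cache;
    }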

But did video formats back then use delta frames?


I never saw BeOS do that with video, but I heard it do it with MP3 files. SoundPlay was a kind of crazy bananas MP3 player -- it could basically act as a mixer, letting you not only "queue up" multiple files but play each of them simultaneously at different volume levels and even different speeds. I've still never seen anything like it outside of DJ software.


The thing also booted up faster than you could blink.

Of course that might have changed if they added more system services, but from POST screen to a usable desktop was easily under 15 seconds.


> you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD [...]

That sounds just as good? Compared to quad-booting Win98/Win2000/BeOS5/Slackware, today you could quad-boot Win10/FreeBSD/OpenBSD/Ubuntu. Actually, depending on what you count as different systems and what exact hardware you have, you could have 2 laptops sitting on your desk: a pinebook running your choice of netbsd, openbsd, freebsd, or some linux (https://forum.pine64.org/forumdisplay.php?fid=107), and an x86 laptop multibooting Windows 10, Android, Ubuntu GNU/Linux, Alpine Busybox/Linux, FreeBSD, OpenBSD, NetBSD, and Redox (https://www.redox-os.org/screens/). That's 2 processor families in 2 machines running what I would count as 4 and 8 operating systems each.

I think we're doing fine.


There also used to be other CPU architectures--though even at the time, enough people complained about "Wintel" that maybe it was obvious that the alternatives weren't ever going to catch on.


People complained about "Wintel" because the 32-bit x86 chips were so fast and cheap they destroyed the market for RISC designs and killed existing RISC workstation and server architectures, like SPARC and HPPA and MIPS.

By the time the Pentium came around, the future looked like a completely monotonous stretch of Windows NT on x86 for ever and ever, amen. No serious hardware competition, other than Intel being smart enough to not kill AMD outright for fear of antitrust litigation, and no software competition on the desktop, with OSS OSes being barely usable then (due to an infinite backlog of shitty hardware like Winmodems and consumer-grade printers) and Apple in a permanent funk.

We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.

[1] https://en.wikipedia.org/wiki/Next-Generation_Secure_Computi...

Microsoft and Intel really seemed permanent back then. I wonder if that's how people felt about IBM back in the 1950s.


Microsoft went out of their way to prohibit computer manufacturers, namely Dell, from bundling BeOS:

https://www.osnews.com/story/681/be-inc-files-suit-against-m...


To Microsoft's credit, the early Windows NT versions were multiplatform. I remember that my Windows NT 4.0 install CD had x86, Alpha, PowerPC, and MIPS support.


The other thing people forget, which is still a bit incomprehensible to me, is that the multiple Unix vendors were saying they'll migrate to Windows NT on IA-64.


I don't know if it's true or not, but I've long blamed Microsoft for killing SGI (Silicon Graphics Inc).

MS worked with SGI on a project known as Fahrenheit - to unify Direct3D and OpenGL:

https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API)

...well, we all know what happened - but I've often thought that Microsoft hastened their demise.

Somewhere in there, of course, was also the whole SGI moving away from IRIX (SGI's unix variant) to Windows NT (IIRC, this was the Visual Workstation line) - there being some upset over it by the SGI community. Maybe that was part of the "last gasp"? I'm sure some here have better info about those times; I merely watched from the sidelines, because I certainly didn't have any access to SGI hardware, nor any means to purchase some myself - waaaaay out of my price range then and now.

Of course - had SGI not gone belly up, I'm not sure we'd have NVidia today...? So maybe there's a silver lining there at least?


They couldn't afford to compete with Intel on processors... they just didn't have the volumes and every generation kept getting more expensive. For Intel, it was getting relatively cheaper thanks to economies of scale since their unit volumes were exploding throughout the 90's. Also, Intel's dominance in manufacturing process kept leapfrogging their progress on the CPU architecture front.


Perhaps Digital UNIX and HP/UX but HP/Compaq was a collaborator on IA-64. I don't think I heard of SUN or IBM saying that.


Sgi is another prominent example cited elsewhere in the thread.


Indeed, it probably killed SGI.


I think there was some token POSIX compatibility in Windows NT back then. Probably for some government contracts.


It actually worked pretty nicely - if anything better back in those days when software expected to run on different unixes, before the linux monoculture of today.


It also did in PC hardware, unless you failed to notice the trend toward laptops and 2-in-1 hybrids.

Desktops are now a niche product, where buying cards has to be done mostly online, with most malls having only laptops and 2-in-1s on display.

Servers are away in some cloud, running virtualized OSes on top of a level-1 hypervisor.


> We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.

I agree that phones are more locked down than desktops/laptops nowadays, but it's worth pointing out that neither Microsoft or Intel are really winners in this area. They both still are doing fairly well in the desktop/laptop in terms of market share though.


I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects. Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together. They each wanted to separately dominate their part of the industry and both largely succeeded, but MS would have been just as happy selling Windows NT for SPARC/Alpha/PowerPC workstations and Intel would have been just as happy to have Macs or BeBoxes using their chips.


> I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects.

True. I've always regarded "Wintel" as more descriptive than accusatory. It's just a handy shorthand to refer to one specific monoculture.

> Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together.

Right. They both happened to rise and converge, and it's humanity's need to see patterns which turns that into a conspiracy to take over the world. They both owe IBM a huge debt, and IBM did what it did with no intention of being knocked down by the companies it did business with.


OS X was around in the days of XP and Linux was perfectly usable on the desktop.

A few years earlier things were a little more bleak.


> OS X was around in the days of XP and Linux was perfectly usable on the desktop.

> A few years earlier things were a little more bleak.

I admit I was unclear on the time I was talking about, and probably inadvertently mangled a few things.

As for Linux in the XP era, I was using it, yes, but I wouldn't recommend it to others back then because it still had pretty hard sticking points with regards to what hardware it could use. As I said, Winmodems (cheap sound cards with a phone jack instead of a speaker/microphone jack, which shove all of the modem functionality onto the CPU) were one issue, and then there was WiFi on laptops, and NTFS support wasn't there yet, either. I remember USB and the move away from dial-up as being big helps in hardware compatibility.


Yeah Wifi on Linux sucked in those days. For me that was the biggest pain point about desktop Linux. In fact I seem to recall having fewer issues with WiFi on FreeBSD than I did on Linux -- that's pure anecdata of course. I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.


> I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.

ndiswrapper. It's almost a shibboleth among people who were using Linux on laptops Way Back When.

https://en.wikipedia.org/wiki/NDISwrapper

> NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64.

[snip]

> When a Linux application calls a device which is registered on Linux as an NDISwrapper device, the NDISwrapper determines which Windows driver is targeted. It then converts the Linux query into Windows parlance, it calls the Windows driver, waits for the result and translates it into Linux parlance then sends the result back to the Linux application. It's possible from a Linux driver (NDISwrapper is a Linux driver) to call a Windows driver because they both execute in the same address space (the same as the Linux kernel). If the Windows driver is composed of layered drivers (for example one for Ethernet above one for USB) it's the upper layer driver which is called, and this upper layer will create new calls (IRP in Windows parlance) by calling the "mini ntoskrnl". So the "mini ntoskrnl" must know there are other drivers, it must have registered them in its internal database a priori by reading the Windows ".inf" files.

It's kind of amazing it worked as well as it did. It wasn't exactly fun setting it up, but I never had any actual problems with it as I recall.


Yeah, I know what ndiswrapper is (though admittedly I had forgotten its name). I should have been clearer in that I meant I was constantly amazed that such a tool existed in the first place, and doubly amazed that it was reliable enough for day-to-day use.


Oh man! I first tried BeOS personal edition when it came on a CD with Maximum PC magazine. (Referring to same demo CD, though the poster is not me: https://arstechnica.com/civis/viewtopic.php?f=14&t=1067159&s.... Also, how crazy is it that Ars Technica’s forums have two decade old posts? In 2000, that would be like seeing forum posts from 1980.) I remember being so happy when we got SDSL, and I could get online from BeOS. (Before that, my computer had a winmodem.)

BeOS was always a very tasteful design. And well documented! I learned so much about low-level programming from the Be Newsletters: https://www.haiku-os.org/legacy-docs/benewsletter/index.html. The BONE article is a great introduction to how a network stack works: https://www.haiku-os.org/legacy-docs/benewsletter/Issue5-5.h.... I still have a copy of Dominic Giampaolo’s BeFS book somewhere.

BeOS was very much a product of its time. (Microkernel, use of C++, etc.) What would a modern BeOS look like? My thought: use of a memory and thread safe language like Rust for the main app-level APIs. (Thread safety in BeOS applications, where every window ran in its own thread, was not trivial.) Probably more exokernel than microkernel, with direct access to GPUs and NICs and maybe even storage facilitated by hardware multiplexing. What else?
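On the thread-safety point, a minimal sketch of what the per-window threading meant in practice (BApplication/BWindow/Lock are the real Be API; the signature and window contents here are made up):

    #include <Application.h>
    #include <Rect.h>
    #include <Window.h>

    int main()
    {
        BApplication app("application/x-vnd.example-hello");   // made-up signature

        BWindow* win = new BWindow(BRect(100, 100, 400, 300), "Hello",
                                   B_TITLED_WINDOW, 0);
        win->Show();                  // the window now runs its own message loop thread

        if (win->Lock()) {            // required before touching it from any other thread
            win->SetTitle("Hello from another thread");
            win->Unlock();
        }

        app.Run();                    // block in the app's own message loop
        return 0;
    }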


> What would a modern BeOS look like?

Haiku. (0)

But if you change your question to "What would a modern OS look like?"

Fuchsia. (1)

The only relationship that they have is that a kernel engineer called Travis Geiselbrecht designed NewOS (Haiku's modified kernel) and Zircon (Fuchsia's Microkernel).

[0] https://haiku-os.org

[1] https://fuchsia.dev


> In 2000, that would be like seeing forum posts from 1980.

That's the premise of Jason Scott's project launched in 1998 :) http://textfiles.com/


He also livestreams on Twitch; recently he's been streaming the archival of Apple II floppies.

https://www.twitch.tv/textfilesdotcom


> What would a modern BeOS look like?

There's a bit of BeOS in Android. Binder IPC is much like BMessage. And nowadays everyone puts stuff like graphics and media in separate user-space daemons, which was unusual for the time. Pervasive multithreading basically happened in the form of pervasive multiprocessing.
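Roughly what that looked like on the Be side (BMessage/BMessenger are the real classes; the target signature here is made up):

    #include <Message.h>
    #include <Messenger.h>

    void SendHello()
    {
        BMessage msg('helo');                         // four-character message code
        msg.AddString("text", "hello from another app");
        msg.AddInt32("count", 1);

        // Delivered into the receiving app's message loop,
        // much like a Binder transaction.
        BMessenger target("application/x-vnd.example-receiver");   // made-up signature
        target.SendMessage(&msg);
    }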



BeOS did not use a microkernel, it was a straight-up up boring monolithic kernel.


People at the time described the kernel as a microkernel, including the O’Reilly book: https://www.oreilly.com/openbook/beosprog/book/ch01.pdf. It ran networking, for example, in user space.


Haiku OS still uses C++.


I installed BeOS a long time ago on a PC. It was something ahead of the times.

I still remember how incredible the rotating cube demo was, where you could drag and drop images and videos onto the cube faces... it worked without a glitch on my Pentium.

Just found out the demo video shows the application with a GL wave surface playing a video over it: https://youtu.be/BsVydyC8ZGQ?t=1074


Agreed, I remember trying BeOS in the late 90s and I felt the way Tesla fans report feeling about their cars - "it just feels like the future".

The responsiveness of the UI was like nothing I'd ever seen before. Unfortunately BeOS fell by the wayside, but I have such fond memories I keep meaning to give Haiku a shot.


By all the stars in heaven, that was an impressive demo.


It's about making a virtue of a necessity.

When Be wrote that demo the situation is that the other operating systems you might plausibly choose all have working video acceleration. Even Linux has basic capabilities in this area by that point. BeOS doesn't have that and doesn't have a road map to get it soon.

So, any of the other platforms can play full-resolution video captured from a DVD, for example (a use case actual people have), on a fairly cheap machine, and BeOS won't be able to do that without a beast of a CPU because it doesn't even have hardware colour transform acceleration or chromakey.

But - 1990s hardware video acceleration can only play one video at a time, because "I want to play three videos" isn't a top ask from actual users. So, Be's demo deliberately shows several different postage stamp videos instead of one higher resolution video, as the acceleration is no help to competitors there.

And then since you're doing it all in software, not rendering to a rectangle in hardware, the transform to have this low res video render as one side of a cube or textured onto a surface makes it only very slightly slower, rather than being impossible.

Audiences come away remembering they saw BeOS render videos on a 3D surface, and not conscious that it can't do full resolution video on the cheap hardware everybody has. Mission success.


BeOS R4.5 did have hardware accelerated OpenGL for 3dfx Voodoo cards. I played Quake 2 in 1999 with HW OpenGL acceleration. For R5 BeInc wanted to redo their OpenGL stack, and the initial prototypes seeded to testers actually had more FPS on BeOS than under Windows.


Eh, multithreaded decoding could help a lot. And by the time DVD video became common in computers (and the PS2), most people had a Pentium III 450MHz at home, which was more than enough for DVD video with an ASM-optimized video player such as MPlayer and a good 2D video card.

2D acceleration was more than enough.

http://rudolfs-place.nl/BeOS/NVdriver/3dnews.html

On Linux you didn't need OpenGL, just Xv.

Source: I was there, with an Athlon. Playing DVD's.


Impressive. But it makes me think how far we've come; it's long been possible to do a rotating cube with video using pure HTML and CSS.


Remember when the Amiga bouncing ball demo was impressive? Ironically 3D graphics ended up being the Amiga's specific achilles heel once Doom and co came on the scene.


That's curious to me. Doom is specifically not 3D. Was it a publishing issue (that Doom and co weren't produced for the Amiga), or a power issue, or something else?


The Amiga had planar graphics modes, while the PC/VGA cards had chunky mode (in 320x200x256 color mode).

It means that, to set the color of a single pixel on the Amiga, you had to manipulate bits at multiple locations in memory (5 locations in 32-colour mode), while for the PC each pixel was just one memory location; in chunky mode you could just do something like videomem[320*y+x]=158 to set the pixel at (x,y) to color 158, where videomem would point directly to the graphics memory (at address 0xa0000) -- it really was the simplest graphics mode to work with!

If you just copied 2D graphics (without scaling/rotating) the Amiga could do it quite well using the blitter/processor, but 3D texture mapping was more challenging because you constantly read and write individual pixels (each pixel potentially requiring 5 memory reads/writes on the Amiga vs. 1 on the PC).
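To make that concrete, a rough illustration (assuming a simple contiguous bitplane layout, which glosses over the Amiga's real interleaving options):

    #include <cstdint>

    // VGA mode 13h ("chunky"): one byte per pixel, one write.
    void put_pixel_chunky(uint8_t* videomem, int x, int y, uint8_t color)
    {
        videomem[320 * y + x] = color;
    }

    // 32-colour planar: one bit per pixel in each of 5 bitplanes, so a
    // single pixel means a read-modify-write at 5 separate locations.
    void put_pixel_planar(uint8_t* planes[5], int x, int y, uint8_t color)
    {
        int     byte = y * (320 / 8) + x / 8;
        uint8_t mask = 0x80 >> (x & 7);
        for (int p = 0; p < 5; p++) {
            if (color & (1 << p)) planes[p][byte] |=  mask;
            else                  planes[p][byte] &= ~mask;
        }
    }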

Doom's wall texture mapping was affine, which basically means scaling+rotation operations were involved. The sprites were also scaled. Both operations a problem to the Amiga.

As software based 3D texture mapping games became the new hot thing in 1993-1997, the Amiga was left behind. Probably wouldn't have been a problem if the Amiga has survived until the 3D accelerators in the late 90s.

This is quite well described elsewhere. Google is your friend if you want to know more! :-)


Also Amiga didn’t have hardware floating point whereas DX series of PCs in the 90s did. Essential for all those tricky 3D calculations and texture maps.


No. Hardware floating point was _Quake_

Quake has software full 3D which runs appallingly if you can't do fast FP, it's targeting the new Pentium CPUs which all have fast FPUs onboard, it runs OK on a fast 486DX but it flies on a cheap Pentium.

Doom is just integer calculations, it's fixed point math.
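For the curious, Doom's fixed_t really is a 32-bit integer with 16 fractional bits; a minimal sketch of the idea (not Doom's actual code):

    #include <cstdint>
    #include <cstdio>

    typedef int32_t fixed_t;
    const int     FRACBITS = 16;
    const fixed_t FRACUNIT = 1 << FRACBITS;        // 1.0 in 16.16 fixed point

    fixed_t FixedMul(fixed_t a, fixed_t b)
    {
        // multiply in 64 bits, then drop the extra fraction bits
        return (fixed_t)(((int64_t)a * b) >> FRACBITS);
    }

    int main()
    {
        fixed_t half  = FRACUNIT / 2;              // 0.5
        fixed_t three = 3 * FRACUNIT;              // 3.0
        printf("%f\n", FixedMul(half, three) / (double)FRACUNIT);   // prints 1.500000
        return 0;
    }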


Duke3D Build engine did use FPU for slopes :O http://fabiensanglard.net/duke3d/build_engine_internals.php Luckily you already needed at least DX2-66 to play the game comfortably so not many people stumbled onto this.


I didn't know Doom was all integer ... quite a feat.

In the general sense though, the lack of floating point, as well as of flat video addressing, seriously hampered the Amiga in the 3D, ahem, space.

EDIT I just remembered there is definitely at least one routine I know of that performs calculations based on IEEE 754 - “fast inverse square” or something. That could be at the root [badum] of my confusion vis-a-vis Doom ...


The famous "fast inverse square root" was in Quake 3.


Doom didn't use polygons, but it very much was 3D in any practical sense of the term.


No, it was "distorted" 2D, like cardboards put in perspective. Not 3D.


You are still getting confused by polygons. It was a 3D space that you could move around in. The matter of how it was rendered is an implementation detail.


Doom was a 2D space that looked like a 3D space due to rendering tricks. You could never move along the Z-axis though because the engine doesn't represent, calculate, or store one. That's why you can't jump, and there are no overlapping areas of the maps.


Regardless of the “technicalities”, my point was that this and other 3D games were something the Amiga could not do well - whether 3D or “simulated 3D”.


It really wasn't. Doom's gameplay almost entirely took place in a 2D maze with one-way walls. It was rendered to look 3D, and as you said, that's an implementation detail.


You couldn't look up and down, nor could you with DN3D.

I am not confused, the opposite. I grew up with that.


You can look up/down in Duke3D, it's under the Home/End keys. It doesn't look pretty or correct, but you can do it.


I grew up with it too ... I disagree with your categorical boundaries. The distinctions you draw are purely technical.


purely technical? You can't go above or below anything; no two objects can exist at the same X/Y; height doesn't exist in any true fashion (the attribute is used purely for rendering --- there is no axis!). How is the existence of the third axis in a supposedly 3D environment purely technical?

With only two axes, it is literally a 2D space, which gives some illusion of 3D as an implementation detail --- not the other way around.


It isn't "literally" a 2D space. It is "topologically" a 2D space in that you could represent it as a 2D space without loosing information. It doesn't provide 6 degrees of freedom but it is very much experienced as a 3D game environment.

EDIT also, using the term "literally" to talk about 3Dness when it is all rendered onto a 2D screen, is fairly precarious. No matter how many degrees of freedom, or how rendered, it will never be "literally" 3D, in the literal sense of the term.


No free look meant no perspective distortions in Doom.


Doom's 3Dness or lack thereof only mattered to programmers. Players didn't care, to them Doom looked entirely 3D.


Curious. As a player, I certainly cared. There's a world of difference between Doom and Quake...


Players didn't have to aim up to shoot something above them


One of the last lines threw me off...

"Would Tim Berners-Lee have used a BeBox to run the world’s first web server instead?"

The BeBox didn't ship until 1995. Tim Berners-Lee wrote the first version of the web in 1991. So nope, that wouldn't have happened.


He used a NeXT computer for that IIRC


> What’s left for us now is to wonder, how different would the desktop computer ecosystem look today if all those years ago, back in 1997, Apple decided to buy Be Inc. instead of NeXT? Would Tim Berners-Lee have used a BeBox to run the world’s first web server instead?

For this hypothetical scenario to ever have been possible, BeOS would’ve had to time travel, as TBL wrote WorldWideWeb on a NeXT machine in 1990[0]. BeOS development began in 1991 per Wikipedia[1], and the initial release of BeOS to the public wasn’t until 1995.

[0] https://www.w3.org/People/Berners-Lee/FAQ.html#browser

[1] https://en.wikipedia.org/wiki/BeOS


I used BeOS for most of the 2nd half of the '90s and I guess in my mind at least the regrettable, messy, and unethical end of BeOS in 2001-2002 is emblematic of the Dot Com collapse.

Crushed by Microsoft's anti-competitive business practices and sold for scrap to a failing company who was unable to actually do anything with parts they wound up with but who never the less made damn sure that no one else could either.


PSA: nevertheless is written as a single word


BeOS was really something of what the future 'could' have almost been. Too bad that it was killed by better competitors. But I think there are lessons from its successor, 'Haiku', that many other OSes could learn:

From what I can see from using Haiku for a bit, it has the bazaar community element from open-source culture, with its package management and ports system from Linux and BSD, whilst being conservative with the design of its apps, UI, and SDK, like macOS. Although I have tried it and it's surprisingly "usable", the driver story is still a bit lacking. But from a GUI usability point of view, compared with many Linux distros it feels very consistent, unlike the countless confusing interfaces coming from those distros.

Perhaps BeOS lives on in the Haiku project, but what's more interesting is that the real contender who learned from its failure is the OS that has its kernel named 'Zircon'.


I installed BeOS when it first came out. To me it was a cool tech demo, but it was fairly useless as it didn't have a usable browser (NetPositive was half baked at best), couldn't play a lot of video codecs and couldn't connect to a Windows network share.

I feel like if they launched a better experience for existing Windows users, it would have done much better.


There was a version of Opera 3.62 for BeOS around 2000. At the time, Opera was a great browser.


I actually licensed that version :)


- VLC existed for Be

- So did Opera


> the driver story is still a bit lacking

That's a hell of an understatement right there. It still doesn't have any capability for accelerated video, does it?

Unfortunately that's the story for any OS these days that isn't already firmly established. Which is a huge shame since they all suck in their own ways.


> Unfortunately that's the story for any OS these days that isn't already firmly established.

Maybe because we're coming at this from the wrong perspective?

I love the theoretical idea that I could build a generic x86 box that can boot into any OS I feel like using, but has that ever truly been the case? We certainly don't pick software this way—if you're running Linux, you're not going to buy a copy of Final Cut and expect it to work.

Well-established software will of course work almost everywhere, but niche projects don't have the ability. Unless you use something based on Java or Electron, which is equivalent to using Virtualbox (or ESXi) in this comparison.

It's long been said that one of Apple's major advantages with macOS is they don't need to support any hardware under the sun. Non-coincidentally, the recommended way to make a Hackintosh is to custom build a PC and explicitly select Mac-compatible hardware.

Now, if an OS doesn't for instance have support for any model GPUs at all, cherry picking hardware won't help. But perhaps this is where projects like BeOS need to focus their resources.


> The "correct" way to go about things is to choose the OS first, and then select compatible hardware.

Yeah, wouldn't it be nice if we weren't constrained by real world requirements? If I were to write an OS today, the hardware I'm targeting may become quite rare and/or expensive tomorrow. Or it may just go out of fashion. Regardless, very few people are going to buy new hardware just to try out an OS they're not even sure they want to use yet.


> very few people are going to buy new hardware just to try out an OS

We do have VM's and emulators, but yes, the cost of switching OS's is huge. That's true with or without broad hardware compatibility.

My point is this: I don't think the idea of OS-agnostic hardware ever really existed. The fact that most Windows PC's can also run Linux is an exceptional accomplishment, and not something other projects can be expected to replicate. You might get other OS's to boot, but not with full functionality.


> That's a hell of an understatement right there. It still doesn't have any capability for accelerated video, does it?

We do not. But that (and proper power management) is basically all that's missing at this point; the rest are "just bugs".

That is to say: WiFi, ethernet, USB, SSDs, EFI, etc. should all work on the majority of hardware, both current and past.


That's the case. I can't use Haiku til the video is sorted, and it looks like that's a long way out. I'd love to help but I don't know C++ and I don't have time to dive into something like that.


Well, it wasn't as simple as "killed off by better competitors". It was actually much better than both Windows 98 and Mac OS at the time.

But ultimately the deathblow came from Apple which, after struggling with low sales and poor quality software, almost chose to buy BeInc's tech but dropped it so they could bring in Steve Jobs. So it was more like vendor lock-in (Windows) and corporate deals (Apple) as well as failing partners (Palm).


Apple also dropped it because they couldn't come together on price, partly because BeOS was in a fairly unfinished state:

> Apple's due diligence placed the value of Be at about $50 million and in early November it responded with a cash bid "well south of $100 million," according to Gassée. Be felt that Apple desperately needed its technology and Gassée's expertise. Apple noted that only $20 million had been invested in Be so far, and its offer represented a windfall, especially in light of the fact that the BeOS still needed three years of additional expensive development before it could ship (it didn't have any printer drivers, didn't support file sharing, wasn't available in languages other than English, and didn't run existing Mac applications). Direct talks between Amelio and Gassée broke down over price just after the Fall Comdex trade show, when Apple offered $125 million. Be's investors were said to be holding out for no less than $200 million, a figure Amelio considered "outrageous."

> ...With Be playing hard to get, Apple decided to play hardball and began investigating other options.

http://macspeedzone.com/archive/art/con/be.shtml


Yeah, I feel saying "better competitors" really does Be a disservice. They largely failed due to an unfair playing field and very shady practices.


In my hazy recollection, there was another, rather pedestrian reason Apple didn't go for BeOS: it had almost no infrastructure for printing. The Mac's niche was prepress and desktop publishing (remember that phrase?), and BeOS could barely print and had no color management.

(Though I could be totally wrong on this, and welcome a correction.)


I also read a story about how BeOS DPx (developer previews) lacked decent printing support and this was another reason why Apple chose NeXT. The irony is that Apple had to redo the NeXT printing stack anyhow, as did BeOS in R4, and they both ended up using CUPS. Also another reason was lack of x86 support, which forced BeInc to quickly rush out a x86 port in R3.0. Intel were so impressed by the x86 performance, that they ended up investing $4M into BeInc.


Before I first saw BeOS running on a colleague's machine back in the ~mid-late 90s (same guy who introduced me to Python) I used an SGI Onyx Reality Engine machine [1] (roughly a $250K computer back in the day) for molecular mechanics simulations, and BeOS ran circles around it on perceived responsiveness. I really wish we had OSes that prioritize input/output latency over all else.

[1] https://en.wikipedia.org/wiki/SGI_Onyx


Fun vid that might bring back some memories :) https://www.youtube.com/watch?v=Bo3lUw9GUJA "SGI's $250,000 Graphics Supercomputer from 1993 - Silicon Graphics Onyx RealityEngine²"


Windows does that. Balance Set Manager boosts the priorities of threads that receive I/O. That’s why mouse movement is so smooth since Windows NT. (Not so much on Win9x).

MacOS probably does a similar thing too.


Don’t mobile OS’ prioritize perceived latency above all other resources (besides battery)?


Considering how often I hit a button more than once because it took so long to do anything about the first one, that seems unlikely.


I was drinking water when I read this and ended up snorting it out, I laughed so hard. It’s so, so true.

I am a mobile dev and use a plethora of iOS and Android devices all the time, often on many different software versions.

It’s not unique to any platform, and seems to be most often affecting the keyboard input, but occasionally seems to affect the rest of the UI as well.

Software updates will oscillate between breaking and fixing these things on and off between different devices.

I’ve been doing app development since iOS 3 and the first version of the App Store and used an ADP1. There’s no method to the madness I can see. Especially with the keyboard input.

Lord help you if you happen to be using a 4S or 5 or iPad 2/3. I often wait up to 10-15 seconds for text input to catch up, only for it all to arrive at once - I can type whole sentences knowing that by the time I finish it’ll be the same 10-15 second wait as if I’d typed a single character.


I should have said that I really want a general-purpose OS to prioritize perceived latency. Mobile OS (and gaming consoles) of course are tuned quite differently and in my brain I view them as completely different experiences when I use them. My desktop while significantly more powerful just _feels_ slower because of the latency.


That’s a great point, however they also lock down the enduser experience to something diminished from a fully general purpose computer.


Haiku is really good. Would recommend anyone to try it out in a VM (I had it running on my actual laptop for a short time, but unfortunately my job pretty much requires me to run Linux so it couldn't stay). Haiku has a really responsive UI with a 90s look so you can actually tell what is a button.


Also recommend.

https://haiku-os.org


Oh man, I really do miss the days of actual coherent UI that is clearly "readable". The trend of flat UI drives me crazy. So much wasted cognitive effort just to make sense of something on-screen.


Feel the same. Clean UI doesn't have to be flat. That said, "semi-flat" interfaces can look really good.


How painful was it to get it running on a laptop? I've been interested in Haiku for a long time now, but I don't really have a place to play with it except on my laptop.


You have to have the right laptop. At the time I had a Thinkpad (X1 IIRC?) which fairly much worked out of the box, but I'm fairly sure it won't work on $random laptop. For best results and the lowest barrier to entry try it first in a VM.


Actually these days, Haiku will probably at least boot on $random laptop, and if you have a WiFi chip that FreeBSD also supports (we reuse their network drivers), you can probably even connect wirelessly.

I run Haiku on my ThinkPad E550 (2015), I know some of the other devs run on Dell XPS 13, HPs, etc. And the same is true in towers (Ryzen is becoming a popular pick.)

GPU acceleration drivers are the one real kind of driver we lack at this point.


Thanks for correcting me, and good to know that Haiku is easier to use than ever.


In this alternative universe, Objective-C would have died, Swift would never have happened, and C++ would rule across desktop and mobile OSes (thinking only of Windows, BeOS X, Symbian, Windows CE and what might have been a BeOS-based iOS).

Also POSIX would be even less relevant, and very few would be buying BeOS X for doing Linux coding.


In my alternate universe, GNUStep or some other implementation of the NS APIs would have allowed Linux/BSD to rise in popularity along with a NeXT-ified Apple. Except that didn’t happen.

I think in your alternate universe it’s likely we’d have seen something entirely different emerge.


I pretty much doubt that.

GNUStep still looks much the same as it did when I was using WindowMaker in the late 90's.

I regretted the time I once spent in the GNUStep room at FOSDEM, as the only subject of relevance seemed to be the theme engine, as if there wasn't anything more important to get working properly.


What would have been interesting is if Apple had bought Be in 1993 or 1994, and incorporated BeOS tech in System 8, then wound up buying NeXT anyway at the end of 96 and OS X incorporated Be and NeXT tech.

Though it's entirely possible Jobs would have tossed the BeOS tech along with Quickdraw GX, Newton, etc.


> Also POSIX would be even less relevant, and very few would be buying BeOS X for doing Linux coding.

I don't think I'd enjoy a GNU userland/glibc/Linux monoculture, quite honestly. I'm glad POSIX exists to have a slightly less moving target.


That was exactly part of my point, with BeOS that wouldn't have happened anyway.

The only successful UNIX desktop is OS X.


Well, the last major revision to the Single Unix Specification was released in 2008. Ever since then, Linux has basically become the de facto POSIX standard. So while glibc may not rule the roost, most POSIX OSes bend to the will of the GNU userland and the Linux monoculture in many ways. It's not really like the old days with Sun, IBM, DEC and even Microsoft with Xenix. POSIX is being washed away a little more every day.


Proof being that other POSIX systems and Windows now go with Linux kernel ABI compatibility instead of POSIX.

And it isn't at all relevant on mobile OSes.


> and C++ would rule across desktop and mobile OSes

C++ in a non-standard version as compiled by gcc 2.96 release! Awesome.


What lack of imagination, as if they wouldn't be upgrading their toolchains.

In fact, Haiku does support more modern toolchains, and the BeOS R5 one is only kept around due to ABI issues with binary-only BeOS software.


The ABI issues are exactly the reason why having C++ as OS API level language is a bad idea.
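One small illustration of why: C++ symbol names are mangled, and the mangling (plus vtable layout, exception handling, etc.) is compiler- and ABI-specific, while an extern "C" symbol keeps a stable name:

    // With the Itanium C++ ABI (GCC/Clang) the second function is exported
    // under a mangled name like "_ZN2be9cxx_entryEi"; other ABIs mangle
    // differently, which is part of why C++ at the OS API boundary is fragile.
    extern "C" int be_c_entry(int x) { return x + 1; }   // exported simply as "be_c_entry"

    namespace be {
        int cxx_entry(int x) { return x + 1; }
    }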


Not at all: Android, IBM i (where C++ has been slowly replacing PL/S), mbed, Windows, and OS X (IO Kit, now DriverKit) are doing pretty alright.

There is no such thing as a C ABI; it only happens to work on OSes that are, surprise surprise, written in C.


There is a C ABI. The "x86-64 System V ABI" (the ABI for C on everything except Windows on an x86-64, ie a typical PC) was designed by AMD working with early adopters on Linux and other platforms. Here are several extant ABI documents:

https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI

The ABI for C needs to agree on less stuff than a C++ ABI, but it's still quite a lot; if these things don't get agreed, code won't work unless everybody uses the same (version of the) compiler.


As the name already implies, that is a UNIX ABI; UNIX is the OS where C was born, so it is naturally that OS's ABI.

x86-64 platforms not using the said ABI:

- macOS, "This article describes differences in the OS X x86-64 user-space code model from the code model described in System V Application Binary Interface", (https://developer.apple.com/library/archive/documentation/De...)

- Unisys Clear Path OS200 and MCP

- Android (where JNI is what matters)

- Chrome OS (where JavaScript and WASM is what matters)

So no, it isn't everything except Windows on x86-64, and then there are all the other OSes running on ARM, MIPS, PowerPC, SPARC, PIC and plenty of other less relevant CPUs.


I feel like the biggest missed opportunity of the “mobile revolution” ten years ago was BeOS.

It seemed clear to me that Android would be a bust for smartphone manufacturers (nobody has really made money off of Android except for Google and Samsung, the latter of whom accomplished this by dominating that market).

If Sony, for example, had gotten ahold of BeOS and tried to vertically integrate in a manner similar to Apple, they could have been a contender.

Neal Stephenson’s In the Beginning Was the Command Line has quite a lot of interesting observations about BeOS during its prime. http://cristal.inria.fr/~weis/info/commandline.html


Well there's your reason why Google has the expertise to build Fuchsia. Most of the BeOS guys were hired there to work on Android and now they are doing it again with Fuchsia.

We will all come back to this comment in 10 years to find ourselves running Fuchsia on our phones, tablets and our new Pixelbooks.


Huh! I thought it was the Danger Sidekick crew


I think Microsoft made quite a lot off Android.


Yup. Mostly around patents.

Ironically the only good (IMO, obviously) Outlook client is Android.

Really curious how the Surface Duo turns out. Weird form factor aside, Microsoft embracing Android on a branded device is interesting.


Some trivia: there was an unofficial successor of BeOS called ZETA from a German company, yellowTAB (later magnussoft). I remember this because they tried selling it via a home-shopping channel on German TV, which was completely hilarious.

I found a (very bad) recording of this: https://www.youtube.com/watch?v=FQW-q2vp6W4


For some weird reason I still have two (?) copies of yellowTAB Zeta on my bookshelf, next to my boxed copy of Red Hat Linux 7.3. Amazingly Zeta got more use than Linux, because it was much easier to get everything working on my machine.

I just checked the discs and it's the same "Deluxe Edition RC2" as in that home shopping video. I think my copies were bought as special advance preview builds for third-party Zeta developers, though. And I don't think I paid as much as 100EUR...


The way BeOS used filesystem attributes like a database was way ahead of the curve and it still might be.
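Roughly what that looked like from C (fs_write_attr and the fs_open_query family are the real BeOS/Haiku calls; the attribute name and paths here are just examples):

    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <dirent.h>
    #include <TypeConstants.h>
    #include <fs_attr.h>
    #include <fs_info.h>
    #include <fs_query.h>

    int main()
    {
        // Hang an arbitrary typed attribute off an ordinary file...
        int fd = open("/boot/home/mail/0001", O_RDWR);
        const char* subject = "Re: lunch?";
        fs_write_attr(fd, "MAIL:subject", B_STRING_TYPE, 0,
                      subject, strlen(subject) + 1);
        close(fd);

        // ...then ask the filesystem itself which files match a predicate
        // on that attribute, without walking the directory tree.
        DIR* query = fs_open_query(dev_for_path("/boot"),
                                   "MAIL:subject == \"*lunch*\"", 0);
        struct dirent* entry;
        while ((entry = fs_read_query(query)) != NULL)
            ;   // entry->d_name holds the name of a matching file
        fs_close_query(query);
        return 0;
    }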

This book was a great read back in the day of what goes into a "modern" filesystem design

https://archive.org/details/practical-file-system-design

"Practical File System Design" was technical but also readable. Straight from the man who designed BFS, which makes it more of an accomplishment IMO.


I enjoy the regular reminiscing of BeOS, but for all the talk about how fast it was on hardware common at the time, I wonder why nobody remembers an even more impressive "tech demo" of an OS from that same period - QNX 6 desktop? An ISO of evaluation edition of 6.2 was easily downloadable for a while, and it was pretty neat:

https://guidebookgallery.org/screenshots/qnx621

(I know that QNX was and still is widely used - the "tech demo" bit refers to its use as a primary desktop OS, not the usual embedding scenarios.)


Did you know that Blackberry 10 is QNX?

I had a Passport for a while. A lovely phone, and the OS had an Android runtime so _some_ Android apps worked... but like other OSes that let you run non-native apps, it resulted in BB10 never attaining a critical mass of apps.


I've been wasting a bit of time lately trying to get BeOS 5.0 Professional running on my PowerMac 8500/180.

Had to order a USB CD/DVD burner off of Amazon to write the ISO to a CD as I didn't actually own any hardware with a CD/DVD burner anymore :D


Hey, consider yourself lucky you did not need a boot floppy :D


I remember seeing a BeBox under someone's desk at the video game company I was interning at in 1997. I nearly lost my shit. When Be released an Intel-compatible build while I was at Santa Clara in 1998, I installed it onto one of the lab computers. Sorry about that, IT team.


I ran BeOS back in the day (even have the developer book!) and I've been trying Haiku on and off over the years.

It's been interesting. The browser isn't quite all there yet but might be considered serviceable, and you can sort of get a working dev environment going on it (not many modern runtimes, though, nor a way to run Linux binaries that would let me do Clojure).

It's certainly worth keeping an eye on, although there were some weird decisions - for instance, I remember a thread on ARM support where whoever was tackling that was completely dissing the Raspberry Pi, and yet today, if I were to install it permanently on any machine to tinker with, it would almost certainly be a Pi...


> if I were to install it permanently on any machine to tinker with, it would almost certainly be a Pi

The pi is such a great device for this. If I were working on any niche operating system (or building one from scratch), I'd target qemu first, and the pi second. It may not be the nicest hardware out there, but it is a single platform that loads of people have, that allows trivial disk swapping (upside of no onboard flash -> everything on swappable SD card), and is dirt cheap.


Doesn't Clojure run on the JVM? Haiku has OpenJDK, you know...


Last time I tried Java was two years ago. A few months back I installed it on KVM to check out the state of WebPositive, but didn’t actually get to try coding anything since the browser couldn’t load my webmail.


We didn’t get to have this because Microsoft strangled it in the crib. I am never forgiving them for that.


Actually, you didn't get it because Jean-Louis asked too much when Apple inquired.


The way I see it, the two are related: Microsoft stopped it from having much of a chance as an independent OS. This stopped us from getting it outside of a major vendor like Apple acquiring it.

Apple saw this, so didn't want to pay as much as they might have if it were selling well on its own.

So Apple ended up going with NeXT "instead of plan Be" as it were.


Yep. OEMs were not allowed to sell dual-boot systems if they wanted to keep buying cheap MS licenses. I think they lost that lawsuit, but it was long after BeOS closed up their commercial operations.


Didn't Be publicly offer up a free BeOS license to the first OEM willing to ship a dual-boot system?

When there were no takers for a free way to have your hardware stand out from the pack, it was hard to imagine a reason for every single OEM in a very crowded field to back away that didn't involve antitrust shenanigans.


Now that you mention it, I recall this as well.


I love how everyone blames Microsoft for what was really the OEMs racing to the bottom.

OEMs had an option: yes, they would have had to pay more for licenses, but no one pointed a gun at them or paid them mafia-style visits to forbid them from selling other systems.


Microsoft, the incumbent, offered OEMs cheaper Windows licenses as long as there was no dual boot, or a windows license was paid for every machine sold regardless of what it ran.

Under these terms, it was suicide for OEMs to offer anything except “only Windows”. I was actually working for another company whose software was killed off by a similar “or else” threat from Microsoft at the time.

Microsoft was a bully doing (likely) illegal things, except the FTC wasn’t doing its job.


And what would have stopped OEMs from offering the same device as a different SKU with BeOS AND Windows for, say, x USD more?

(kind of like Dell does with their Linux lappies)


More SKUs cost more in manufacturing, QA; choice paralysis may drive away customers. All for a relatively niche offering that might piss off their big product. Little upside. It's even a bit surprising Dell sells Linux laptops, considering.


Lack of demand (for BeOS at an extra cost; or for Windows by people who wanted BeOS)

Also, some of the time Microsoft simply forbade dual booting as a condition for OEM licensing.


It'd be business suicide to charge more just so they could offer an obscure OS as an option; in my mind, this is the kind of clear anticompetitive situation it'd be good to use the law to avoid. Instead the DOJ went after MS for bundling a web browser. So dumb.


Microsoft was sued, and settled for $23M. https://www.computerweekly.com/news/2240052523/BeOS-will-liv...


Hey, that’s neat, but too little too late!

Was it just a strategic misstep for Be to not sue at the time, when it would have mattered? Why did the DOJ care about MS bundling a web browser w/ its OS but not care about MS preventing OEMs from selling any other OS than Windows? That seems wildly, clearly more anticompetitive to me!

And why does no one care about this stuff anymore? Not only does iOS bundle a browser, you cannot install any other browser (you can install alternative UI frontends for WebKit, but you can’t ship your own JS/HTML etc. engine). Not to mention that they can veto any software on the platform for any reason...


There are laws about dumping prices; beyond that, it is business as usual.


Only slightly off topic, but are you aware of

[1] https://www.quora.com/Why-didnt-Japan-make-their-own-brand-O...

?


We still wouldn't have it; no way Apple w/o Steve Jobs but w/ BeOS survives.


I mostly agree.

Jobs was able to squeeze Microsoft. In a way I don't think anyone else would or could.

Referring to Microsoft outright stealing Apple's IP and Gates' subsequent settlement. The $150m investment. The commitment to maintain Office on Mac. Forever license to all of Microsoft's API, enabling Mail.app to have pretty great integration with Exchange, for example.

BeOS was probably the better tech. But Jobs had the better strategies and the better team.

Compare this to Jonathan Schwartz's contemporaneous mismanagement of Sun's amazing IP, snatching defeat out of the jaws of victory. Schwartz just wasn't a bare-knuckled brawler like Jobs.


Schwartz was an odd duck in charge of another odd duck. His writing was a good read, but it was a little odd to see such antiestablishment talk from an establishment player. And in the near term, his own establishment suffered the most.

I used the hell out of Java, but most of their other tech was in the weird quadrant of cool stuff that I won’t use. Something about their fit and finish always left me cold. Or if not that, price point.


Who knows. Maybe Be also had other talented people working there, with good vision for products, who just needed more room and the market that Apple could provide. We will never know.


Maybe! It’s more than just having the talent tho — you need to have a talented leader. Johnny Ive was at Apple for 5 years before Jobs showed up, without too much to show for it.


Maybe.

All conjecture. Had I finished my Java port (BeKaffe), the world would run on BeOs :).


If Apple had failed, I wonder what Steve would have done after Pixar. Start a new computer company? Could he (or perhaps a better question: would he) have swooped in with Mac compatibility mode?

The real win was a mobile future sans Microsoft. I doubt smart phones would be what they are today if not for the iPhone, and the iPhone required huge resources and a maniac cracking the whip. Would Jony Ive leave a failing Apple and work with Steve? If not, would we even have the iPod let alone the iPhone?

On the plus side, Nokia might have still been around.


Nokia is still around, and working from the offices I used to visit regularly in Espoo.


I wouldn't call what's left after the Elopcalypse "alive" and "around"


Alive enough to be one of the best Android brands, one that actually provides updates without you having to pay Apple prices.

Around enough to still have a good networking business, and be the owner of Bell Labs.


The money wasted on the BeBox killed BeOS


It was supposed to prove BeOS was a good idea. A problem they ran into with the BeBox, and again later, is that Be had lots of _fans_ rather than developers.

Fans think they're helping. If there's a hardware product then as a fan I should buy it, right? Well, no, that's a subsidised development toolkit. Every unit that ships is VC money sunk in the hope you'll write the next Photoshop or Mosaic or 1-2-3 or something and create a market for this new system. When you instead compile two demo apps, call the support line to ask how to plug a mouse in, and tell all your friends to buy one as well, you are in fact not helping.

This is also because Be's original plan says competing with Windows is suicide (it was). They're going to build a system which is NOT a competitor for Windows, it's a whole separate environment. Maybe you have a Windows PC to do billing and surf that new Web, but in your studio you have a BeOS system and it doesn't matter that it can't print because that's not what it's for.

Be shouldn't have made it to the turn of the century, nowhere close. In 1998 the investors should have said we're sorry, there isn't money for this to continue, Jean-Louis, better luck next time, and turned off the lights. BeOS gets sold off for pennies to whoever is in the market. But they got "lucky": they were in the right place at the right time. In 1999 you could raise an IPO for a dog turd so long as the offering paperwork said "Internet" on it and you were based on the US West Coast. The institutions got out, those fans who'd squandered the company's money and opportunity years earlier got in to replace them, and they all lost their shirts. In the process Be Inc. bought itself an extra couple of years of runway.

(Edited to fix name)


Apple began to re-kick ass when they made an amazing laptop (tiBook). This was the first truly viable Unix laptop, with promise to be a cut above the competition (there was none) in terms of style and usability.

tiBook was bonkers. Suddenly, I didn't really need to sit in the room, surrounded by boxes, but rather could go to the park and access the room from under a tree.

If Be had managed to capture that, I think it would have been an amazing time. Imagine if SGI had pulled off the first titanium, unibody laptop, designed specifically for 3D. Would Alien-ware?

If there had been a BeLaptop, things might have been a lot different in 2000/2001, when things started to look very, very interesting.

I mean, the fact that Apple shipped a Unix laptop when all the other 'super-' vendors were unable to pull it off...


Agreed. One small pedantic nit. TiBook was not unibody. They did not machine the frame from one giant chunk of titanium. I had one and loved it also, but it was made from a bunch of parts. Fantastic laptop. Still have mine.

More info for the curious: https://www.macworld.com/article/2918513/the-powerbook-g4-ti...


Toshiba was selling Solaris laptops.


Yeah, but were they as sexy as the tiBook?

Like the OP, for me the tiBook was a defining moment. A unix laptop, and it also looked good? Easy, immediate purchase.


Not as colorful,

http://museum.ipsj.or.jp/en/computer/work/0012.html

https://books.google.de/books?id=wDlUmfY9nQsC&pg=PT11&lpg=PT...

Additionally there were UNIX laptops from NatureTech

https://www.youtube.com/watch?v=Dios0v0n9eY

and Tadpole

http://www.computinghistory.org.uk/det/32324/Tadpole-SPARCbo...

https://blog.adafruit.com/2019/04/01/sparcbook-3000st-the-co...

So no, "I mean, the fact that Apple shipped a Unix laptop when all the other 'super-' vendors were unable to pull it off..." isn't how history went.

Other vendors were already at it, while Apple was trying to make Copland work and not being so successful with A/UX.


At that time, Linux on laptops was an absolutely great choice.

None of your vendors were considered viable - far, far too expensive. However, Linux on x86 laptops at the time was amazing - portable Linux.

So when Apple joined that party, and made the hardware neat, it was an easy switch. Portable, PPC-based, Unix. This was a delightful moment.

Of course, I still have a Linux laptop around, and 20 years later .. I consider moving back to it, 100%. The ride has been good with Apple, but the horizon doesn't look too great ..


At that time no one sane was putting GNU/Linux in production to replace battle tested UNIX systems, if they loved their job.

What Linux laptops? It hardly worked properly in most desktops.

The first laptop I managed to get Linux working on at all was a Toshiba in 1999, while such systems had been on sale since 1992.


Linux has always worked great if you choose your hardware wisely, and for a long time in the 90's it was perfectly viable to put a Linux machine against an SGI, Sun or DEC system as a workstation. Really, Linux had traction even before the 21st century cloud and 'droid reality came along.

For most of the 90's I was using Linux in some capacity, professionally as well as personally.

I also had a Linux laptop (Winbook, Sony, and then Sceptre..) on which I did a lot of development and for which my carefully selected hardware did in fact work, just fine - it was certainly viable as a dev platform, and for us Unix programmers at the time, a small and light Linux laptop was far more preferable to the pizzaboxes that had to be carried in the trunk .. or more, bigger iron that you would stay at the office to use, instead of having at home.

The point is Linux really did okay in the 90's, in terms of providing Unix devs a way of working away from the computer room. I think this is an underappreciated mechanism behind Linux' success over the years .. you could carry Linux with you pretty easily, but this was not the case for Solaris, Irix, etc.


tiBook arrived 10 years after BeOS


I know, but I want you to imagine what the world would have been like if it'd been a BeBook instead. Like, it could've happened 5 years sooner, imho.


I tend to agree. If they had targeted Mac or PC hardware (different at the time!) and perhaps built some nifty gizmos as add-in cards (LED blinkenlights array, GeekPort) they would have saved serious cash.

That was the time period when ordinary users could be persuaded to pay money for their operating systems and major upgrades.


The BeBox had an incredible amount of I/O and little to no professional grade software to take advantage of it.


I remember the Palm acquisition. At that time Palm had invested heavily in webOS [1] which wasn't as performant. When they acquired BeOS, I thought it was going to be a turn around. But that didn't happen either.

1. https://en.wikipedia.org/wiki/WebOS


First of all, Be was purchased by Palm several years before webOS ever came out. WebOS didn't even exist in the same company. After Palm split into PalmSource (software) and palmOne (hardware), the BeOS stuff (alongside the old PalmOS stuff) went with PalmSource. Later palmOne bought the rights to the full Palm name from PalmSource and became Palm again, and that Palm came out with webOS.


Thanks for the correction. Clearly, my recollection was wrong.


What was truly revolutionary for BeOS was the multi-threaded user interface, where you could have multiple mice connected to the same machine and they could interact with the UI at the same time. Hardly anyone paid any attention to this. But the possibilities are amazing.


We had one of the two-processor BeBoxes at my uni computer club – it was a really cool machine to play around on, and at the time I was using it (1999) it was one of our few graphical terminals with a web browser, so it was in quite a bit of demand.

We also had a NeXT tower, three SGI machines running IRIX, and a Mac upon which someone had installed Mac OS X Server 1.0 (NeXT innards, UI looked like classic MacOS).

I kind of miss the diversity of systems we had back then. In many ways we've gone forward – tinkering is much easier now with the preponderance of cheap, fast dev boards and systems like the Raspberry Pi, but it does feel like actual user-facing stuff is now largely locked down, without as much innovation and competition.


It's sad that Haiku won't get traction. A viable alternative in the desktop PC OS market would be a great thing to have (even on a commercial basis, not necessarily for free). Linux for the 1% who are geeks, Mac OS X for Apple hardware owners, and Windows for everybody else does not seem like healthy competition.

Android, iOS, Chrome, Facebook - everything is monopolized nowadays. Governments should really consider supporting alternative OSes, browsers and social networks for the sake of national security, as the monopolies enjoy too much power over the whole of humanity nowadays.


Unfortunately the way the PC market is makes it basically impossible for a new desktop OS to show up at this point. The hurdle in drivers alone would be insane, and the easy solution to that is to only target a small set of hardware which effectively makes you a hardware company (kinda like Apple), but that probably won't work either since people are a lot less likely to try out your OS if they need to buy a new computer to do so.


> ...the easy solution to that is to only target a small set of hardware which effectively makes you a hardware company (kinda like Apple)

Perhaps with this idea the only companies able to do this are Apple, Microsoft and Google. Every other OS aiming to be a new desktop OS has essentially failed. Including the Linux desktop community and everyone else.

The desktop OS market only has room for two, or maybe three, players, and entering it requires at least a billion-dollar (in profit) tech company with the resources to plan for years. Hence the fierce competition in this space; some may ask themselves why bother.

> but that probably won't work either since people are a lot less likely to try out your OS if they need to buy a new computer to do so.

The exception to that is Google with ChromeOS (Laptops) and Android (Mobile) which they're probably both replacing with Fuchsia. I'd expect Fuchsia running pixelbooks and phones to arrive in this decade. Which will make the desktop market as Windows, macOS and Fuchsia OS.


> Perhaps with this idea the only companies able to do this are Apple, Microsoft and Google.

Unfortunately they all have perverse incentives that run counter to everything that makes personal computing valuable.


Perhaps one should build their new OS around a compatibility layer to support Windows drivers. I'm hardly much of a systems programmer and don't know how much sense that would make, but AFAIK it is possible. E.g. I remember using Windows WiFi NIC drivers on Kubuntu a decade ago.


If it were reasonable to do, then Linux or one of the BSDs would have done it ages ago. NDISWrapper was a special case.


Linux and BSDs hardly care much about desktop stuff and they already have all the drivers for the relevant server stuff.


The founder of Be wrote a bunch of articles about Be's life, and demise

https://mondaynote.com/tagged/beos


This part seemed like the punchline:

> Apple’s decision to go with NeXT and Jobs was doubly perilous for us. Not only would we not be the next MacOS, Jobs immediately walked third party Mac hardware makers to their graves. No more Mac clones for BeOS. With tepid BeBox sales and no future on the Mac, Valley VCs weren’t keen for another round of funding — and the 1995 round was running out.


I was really taken with BeOS when it was a live product. However, NeXTSTEP really was a much, much better basis for taking Apple into the future compared to BeOS. As has been proven resoundingly, first by the seamless switch from PowerPC to Intel, and then by the ongoing smoothness and general acceptance of OS X by the industry.

I know it’s not universally loved but OSX/nextstep for me is really everything I could have wanted from an operating system.


When I went to college in 1999, a kid down the hall from me pushed me really hard to install Be. It looked really cool when he showed it to me...

... But no programs that I wanted ran on it! As cool as BeOS was, without programs, it was little more than a demo or a hobby.

Within a year I tried Windows NT and Windows 2000, and then forgot all about BeOS. Windows 2000 did everything that BeOS did for me, didn't crash, and ran all the programs I wanted to.


It's quite a coincidence this is surfacing right now. There must be something in the air. I was an enthusiastic BeOS user back when it was a thing (ironically, I switched to it because I thought NEXTSTEP didn't have much of a future left), and I used it for a few years, quite happily. It left me with a legacy of several BeBoxes, and over the holiday period, I was vaguely inspired to dust them off and wonder about what to do with them. I got one of them booting perfectly; here's a link I posted to reddit showing its first POST

https://www.reddit.com/r/vintagecomputing/comments/eku19u/du...

(first POST should have gone to slashdot really, ah well.)


I remember using one at NTH in Trondheim in 1995-96 and it was awesome compared to the Silicon Graphics stations or PCs. It just felt insanely smooth and quick compared to the clunkiness of the other machines. I wish it had taken off, it would probably have gotten us to multi-core machines faster.


There is another big reason that Apple didn't buy Be and was right not to -- and I say that although BeOS is my favourite x86 OS ever.

Dev tools.

Remember Steve Ballmer's "Developers! Developers! Developers!" dance? https://youtu.be/I14b-C67EXY?t=10

He was right. Without developers for a new OS, you are dead in the water. Which is why MS has fought so hard to keep compatibility.

Apple had to switch OS. That meant it had to persuade all its devs to switch OS. That meant it had to offer the devs something very special, and that something was NeXTstep and Interface Builder. NeXT's dev tools were the best in the software business and _that_ offered trad Mac devs a good enough reason to come across.

Be had nothing like that.

BeOS was wonderful, but it could not do what NeXTstep did as a replacement for MacOS.

But there was another company out there.

BeOS was a natively multiprocessor OS at a time when that was very rare. One of the reasons SMP was rare is that in fast x86 computers, the x86 chip is one of the most expensive single components in the machine, and it puts out most of the heat.

Especially at the end of the 1990s and early 2000s, the era of big fat Slot 2 Pentium IIIs and worse still Pentium 4s.

But there was one company making powerful desktops with the cheapest, coolest-running CPUs in the world, where making a dual-processor machine _in 1998_ was barely more expensive than making a uniprocessor one.

That company's CPUs are the best-selling CPUs ever designed and outsell all x86 chips put together (Intel + AMD + Via etc.) by well over 10 to 1.

And it needed a lightweight, SMP-capable OS very badly, right at the time Be was porting BeOS from PowerPC to x86...

https://liam-on-linux.livejournal.com/55562.html


https://en.wikipedia.org/wiki/BeOS#Products_using_BeOS

Pretty cool BeOS still lives on in the audio/video/broadcasting industry.


That was actually a fairly awesome OS (for its day). Many folks thought Apple would buy it. When they purchased NeXT, instead, a lot of us were disappointed.

However, all these years later, I'm very glad that BeOS wasn't selected.


I have never used BeOS, can you explain why you are glad now that it wasn't selected?


Mostly because of the UNIX subsystem. Having all that horsepower under the hood has been awesome.

Also, NeXTStep became Cocoa, which knocked PowerPlant into a cocked hat. That may not be as compelling an argument. I heard that the BeOS framework was damn nice (never tried it, myself).


Besides that, it is also very likely that Apple would have remained stagnant (or worse) if Steve Jobs had not been brought back into the fold. I am not into personality worship, but Jobs brought some much-needed focus to a sea of overpriced beige boxes.



I was going through boxes of old CDs and DVDs the other day, throwing out a lot of crap, and I found my old BeOS 5 disk. Didn't throw out that one. I really enjoyed using that OS back in 2000.


The BeOS software needs to be preserved in an archive for free, public-domain, and open-source software.

Mike Crawford RIP: Wrote spellswell: https://github.com/ErisBlastar/spellswell

and word services: https://github.com/ErisBlastar/wordservices

It was taken from one of his old sites before he lost control of it.


I remember attending MacWorld Boston in '97, at the age of 12, and seeing a BeOS demo. I was blown away by a demonstration that consisted of a video file playing on the page of a rendered book.

If you clicked the page of the book and dragged it around, it simulated the page turning and the video deforming, without skipping a frame.

I may've only been 12, but that demo has stuck with me since.

Edit: I would love to lay eyes on that demo again if anyone has an idea of where video of it may still exist.


"The features it introduced that were brand new at the time are now ubiquitous — things such as preemptive multitasking"

Without taking anything away from the actual new things introduced in BeOS, preemptive multitasking and dual CPUs were not "brand new". Computer systems had been doing these things for a long time before 1995, or even 1991 when BeOS was initially developed. Heck, minicomputers were doing this stuff in the 80's!


It is not the OS, not even the software ... Apple would still have been a failure if it had stayed all about the desktop OS. And there is a reason the first thing he did was send an SOS to Microsoft and ask for an Office commitment in exchange for a roughly 5% share of Apple. That got done.

Of course, losing a battle or even a campaign does not mean you have lost the war.

The world is better with Steve, and with Microsoft facing competition ...

Now we even have Microsoft owning GitHub and Linux running on Windows.

That would not have happened with BeOS.


If you are interested, there is a way to make Linux look like BeOS [0].

[0]: https://www.reddit.com/r/unixporn/comments/d8e54k/window_mak...


Try it yourself today https://www.haiku-os.org


I was a paid BeOS customer and still have copies of multiple versions in the original branded shipper envelopes. I have fond memories of using it back during my college CS days. Really loved the interface, that was quite the time when I used NT4, Solaris, Irix, and BeOS all at the same time.


Crazy timing, just this week I pulled my BeBox out of storage and fired it up. Still impresses me even now, loads of nice touches. Also got a bit of a shock when I played a MIDI file and perfectly serviceable sound was produced by the little built-in speaker.


My favorite part of BeOS was that in the control panel you could turn individual processors on and off. And it happily let you shoot yourself in the foot and turn off the last processor with no warning whatsoever.

I appreciated that the OS didn't coddle you.


Now, I had read somewhere that Apple would have purchased BeOS but JLG was pushing for too much money. JLG fucked up the deal, basically. I don't have a source... it was years ago that I read it.

Had JLG not fucked up the deal, they would have picked up BeOS and not NeXT.


BeOS was cool and all but I loved the BeBox with the GeekPort.

When I worked at a dot-com 1.0 failed startup in 1999 one of our big wigs was a former exec at Be, and he was like the coolest dude I knew at the time. Still up there.


As someone who's interested in platform/OS interface and UX design, is using modern Haiku (in a VM) a decent parallel for what using BeOS was like, or would I need to get ahold of the real thing?


I've played with both BeOS and Haiku, and I would say Haiku is practically the same UX-wise. They have built more tooling etc. on Haiku, like a package manager, whereas as far as I remember BeOS was more Windows-like (download a file, install it).


> In 1990, Jean-Louis Gassée, who replaced Jobs in Apple as the head of Macintosh development, was also fired from the company. He then also formed his own computer company with the help of another ex-Apple employee, Steve Sakoman. They called it Be Inc, and their goal was to create a more modern operating system from scratch based on the object-oriented design of C++, using proprietary hardware that could allow for greater media capabilities unseen in personal computers at the time.

Even IBM with OS/2 couldn't surmount the juggernaut network effect of Windows. By 1990, this was apparent to many. It's odd that Gassée and company thought they could succeed where IBM had failed.


I saw BeOS as a spiritual successor to the Amiga OS, where it might be used as a turnkey media creation system. I think Apple was the Microsoft of this market, and had enough user/developer inertia to make inroads impossible. They sold a depressing 1800 BeBoxes.


They thought they would have their chance with a hardware+software combo.

Indeed, the BeBox should have looked like this to succeed: https://upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Ap...

That computer made Apple cool again, and paved the way for their recovery well before the iPhone skyrocketed.


Before the eMac, the iPod made Apple cool again, AFAIK


Sorry, I meant the iMac (G3) in 1998 - maybe it's not the best pic then since it's an eMac that came later...

The iPod was 2001, 3 years after the first iMac.


IBM also failed to lock down the IBM PC hardware standard. They tried, with the PS/2 and the Micro Channel Architecture, but it turns out clone makers were more interested in standardizing and improving the existing non-standard (retconned into being the Industry Standard Architecture, ISA) into EISA than in signing on to perpetually license MCA from IBM.

https://en.wikipedia.org/wiki/Micro_Channel_architecture


IBM also didn't quite know what to do with OS/2 and how to sell it.

Maybe Microsoft's ties to the PC industry would have killed it anyway but IBM was kinda lost on what it meant to sell an OS to consumers.


And Jobs was celebrated for cracking 10% market share. It’s hindsight now for sure but it turns out you can do pretty well without a majority stake.

The problem I think was that Microsoft competed hardest in the enterprise space, which was supposed to be IBM’s roost. Apple went after creatives. It’s not clear what was left for a hypothetical third place contender.


IBM failed with OS/2 due to executive incompetence.

John Thompson didn't want to be in the "kitchentop" business... Dumb ass.


beos demos never fail to make me weep https://www.youtube.com/watch?v=ggCODBIfWKY


Oh man the theme song to this brings me back! I have it in my iTunes to this day. It's Virtual(Void) by Baron Arnold.

I spoke to him years ago. If I recall he worked for Be before moving to Sidekick…


I feel like modern OSes are in some (several?) ways worse than these classic OSes... They just assume too much, impose too much, hide too much, etc...


I actually bet a little bit of money on that alternative universe. I used to jokingly recount the lesson learned: “Never bet against Steve Jobs.”


I know Microsoft was really big back then, but why did BeOS and OS/2 Warp never catch on while GNU/Linux did?


I'm waiting for Haiku stable. I just hope it's worth it :) Could use a change from current ones.


It's one of Microsoft's greatest business achievements to have won against the OS/2 and BeOS competition, and it's one of Microsoft's biggest business failures to have not won against Google's Android.


I almost thought we were talking comeback when I saw the headline!


BeOS was the way it was thanks to the async API. I think that a Rust port of BeOS would be amazing due to Rust's first-class support for async.


BeOS did not use async, it used a "half-hearted, primitive, bug-ridden" implementation of actor model in C++.


Unlike Erlang which copies messages, and Pony which uses reference capabilities, BeAPI used a pragmatic approach where BMessages shared a kernel memory pool. Add the ability to filter messages and dynamically retarget messages, and you get a PRACTICAL (vs academic) Actor model. I wouldn't say it was half-hearted and primitive, on the contrary.
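For anyone who never wrote against the BeAPI, here is a minimal sketch of the BLooper/BMessage pattern being described, written against the Haiku headers from memory, so treat names and details as approximate rather than authoritative; the message code and Worker class are made up for illustration. Each BLooper owns a thread and a message queue, PostMessage() is the asynchronous send, and MessageReceived() runs on the looper's thread:

    #include <Looper.h>
    #include <Message.h>
    #include <OS.h>
    #include <stdio.h>

    // Hypothetical message code, purely for illustration.
    const uint32 kMsgPing = 'ping';

    class Worker : public BLooper {
    public:
        // Runs on the looper's own thread for every delivered BMessage.
        virtual void MessageReceived(BMessage* msg) {
            switch (msg->what) {
                case kMsgPing: {
                    const char* text;
                    if (msg->FindString("text", &text) == B_OK)
                        printf("worker got: %s\n", text);
                    break;
                }
                default:
                    BLooper::MessageReceived(msg);
            }
        }
    };

    int main() {
        Worker* worker = new Worker();
        worker->Run();                      // spawns the looper's thread

        BMessage ping(kMsgPing);
        ping.AddString("text", "hello from main");
        worker->PostMessage(&ping);         // asynchronous: returns immediately

        snooze(100000);                     // crude wait so the demo can print
        worker->Lock();
        worker->Quit();                     // Quit() tears down a locked looper
        return 0;
    }

Every object that wants messages gets its own thread plus mailbox, which is what makes the "practical actor model" description apt.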


Erlang isn't exactly academic, or impractical. And the BMessages themselves did not share the kernel memory pool; it was the thread system (or BLooper, if you will), and only some of the primitives. In fact, BeOS was kind of split-brained. You could send a message to a thread either through the kernel send command (which had a limited queue length) or one which used local shared memory (which effectively did not). That is, of course, rather dangerous, since there are no protections on that memory (it being C++).

I don't miss the segfaults that I experienced time after time and strange stateful consistency errors from when I was programming in the BeOS, and I don't miss biting my knuckles when having to recast data as (void*) in order to send it to another thread, now that I'm programming in ERTS. I do however strongly appreciate the experience, because it was a hell of a lot of fun, and, importantly, I learned that you can't use a buffered matrix mult to save on memory allocations, because you don't know that your preemptive kernel won't kick you out midway through the matrix mult and start filling the bins with data from the other thread.
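To make that last point concrete for anyone who hasn't been bitten by it: reusing one shared scratch buffer between threads to save allocations is exactly the kind of thing a preemptive scheduler will eventually interleave at the worst moment. A contrived sketch in plain C++ (not the Be API, and with made-up names) of the hazard:

    #include <cstdio>
    #include <thread>

    // Shared scratch buffer reused to "save on allocations".
    // Fine single-threaded; a data race once two threads can be
    // preempted partway through fill-then-sum.
    static int scratch[4];

    int sum_of(int base) {
        for (int i = 0; i < 4; ++i)
            scratch[i] = base + i;      // the other thread may overwrite these...
        int total = 0;
        for (int i = 0; i < 4; ++i)
            total += scratch[i];        // ...before we read them back
        return total;
    }

    int main() {
        int a = 0, b = 0;
        std::thread t1([&] { for (int i = 0; i < 100000; ++i) a = sum_of(0); });
        std::thread t2([&] { for (int i = 0; i < 100000; ++i) b = sum_of(100); });
        t1.join();
        t2.join();
        // Should print a=6 b=406; with the shared buffer, either can be garbage.
        std::printf("a=%d b=%d\n", a, b);
        return 0;
    }

The fix is a per-thread (stack or thread_local) buffer, which is roughly what message-copying systems like Erlang give you for free.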

I would say that half of the reason why I am really adept at identifying race conditions in erlang is that I've seen them all, and worse, in BeOS.

And the BeOS runtime does not have half of the resilience and fault isolation properties that Erlang has, which is what I mean by "half hearted and primitive".


...which is async.


It's not async by default. It's concurrent. In order to get async, you have to do a ton of wrapping, e.g. https://github.com/elixir-lang/elixir/blob/v1.9.4/lib/elixir...


...and also, a Rust port does exist for Haiku at least, after a quick search. [0]

[0] https://depot.haiku-os.org/#!/pkg/rust_bin/haikuports/1/36/0...


I know. I'm not sure I understand.


How does Haiku compare to BeOS?


Ah the memories! Had no idea Haiku was an open source implementation of BeOS. Port Electron to it, and I commit to writing tools for this OS in my spare time!



