Amelio and the rest of his senior staff began searching for a way out. They needed a new operating system to buoy their attempts to compete with the Wintel juggernaut. The same search had taken place several times before, but now the company was desperate.
Limited Options
Ultimately the list of possible targets was narrowed down to five options:
License Windows NT from Microsoft and bolt a Mac-like interface onto it.
License Solaris from Sun and bolt a Mac-like interface onto it.
Narrow the focus of the flagging Copland project and release it in a year and a half.
Acquire Be and use BeOS.
Acquire NeXT and use OpenStep.
NeXT's APIs were also ported to run on top of Windows NT and Sun's Solaris.
In addition, the entire NeXTSTEP OS ran on bare metal on several CPU architectures.
> originally ran only on NeXT's Motorola 68k-based workstations and that was then ported to run on 32-bit Intel x86-based "IBM-compatible" personal computers, PA-RISC-based workstations from Hewlett-Packard, and SPARC-based workstations from Sun Microsystems.
Yep, I used to run Apple's (maybe it was still NeXT's at the time?) Mail app on Windows NT. I can't recall where I got it; maybe it was part of some goodies included with WebObjects?
Yes. WebObjects development was possible on NT thanks to "YellowBox". It shipped with Project Builder, EOModeler, Mail, and other NeXT apps.
You could even do Objective-C monkeypatching shenanigans on YellowBox, which I remember being needed at the time to get the scroll wheels on certain mice to work in YellowBox apps.
BeOS wasn't ready. I was there: I used BeOS as my daily driver on a maxed-out Power Macintosh 6400. I still have that machine today, running BeOS. I also have a second machine, a dual Pentium II, running BeOS.
BeOS wasn't ready.
Apple was making a decision in 1996 for a deal that had to be struck and struck fast (1997). I started using BeOS in 1997 with one of the first Power Macintosh releases.
In those days (my days) it wasn't ready: you had to pile patch on top of patch on top of community fix on top of some set of drivers some dude in Boise made in order to get things functional. I spent more time browsing BeOS listservs and download sites fixing things (at 56k) than actually using it.
Everyone is remembering fondly the R4.5/R5 days in late 1999. Three years after the December 1996 announcement of the NeXT acquisition. By then BeOS was... better. It was running on PCs and had a much larger user community. In 1996/7? Just a handful of BeBox owners and Mac users dumb enough (me) to try it.
Management would have been professionally negligent to choose it over NeXTStep.
In 1996, when Apple was up against the wall, NeXTStep was almost ten years old. All they had to do was buy the company, port it to PPC, and change some copyright messages and icons around. Took about two years.
BeOS would have required way more time, money, and expertise.
BeOS had no multi-user, no security (at all), an "aspirational" level of posix compliance, drivers that were a disaster, a network stack specifically designed to frustrate you, and no "companies that actually matter and no Gobe doesn't count" application support.
I get it. It was pretty. It was fast. French electro DJs loved it because it was low latency, and some audio tools were ported to it.
A userbase of French electro DJs doesn't pay the bills.
The C++ API being bound forever to the gcc 2.95 ABI (IIRC) was also an "interesting" choice. Sure, with success they would certainly have engineered a way forward out of necessity, but if I had had to give my opinion on that to Apple at the time...
The Haiku project folks worked out ABI compatibility for newer GCC releases. Current builds of Haiku support binary backwards compatibility with the 2.95 ABI while also supporting newer GCC ABIs at the same time. So at least we know it is (and would have been) possible.
I have a spare machine to do some offline work on running Haiku OS, the open source successor to BeOS. Even today it simultaneously feels 20 years ahead and 20 years behind.
Good example: the interface is incredibly snappy, but no Wi-Fi. I love the vision of BeOS, but it probably would have been the death of Apple if they had gone with it.
Haiku does support WLAN adapters (even USB ones), though the support isn't as extensive as on Linux or the BSDs. You might want to use the current nightly builds instead of the latest beta version, though, which was released in December 2022.
I'd love to see what Apple would have done... but at the same time I fear very much it would have killed Apple.
BeOS was amazing, but its dev tools were nothing special while NeXT's were state of the art, industry-beating. That means no way to win Mac developers over to the new OS.
BeOS was as insecure as Classic MacOS, so no way to crack the server market (not that that worked) and no way to build nice locked-down little gadgets like iPhones.
And while Be had JLG, a very smart cookie, NeXT had Jobs, who had more vision in a day than JLG in a year.
Me, I wish a disappointed Be had done a deal with Acorn and ported to Arm.
Arm chips in the hundreds-of-MHz range were already there, and gigahertz-class parts were coming soon. Acorn had prototype multiprocessor machines. Be had the best SMP support in the business in the late 1990s.
They could have reskinned BeOS to look more RISC OS-like, run RISC OS in a VM the way OS X ran Classic in a VM, and offered both the thinnest, lightest Web-capable laptops in the world and compellingly priced multiprocessor desktop workstations. Those machines wouldn't have needed a dozen cooling fans and could have gone to 4-way or even 8-way at an affordable price, which x86 and PowerPC chips could not touch.
I think you missed the greater point: Apple with BeOS would not have turned into the enterprise dead end that it is today.
That market was what Jobs had to pivot to (there's a great video where he pushes the rationalization of going after academia and enterprise workstations) so that NeXT could survive. And it infected Apple from the inside.
The timeline everyone missed with BeOS is exactly Apple not becoming the mix of IBM and Microsoft it is today.
Everyone also forgets that JLG was a horrible manager and his decision to keep Macs obscenely high priced was what ultimately almost killed Mac.
And Steve Jobs, as a founder of Apple, had political capital that JLG never would have had.
While I know that the measly $150 million that Microsoft invested in Apple didn't "save it", only SJ could make peace with MS (only Nixon could go to China) and make the tough choices.
Founders are given a lot more leeway to make huge changes than any other manager.
I LOVE LOVE LOVED BeOS. It resonated with me and it was a great experience. But it wasn't multiuser, and it was missing some other stuff I forget now that seemed necessary for a system to have long-term success. But man did it cut out the cruft and wring ALL of the horsepower out of the hardware of the time.
Why was multiuser so important? Apple’s most successful operating system in the Jobs 2.0 era isn’t multiuser either.
BeOS would have been a fine foundation for smartphones and tablets. But of course it’s an open question whether Apple could have got that far in the 2000s without the return of Jobs. I suspect the company would have been acquired or merged with some unsuitable suitor like Sun.
macOS and iOS use a completely different mechanism to ensure a game doesn’t read my contacts or make network requests. It has nothing to do with Unix-style users.
They could have built that on top of BeOS just as well as on NextStep.
Because computing devices, and access to them, was not so ubiquitous back then. Families all had to share a single computer. Business users had to share access to large servers. There were no smartphones. Some had to travel to an educational setting just to see or use a computer.
Was any operating system actually able to go from single-user to multi-user so easily? Windows NT and OS X were totally rewritten from the ground up relative to their single-user predecessors Windows 9x and classic Mac OS.
Yeah, a big one. Once there was an intentionally multi-user-from-the-ground-up OS called "Multics". A dude named Ken Thompson (or Dennis Ritchie, or something) worked on it but was like "bro, so bloated". So the dude writes a DISK BENCHMARKING system, single user, and calls it "Unix", as if to castrate Multics. The original Unix was disk benchmarking, barebones. So we know how that moved into multi-user.
Source: Kernighan's book, "UNIX: A History and a Memoir". Great read.
I'm not sure how much that really counts. According to [1], the time that Unix spent as a single-user operating system was intentionally very short-lived. Even the earliest version of the operating system that can be reconstructed today had multiple logins and processes, though only one could be active at a time. It seems that, by the time Unix was written in C, ported to multiple architectures, and spreading outside of Bell Labs, it was already multi-user.
NT had no predecessor (at least from Microsoft; architecturally its predecessor would be VMS). It was multi-user from the ground up. Classic Mac OS was not OS X's predecessor, either; that was NeXTSTEP.
You are right on the technical lineage, but I was referring to how these products were (ultimately) presented commercially. The facts that they are both older than one might think, and that they were developed with specific goals from the start, I think more clearly illustrate that you can't just "bolt on" such fundamental differences.
I agree that iOS isn’t multi-user in any real, like, multiple user accounts intended to be used by real people sense.
I wonder, though: it is based on macOS somehow, right? Which is based on BSD. Could there be some leftover multi-user plumbing sticking around in a technical sense?
Android isn't in a similar boat, AOSP has full isolated Multi-User support that is realized through Unix users. You can create new users through Settings and they have their own home screen, apps and files.
Most vendors have this enabled now and things like work accounts/MDM use the same system.
iOS barely uses that. Processes commonly run as “mobile” or “root”, but it does not matter very much. POSIX users and access permissions are archaic, and, in my opinion, don’t match with how almost any device is being used nowadays. iOS implements its own concepts through entitlements, containers, vaults, sandboxes etc. (Look up the “Apple Platform Security Guide” for details.)
> [...] User data is partitioned into separate directories, each in their own data protection domains and protected by both UNIX permissions and sandboxing.
Heh, funny you mention that, considering Be's pivot to BeIA. Some Be engineers also worked on the (unreleased) Palm OS Cobalt, and eventually Android. (And then Fuchsia, but I don't think that OS will ever hit smartphones.)
Many moons ago I read that Apple came super close to buying Be, but Jean-Louis Gassée wanted too much money and negotiations fell apart.
I reckon it was the star power of Jobs more than the OS as such that saved Apple in the late 90s.
Jobs' ability to do the deal with MS to get MS Office on the Mac and a cash injection of $150m saved Apple at the time.
We were well into the 2000s before OSX really began to make a difference to Apple's fortunes, and no one could have predicted that OSX would be the foundation for the iPhone OS.
It wasn't even a given that OS X would be the OS for the iPhone during development.
But absolutely, NeXT was the best choice, even if the OS X Beta and 10.0 were almost unusably slow and buggy!
I was at the Paris Mac Expo when Jobs launched the beta, and the excitement was amazing!
Same year as the Key Lime iBooks IIRC.
They rose up out of the floor on a pedestal, if I'm remembering it right. LOL.
I worked at Apple in the 90s, and saw all the demos of early releases of OSX and the confusing stack of platforms and tools. Jobs did the right thing ditching that!
Finally, I remember being at the Mac Expo in London, and Be was there. We were standing around admiring the cool light bars on the front of the BeBox and its multitasking abilities.
If I'm remembering right, the lights on the front went up and down to indicate CPU usage. I could be imagining that?
The $150 million had nothing to do with saving Apple. They had already secured a billion-dollar line of credit, and besides, the same quarter Apple spent $100 million to buy out Power Computing's Mac assets and license.
It was years before Apple became profitable again, and the $150 million was a drop in the bucket.
I suspect that without Jobs, Apple is dead by 2002. Turned into an arm of, like, RIM. Then Google comes out with the Android phone and it's all over for everyone including Microsoft.
I suspect if Apple and NeXT never "merged", the phone market would be RIM vs (a very different) Android, but we'd be worse off.
Blackberries were OK at the time, but my god, RIM was so culturally conservative, and Google at the time had so little design sense, that we'd probably be 10 years behind where we are now; and if we even had software keyboards, they'd be terrible.
They probably would still be the #1 phone manufacturer, still making terrible Symbian phones with terrible developer support. I was working for Nokia (and a shareholder) on their future phones c. 2004-2005 and it was horrible all the way down. Apple deserved to eat everyone's lunch.
The HTC G1, the original Android phone, was basically another Sidekick. We'd have had this shitty Android and WinMo for a decade longer, and eventually things would have gotten good.
Assuming Apple would still be successful, and not close shop.
It would have settled C++ as the main systems language on desktop OSes, between Windows and BeOS on the Mac.
Objective-C would have died, and Swift would never come to be.
Clang and LLVM probably would never had gotten Apple's sponsorship.
POSIX would have died on the desktop, as BeOS, like other non-UNIX OSes, wasn't that keen on being UNIX-like.
There wouldn't be a flock of Linux and BSD developers rushing out to buy Apple hardware instead of sponsoring OEMs shipping PCs with those distributions.
For sure, it was pretty clearly the right call at the time and objectively so with the benefit of hindsight. But BeOS was just too good to be allowed (forced, really) to die on the vine the way that it did.
We kind of live in an alternate to that alternative reality.
A lot (most?) of Be engineers ended up at a company called Danger, which was bought by Google a few years later, and they all went on to be the original core team of Android. Some BeOS technologies even ended up in Android, such as its Binder, which, from memory, the Be engineer working on it open sourced just before Palm bought Be; he then used the same code in Android.
Right innit? It's not just Apple using BeOS but wondering how everything else would be different. You actually need to... think different at this point, aye?
I have read these kinds of stories many times, but I'm starting to realize how much the relative engineering success of Mac OS 8 and 9 is overlooked in the now-"classic" storytelling centered around Jobs and his comeback. Mac OS X was not really usable until 10.1 in 2001, yet the iMac, iBook, and PowerBook G3 were successes, so the OS obviously played a role; in these retellings, though, it's kind of absent.
My memories are surely tainted by nostalgia, because from 1997 to 2001 I was a teen and avid Mac user, but from what I remember, a lot happened between the mid-90s Performa on 7.5 and the late-90s iMac on 9. While not the future, 8.6/9 felt relatively modern as an end user, especially compared to Windows 95/98. And even in the mid-90s, even if the future looked bleak for Apple from a business POV, from an end-user one 7.5 was not really worse than 95.
I'd argue that OS X (to use the nomenclature of the time) 10.0 was alpha-quality software, and even 10.1 was at best what we would now consider beta software. OS X was not usable until 10.2 (and that's when much major software, such as Photoshop and even Apple's own Final Cut Pro, received its first native OS X port), but even 10.2 didn't reach feature parity with classic MacOS. It wasn't until 10.3 Panther that you could safely recommend that friends and family upgrade to OS X.
I think of it as a time when people became so entranced with the idea of the emerging object-oriented zeitgeist that they imagined that making things OO and component driven would just hand-wave towards success. People rushed to make infrastructure to support this model, and, well, it was no magic bullet. Complexity, fragility, and a lot of hard work remained.
The thing I'd like to see is for software to expose functional/"REST"-like interfaces, and similarly for websites and their content/user data.
If you look at what accessibility software achieves by simply looking at the screen/hooking into the OS, now imagine any and all software could be connected like that.
Some of this functionality is now provided in individual silos: e.g., an email with an appointment in Gmail can go into your Google Calendar; a ticket from your train company can be added to Google Wallet.
But if you look at a flight booking system and want to compare the total price of a given set of dates for travel, including hotel and other things at different times, you're back to doing it on paper (or using somebody's website where 20% of the flights or hotels you want aren't included).
> if you look at what accessibility software achieves by simply looking at the screen/hooking into the OS
Unfortunately that's kind of a pipe dream... in the real world, the first organization to flagrantly violate the OS vendor's human interface guidelines is the OS vendor itself, who ships some application with custom-schmustom, owner-drawn widgets that cannot be queried according to the standard OS APIs for querying them for content. So the accessibility software literally has to OCR the screen to determine what's written in the text fields and buttons.
Source: Knew a guy who worked on Dragon NaturallySpeaking, he had some war stories.
> now imagine any and all software could be connected like that.
As other commenters mentioned, there's COM, but that's a hairball to code for and a hairball to configure. The nicest systems that worked that way historically were the Lisp machine and Smalltalk environments, but the closest day-to-day software is probably Emacs. Everything in Emacs really can be connected together through the power of text buffers and Lisp. Objective-C is Smalltalk semantics retrofitted onto C, and originally NEXTSTEP, later macOS, really was a promising environment for this sort of integration... but in recent years Apple has been so focused on "apps" that it's doubtful they can keep supporting that vision.
Java on the AS/400 is the weirdest thing. The whole platform was designed to compile once to near-assembly and then run cached code. Java's just-in-time compilation is clunky in comparison, IMO.
I believe Tribes 2 did something similar just at a higher level
TIMI bytecode is usually compiled at installation time; the bytecode format is used only as a portable executable format.
While initially Java on AS/400 did take advantage of TIMI, IBM eventually moved away from it, into regular JVM design on now IBM i.
Most likely because IBM z doesn't use TIMI, rather language environments, and they want a more straight design, or due to Java dynamic capabilities using TIMI wasn't the best approach.
On AS/400 (IBM i), only Metal C, and the underlying kernel/firmware, written in a mix of PL.8, Modula-2 and C++, are pure native code.
It's funny how cancelled projects somehow make people almost more "nostalgic" than projects that actually shipped. One of the reasons may be that cancelled projects don't need to be completed, to ship at reasonably good quality, to have an easy-to-use interface, to be sustainable for a long enough period of time, etc.
I rather see the reason as being that many such cancelled projects had very elegant architectures/structures, which is something hackers love.
If you want to ship a product so that it becomes commercially successful, you often have to "deeply taint" this clear vision to make the product compatible with the multitude of very inelegant (industry) standards that customers expect/require.
Thus the nostalgia is not about shipping vs. not shipping, but about keeping an elegant vision versus being willing to taint it to stay compatible with a "depraved" world.
Right. Most cancelled projects are beautifully architected and disregard the tons of installed products with all their decades of cruft.
Then you start trying to bring the New Thing(r) into the Real World(tm) and there are 5000 edge cases the original project dealt with in a 42-page if/else block for 270 different OEM patches that were sold by some JV team in another building who have never written a line of code in their lives.
There's a concept in political theory about why people become nostalgic for lost causes, failed movements, etc - kind of a backwards looking utopianism?
If only we had done X, if only this group had succeeded instead of failed in the past... I do agree that it's a way to avoid looking at the actual implementations and details of a real thing, and just go to what might have been directly.
I saw Copland at MacWorld Expo Boston at Apple’s booth. It kernel panicked. We didn’t get to use it.
But wow do I remember how exciting it was the first time I read about it in MacUser magazine. Basically some GUI mockups and promises of cool under-the-hood improvements, possibly the first time I was excited about future computing. Thank goodness they bought NeXT instead. They had no ability to manage projects at scale at the time.
Too bad TimApple has no sense of how to do anything other than chase growth; we need another product person at the helm. Oh well.
I'm tired of hearing "Tim Apple's Apple Can't Engineer".
VisionOS has some of the most incredible, coolest engineering I've seen in a long time, from top to bottom. Even if you're not interested in AR/VR, the writeups and sessions about the dual processor OS layout they've used, the scene and room graphing technology they've built, and the challenges of building ultra low latency pass through scene reconstruction and mapping are incredible.
Vision Pro might honestly be one of the technically coolest things built in the last 10 years, and I really hope they get the price down and fix some of the UX (weight, battery, etc), because it is something I use every day (and in fact, am writing this post on right now). It's for sure a Beta/DevKit/etc, and I wouldn't recommend the casual person to buy one, but again, on engineering chops alone, it is a masterpiece.
Very much agreed. I get why it’s expensive and I’m no stranger to the apple tax (and usually don’t mind it all that much), but I got the email today that it’s now available in Canada and $5k for a headset is… steep.
For what it's worth, the entry-level PowerBook 100 in 1991, when adjusted for inflation, cost about $5,500 USD. An entry-level iBook in 2003 would cost about $2,700 USD.
Yeah, it's expensive, but it's also astonishing how much prices have come down.
We don't even have to adjust for inflation. I (well, my parents) paid $4,000 for a Mac LC II with a //e card, a crappy 12-inch monitor, a LaserWriter LS printer, a 5.25" drive for the //e card, and SoftPC. It had 10MB of RAM.
It's so easy for people to look at the dollar figure for something they bought in the 80s or 90s and not account for inflation and make a pithy comment, but when you back-calculate the actual price my parents paid for some things, it blows my mind how cheap technology is now.
$5500 for an entry level laptop! You can get an M2 MacBook for $800 today, and I couldn't even begin to describe how much more tech is inside that thing for 20% of the price.
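The back-calculation this sub-thread keeps doing is just a ratio of consumer price indexes. A minimal sketch; the CPI figures below are rough assumptions for illustration, not official numbers:

```python
# price_now = price_then * (CPI_now / CPI_then)
# The index values below are approximate US CPI figures, assumed here
# for illustration only; use official BLS data for real adjustments.
CPI = {1991: 136.2, 2003: 184.0, 2023: 304.7}

def adjust(price, year_then, year_now=2023):
    """Convert a historical price into year_now dollars via a CPI ratio."""
    return price * CPI[year_now] / CPI[year_then]

# A ~$2,500 PowerBook 100 in 1991 comes out to roughly $5,600 today:
print(round(adjust(2500, 1991)))
```

With these assumed index values the result lands in the same ballpark as the $5,500 figure quoted above.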
Apart from the LLM bot "AI" toys, what significant new technological advances can you point to in the decade from 2014-2024?
Offhand I can't think of a single thing.
Between 1994 and 2004 we went from Windows 3.x and Classic MacOS, to OS X and WinXP.
Linux went from being a toy to a serious viable OS.
Before: 16-bit OSes with some 32-bit parts, all resolutely single-CPU, many primarily based around cooperative multitasking with no real memory protection, and mainly proprietary networking.
After: pure 32-bit OSes, with proper preemptive multitasking and hardware memory protection, capable of SMP on multiprocessor systems, with TCP/IP based networking.
From 2004-2014 the industry moved from mainly single-CPU 32-bit machines to multi-core 64-bit machines everywhere, with UIs rendered through hardware 3D, and a heavy reliance on Web protocols for almost all network functionality. A much smoother transition but that was because of the big changes in the previous decade.
Linux went from being a nerd tool to a usable mainstream OS that was rapidly taking over the server market. Smartphones went from toys to mainstream. Chromebooks arrived in 2011, and Linux started to become a consumer OS.
Since 2014... er... containers everywhere on servers? Electron apps proliferating? That's about all that springs to mind.
"Too bad TimApple has no sense of how to do anything other than chase growth …" I feel like this is more emblematic of the wider industry trend than Apple's own strategy that they chose to impose on us.
The focus shifted very hard over the past 15-20 years from selling people new and novel gadgets and software to turning us ordinary folks into "monthly active users." Personal computing got a lot less personal, and Apple leaned into it to stay relevant.
> License Windows NT from Microsoft and bolt a Mac-like interface onto it.
Gosh, am I glad that that didn't happen. I wonder how serious of an option that really was; it seems wildly out of place compared to the other propositions, which are all reasonable.
IIRC from reading Showstopper, NT was intended to afford that sort of different frontend swapping. It just happened that Windows became the primary focus in the end.
It might have been necessary to stick with the NT 3.51 codebase. I think they broke the clean separation of the GUI from the rest of the OS in 4.0, which was launched that year, for performance reasons.
Google for Mac OS X Server 1.0 screenshots... the UI was a bizarre mashup of classic Mac OS and NeXTSTEP, you can see they were part way through bolting a Mac-like interface on.
When Mac OS X was eventually released it looked utterly different.
I remember the AT&T / BSD lawsuit in 1992. Maybe Apple looked at NeXTSTEP and, since it is based on the Mach microkernel and BSD, figured they would be able to fix the 10 or so files to avoid the AT&T / Unix System Labs licensing fees, like Free/NetBSD did.
In the 1980's Microsoft had their own port of Unix named Xenix. Why didn't they push this instead of the very limited MS-DOS?
> In March of 1982, Western Electric reduced the royalty fees that it charged for UNIX with the introduction of UNIX System III (a combination of UNIX V7 and PWB). This was done by raising the source license cost but lowering the per-user royalty to $100 (about $318 in 2023) from $250. This meant that Microsoft's prepayment of $200,000 in 1980 (around $700k in 2023) for a volume discount was voided if they wished to use the newer system. That same month, on the 10th, Paul Allen held a seminar in NYC where he outlined plans for MS-DOS 2.0.
> Why didn't they push this instead of the very limited MS-DOS?
Hardware requirements.
For a (quite long) while the plan was to keep DOS for microcomputers and Xenix for the serious stuff. DOS gained some extensions that made it very Xenix/UNIX-like (like subdirectories, pipes, "-" as command line switch char, device files in "\dev" instead of a global namespace, etc.).
I think for the time period, hobbyists can't be written off either. My dad got into Commodore and Atari in the late 70's/early 80's, and sparked my interest in software. Computing already wasn't cheap, but if the home consumer needed to pay for extra licensing fees, I could easily see adoption being far more limited to business. I think a lot of developers, myself included, got their start as hobbyists and never would have caught the "itch" to develop if for not this early exposure at home.
While MSDOS had subdirectories, pipes were not supported at the kernel level. It was simply a convenience of COMMAND.COM that simulated pipes by running each command sequentially. The command switch character was the slash (/) until the end, with the exception of a few non-standard third-party software packages insisting on the dash. Device names were always in the global namespace, and \dev never existed in any form.
> While MSDOS had subdirectories, pipes were not supported at the kernel level. It was simply a convenience of COMMAND.COM that simulated pipes by running each command sequentially.
You are confusing the lack of concurrency with the lack of pipes: the mere fact that COMMAND can do what you describe requires UNIX-like pipes and channels (e.g. stdin/out/err) and programs that avoid direct console I/O (which was way common for DOS programs...). Comm software such as Kermit did the same thing to simulate remote login (I used this one back in the day, at least).
That arrived with MS-DOS 2.x, which is when the very rudimentary level of Xenix compatibility came in: device names such as `LPT1:` or `COM2:` could be prefixed as `/dev/lpt1` or `/dev/com2`. Pipes were simulated in COMMAND.COM.
I don't recall DOS ever accepting command switches prefixed with `-` instead of `/` as standard, though.
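The trick described above, running each command to completion and staging its output in a temporary file that feeds the next command, can be sketched in a few lines. `fake_pipeline` is a hypothetical helper for illustration, not COMMAND.COM's actual mechanism:

```python
# Sketch of how a shell can fake a pipeline with no concurrency at all,
# the way COMMAND.COM reportedly did: run each command to completion,
# capture its stdout in a temporary file, then hand that file to the
# next command as its stdin.
import subprocess
import sys
import tempfile

def fake_pipeline(commands):
    prev = None  # stdin for the next command (None = inherit the console)
    for argv in commands:
        out = tempfile.TemporaryFile()
        subprocess.run(argv, stdin=prev, stdout=out, check=True)
        if prev is not None:
            prev.close()
        out.seek(0)  # rewind so the next command reads from the start
        prev = out
    data = prev.read().decode()
    prev.close()
    return data

# Equivalent of `producer | sort`, but strictly sequential:
result = fake_pipeline([
    [sys.executable, "-c", "print('b'); print('a')"],
    [sys.executable, "-c",
     "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"],
])
print(result, end="")  # the two lines come out sorted: a, then b
```

The key point from the comment above holds here too: this only works because every program in the chain reads stdin and writes stdout instead of talking to the console directly.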
I'm guessing that Apple would have had to pay a fee to both UniSoft and AT&T. Sun was much larger and a partner with AT&T / Unix System Labs on SysV, so maybe they could resell Solaris for cheaper.
Sun open sourced Solaris in 2008 for 2 years. Maybe AT&T / USL didn't care by then or maybe Sun's licensing agreement allowed them to do it. I don't know.
A/UX was also ridiculously insecure in its default configuration. It had a "guest" user, no password. I remember labs of Mac II or IIx systems at a local university, with public IP addresses, guest accounts, no passwords. A local BBS guy posted a message on how to get "free" internet access through the university's dialups. I think it was at least a month before things got locked down.
It was the time: only trustworthy people were on the net so far. I told the story here about my mid-nineties job where I ran Quake servers on the LAN before we had a firewall.
Yes, though contemporary OSes like SunOS were more secure than A/UX. Once you were on the network though, it was like swiss cheese (NIS / YP... unshadowed password files... etc.)
A/UX 4.0 I believe was going to be based on OSF/1. The MkLinux port was also useful in the Mach 3.0 migration (as were the open source BSDs which refreshed the 4.4BSD code Rhapsody was based on, from NEXTSTEP 4.0 which only ever shipped as an alpha).
The thing that I didn't think about was that A/UX combined 680x0 UNIX™ code with 680x0 MacOS code in very clever ways. MacOS provided the windowing, terminal emulators, the filesystem browser, etc. Unix provided the underlying kernel, the filesystem, networking, etc.
(This is a hand-wavey simplification for illustrative purposes.)
The point being, intimately intertwined, closely integrated.
But classic MacOS only ran on 680x0 and to port it to PowerPC, Apple invented a clever hack built around partial emulation: a nanokernel containing a 680x0-to-PowerPC interpreter (with a way for 680x0 code to call PowerPC code), on top of which ran the MacOS Toolbox ROM image, on top of which ran most of MacOS in 680x0 code. Then they profiled that, and converted the most performance-critical parts -- and only those -- to PowerPC code.
When I ran a PowerMac with classic MacOS, I had a little indicator applet in the menu bar. It was red when executing 680x0 code and turned green for PowerPC code. (Not very helpful for colour blind folks, but hey, fine for me.)
It was red almost all the time. Only years later when substantial PowerPC-native apps started to appear did it sometimes stay green long enough to see.
It wouldn't have been so hard to port the Unix bits of A/UX to PowerPC-native, but that would have ruined the integration between the Unix code and the MacOS code. Getting MacOS all running natively would have been a massive task. Apple never did it; it replaced the entire OS with NeXTstep.
Getting 680x0 MacOS code integrating with PowerPC A/UX code would have been a major technical challenge.
It sounded easy and obvious, but it was an even bigger project than Copland, and Apple wisely decided against it.
Interesting, and it sounds quite plausible. Perhaps they could have replaced the GUI with an X Window lookalike instead of trying to brute-force the old OS.
But in its way OS X was much more of a clean break: a whole new OS, ported to the old hardware, tweaked to understand its filesystems and file formats, and given a shallow cosmetic resemblance to the classic Mac OS.
A/UX was a much tighter integrated system. Some of the OS functionality was provided by actual classic MacOS.
What was left, if that was removed, was a fairly unremarkable Unix for its time. You could even boot it in a sort of vanilla-Unix form.
E.g. for v3, top 2 rows (pics 1-6) are in Mac mode, bottom row (7-9) are in Unix mode. A bit like the ANS with AIX, there's little to appeal to Apple users here:
The best 68k Mac I ever had was MAE in Solaris 9 on my SunBlade 2500. Fully patched up Solaris 9 had all the filesystem and disk i/o improvements, and none of the libc changes that happened in 10 which broke all backwards compatibility.
That's still the same NT line, though they changed how versions are named after Windows 8.1 (which was NT 6.3). The initial Windows 10 release was bumped to NT 10.0, but later versions use year-month and then year-half formats.
Much of that pre-2000 code remains. But so much more has been added in the last 1/4 century that "the same codebase" is somewhat a matter of perspective.
Can someone explain the context here? Surely NT is closed source and was never developed for macs, what actually is this? And what are the chances you can get software for it? Presumably most NT software was compiled for Intel only and closed source.
NT was designed to be portable, so the machine specific parts are cleanly separated, even on the same CPU. So if you port only those parts, the existing binaries for an architecture should work. The three bits are:
1) ARC boot firmware. NT was developed on non-x86 systems like i860 and then MIPS, and ARC is the native boot firmware used (on x86, NTLDR emulated it prior to Vista). It's similar to a PC BIOS / UEFI, and a PCI PowerMac would use OpenFirmware. As well as providing an ARC-compatible environment that loads over OpenFirmware, this project seems to do some fun tricks so the boot firmware can pretend there's a storage device, letting driver "floppies" be loaded during the initial stages of setup (i.e. an F6 disc).
2) A HAL. The main NTOSKRNL is hardware agnostic, so the idea is that there's one binary for each CPU architecture. But the kernel needs to interface with actual timers, busses, etc., so that interface code lives in HAL.DLL, and the appropriate one is copied by setup. (For example, see https://www.geoffchappell.com/studies/windows/km/hal/history... for a list of x86-32 ones in older versions of Windows, with various HALs for NEC PC-98, IBM MCA, and various early multiprocessing systems; nowadays there's just one AMD64 one that's mostly in the kernel itself.) So the main kernel is unaltered, and halgoss handles the Mac-specific stuff.
3) Device drivers. Once NT is up and running it does need actual drivers.
The specs for 1) are known, so it can presumably be emulated, and I guess there's a DDK for 3) (or it can be deduced from another DDK). I guess 2) is probably the one that would need the most insider knowledge; I'm not sure if the leaked NT sources go down to that level, as I've never looked at them.
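The HAL idea in 2) can be sketched as one stable interface with a per-platform table of routines behind it: the hardware-agnostic kernel calls the interface, and setup picks which implementation ships on a given machine. The class and method names below are invented for illustration; real HAL.DLL exports are far richer than this.

```python
# Sketch of the HAL concept: one hardware-agnostic "kernel" calling
# platform-specific routines through a fixed interface. Names are
# invented; the strings stand in for real hardware programming.

class Hal:
    """The interface every platform's HAL must provide."""
    def read_timer(self):
        raise NotImplementedError
    def mask_interrupts(self):
        raise NotImplementedError

class PcAtHal(Hal):
    # Hypothetical stand-in for a standard PC HAL.
    def read_timer(self):
        return "8254 PIT tick"
    def mask_interrupts(self):
        return "program the 8259 PIC"

class PowerMacHal(Hal):
    # Hypothetical stand-in for a Mac-specific HAL.
    def read_timer(self):
        return "decrementer register"
    def mask_interrupts(self):
        return "program the OpenPIC"

def kernel_tick(hal):
    # The "kernel" code is identical on every platform; only the HAL
    # object passed in (copied by "setup") differs per machine.
    return hal.read_timer()
```

The same kernel binary per architecture plus a swappable HAL is what makes the HAL-file-copying trick mentioned below (migrating installs between machines) work at all.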
As for compatibility: 32-bit PowerPC Win32 binaries, 16-bit x86 Win16 binaries, and whatever x86 DOS stuff will run in an NT4 DOS box. No x86 Win32, though; only Alpha had an emulator for x86 Win32 stuff (until the ARM era).
I remember migrating whole Windows installations using image copies: take a fresh install's HAL files, copy them onto the image, and voilà, system transfer completed (device drivers would need reinstalling though).
NT4 was developed for PowerPC architectures (and MIPS, Alpha). A large portion of the NT4 source code was leaked [0]. I'm not saying the author used the NT4 source code in any fashion, but I would imagine using such source would make life much easier to accomplish this task.
Or the author clean room reversed engineered the bootloader. Or there is enough information out there to forgo the need for any internal knowledge of Windows code.
The PPC code base was never explicitly targeted for Macs, but other systems from IBM/Motorola; but because it is a 'common' platform, the binaries themselves on the NT4 ISO do not need modification.
And yes, you can find the source code on GitHub. In multiple repos!
Isn't this just drivers and a HAL? Microsoft built the OS in a way so that such ports would be possible so there should be some API documentation out there. The question is how complete it is because everybody just used the MS provided HAL.
NT has a dynamic HAL layer to adapt it to platforms. I'm not sure if it was publicly documented, but in any case, yes, the source code leak doesn't hurt that kind of port, even if you "just" have to write a HAL.
The current PE exe format supports it, as well as a few Alpha formats, a few ARM formats, a few MIPS formats, Itanium, and of course x86. The big part would be finding a compiler that spits out the right code. The preceding NE format also did a few variations.
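The architecture a PE binary targets is recorded as a 16-bit Machine value in the COFF header, reached via the e_lfanew pointer at offset 0x3C of the DOS header. A minimal reader, as a sketch (the constants are from the PE/COFF specification; the sample header bytes are synthetic, not a real executable):

```python
import struct

# A partial list of IMAGE_FILE_MACHINE_* values from the PE/COFF spec.
MACHINES = {
    0x014C: "x86",
    0x0166: "MIPS R4000",
    0x0184: "Alpha AXP",
    0x01F0: "PowerPC",
    0x0200: "Itanium",
    0x8664: "x86-64",
    0xAA64: "ARM64",
}

def pe_machine(data):
    # Offset 0x3C of the DOS header holds e_lfanew, the file offset of
    # the "PE\0\0" signature; the 2-byte Machine field follows it.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    assert data[e_lfanew:e_lfanew + 4] == b"PE\0\0", "not a PE file"
    (machine,) = struct.unpack_from("<H", data, e_lfanew + 4)
    return MACHINES.get(machine, hex(machine))

# Synthetic minimal header: a DOS stub with e_lfanew = 0x40, then a
# PE signature and a PowerPC machine value.
fake = bytearray(0x48)
fake[0:2] = b"MZ"
struct.pack_into("<I", fake, 0x3C, 0x40)
fake[0x40:0x44] = b"PE\0\0"
struct.pack_into("<H", fake, 0x44, 0x01F0)
```

Running `pe_machine` over the files on an NT4 CD is roughly how you'd confirm which architecture each build directory targets.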
Yeah, I think the only commercial OEM releases of NT were x86 and Alpha; though there was support for a handful of specific MIPS and PPC machines, I don't think any OEMs sold them with it. I do know that only x86 and Alpha had all the service packs released. There were also at one time internal MS builds on SPARC, but that didn't last long. Alpha did have some pre-release builds of Windows 2000 that went out, but there was no final release.
A fun fact: because the place he worked at the time was a DEC shop, the creator of PuTTY originally wrote it because he got an Alpha Windows machine and there was no compatible terminal emulator for Alpha NT. He supported Alpha builds well into the 2000s and IIRC only dropped them when the machine he was still using for builds finally died.
Man, I loved Windows NT back in the day. It was light enough that I could run it on fairly low end late-90s hardware, and it was substantially more stable than Windows 95.
> Had Microsoft been serious with the POSIX subsystem, most likely I would never have bothered with Linux for our UNIX assignments.
Arguably they did have a serious POSIX subsystem w/ the Interix (nee OpenNT) acquisition but they didn't do anything with it. I remember building GNU software under Interix and having a blast with it.
I still wish there was a distribution of NT that booted text-mode and had an Interix-based userland. That would be tons of fun.
Interix was so much more orthogonal to the design of NT-- so much more elegant. WSLv1 was more elegant than WSLv2. WSLv2 feels like a minor step up from just running a Linux VM.
It's a shame. NT did support POSIX, but it was in practice designed to make it easier to port UNIX apps. At co-op job I had back in the day (~2000), I had to fight with Veritas' backup software on NT that wrote to tapes. I don't recall what it was (a device mapping to the drive maybe?), but you could see the UNIX foundations of the software.
Ironically they had multiple iterations of it, with POSIX subsystem in NT, MKS Toolkit, Interix, Windows Services for UNIX, for various levels of "POSIX support", dropped everything on Windows Vista, only to create WSL out from Project Astoria ashes.
And before they got the golden goose with the MS-DOS/IBM deal, they were in the UNIX business with Xenix.
A few reasons why Linux would probably never have taken off, had Microsoft stayed closer to their UNIX-related projects.
Absolutely, I had a university instructor that taught us both Windows NT and (I believe) Red Hat Linux in 1998, for running web servers. Linux made much more sense when thinking about a multi-user system running various services. I ended up dual booting Windows and Slackware to learn Linux, while also being able to use the software I was already familiar with on Windows. If Microsoft had gone deeply into POSIX for their consumer systems, I don't believe I would have switched over to Linux except for quality of life improvements that may have come about (which may have never happened).
Someone gave me an NT4 CD when I was a kid and on the jewel case it said it supported x86, PPC, I think some other architectures too. I was disappointed I could never get our PowerMac to boot from it. :P
I had a job in 2006 where the sysadmin ran the entire shop on a single OpenVMS server with two 500MHz Alpha CPUs. It was our file server, AD server (somehow), DB server, web server, everything.
I just looked it up and that CPU came out in 1996, at the same time Intel released their 200MHz Pentium. Alpha was way ahead of its time.
Somewhere there is a blog post about how that CPU had some hardware bug that prevented it from hitting 1GHz; without it, there would have been a 1GHz 64-bit processor in a desktop very early on. I really cannot find the blog. It was down a rabbit hole of how Intel and AMD both picked some IBM mainframe to base their designs off, but AMD went for an older, quirkier design that allowed a lot of advantages. Anyway, search engines are awful now.
I tried a few Kagi searches, but found Wikipedia claiming that at least one Alpha ran faster:
> The Alpha 21164 or EV5 became available in 1995 at processor frequencies of up to 333 MHz. In July 1996 the line was speed bumped to 500 MHz, in March 1998 to 666 MHz. Also in 1998 the Alpha 21264 (EV6) was released at 450 MHz, eventually reaching (in 2001 with the 21264C/EV68CB) 1.25 GHz.
Because desktop PowerPC was unpopular for anything other than Macs, and PPC NT wasn't designed for Open Firmware. Though you can blame Microsoft's monopoly tactics, in which loyalty to x86 was the prime directive. Alternative OSes on PPC Macs were not a widely accepted idea either: people stuck with classic Mac OS, in (ultimately) vain hopes that Copland/Taligent would deliver.
PReP/CHRP was a venture by Apple, IBM, and Motorola (but primarily IBM) to let companies build OSes for a common platform; Apple ultimately didn't want to participate. But like the other ports of NT, to MIPS and Alpha (Alpha enjoyed much more support, until Compaq dropped it), its popularity was well beneath that of the comparatively dirt-cheap x86 platform.
This had nothing to do with Microsoft's monopoly. And nothing to do with exclusively 'desktop' PPC -- remember, NT4 Server also had PPC support through SP2. IBM's intention was for servers, not desktops (though they did produce compatible laptops/desktops), to support operating systems such as AIX and S[l]o[w|l]aris
I wanted one on my desktop so bad. DOS/Win3.1 PCs were just so bad to me. Then I saw SunOS and really just wanted any Unix system on my desk. I bought a PC in 1994 just to install Linux.
> AIX was the first Unix I ever used in 1991 on an IBM RT with the ROMP CPU
Nice. Me too, but it was about 1989. It was the first machine I ever compiled a C program on, and it took me ages to find `a.out` and work out that that was my binary. I was more used to DOS compilers that turned $SOURCENAME.$ext into $sourcename.EXE.
Right; NT was and is likewise quite capable of being a desktop or server OS, the difference is that MS actually pushed that and NT 10.0 is still actively used in both contexts.
An interesting footnote is that installing the web browser package on AIX caused a telemetry dial-home to Big Blue. I found this because it was keeping our ISDN BRI leased line up constantly and costing us money. Blocking that address solved it.
It's been mentioned here before, but Dave Plummer's YouTube interview[1] with Dave Cutler[2] (NT lead architect) is a must-watch with never revealed before history about Windows NT.
Windows NT was something pretty interesting though. It was meant for a corporate environment but was well known as a much more "stable" version of Windows than 95 and 98. All happening in parallel, with the Power Macintosh even!
NT4 was more stable, to a point. But it had its serious annoyances.
IP address change? Reboot. I can't remember other scenarios, but there was a lot of <settings change, reboot> in NT4.
Graphics and print drivers? Moved from user space (NT 3.5, 3.51) to kernel space -- and oh boy, graphics card vendors SUCKED at writing drivers (printer manufacturers never improved to this day, hence the elimination of vendor print drivers). BSOD city.
NT4 was also hobbled as a desktop OS for consumers due to only supporting up to DirectX 3.
But! It was super popular for CAD/CAM/3D modeling. FireGL cards. 3DLabs cards. Yissss those were awesome monsters.
It also required a hefty 16MB RAM but as with all minimum system requirements from Microsoft, it really shined on 32MB+.
This is one of the reasons we had Windows 9x -- memory was too expensive to run NT4.
One of my favorite Microsoft-led NT4 (and SQL Server) proving grounds was TerraServer [0][1].
I miss the days of NT4. Computing was much more interesting and varied.
> It also required a hefty 16MB RAM but as with all minimum system requirements from Microsoft, it really shined on 32MB+.
This is just me going down memory lane, but my aunt gave me her corporate Toshiba laptop with NT4 back in 1996 since she hated the idea of working from home back then. I made friends with a kid who brought his dad's work laptop to school with similar specs and it always ran blazingly faster, always wondered why. I remember my aunt's laptop had 16mb of RAM. Thanks for clearing this up :)
That Terraserver wiki article has my head spinning a bit. Crazy how time flies.