Why did Windows 95 setup use three operating systems? (microsoft.com)





Raymond Chen's blog and book (The Old New Thing) are an absolute delight! I always had big respect for how intuitive the Windows 95 GUI is, and reading his description of the thoughts and methods behind its inception, it's no surprise that it became so good. It seems like Microsoft was extremely pragmatic and reasonable in many of its endeavors back then. It's a wonder how it degenerated into the absolute unit of sh*t that is modern Windows (even if the filesystem and kernel are arguably a lot better, everything on top seems to be developed by an army of interns)

Army of interns sounds about right. I always suspected that Teams is developed by some beginners who are learning Scrum.

But yes, Windows 95 through Windows 2000 was a huge jump in usability. From Windows 8 onward, with the "Metro" interface, they threw it all away.


>I always suspected that Teams is developed by some beginners who are learning Scrum

I get the same feeling from Google's Android and Pixels. Lots of neat features keep getting added, but the SW and HW issues that end up in the final product make it seem like an incredibly amateurish effort for such a wealthy company hiring top talent.


I think they rotate the interns every year. That would explain the GUI^WUX advances. /s

UI/UX getting shittier over time is an industry-wide phenomenon, not a Google-exclusive one. To me everything kinda peaked in the late 2000s and has been on a downward slope ever since mobile became the dominant platform, so everything from desktop PCs to cars had to look and function like a phone.

Perhaps controversial, but I think the ability to ship continuously is also a factor. You just have to get it to pass, and fix the things with the most complaints afterwards. A holistic view of the general user experience will never make it to the top of the pile now.

I think yearly releases were much better. I feel a lot of sites and software now have constant change you can't keep up with, and often the change doesn't really improve anything. Reddit would be a good example: I constantly have buttons disappear and reappear or move around. Same on YouTube. Much friction for no benefit.

I know at one point Microsoft and IBM both invested significantly in UX research. It doesn't feel like that's happening anymore, or if it is, I guess I must be drifting out of touch with what's considered intuitive in user interfaces. It's not just MS either; I feel like the ability to discover what you can do in an app/site is now hidden by aesthetic choices over functional ones.

I remember being pulled into user surveys and usability studies while wandering the mall back in the day, being given a series of tasks to accomplish on various iterations of a Windows GUI (in the Windows 9x era) while they observed, and then being paid $100 for my time for each one I participated in.


These kinds of studies are still very much there at MS, and not just for Windows.

The problem, in my experience, is that when some manager really wants to do something, they'll find a way to justify that with a study. Pretty much every UX decision that ended up being universally panned later had some study or another backing it as the best thing since sliced bread.


>filesystem is a lot better

In my tests it was 6-7x slower than on Linux (in VirtualBox on Windows). I assume by better you mean more features?

On a related note, I used one of those system event monitor programs (I forget the name) and ran a 1-line Hello World C program; the result was Windows doing hundreds of registry reads and writes before and after running my one line of code.
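
For reference, the program in question would have been nothing more than the canonical one-liner, something like:

    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world\n");   /* the single line of actual work */
        return 0;
    }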

Granted, it doesn't take much time, but there's this recurring thing of "my computer being forced to do things I do not want it to do."

I also — and this is my favorite, or perhaps least favorite one — ran Windows XP inside VirtualBox (on Windows 10). When you press Win+E in XP, an Explorer window is shown to you. It is shown instantly, fully rendered, in the next video frame. There is no delay. Meanwhile on the host OS (10), there is about half a second of delay, at which point a window is drawn, but then you can enjoy a little old-school PowerPoint animation as you watch each UI control being painted one by one.

(Don't get me started on the start menu!)

Twenty years of progress!


>On a related note, I used one of those system event monitor programs (I forget the name) and ran a 1-line Hello World C program; the result was Windows doing hundreds of registry reads and writes before and after running my one line of code.

You were probably using Sysinternals process monitor. https://learn.microsoft.com/en-us/sysinternals/downloads/pro...

Windows does a tonne of things in the background, yes. If I run that and let it monitor everything, things will happen even if I do nothing. It is an OS, and a complex one.

>It is shown instantly, fully rendered, in the next video frame. There is no delay

THIS is true and also crazy to me. I forgot how fast XP was. Especially on modern hardware. I TeamViewer into a laptop with an i5 CPU and Windows XP (medical clients...) and it felt faster than my more powerful local machine!!

I have set my sysdm.cpl performance settings to this, and it does help a bit to get rid of the animations and crap.

https://imgur.com/a/jlre36e

Yea... I like the 10 start menu but they destroyed it in 11...


And it was considered bloated at the time!

> I assume by better you mean more features?

I would guess better on current Windows than on Windows 95. I don't know about faster, but NTFS is most probably more reliable than FAT32. And also more features of course, and fewer limitations. At least the file size limit (4 GB) and ownership/rights metadata (ACLs).


Depends how you use it.

If you are handling a stupendous number of small files (say doing an npm build) then metadata operations are terribly expensive on Windows because it is expensive to look up security credentials.

Maven is not too different from npm in how it works, except instead of installing 70,000 small files it installs 70 moderately sized JAR files that are really ZIP files encasing the little files. It works better on Windows than npm does. Npm got popular, and Microsoft had to face down the problem that people would try building the Linux kernel under WSL and get awful times.

Microsoft knows it has to win the hearts and minds of developers and they believe in JS, TS and Visual Studio code so they’ve tried all sorts of half-baked things to speed up file I/O for developers.


No filesystem is great in the lots-of-small-files case, partly simply due to syscall overhead.

There's a reason https://www.sqlite.org/fasterthanfs.html , SquashFS, etc. are a thing, or why even the admins of Europe's fastest supercomputer admonish against lots of small files. https://docs.lumi-supercomputer.eu/storage/#about-the-number...
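
A quick way to feel the per-file overhead is to write the same bytes both ways and compare. A minimal C sketch (file names and sizes are arbitrary, timings are coarse, and it leaves 10,000 temp files behind):

    #include <stdio.h>
    #include <time.h>

    /* Write ~2.5 MB as 10,000 tiny files vs. one file. The gap is mostly
       per-file metadata work (create, close, access checks), not data
       transfer. Results vary by OS, filesystem, and cache state. */
    #define N    10000
    #define SIZE 256

    int main(void)
    {
        static const char payload[SIZE];    /* zero-filled buffer */
        char name[32];
        clock_t t0, t1, t2;
        FILE *f;
        int i;

        t0 = clock();
        for (i = 0; i < N; i++) {           /* one create/write/close each */
            sprintf(name, "f%05d.tmp", i);
            if (!(f = fopen(name, "wb"))) return 1;
            fwrite(payload, 1, SIZE, f);
            fclose(f);
        }
        t1 = clock();
        if (!(f = fopen("big.tmp", "wb"))) return 1;
        for (i = 0; i < N; i++)             /* one file, sequential writes */
            fwrite(payload, 1, SIZE, f);
        fclose(f);
        t2 = clock();

        printf("many small files: %.2fs, one big file: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }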


SquashFS is a (read only) filesystem designed for small files.

Which shows that even if you don't want to call any filesystem great here, they differ vastly in just how badly they handle small files. Windows' filesystems (and more importantly its virtual filesystem layer, including filters) are on the extremely slow end of this spectrum.


The reason LUMI is advising against many files is that it uses the Lustre parallel filesystem, which is notoriously bad with small files. See here: https://www.lanl.gov/projects/national-security-education-ce... .

I guess that's where the "sqlite competes with fopen" part might help.

That used to be ReiserFS's claim to fame - tail packing small files. Doesn't seem like any FS has really optimized small file handling since then.

It is not so much the wasted space that bugs me as all the metadata and processing of that metadata.

For instance, the 70,000 files in your node_modules do not need separate owner and group fields (they all belong to the same user under normal conditions) and are all +rw for the files and +rwx for the directories. If you have access to one you have access to all and if your OS is access checking each file you are paying in terms of battery life, time, etc.

On Windows it is the same story except the access control system is much more complex and is slower. (I hope that Linux is not wasting a lot of time processing the POSIX ACLs you aren’t using!)


As someone who has done some fairly extensive reboot testing of "power failures" for FAT32 and NTFS, I can guarantee you that NTFS is far more robust, but it's not bulletproof.

Oh, I meant that NTFS is a lot better than FAT16/32. I mainly use Linux, and I like ext4 and love ZFS, but NTFS is also reasonably stable.

> Twenty years of progress!

Well, it does have more ports normally open and starts connecting to MS as soon as possible, so yes, it is progress. /s


Interesting take, I feel like the current iteration of the Windows UI is actually really good. Microsoft seems to be continually tweaking and improving it.

Tweaking, yes. Improving, not so much. If anything, things have been regressing lately - for example, we were able to make the taskbar vertical ever since it became a thing in Win7, and it makes perfect sense to do so on any landscape-oriented display for the most efficient use of display space. The new redesigned taskbar in Win11 killed that option outright.

With heavy tweaking, 11 is okay... but the standard right click menu is a disaster for power users. And many other settings like the start menu they destroyed.

Open Shell has been a first install for me since Windows 7.

I've heard tales of when working at MS was the job everyone in "dev" aspired to, like for Google and the like in their heyday.

I love that little nugget of info at the end. You could originally run Excel standalone without an OS, and it came with Windows 2.1 bundled.

Only a thumbnail from the Wikipedia page mentioned in the article was saved to the Internet Archive [1], but it appears the same image was uploaded to Wikia: https://static.wikia.nocookie.net/windows/images/3/34/Excel2....

The original description of the file uploaded to Wikipedia read [2]:

Microsoft Excel 2.1 included a run-time version of Windows 2.1

This was a stripped-down version of Windows that had no shell and could run just the four applications shown here in the "Run..." dialog.

The spreadsheets shown are the sample data included with Excel.

[1]: https://web.archive.org/web/20090831110358/http://en.wikiped...

[2]: https://web.archive.org/web/20081013141728/http://en.wikiped...


The current article[0] says:

> Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a runtime version of Windows 1.0 with Excel 2.0.

[0]: https://en.wikipedia.org/wiki/Microsoft_Excel


Did Windows 1.0 run without DOS ?

No. No version until NT ran without DOS. Even if you installed Win95 from scratch, DOS was there, bundled.

I remember a version of PageMaker also came with Windows. [0]

"Until May 1987, the initial Windows release was bundled with a full version of Windows 1.0.3; after that date, a "Windows-runtime" without task-switching capabilities was included"

I actually thought it was cut-down, but it only had task-switching disabled.

[0] https://en.wikipedia.org/wiki/Adobe_PageMaker


Most interesting part of the whole thing for me! The later WinPE environments are some of the most overlooked computer environments out there but they were absolutely everywhere. EPOS, ATMs, digital signage, vending machines.

And of course the subject of so many BSOD photos…


>The later WinPE environments are some of the most overlooked computer environments out there but they were absolutely everywhere. EPOS, ATMs, digital signage, vending machines.

Those are probably CE and not PE?

https://en.wikipedia.org/wiki/Windows_Embedded_Compact

or Embedded, based on CE

https://en.wikipedia.org/wiki/Windows_IoT#Embedded_family


If there was a BSoD involved, it was probably one of the NT-based Windows Embedded versions (NT 4.0 Embedded, XP Embedded, …).

WinPE is the Windows Preinstallation Environment, used as the basis for Windows installation and recovery, and available for custom builds as an add-on to the Windows ADK[1], but AFAIK not intended or licensed for embedded use.

[1] https://learn.microsoft.com/en-us/windows-hardware/manufactu...


Exactly. Windows PE is used for install and recovery. NOT to run ATMs.

It was probably embedded standard based on NT/XP/7

https://en.wikipedia.org/wiki/Windows_IoT#Embedded_Standard


Unrelated to WinCE, WinPE is the version of the NT kernel that the Windows setup DVD or netboot installer has used since Vista.

You could probably build a really nice UI atop it if you were so inclined. To prevent people from doing this as a way to bypass Windows licensing, there is a timer that causes WinPE to periodically reboot itself if you leave it running.


Yes. No way WinPE was used to run ATMs is what I am saying :)

It was probably embedded standard based on NT/XP/7

https://en.wikipedia.org/wiki/Windows_IoT#Embedded_Standard


Nope. Windows CE was more in the old-school "smartphone" (pre-iPhone) and PDA space, marketed along with a baseline hardware profile and called Pocket PC. Also used in a number of industrial PDAs (think postal service and warehouse scanners) and set-top boxes, and then various and sundry embedded devices, but usually these tended to be smaller, often battery-powered form factors and/or headless. While x86 was a target, more often than not it was ARM or MIPS. Windows CE was early on pushed for video games on the Sega Dreamcast and a short-lived smart car OS called Auto PC. Signage, ATMs (if they weren't OS/2), and test equipment more often ran bonafide Windows NT on commodity x86.

It was probably embedded standard based on NT/XP/7

https://en.wikipedia.org/wiki/Windows_IoT#Embedded_Standard


Triton ATMs very prominently ran Windows CE and then Windows Embedded Compact, the RTOS one that was also under Windows Mobile, not “full” Windows NT.

Actually yes, what I more specifically meant was WinXP Embedded and family.

There is also IoT Windows, and MiniNT, used to bootstrap older NT installs.

None of that stuff was pre-NT, though. Windows 2.1 was not something you'd want to deploy on an ATM.

FWIW lots of ATMs happily ran OS/2 1.x.

Of course, it was a related thought

IIRC, the HP ScanJet IIcx also came with a copy of Windows 2.something.


I think it needed DOS … just not the Windows “shell”

It came bundled with a stripped-down version of Windows 2.x missing the application launcher (in Windows 1.x/2.x known as the MS-DOS Executive, replaced by Program Manager and File Manager in Windows 3.x), so it could only be used to run one application (Excel) unless you fiddled with its configuration.

Yes, it needed DOS, because pre-3.11 Windows versions actually used the DOS kernel for all file access. When 32-bit file access was introduced in WfW 3.11, that was no longer true - but it was an optional feature you could turn off. In all pre-NT Windows versions, Windows is deeply integrated with DOS; even though in 9x/Me that integration is largely for backward compatibility and mostly unused when running 32-bit apps, it is still so deeply ingrained into the system that it can't work without it.

IIRC, Microsoft tried to sell the same stripped down single-app-only Windows version to other vendors, but found few takers. The cut-down Windows 3.x version used by Windows 95 Setup is essentially the 3.x version of the same thing. Digital Research likewise offered a single app version of their GEM GUI to ISVs, and that saw somewhat greater uptake.


Word had that too.


> Raymond has been involved in the evolution of Windows for more than 30 years. He occasionally appears on the Windows Dev Docs Twitter account to tell stories which convey no useful information.

Way to go Mr. Raymond!


Because, when they did it right, in Windows NT 3.51, the users with legacy 16-bit applications screamed. There was a 16-bit DOS compatibility box, but it wasn't bug-compatible with DOS.

Microsoft underestimated the inertia of the applications market. NT 3.51 was fine if you used it as a pure 32-bit operating system. You could even configure it without DOS compatibility. Few did.


Contemporaneously on Linux, there was a "DOS box" (DOSEMU) that could play Doom at full speed, with sound - something the NT DOS box couldn't manage.

Microsoft had the resources and expertise to make excellent DOS compat on NT. They just didn't. The reasons are many: they just didn't want the expense, "binning" (Windows 9x for consumers, Windows NT for professionals and enterprises), plus Windows NT was a memory hog at the time and just wouldn't run on grandma's PC.


> when they did it right, in Windows NT 3.51, the users with legacy 16-bit applications screamed

I mean, I don't think there is anything "right" involved from the users' perspective when all they get is the programs they want to use their computer with becoming broken :-P.

In general, people do not use computers for the sake of their noise, nor OSes for the sake of clicking around (subjectively) pretty bitmaps; they use computers and OSes to run the programs they want. Anything beneath the programs is a means, not an end.

(and often the programs themselves aren't an end either - though exceptions, like entertainment software/games, do exist - but a means too; after all, people don't use, say, Word to click on the (subjectively, again) pretty icons, they use it to write documents)


That may have been Dave Cutler's doing. Cutler came from DEC and did the OS for the VAX (not Unix - DEC's own VMS). When DEC went from 16 to 32 bits, the VAX was made hardware-capable of booting up in PDP-11 mode. Nobody used that feature, because the result was an overpriced PDP-11. So it seemed reasonable to think that, once the 386 was out, everybody would run 32-bit software on new hardware.

That did not happen. 16-bit applications hung on for a decade.


>anything beneath the programs is a means, not an end.

This.

Absolute backwards compatibility is why Windows (particularly Win32) and x86 continue to dominate the desktop market. Users want to run their software and get stuff done, and they aren't taking "your software is too old" for an answer.


Unfortunately no OS so far cares about backwards compatibility for users. Everyone thinks it is ok to force whatever new UI and workflows their monkeys dreamed up on the userbase.

My desktop (Linux running Window Maker) looks and feels more or less the same today as it did in 1997 - years before i even learned about it :-P. If anything, bugs aside, all changes since then have been for the best IMO.

Of course that's mainly possible because of how modular the Linux desktop stack is.


Yep.

I finally abandoned CorelPHOTO-PAINT 3.0 only when I moved to x64 Vista in 2008.


What did you use instead?

Settled on Paint.NET.

I honestly tried to use GIMP multiple times but it's always felt.. unnatural.

https://www.getpaint.net/

NB: but IrfanView is still my goto picture viewer.


Something the Unix world can certainly learn from.

Sun used to take binary compatibility very seriously. Solaris 8 (and perhaps later releases) still had a compatibility layer for SunOS 4.x binaries. Solaris 11 can still run Solaris 2.6 binaries.

Linux is another matter entirely, if your binaries run at all from one distribution release to the next you're doing well.


Linux doesn't need binary compatibility as much as Windows does: lots of source packages will compile right away on a vast array of different operating systems, typically excluding Windows but including Linux, and Linux is a few clicks away from running a fair number of MS-DOS and Windows applications - probably more than any single Windows version can. Linux is king in compatibility.

Linux needs binary compatibility every bit as much as Windows. Even among people who are nerdy enough to run Linux on the desktop, very few are interested in compiling software to make it work.

> Even among people who are nerdy enough to run Linux on the desktop, very few are interested in compiling software to make it work.

I would imagine most desktop linux users rely on maintainers to compile and distribute binaries for their particular flavor.


I maintain a package for which the upstream source hasn't changed in about 23 years. I still need to intervene once or twice a year because something else changes in the distro to cause that package to no longer build or run.

As someone who still uses older software, thank you for your service! I still use Dia and xfig (because I’ve built up tons of “blocks” for them) and appreciate people donating their time to keep some ancient software going :)

>I would imagine most desktop linux users rely on maintainers to compile and distribute binaries for their particular flavor.

Which is less ideal than just having general binaries that work.


That doesn't help you at all when you don't have the source and even then, compiler changes break source compatibility all the time.

Not just Linux, Mac too (people forget that they run a certified BSD kernel).

Macs use FreeBSD as the foundation for their base system, but the kernel itself is a Mach derivative.

Even if you get the source, you end up with incompatibilities, from the compiler to libraries (especially when reaching GUI/GNOME etc.).

Systems like Solaris are a lot more restrictive in which sets of libraries they provide (not "package up everything in the world" as some Linux distros do), but what they provide, they keep working. (I haven't touched a Solaris system in a long time, but I assume they didn't start massive "innovation" since then.)


That's great when you're distributing your software for free and giving away the source code too, but it's a complete non-starter for commercial software.

This discussion feels a bit ancient to me.

Considering that desktop apps nowadays rely on web counterparts to be functional, most commercial apps will stop running after some time, regardless of whether operating systems keep compatibility or not.


Surely you've just outlined the very best reason to keep alive old applications that don't require a web counterpart!

Personally I want to keep GuitarPro 6 alive (There's no newer version for Linux because binary software distribution on Linux wasn't worth the trouble) and Quartus 13.1 (because I still write cores for a CycloneIII-based device and 13.1 is the last version to support that chip.)


Not every desktop app these days is an electron monstrosity. Especially not if it's a commercial app.

IME Electron is more popular among commercial apps.

To be clear, it is not the binary format itself that changes and causes incompatibility. ELF is ELF and hasn't appreciably changed. And it isn't even really kernel syscalls (though that isn't etched in stone, I don't get the impression they've changed that much). The problem is the libc or other shared libraries.

Seems like the way that this is "fixed" is by using containers. But it feels so...bloated.


Same on Solaris. Maintaining userland compatibility was a large part of what Sun did.

The solution that Windows provides for the problem -- WinSxS -- is, ultimately, no less bloated.

These days, Windows actually provides a single standard C runtime library in the OS with strong backwards compatibility guarantees, no WinSxS needed.

> Linux is another matter entirely, if your binaries run at all from one distribution release to the next you're doing well.

Linux binary compatibility is actually pretty good, as is glibc's and that of many other low-level system libraries. The problem is only programs that depend on other random installed libraries that don't make any compatibility guarantees, instead of shipping their own versions. That approach is also not going to lead to great future compatibility on Windows either. The only difference is that on Windows the base system library set is bigger, while on Linux it is more limited to what you can't provide yourself.


Doesn't Windows setup still use some weird minimal version of Windows? I remember having to hack my employer's provisioning once to install virtio drivers, and I ended up in this "purgatory OS" where I could use DOS commands to add drivers.

It's called "WinPE" or Windows Preinstallation Environment. It's very strange. Tools you really think would work like Powershell are completely absent by default, and when added work in unexpected ways.

Adding drivers is also painful, since it doesn't really support all of the regular Windows drivers. To sideload drivers you have to do it twice: once for the install image and once for the WinPE image.
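
From what I remember, the dance looked something like this (paths are examples; the same /Add-Driver step then has to be repeated against install.wim):

    dism /Mount-Wim /WimFile:C:\winpe\boot.wim /Index:1 /MountDir:C:\mount
    dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
    dism /Unmount-Wim /MountDir:C:\mount /Commit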

*My knowledge of this stuff is about 7 years outdated, so it's possible they've improved it since then... Unlikely but possible.

https://en.wikipedia.org/wiki/Windows_Preinstallation_Enviro...

https://learn.microsoft.com/en-us/windows-hardware/manufactu...


I did a bunch of work with WinPE for Win 7, 8, and 10. Customizing the environment for automated deployment was a ton of fun. I really came to appreciate the tooling available in WinPE (and its eccentricities). It's probably what really paved the way for my transition to Linux.

Don't modern versions of Windows do the same? For example, I clearly remember that the Windows 10 installer first launches a Windows 7-like environment.

The installer runs in Windows PE (a stripped-down, live version of Windows). It looks like Windows Vista, because Microsoft never bothered to update the bitmaps making up the fake theme.

I thought I read somewhere recently that they finally updated the installer

Remember, very few people install Windows in the modern world - most systems ship with Windows and are then only repaired or upgraded.

That is presumably why Microsoft doesn't put much engineering effort into the install-from-empty-disk case.


All versions of Windows NT setup run on the same kernel as the OS being installed. Pre-Vista versions would stick to a basic text buffer that was a lowest common denominator.

Windows Vista and newer launch a more substantial version of the OS with the graphics system and Win32 services running, but they never intermix versions. Windows 10's DVD loads Windows 10 to run the installer. That they haven't updated the pre-baked Aero graphics since Vista is a laziness problem, not indicative of being "actually Vista/7" :)


>Windows Vista and newer launch a more substantial version of the OS with the graphics system and Win32 services running, but they never intermix versions.

And yet, you can use the Windows 11 installer to install Windows 7 and have it be significantly faster because of that.


Windows Vista introduced a wim-based method of installing where it effectively unpacks a whole prebaked installation image onto the hard disk. It is the X:\Sources\install.wim file on the install DVD.

I'm not surprised that you can mix up versions by modifying your installation image, since the installation method hasn't changed since Vista. As Microsoft ships them, however, you boot Windows 11 to install Windows 11. :)
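
Applying such an image by hand is a one-liner with dism; the drive letters here are just examples from a typical WinPE session:

    dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 /ApplyDir:W:\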


That's as much just the mid-Windows 10 era shift from the Windows installer installing Windows "one file copy at a time" to shipping virtual hard disk images and installing the whole thing like (but not exactly as) a container overlay.


Aren't all modern (>XP) Windowses just NT6 under the hood? Is there such a clear delineation between 7 and 10, for example?

It feels like NT4, with 2000 on top of it, then a layer of XP, then Vista, then 7, then 8, then 10, and, finally, 11.

It’s not uncommon to do something that lands me on a dialog box I still remember from Windows NT 3.1. The upside is that they take backwards compatibility very seriously, probably only second to IBM.


IIRC there is only a single Windows 3.1 dialog box in Windows 11 (ODBC Microsoft Access Setup -> Select Resource file chooser). Lucky you.

Windows 10 also had a Windows 3.1 dialog in the Fonts folder (when you try to add a new font), but they fixed it in Windows 11.

Like the services control panel that's still 30% white space, has a useless "extended" tab, and doesn't save any of the location or style data. It's the best.

I also really enjoy that there seem to be a lot of things that run on top of a mutant version of MMC. IIRC, that's also from the NT4 days.

Yes, it may not be the prettiest, but I can get 20-year-old games to run with relatively little troubleshooting.

You can still run 50-year-old games on your newest IBM z16 mainframe. Microsoft is just a baby in this business. If you have a Unisys ClearPath box, you can even get some 60-year-old games running, provided you can read the punchcards ;-)

Fun fact: Unisys Clearpath today is just x86 boxen emulating the old Burroughs CPU architecture. You can even deploy Clearpath instances to AWS. And they still run Mr. High and Mighty Master Control, too.

MCP is the most user hostile OS I’ve ever had the displeasure of using. And don’t even start me about CANDE.

To answer your other comment: yes, the MCP from Tron was named after the Burroughs/Unisys OS. Bonnie MacBird, the screenwriter for Tron, is the wife of Alan Kay, who served as technical consultant, and she dropped references both to himself (Alan Bradley) and his areas of interest (Kay loved the Burroughs architecture and in particular its tagged-memory features) in the script for the movie.

Interesting to learn the real MCP was nearly as hostile as its fictional namesake.


It continued as 6.1, 6.2, and 6.3 for Windows 7, 8, and 8.1, but the NT kernel was revamped for Windows 10, and they aligned the version numbers at that point. Windows 10 and 11 are both NT 10. The kernel has many differences within 6.x, let alone the big leap to 10.

Just like macOS was Mac OS X (10) for a very long time, then they moved to 11 with Big Sur but it's really only in name.

macOS Sequoia is version 15; whoever reaches 20 first wins, right!?


Joke's on them, I'm running Fedora

There have been substantial, iterative improvements to the NT architecture since Windows 2000, and later with Vista (where the UAC model started, rather poorly).

Really? Windows 2000 (which was based on NT) hit the sweet spot for me. Anything earlier than that seemed too buggy, and from there onward it just seemed to devolve into a bloated mess of unrequested "features". (The very reason why I have been using Linux for 20+ years now, as a matter of fact.) But to be completely fair, I suppose not everything they have produced since has been crap. There have been some innovations here and there...

Linux has enough unrequested features as well, and the distributions are a fragmentation mess.

20 years ago, we thought eventually either GNOME or KDE would win, instead became even more fragmented, across all layers.


They're all NT, though I'm not sure how you mean "NT6"; XP was NT 5, Windows 10 was NT 10, and I think 11 is 11.


Incidentally, this is why most drivers broadly support 2000/XP (NT5), Vista/7/8/8.1 (NT6), and 10/11 (NT10) in those specific groups.

Windows 11 is still NT 10.0

The fact that Windows can upgrade an installation in place with relatively high success is impressive. Is it possible to have an install that's been repeatedly upgraded all the way from MS-DOS without needing a reformat somewhere along the way?

I assume that's one of the things tested before any release is made.

It would be pretty easy to set up automation with a bunch of disk images of old copies of Windows (both clean copies and customer disk images full of installed applications, upgraded many times), then automatically upgrade them to the newest release and run all the integration tests to check everything still works.



There are various YouTubers who have tried upgrading MS OSes through as many versions as possible, and they have taken it pretty far,

including preserving custom user config (colors, background images).

I feel strange about hating on MS after the 2000s


If you need something to hate them for (and they absolutely deserve it for way more than just this, it just stands out as an obvious one): they put adverts in the start menu, and those were still present in Windows 10.

Oh MS the company deserved the hate - and still does - even if Windows the OS (or at least Win2k SP4, WinXP SP3 and Windows 7) became pretty solid and dependable.

Let me fix that: Internet Explorer 6 existed and far overstayed its welcome well into the 2000s.

And then people instead of learning this lesson, helped Google turn the Web into ChromeOS.

What tech told us, it seems, is that the problem lies between chair and (keyboard|parliament|wheel|...).

On the other hand, IE6 held web developers back from bloating their sites and turning them into "apps" when all the users want is a document with maybe a little interactivity here and there.

Rolling Linux distros must blow your mind.

All your PC needs is some kind of BIOS, whether real or Legacy CSM.

And a fresh SSD that you can set up with a traditional MBR layout instead of a GPT layout. GPT doesn't work with BIOS; it requires UEFI to boot, which is prohibitive for DOS.

You install DOS on your first partition after formatting it FAT32.

Just the same as getting ready to install W9x next.

The DOS from W98 is ideal for partitions up to 32GB.
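
From the W98 boot floppy, that prep amounts to two commands (answer "yes" to large disk support so the partition ends up FAT32):

    fdisk        (create a primary DOS partition, mark it active, reboot)
    format c: /s (format the volume and copy the DOS system files to it)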

After you install W9x next, it's easy to drop back to a DOS console within Windows, or reboot to clean DOS on bare metal, which is what people would often do to run older DOS games. DOS is still fully intact on the FAT32 volume, which it shares with W9x as designed. It's basically a factory-multiboot arrangement from the beginning.

The only problem is, W98 won't install easily if there is more than 2GB of memory.

DOS can handle huge memory though (by just neglecting it) so might as well skip W98 unless you have the proper hardware.

Very few people would put any form of Windows NT on a FAT32 volume, and IIRC W2k doesn't handle it, but XP performs much better on FAT32 than NTFS, if you can live without the permissions and stuff.

Do it anyway. When you install XP intentionally to that spacious (pre-formatted, fully operational) FAT32 volume which already contains DOS, the NT5 Setup routine makes a copy of your DOS/W9x bootsector and autosaves it in the root of your C: volume, in a file known, not surprisingly, as bootsect.dos. It then replaces the DOS boot routine with a working NT bootsector and uses NTLDR to boot going forward.

Most people who migrated from DOS/W9x to WXP started out with XP on NTFS, so users lost FAT32 overnight, and the idea was for DOS, W9x and FAT32 to be taken away from consumers forever. None of the NT stuff would directly boot from DOS bootsectors any more, you ended up with an NT bootsector and NTLDR instead. DOS and W9x don't recognize NTFS anyway, they were supposed to be toast.

So if you install both DOS & WXP side by side on the same FAT32 partition like this, you can still dual-boot back to DOS any time you want using the built-in NT multiboot menu (which never appears unless there is more than one option to display), where the DOS/W9x bootentry simply points to bootsect.dos; DOS then carries on like normal, since it's the same FAT32 volume it was before, other than its new NT bootsector.
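
The NTLDR side of that is just a couple of lines in boot.ini (ARC path shown for a first-partition install; the C:\ entry is what makes NTLDR load bootsect.dos):

    [boot loader]
    timeout=10
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
    C:\="MS-DOS"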

Too bad there aren't any graphics drivers that XP can use on recent motherboards, so what, say hello to generic low-resolution.

I realize nobody ever wanted to skip Windows Vista, sorry to disappoint.

Next comes W7. But you would be expected to install from DVD and there's not really drivers for recent PC's.

What you would do is boot to the W7 Setup DVD, NOT to upgrade your XP, instead to create a second 32GB partition, format that second partition as NTFS and install W7 to there. The NT6 Setup routine will replace the NT5 bootsector on the FAT32 volume with an NT6 bootsector, and add a new BOOT folder right there beside the NTLDR files on the FAT32 volume. The built in NT6 multiboot menu may not appear unless you manually add a bootentry for NTLDR (which is easy to do), then you would be able to multiboot the factory way to any of the previous OS's since they were still there.

W10 is the better choice for a recent PC, do it by booting to the W10 Setup USB.

In this case the very first NTFS experience for this lucky SSD is going to be W10, quick while it's not yet obsolete ;)

I know people didn't want to skip W8 either :)

But if you had W7 or something on your second partition already, you would make a third partition for W10 and it would install like it does for W7. Except there would already be an NT6 BOOT folder in your FAT32 volume, with its associated bootmgr files. You would choose the third partition to format as NTFS and direct the W10 install to No.3. The NT6 Setup routine will end up automatically adding a bootentry for the new W10 into the same NT6 BOOT folder that was there from W7. So you can choose either of the NT6 versions which exist in their own separate partitions, from the factory multiboot menu if you want anything other than the OS that you have chosen as default at the time. As well as the OS's still existing on the FAT32 volume.

Oh yeah, once the NT6 BOOT folder is present, you can manually add a bootentry for the DOS you have residing on your FAT32 volume; then you can boot directly to DOS from the NT6 bootmenu without having to drop back to the (previous, if present) NTLDR way to boot DOS, using the same old bootsect.dos file which NT5 had in mind the entire time.
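
With the NT6 boot manager, adding that DOS entry by hand looks roughly like this ({guid} stands for the identifier the first command prints):

    bcdedit /create /d "MS-DOS" /application bootsector
    bcdedit /set {guid} device partition=C:
    bcdedit /set {guid} path \bootsect.dos
    bcdedit /displayorder {guid} /addlast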

Now this is vaguely reminiscent of how UEFI boots a GPT-layout SSD from its required FAT32 ESP volume.

However UEFI uses only an EFI folder and (unlike Linux) doesn't pay any attention to a BOOT folder if there is one present. But if there is an EFI folder present on an accessible FAT32 volume, UEFI is supposed to go forward using it even if the SSD is MBR-layout and not GPT.

You would have to carefully craft an EFI folder for this, but then the same SSD would be capable of booting to NT6 whether you had a CSM enabled or were booting through UEFI. Using the specific appropriate boot folder, either BOOT or EFI depending on motherboard settings. DOS would only be bootable when you have a working CSM option, not depending solely on crummy bare-bones UEFI.

You just can't boot to an MBR SSD with SecureBoot enabled.

You may even have an extra 64GB of space left over for W11, unfortunately the W11 Setup routine is the one that finally chokes on an MBR SSD.

You would have to do W11 some other way, and add it to your NT6 boot menu manually.

For extra credit.


> You install DOS on your first partition after formatting it FAT32.

DOS until v7 (Win95) didn't support FAT32. It also doesn't support weird FAT(12/16/16B) formats such as only one FAT copy.


There's really something missing today as opposed to the 80s and 90s: back in the day, you had new computer brands coming out every year, with their own OSes and hardware... It really was a perpetual amazement and joy. Of course at that time I didn't have to work with all those things, which made any compatibility issues not that important to me.

Yet I wonder: how can we relive those joys? It was a bit similar when mobile phones started, but that market settled much more quickly, leaving only 2 survivors for the last decade. What makes it so hard today to release new hardware+software for the mainstream market?


I think it boils down to inertia.

Microsoft absolutely played their cards expertly well in the nascent days of the microcomputer, all the way through to the new millennium. They also adopted bully tactics to prevent upstarts (like Be for example) from upsetting the apple cart and had ironclad contracts with OEMs when distributing Microsoft Windows.

It also helps that the IBM PC became such a pervasive standard. The competition, such as Commodore, Atari, Acorn, you name it, couldn’t help but find ways to blow their own feet off any time they had an opportunity to make an impact. Heck… even Apple came very close to self destruction in the late 90s, with Steve Jobs and NeXT being their Hail Mary pass.

In short, now that we’ve had personal computers for decades, it’s very difficult to break into this market with a unique offering due to this inertia. You have to go outside normal form factors and such to find any interesting players anymore, such as the Raspberry Pi and GNU/Linux.

As an aside, I kinda wish Atari Corporation had been run by less unscrupulous folks than the Tramiels. Had they realized their vision for the Atari platform better and not taken their "we rule with an iron fist and treat our partners like garbage" mentality with them from Commodore, they might have stuck around as a viable third player.


I was looking for a blog post from many years back about features Windows cannot provide, because it would result in logical inconsistencies if multiple applications used the feature. The blog post gave examples like setting the desktop background or file-application associations "definitively" from applications, and mentioned at least one example of two applications fighting, resulting in some chattering in the Windows UI.

I thought it was Raymond Chen's blog but I haven't been able to find it. I thought someone here might recall and have a pointer.


"What if two programs did this?" https://devblogs.microsoft.com/oldnewthing/20050607-00/?p=35...

This is a recurring theme featured in multiple of Raymond Chen's blog posts, though. This is just the earliest one I can remember.

Here's a few more: https://www.google.com/search?q=%22two+programs%22+site%253A...

The "what if two programs did this" mantra I learned from these blog posts helped me greatly in judging whether certain feature requests from users make any sense at all. As soon as it involves touching system configuration, even if there is an API for it, it's probably a very bad idea.


Clearly they should have used QNX for the installer so it would have fit on a single floppy.

I always thought Windows (until 7) looked so lacking in polish around the edges compared to even the earliest versions of Macintosh system software -- especially during install, boot, crash, and shutdown. During boot, for example, even modern Windows boxes [correction: pre-EFI only] show a BIOS screen followed by a brief blinking cursor before the Windows graphics mode takes over. It was much worse in earlier versions.

The Macintosh screen never dropped you into a text-mode console, no matter what. Everything on the screen was graphics-mode, always -- and there weren't glaring design changes between system versions like in Windows (except at the Mac OS X introduction, which was entirely new).

Installing Macintosh system software onto a HDD was literally as easy as copying the System Folder. System installer programs did exist, but in principle all that was happening was optionally formatting the target drive and then copying System Folder contents. So simple. Of course there were problems and shortcomings, but the uncompromising design esthetic is noteworthy and admirable.


>During boot, for example, even modern Windows boxes show a BIOS screen followed by a brief blinking cursor before the Windows graphics mode takes over.

This hasn't really been the case for more than 10 years now. EFI-based systems will boot without changing display modes. Some hobbyist custom PCs might have compatibility modes enabled, but any laptop or prebuilt system is going to go from logo to login without flickering.


It also has nothing to do with Windows. Apple controls both the hardware and the OS, Microsoft controls only the OS. If the OEM decides to have the system boot up in text mode, there's nothing Microsoft can do.

Microsoft has (had?) quite some power over OEMs and could set requirements on the system. A system not compatible with Windows won't see many sales.

However Microsoft values compatibility, which probably is in conflict with requiring more.


Microsoft does set such requirements, which is why today your typical Windows box with a "certified" sticker will boot directly into graphics mode.

That ability to copy system folders, and macOS generally not caring about what it's booting off of, was a lifesaver for both myself and the friends I've acted as tech support for several times over the years. It was a bit confusing when I discovered that Windows was nowhere near as forgiving in this regard.

It's easy to do things like that when you control the hardware end-to-end. Text mode makes perfect sense for BIOS that has to deal with so many different kinds of hardware, especially when you remember just how varied a mess graphics hardware has been on PCs well into late 90s.

Windows itself would generally assume the lowest supported hardware, so e.g. for Win95 the boot screen used VGA graphics mode (since the minimum requirement for Win95 UI itself was the VGA 640x480 16-color mode). BIOS had to assume less since it might have to find itself dealing with something much more ancient.


I somehow managed to avoid PCs and Windows as a kid – at home we went from Amiga > Acorn > Mac and my school was 100% Acorn.

I was always astonished going to friends' houses and watching them have to use DOS or Windows 3.1 and weird 5" disks. Just looked like it was from the past. Even Windows 95 looked terrible on boot with all the wonky graphics and walls of console text. I was convinced Windows would never catch on and Amiga or Acorn was the future as they were so much better.


There was never a text mode in Mac OS until version 10.

Yes there was: MacsBug, and later there was OpenFirmware. But you wouldn’t get dropped into MacsBug or OF if the machine crashed or failed to start up.

Neither of those are part of the operating system.

There has never been a "true" text mode in any Mac hardware (except for situations where there is vestigial support in the hardware from the days of Intel on Mac, and I doubt that support was ever available for application use). Even Macsbug was ultimately drawing pixels to a framebuffer.

I never thought of Windows 3.1 as an OS. The other two were MS-DOS and Windows 95.

Windows 3.1 is basically 95% of an OS. When you're booted into Windows 3.1, MS-DOS is only used to run MS-DOS software and to handle filesystem access. Windows 3.1 controls the processes, memory, video card, system timer, keyboard, mouse, printer, etc. (pretty much every system resource besides disk I/O) while it's running.

Agree, the terminology in those days was “shell”.

Though Windows 95 was arguably similar, running atop "DOS 7", it actually imposed its own 32-bit environment with its own "protected mode" drivers once booted. Dropping to DOS reverted to "real mode".


I don't think it actually reverted to real mode. I believe Win 95 continued with what they had done in Windows for Workgroups 3.11, which was the first Windows that required an 80386.

What that did is use DOS as a first-stage boot loader, then switch into protected mode and create a v86 task which took over the state of DOS. The protected mode code then finished booting.

v86 mode was a mode for creating a virtual machine that looked like it was running on a real mode 8086 but was actually running in a virtual address space using the 80386 paging VM system.

When you ran DOS programs they ran in v86 mode. If a DOS program tried to make a BIOS call it was trapped and handled by the 32-bit protected mode code.

v86 mode tasks could be given the ability to directly issue I/O instructions to designated devices, so if you had a device that didn't have a 32-bit protected mode driver a DOS driver in a v86 task could be used.

For devices that did have a 32-bit protected mode they would not give v86 mode direct access. Instead they would trap on direct access attempts and handle those in 32-bit protected mode code.

I wish Linux had adopted a similar use of v86 mode. I spent a while on some Linux development mailing list trying to convince them to add a simplified version of that. Just virtualize the BIOS in a v86 task, and if you've got a disk that you don't have a native driver use that v86 virtualized BIOS to access the disk.

Eventually someone (I think it may have been Linus himself) told me to shut up and write the code if I wanted it.

My answer was I couldn't do that because none of my PCs could run Linux because there were no Linux drivers for my SCSI host adaptors. I wanted the feature so that I could run Linux in order to write drivers for my host adaptors.

OS/2 did the v86 virtualized BIOS thing, and that was how I was able to write OS/2 drivers for my SCSI host adaptors.


So did the latest Win3.1 for Workgroups; MS just saved all the fanfare for Win95. Not sure if the 3.1 version in the installers does.

Windows 3.1 was just a graphical shell. All the drivers and stuff were still managed by DOS. You still needed to configure your system with config.sys

EDIT: it's coming back to me. Windows 3.1 did have a subsystem for running 32-bit apps, called Win32s - I think that's what you mean. This was very much in the application space though.

It still used cooperative multitasking and Win 95 introduced preemptive.


Bryan Lunduke has an article about this myth, actually!

https://lunduke.locals.com/post/4037306/myth-windows-3-1-was...

It’s backed up by another Old New Thing article at https://devblogs.microsoft.com/oldnewthing/20100517-00/?p=14...

The TL;DR is that Windows 3.1 effectively replaced DOS and acted as a hypervisor for it, while drivers could be written for Windows (and many were) or DOS (and presumably many more of those were actually distributed). The latter category was run in hypervised DOS and the results bridged to Windows callers.

(Edited after submission for accuracy and to add the Old New Thing link.)


One of the major motivations for Windows is that the driver situation for DOS really sucked. Every single office suite had to talk directly to printers. Text mode was reasonably uniform, but printing graphics required the application to know about the printer.

And games needed to talk directly to the video card and sound card if you wanted anything more than PC speaker beeps and non-scrolling screens on one of the default BIOS graphics modes.

One of the major selling points of Windows 1.0 was a unified 2D graphics API, for both screen and printing. The graphics vendor would supply a driver, and any Windows application could use its full resolution and color capabilities without needing to be explicitly coded for that graphics card. This rendering API also supported 2D accelerators, so expensive graphics cards could accelerate rendering. 2D accelerators were often known as Windows accelerators.
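
A minimal sketch of that device independence (drawing straight on the screen DC is just for illustration - a real app draws in WM_PAINT - but the point is that the same calls would go to a printer if the HDC came from CreateDC instead):

    #include <windows.h>

    int main(void)
    {
        HDC hdc = GetDC(NULL);               /* the whole screen's DC */
        TextOutA(hdc, 20, 20, "GDI says hi", 11);
        Rectangle(hdc, 20, 40, 140, 90);     /* same call on any driver */
        ReleaseDC(NULL, hdc);
        return 0;
    }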

Windows 3.1 still relied on DOS for disk and file IO, but everything else could be done by VxD drivers and should never need to call back to DOS or the BIOS (which was slow, especially on a 286).

With Windows 95, disk/file IO was moved into VxD drivers, and it was finally possible to do everything without ever leaving protected mode (though DOS drivers were still supported).

Read more about the history of Device drivers here: http://www.summitsoftconsulting.com/WinDDHistory.htm

And I really enjoyed this documentary about the development of Windows 1.0: https://www.youtube.com/watch?v=vqt94b8bNVc


It was specifically the acceleration that made it advantageous, especially once DirectX got off the ground. 2D graphics by itself was reasonably straightforward in DOS once SVGA and VBE were a thing, but all you got out of it was a raw pixel buffer that you could poke into.

Thanks for that, it's very interesting. I had no idea the virtual machine system was so advanced. Device drivers and such were all still real mode, but yes, I can see how this would make DOS a component of Windows rather than the other way round. All for nothing if the apps aren't bought in though!


Much later, the HX DOS Extender (https://www.japheth.de/HX.html) had something vaguely similar called Win32 emulation mode. Meaning that you could load a Win32 PE image, and it could call quite a few Win32 APIs, all while running under plain DOS (in 32-bit flat mode), with HX DOS providing the implementation.

It had just enough parts of the API implemented to be able to run Quake 2 in DOS.


Thanks

”Win32s lacked a number of Windows NT functions, including multi-threading, asynchronous I/O, newer serial port functions and many GDI extensions. This generally limited it to "Win32s applications" which were specifically designed for the Win32s platform,[4] although some standard Win32 programs would work correctly”


It was a strange time back then for anyone who wanted to get online. Win3.1 had no TCP/IP stack so many folks used a third party download called Trumpet Winsock. IIRC you might have needed win32s in order to use it.

Looking back, Microsoft were clearly in an incredibly complicated transitioning phase, with very little margin for error (no patching over the Internet!)


Trumpet Winsock works on a 286, but apparently NCSA Mosaic version 2.0 needed Win32s.

So I guess there would have been a time in 1994 where many people were forced to retire their 286es. Though Mosaic was quickly replaced by Netscape Navigator in late 1994 which worked on Win16.

And then Windows 95 came along, and it really needed a 486 with 4MB of ram, ideally 8MB.


1994 wasn't a time where everyone was using a web browser.

I think it’d be fair to call it more than a shell. It was also a set of libraries that implemented the common user interface elements of Windows apps, similar to the Macintosh Toolbox but not in ROM.

Not sufficient on its own to qualify it as an OS though. The VM subsystem described in sibling comment makes a big difference though!

The line is blurry though.

More akin to Window Manager to me. But yes more than a mere “shell” and through my error I have learned oh so much.

I was only 5 or 6 maybe when I used Windows 3.1 so I may be misremembering, but didn’t it have an X on the desktop to close the GUI and return to the DOS prompt?

Windows 3.1 didn't have any "X" buttons. It had the system menu (the one shaped like a spacebar, since the hotkey was Alt-Space). If you quit Program Manager, it would end the Windows session (since Program Manager was your shell). If you had a replacement shell (as some did back then, Norton Desktop etc), then quitting that would exit Windows and return to a pure DOS prompt.

There was a way to “drop to DOS” alright, which is what you would have had to do for games and the like. Can’t remember the exact mechanism but it could have been the x on the “program manager” window.

To start Windows you'd type "win", and if you wanted to "boot to Windows" you would call "win" from your autoexec.bat.
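
A typical AUTOEXEC.BAT tail from that era (contents varied per machine, this is just the common pattern):

    @ECHO OFF
    PATH C:\DOS;C:\WINDOWS
    WIN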


That sounds about right. My dad had commands written on sticky notes on the monitor for me.

As I recall, I had to in order to play Commander Keen.


My memory is that closing Program Manager exited Windows.

Another competitor shell at the time was "WordPerfect Office for DOS". Which I witnessed some people launch from Windows 3.11. I believe it had WordPerfect and what preceded GroupWise for email. https://mendelson.org/wpdos/shell.html

The "mini" Windows 3.1 it came with was pretty much fully functional though, you could literally boot directly into PROGMAN.EXE as the main Windows 95 shell.

Program Manager is a shell and was actually included in Windows all the way through XP SP2 when it was phased out. You can probably run it in Windows Vista through 10 if you copy the .exe over, too.

Unfortunately not, I just tried running the copy included with Windows XP SP1 on Windows 11—it relies on SheRemoveQuotesW which was removed from Shell32 in Windows Vista. (It doesn't seem to be able to use a copy of a Windows-XP-sourced shell32.dll from the working directory, either).

That's because Windows 11 only comes in 64-bit flavors and Program Manager as bundled with XP would be 32-bit. WoW64[1] can't bridge 64- and 32-bit binaries in the context of system components, such as the shell.

Try running that under 32-bit Windows 10, I never tried it myself but I have a feeling it should work.

[1]: https://en.wikipedia.org/wiki/WoW64


It can technically be patched with ReactOS's implementation of the function.

Maybe! Is the XP version 32-bit? 64-bit Windows never supported 16-bit programs.

Reading these comments is like seeing revisionist history. Windows 95 was the biggest piece of crap. The thing crashed constantly. Windows 11 today looks better and runs better. There is simply no comparison. Anyone who says different has no idea what they are talking about.

I was little, but I remember using Windows 95 on a machine that had it pre-installed, and it ran stably from what I recall. They're good childhood memories, not revisionism on my part.

That Windows Vista and newer look better, I don't think anyone argues, even if their design languages are mixed (old and new configuration panels coexisting, depending on which ones they could be bothered to recreate, for example) and more complex. I've never heard of anyone using the simpler classic theme for reasons other than nostalgia or performance.


Ah, another user who missed out on Windows ME.

I still have my Windows 95 installation floppies. It was a phenomenal operating system at the time, but I don't miss it one bit.

I remember the talk back then was: "The better way is to install Windows 95 on top of DOS."

If they were going to install a bare Windows 3, they could at least make an effort to ship a bare 95.

Why would they do that? The only thing this could change is a slightly prettier UI, they still needed a 16-bit Windows 3.x-compatible install program. Not worth the extra effort and extra floppies.

If they made a bare 95, the 16-bit installer could just reboot into it.


