Personally... I absolutely understand exactly how they ended up in this situation. There are actually several good reasons to embed the browser into the OS, the top two easily understood ones are...
---
1. They've put a lot of work into the browser, and they've shared many components of it with other tooling. EX: Internet EXPLORER and EXPLORER the file system browser don't share their name by mistake. They're so similar I can load the same COM add-in in both.
2. From a pure usability perspective, it makes a boatload of sense to prevent a user from accidentally uninstalling the last browser on the machine. When dealing with edge cases and possible support calls, making sure that the user has at least one application that can download files from the web is pretty damn reasonable. Safari is also not removable... for similar reasons.
---
Further - I think it actually reflects well on MSFT that they're bothering with shims at all, and are maintaining a good chunk of compatibility for applications that were written literally decades ago.
Both Google's and Apple's approach to this would literally be: "We're so sorry, that's no longer supported and you're f*#$ed. Go upgrade, get your vendors to upgrade, or eat shit."
MSFT isn't writing shims because they still need them... they're writing shims because they have enterprise customers who want them, and they actually give a fuck about that.
Microsoft only cares about legacy support when it covers the vast majority of customers. I'll give you an example: go to a top-tier biology research institution and sort your cells with the fluorescence-activated cell sorter (FACS). You might now have to burn that data onto a CD-ROM, because the software running the six-figure instrument was written in 2002 for Windows XP, and for reasons known only to Microsoft, it no longer runs on modern Windows. So now IT has to air-gap the desktop sitting by the machine, and the only way policy lets you get data off is via CD-ROM, not USB, because USB would breach the gap. The vendor for the instrument has no incentive to update that software either; they'd rather you dish out another half million dollars for a new sorter.
There are hundreds of instruments like this in any given research institute, all because Microsoft decided unilaterally that software that ran fine 20 years ago is clearly useless and doesn't need to run anymore on modern hardware that receives security updates.
Would you expect a 20-year-old MacOS to run on a modern MacBook? Why not?
Would you expect a 20-year-old Linux to install just fine on a modern PC? I remind you that this was the era of RHEL 3! I had trouble getting RHEL 6 running in a virtual machine just recently.
Meanwhile, XP will run on most modern PCs; it's just not supported, which means no security patches.
The mistake here is not Microsoft’s.
The mistake is your organisation’s for buying a half-million-dollar device with no support plan from the vendor for its control OS.
I would expect a bash script I wrote 20 years ago to run on a new Linux machine I buy today, yes. I'm not asking for Windows XP to be installed; I'm asking for these modern machines to let me run the code that the machine I was sold 15 years ago could run. This whole march to reduce compatibility is kind of stupid imo. Let me buy a Mac that can still run 32-bit software!
Windows still runs most stuff you throw at it, unless the software made some boneheaded assumptions that violated basic API rules.
Software using a lib I wrote in 1998 doesn't work anymore, but that's because I took a bad shortcut that made DOS-era assumptions. Fixing and recompiling it would be the right course of action, but I haven't had the time (and I'm not sure where all the sources are).
On the other hand, my first USB Wacom pads STILL work after 20 years. It's not just the software, but a vendor that cared about its software long enough to make 64-bit drivers that still work.
As you said, the vendor isn't interested in upgrading the software, and that is where the blame lies. MS tries to keep backwards compatibility, but some things have had to go over the years for performance and/or security reasons, and those usually broke the rules (or were tied to hardware; Linux has dropped the 386 and 486, after all).
Respectfully, you have no idea what you're talking about. These half-million-dollar devices (often more expensive than that) were the ONLY option at the time for performing certain kinds of experiments. Not exactly a choice.
Respectfully, you have no idea how complex an OS is, and how completely nonsensical it is to expect 100% backwards compatibility two decades later. There is a reason software gets sunsetted: most of it is irrelevant 20 years later, and holding on to it makes everything much more complex than it already is. Especially for a consumer OS like Windows, where you can expect the vast majority of people to upgrade.
If this is some multi-million-dollar device, then maybe keeping it up to date with Windows should have been part of the deal...
I'm not sure what part of my comment you're responding to, because I wasn't referring to anything related to backwards compatibility. I was responding to the other commenter asserting that biotech companies have somehow made an incorrect judgement call by purchasing certain types of instruments. In reality these vendors assume you're going to keep the system air-gapped forever; some don't even (officially) support joining their control PCs to a domain. Anecdotally, I've found that unless you're dealing with strange custom drivers, the vendor-provided PCs can usually be replaced with new computers running the same control software (sometimes in compatibility mode if needed) and still operate the instrument.
So the vendor refuses to sell a support plan and requires you to purchase a brand new machine? That does sound monopolistic and abusive but I don’t see how Microsoft is to blame.
As a device builder creating those machines (well, industrial CNCs, but similar idea), I don't have many options when trying to build something with a useful life of 20+ years. I just received an order for another machine in a lineup that started with some using Heidenhain controls from the early 90s. We ripped out those controllers and replaced them with models running a custom black-box RTOS based on Motorola DSP 56002 chips on ISA cards, sitting in Windows 98 and Windows XP motherboards; then that company got acqui-hired and stopped making those cards, there was a brief foray into PowerPC, and now we're installing Arm processors running surprisingly modern Linux 4.9 with preempt_rt. We're talking to these motion controllers using VB6 running on a bunch of Windows 7 PCs with a few compatibility hacks enabled, and I'm installing new fanless PCs with Windows 10 LTSC 21H2, which claims support until 2032-01-13. If it's not running those relatively public, well-documented systems, it's likely a PLC or robot containing a proprietary ASIC running some WindRiver RTOS, and completely subject to the whims and survivability of that product line.
I love my customer's maintenance department, they're unusually thorough and attentive to preventative maintenance, greasing and oiling and replacing wear items, root-causing failures and keeping the machines in good working order for decades. But when you keep fixing each individual small part that breaks, eventually the market moves on and you have to throw all the replaceable parts away and start over. The process requirements haven't changed, the weldments and castings aren't broken, it's just that you can't replace a failed servo drive or Core 2 motherboard or ISA card because the supply chain gave up on them 10 years ago. There's no great advantage of Windows 10 LTSC and a 64-bit Arm controller over Windows 98 with 56002 ISA cards except that I can buy the former today and I can't buy the latter because no one makes them anymore.
There is no control OS or hardware supply chain that lasts as long as military, industrial, or scientific equipment can be made to last. Fanuc and others try with massive vertical integration and a "never deprecate anything" mantra, but they don't run their own semiconductor fabs to guarantee production of chips for stuff that doesn't keep up with the rat race.
Yes, I'll keep supporting my company's gear for as long as possible, but sometimes the answer is that I need to throw away 4 working servomotors, drives, a controller, and a PC, and invest 600 engineering hours into making this machine do the exact same thing it did last week, because that 5th drive blew the IGBT and took out the controller and no one can get a replacement. Customers don't like to issue $100k invoices when a $1k subcomponent dies...because that $1k quote was from 2008, and it's now literally priceless.
The trick is to make the controller board swappable and an independent set of products.
E.g.: If you make 30 different kinds of industrial machines, have one controller board as an additional product for a total of 31 SKUs.
When the controller board is too old and no longer supported, make a new SKU that is backwards compatible with the old machines.
E.g.: Some televisions allowed the entire controller board to be swapped out, upgrading the TV's OS and HDMI ports with new capabilities. The panel and the case stayed the same.
Similarly, a CNC router or whatever is "just" some analogue wires that go into a digital board. Make the digital board swappable.
A similar mistake that I see is customers who are used to "archive grade paper" asking me how long digital backup tapes last. That's the wrong question! The important thing is to be able to copy from old tapes to new tapes with full fidelity, which digital technology enables but analogue technology does not. You can retain data indefinitely with digital media that lasts just a few years. Instead of trying to scrounge up some old floppy or CD readers, just copy the data to Blu-ray, then cloud storage, and then... whatever.
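To make the "copy with full fidelity" step concrete, here's a minimal sketch in Python (the mount points are hypothetical placeholders): copy everything off the old media, then verify each file against its SHA-256 checksum before trusting the new copy.

```python
# Sketch: migrate files from old media to new storage and verify the copy
# bit-for-bit with SHA-256 checksums. Paths below are placeholders.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate(src_root: Path, dst_root: Path) -> None:
    """Copy every file under src_root to dst_root, then verify digests match."""
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if sha256_of(src) != sha256_of(dst):
            raise RuntimeError(f"Checksum mismatch after copy: {src}")

if __name__ == "__main__":
    # Hypothetical mount points for the old tape/optical image and new storage.
    migrate(Path("/mnt/old_media"), Path("/mnt/new_storage"))
```

Repeat that each time the media generation changes, and the data outlives any single medium.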
It is, we're now installing CiA 402 or "CAN in Automation CANopen Drives and Motion" servos (which didn't exist at the time the first ones were built) running on an EtherCAT real-time protocol stack. That will hopefully make replacing servos in the future relatively seamless.
Unfortunately, though, just using a CAN or Ethernet physical layer doesn't solve the https://xkcd.com/927/ problem of industrial protocol proliferation: some customers require CIP Motion over EtherNet/IP (a dog-slow Rockwell proprietary protocol), there are old MACRO fiber-optic rings, Sercos III (and prior serial protocols), a dozen high-speed serial standards for communicating with absolute encoders, and, of course, good old-fashioned hardwired digital I/O - which might be either analog velocity/torque or step and direction, RS-422 differential or single-ended, etc. etc. etc.
You'd hope that you could just plug something that spoke CAN or Ethernet into something else that has the same physical interface, and the two would work together. Sometimes, you get lucky and that works out. But the ever-forward march of industry leaves behind a graveyard of yesteryear's abandoned products and protocols.
> and for reasons only known to microsoft, this software is no longer able to run on modern windows.
The reason is most likely... because the driver relies on APIs that were insecure and removed in the XP-Vista transition!
> The vendor for the instrument has no incentive to update that software either; they'd rather you dish out another half million dollars for a new sorter.
> There are hundreds of instruments like this in any given research institute, all because Microsoft decided unilaterally that software that ran fine 20 years ago is clearly useless and doesn't need to run anymore on modern hardware that receives security updates.
So it's Microsoft's fault that the vendor won't update its software?
Didn't this "elite institution" get a support contract for the machine?
Would they expect the driver and software to still work if it had been written for a PowerMac G5, running Mac OS X Panther?
Would a binary compiled 20 years ago even have a remote chance of running on Linux?
It depends on which 20 years you're talking about. Some of us here have been using Linux long enough to remember the change in the late 90s to the ELF binary format. I don't think a Linux from after the change can run anything from before.
We're not talking about drivers, we're talking about applications. You can take an application compiled back in 2005 and run it on a modern Linux machine, as long as the machine architecture is the same.
I assumed the machine would require a custom driver. Otherwise, it's extremely unlikely the app was broken as early as 2006 (i.e., on Windows Vista) without their research group having some sort of support contract. The software would have been barely four years old at that point.
Your vendor didn't provide you with free updates to match changes in the larger OS ecosystem. Then you get mad at the larger OS for not supporting the niche software of your vendor.
BTW neither the vendor nor the OS maker is the problem. You are. Your entitlement that the world should grind to a halt to wait for you is.
Sounds like the writer of the software is equally at fault, and the sysadmin could have been more creative.
I see no reason why the data collection can't be managed by a highly portable console app running inside a VM or DOS emulator, or why the legacy PC can't be kept behind a firewall that only lets data out.
> I see no reason why the data collection can't be managed by a highly portable console app running inside a VM or DOS emulator, or why the legacy PC can't be kept behind a firewall that only lets data out.
Having been in this exact situation at a research institution with expensive equipment and Windows XP-only software before - support.
Helpdesk and lower-level back-end roles only pay a certain amount, so they turn over constantly, and "those PCs have had hot glue put in their Ethernet jacks and USB ports, and burning CDs is the only way to get data out" is a hell of a lot easier to reliably keep going than some esoteric system of VMs and firewall rules.
The people in the lab have way more tolerance for having to burn CDs than they do for firewall changes stuffing up their process and taking a while to figure out and resolve!
The only way to continue supporting all those things is to never change anything. Even if all the APIs are still around, are they going to be bug-compatible?
My friend was working at Microsoft on JavaScript engine bugs for Internet Explorer last year. He was telling me they had a contract with a large insurance company to support IE, and that once he found out about the contract his job felt a little less useless.
So it seems to me they do care about legacy support.
... and Linux (well, Ubuntu specifically). I'm staring down the need to upgrade from Ubuntu LTS 20.04 to Ubuntu LTS 22.04 because some software I maintain now relies on GLIBC v2.32, which is an absolute bear to back-port. The party line is "So just upgrade," but there's no guarantee my hardware is compatible with Ubuntu LTS 22.04 because the vendor doesn't support that config and I'm not really interested in dealing with all the joys of figuring out every fiddly-widget little config on a laptop to make it work myself.
Microsoft is nearly unique in caring that much about backwards compatibility. It's the value-add they bring to the market these days relative to the competition.
> ...but there's no guarantee my hardware is compatible with Ubuntu LTS 22.04 because the vendor doesn't support that config...
So it's your vendor who doesn't care about backwards compatibility - Linux does. Whatever hardware you have, once it was supported it will stay supported until it is really ancient, and even then there will be special builds that support it. That's the beauty of open source (or even source-available) licenses. No corporate interests that would render your solution obsolete. [0]
[0] assuming your hardware doesn't need some binary blobs (khm nvidia khm) to work
(a) Of course it needs binary blobs. This is the real world where medium- to high-performance machines need binary blobs. The GNU dream never materialized.
(b) If Ubuntu (and I'm going to say "the Linux ecosystem in general") cared about backwards compatibility in the same sense Windows does, a minor-version bump to glibc wouldn't introduce API breakages that mean I can't build a codebase that's doing nothing new and special on my machine right now, purely because glibc 2.31 bumped to 2.32. They aren't doing anything new in the code; their JNI dependency just bumped up, and so they bumped up the whole codebase's requirements.
That's fine, but it's not The Windows Way. The Linux Way doesn't think about compatibility issues like that in anything like the same way. It's a source-code-and-patch-it-yourself world. The approaches are completely alien to each other.
glibc goes to great lengths to ensure binary backwards compatibility. If a binary interface has to change, it uses symbol versioning to keep the old interface around - for example, if you have a program compiled against glibc 2.2 on x86-64 that calls timer_create(), it will call through the function __timer_create_old(), which provides the old interface, when run on a system with a newer libc.
The reverse scenario, which seems to be what you have - where you compile against a newer library version and then run it on an older one - is forwards compatibility, which is a different kettle of fish. Even on Windows, it's not like you can compile against the latest DirectX and run it on an older Windows.
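As an aside, if you want to see which glibc symbol versions a particular binary actually requires (and therefore the oldest glibc it can load against), a rough sketch like this works; it just scrapes the dynamic symbol table from binutils' objdump (assumed to be on your PATH), so treat it as an illustration rather than an authoritative compatibility checker.

```python
# Sketch: list the GLIBC_* symbol versions a dynamically linked binary needs.
# A binary that requires GLIBC_2.32 won't load against an older glibc; the
# reverse direction (old binary, newer glibc) is what symbol versioning keeps
# working.
import re
import subprocess
import sys

def required_glibc_versions(binary):
    """Return the GLIBC versions referenced by the binary, oldest first."""
    out = subprocess.run(
        ["objdump", "-T", binary], capture_output=True, text=True, check=True
    ).stdout
    versions = set(re.findall(r"GLIBC_(\d+(?:\.\d+)+)", out))
    return sorted(versions, key=lambda v: tuple(int(x) for x in v.split(".")))

if __name__ == "__main__":
    for v in required_glibc_versions(sys.argv[1]):
        print(f"GLIBC_{v}")
```

The highest version printed is effectively the minimum glibc the binary will run on.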
> several good reasons to embed the browser into the OS
It was actually a great feature (memory may be failing at this point), and it seemed like the filesystem viewer was IE presenting a filesystem view. If you typed an http or https URL, the window would turn into an HTML renderer window. This made it possible to build their mail and newsgroup reader apps as filesystem views. Mail was nothing but a folder full of email message files. News was, surprise, a folder full of newsgroups. Those viewers were responsible for synchronizing the folder with remote services.
I would love if Gnome would allow pluggable views in Files: a mail reader for maildir folders, a music player for folders full of audio files, and so on.
> I would love if Gnome would allow pluggable views in Files
This is essentially what Gvfs is for except basically nobody uses it, at least in the ways you describe, because forcing non-FS things into the shape of a FS usually doesn't work very well. So instead Gvfs gets used for relatively simple things that naturally map onto the FS concept well, like mounting archive files, browsing FTP, etc.
Gvfs would allow the contents of, say, an IMAP server to show up as a set of files, but what I would like is the reverse - to have a different presentation for a folder full of emails.
Yes sounds right - I remember in the deep dark past somehow getting Steam to show the contents of C:\Program Files\ in the Store window. This was obviously back when Steam was Windows only and before it used embedded Chrome.
Christ, can you imagine the raw hubris in 1995 to roll this out and inexplicably make it not only impossible for the user to uninstall, but a core and critical piece of your entire OS, such that any attempt to sidestep or evade it would be met with ruination?
> This is a very strange comment.
Around 2001, when MS was being tried for antitrust, they claimed that IE couldn't be decoupled from the OS because removing it would break the OS. I believe that is what the OP is alluding to. An unaffiliated professor managed to decouple it later, I believe.
The era in which Microsoft forced IE into the OS was purely about anti-competitiveness and their own fear of being caught off guard on this whole internet fad.
The only reason we need IE shims at all is because MS botched their overall internet strategy. Had they made a truly competitive browser and supported common standards, and the user's ability to choose their browser, none of this would be needed.
Notepad.exe is forced into the OS. Is that anti-competitive too? If it's okay to have an app be a frontend for an OS-provided text edit control, why not the same for a webview control?
> making sure that the user has at least one application that can download files from the web is pretty damn reasonable.
And, indeed, you can't remove Windows Update. Nor (in more recent versions) the Add or Remove Features component in Control Panel— er, sorry, the Settings app. So long as you can still re-install Internet Explorer without a web browser (which I think you have always been able to, ever since the days IE came in a box), there's no reason to stop you removing it.
Microsoft did have legitimate reasons to integrate the browser engine in Windows 95, what with the whole Active Desktop, HTML Application, the-web-is-the-future schtick they were going for. But that isn't one of them.
It's not Microsoft as a whole, though: the OS team goes above and beyond in maintenance, the Office team has done nothing of the sort, and the Azure team doesn't seem keen on it either.
I somewhat agree, but even that is only for capitalistic reasons.
If they didn't have the massive backwards compatibility, corporate users would essentially be free to choose a new OS platform every few years. By maintaining deep legacy software support, they make it easy for their customers to stay addicted to Windows.
MS is not doing this for your benefit, or out of some kind of dedication, it's pure capitalism.
Sure, but I think this is an example of markets working well. Customers don’t want to have to adapt to a new platform every year and are willing to pay for that service. Microsoft recognizes that desire and provides backward compatibility. Microsoft charges a price above what it costs to provide the service but below what their customers are willing to pay. Both parties win. Seems like a good thing.
Rarely have I seen "both parties win" used in describing dealing with Microsoft.
There is tons of documented history of their anti-competitive behavior and deceptive practices. It is not very often that a CEO of a company in a market that is "working well" is called to testify in front of Congress.
You seem to have changed the topic. As the previous poster was saying, them doing this particular thing for money is not bad. It's good. The fact that you're for some reason pulling in unrelated things that Microsoft has done presumably means you agree, but it's a mystery as to why you don't want to say so.