
yes it can run things that don't use much of the API surface (just using libc? probably fine)

however, try running a game from the Windows 95/98 days and you've got maybe a 50/50 chance of it working

e.g. they changed the return value of BitBlt between 95/98 -> XP: it used to return the number of scanlines but was switched to a boolean (sketch below)

same with the heap management functions, directory traversal functions, etc
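
A minimal sketch of that BitBlt change, assuming the 9x-era scanline-count behaviour is as described above; on XP and later the documented return type is a BOOL (link against gdi32):

    #include <windows.h>
    #include <stdio.h>

    /* Modern Win32: BitBlt returns a BOOL (nonzero on success, zero on failure). */
    int copy_block(HDC dst, HDC src, int w, int h)
    {
        BOOL ok = BitBlt(dst, 0, 0, w, h, src, 0, 0, SRCCOPY);

        /* Code written against the older behaviour described above might have
           compared the return value against the copied height instead:
               if (BitBlt(...) == h) { ... }
           On XP and later that comparison only "succeeds" when h happens to be 1. */
        if (!ok) {
            fprintf(stderr, "BitBlt failed: %lu\n", (unsigned long)GetLastError());
            return -1;
        }
        return 0;
    }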




Try running something from only ONE year ago on Linux and it very often won't work, unless it's an AppImage or Flatpak, or you're on Nix.

The kernel might not break userland, but Qt and GTK do the breaking for you. Python joined the party recently. Random DBus daemons might be missing, etc.


You can run binaries from decades ago on Linux too. This thread is about a DOS binary, but a command-line program that just uses system calls is fine too.


Linux is absolutely terrible at this: in terms of backwards compatibility it sits at the "extremely unlikely" end whereas Windows sits at the other. It's not surprising either, because the entire ecosystem runs on the myth that shared libraries are desirable, when any serious look into them shows that almost no space is saved, almost none of them are actually shared by a meaningful number of apps, and almost no actual security issues are avoided by using them.

Anyone who has ever been interested in having someone else be able to run their program figured out a long time ago that you have to ship your dependencies for them to be able to do so.


I have never had any issue running old programs.

Shared libraries have nothing to do with Linux. They are an entirely userspace concept.


That would be a great point were it not for the fact that virtually all Linux distros run everything on shared libraries. I like Linux but this is one aspect of it that has never done it any favors. It was probably a decent choice at one point but it has ceased being one. I reckon it has held back the Linux application ecosystem for over a decade for no reason at this point.


There is nothing wrong with shared libraries. If you want to run an old program, you need to also run old libraries, and old network services it communicates with, and maybe old hardware too. Libraries are no different.

In practice it just isn't an issue, because competent application developers don't tie themselves to particular versions of libraries, and competent library developers don't make gratuitous backwards-incompatible changes.

Nobody is forcing you to depend on incompetently-written malware like GTK+.


Only if it is statically linked with everything it needs. Otherwise good luck resolving dependencies.
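
To illustrate the dependency-resolution problem: a dynamically linked binary records the sonames it was built against, and if the distro has since moved on, the loader simply refuses to start it. A minimal sketch of the same lookup at run time (libpng12.so.0 is just a hypothetical old soname; link with -ldl on older glibc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* An old binary might have been linked against libpng12.so.0.
           If only libpng16.so.16 is installed today, the dynamic loader
           cannot resolve it and the program never gets to main(). */
        void *handle = dlopen("libpng12.so.0", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "cannot resolve old soname: %s\n", dlerror());
            return 1;
        }
        dlclose(handle);
        return 0;
    }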


Even then it might not work because it could rely on a DBus daemon being there (see the sketch below). Even a brand-new binary might fail because it needs some external program they forgot to add to the dependencies list in the package, so you have to sift through to find the not-at-all-obvious package that provides it.

Or something was compiled without some option for unknown reasons so you're just SOL unless you want to compile stuff yourself.
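
As one hypothetical illustration of the DBus point above, using libdbus directly (build with the flags from pkg-config --cflags --libs dbus-1): a program that assumes a session bus exists fails as soon as the daemon isn't running or the bus address isn't set.

    #include <dbus/dbus.h>
    #include <stdio.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        /* If no session bus daemon is running (or DBUS_SESSION_BUS_ADDRESS
           is unset), this connection attempt fails and the program has to
           cope -- or, as often happens, it just gives up. */
        DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
        if (dbus_error_is_set(&err)) {
            fprintf(stderr, "no session bus: %s\n", err.message);
            dbus_error_free(&err);
            return 1;
        }

        /* ... talk to whatever service the program depends on ... */
        dbus_connection_unref(conn);
        return 0;
    }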


How is that any different from any other program on any system failing because of a daemon not running? Inter-process communication exists on every operating system. IPC means your program's behaviour can be different depending on what other processes are running. This is not specific to Linux.

Your other point is equally inane. Anything might fail because it requires some external program, or external data files, or any other external resource. A program might fail because it requires a particular hardware device to be plugged into your computer. None of that has anything to do with the operating system.

>Or something was compiled without some option for unknown reasons so you're just SOL unless you want to compile stuff yourself.

Oh no! The terribly, impossibly difficult task of running a program! How could you ever subject me to such a fate as having to compile stuff myself. You cruel beast!


Compiling stuff yourself is generally much harder than one might want it to be. The instructions might or might not actually work unless you're on a source-based distro. It also might take hours. Or days, but the stuff that's big enough to take days generally works out of the box.

Windows has IPC, but doesn't have as heavy an influence from the Unix philosophy, and Android seems to have even less. More stuff is just built into the OS; it's always going to be there, probably for 20 years.


Any software that has build instructions that don't work is probably written by morons. I wouldn't want to run it. I certainly have no interest in running software that takes days to compile. Overengineered crap.



just don't try anything that used SafeDisc


Those issues only apply to the Windows 9x lineage.

Current Windows versions trace back to the Windows NT 3.51/2000 lineage.

Naturally 9x => XP doesn't work flawlessly; they are two different OS stacks.


well yes

however it sort of undermines the "insane compatibility" / "stable API" point if mass-market Windows software produced before the 2001 release of XP mostly doesn't work on modern Windows

NT effectively forked Win32 (introduced with Windows 3.1) into something incompatible

(meanwhile it all runs on Wine perfectly fine)


What? Windows NT never forked anything.

Windows 3.1 introduced Win16 protected mode with segmented memory.

Win32s was a backport of a Win32 subset from Windows NT 4.0.

The Windows NT lineage has existed since 1993.


I remember Win32s; I think Netscape Navigator needed it. I was thinking of Lode Runner, but that was WinG.

I always thought it was backported from Windows 95; thanks for the info.


IIRC Win32s existed before Win95.


Windows 95 - August 24th, 1995

Win32s - October 1992.

And I was off by one Windows NT version; it was already based on 3.51, not 4.0.



