ReactOS 0.4.14 (reactos.org)
198 points by jeditobe on Dec 20, 2021 | hide | past | favorite | 84 comments



I wonder if this is anybody's ace in the hole yet? I've done enough legacy systems support to know there's value in being able to say "this is a currently supported and active project that supports some tech otherwise unavailable on a commercially supported OS" - i.e., the people still supporting Windows XP for things.


Is there any known/public system that uses ReactOS? Is there a goal for this OS? A use case? Any dev story I can read?

I'm not criticizing their work or passion; au contraire, it's a spectacular effort the developers are making, developing an OS with the same API as Windows 2000 from scratch. To me they are giants.



What an incredibly strange Google-translated read in the comments this was:

https://habr.com/ru/post/208614/comments/#comment_7183314

Where did you find/remember this from?


>Where did you find/remember this from?

ReactOS writeups on Habr are always an entertaining read, similar in nature to console emulator changelogs. Over the years, you just remember the often-mentioned cash register anecdote.


Not a SWE, but I imagine ReactOS constitutes executable (and therefore rigorous) documentation of how Windows and its API work. This could be helpful if you are programming directly against the Windows API and the official docs are incomplete.

A long time ago, the ReactOS devs gave an example of someone consulting the ReactOS code to understand how some component of the Windows API worked.


It's pretty interesting how it does this too.

Legally they can't copy source code from Windows, so they have to do something called black-box reverse engineering.

That just means they can't look at the code, but they can try to replicate exactly how it behaves. This is for legal/copyright reasons.
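
A minimal sketch of what black-box behavior matching looks like in practice (all function names here are hypothetical stand-ins, not real Windows or ReactOS code): the original is only ever observed through its inputs and outputs, and the clean-room rewrite is judged purely by whether those observations match.

```python
# Black-box behavior matching: the reimplementation is validated purely by
# comparing observable outputs against the original, never by reading its code.
# Both functions below are illustrative stand-ins, not real Windows APIs.

def original_pathjoin(a, b):
    # Stand-in for the proprietary implementation (observed as a black box).
    return a.rstrip("\\") + "\\" + b.lstrip("\\")

def reimplemented_pathjoin(a, b):
    # Clean-room rewrite, produced from a behavioral spec only.
    sep = "\\"
    return a.rstrip(sep) + sep + b.lstrip(sep)

def behaviors_match(f, g, cases):
    """Return True if f and g agree on every test input."""
    return all(f(*c) == g(*c) for c in cases)

cases = [("C:\\Windows", "System32"), ("C:\\", "\\Temp"), ("D:", "data")]
assert behaviors_match(original_pathjoin, reimplemented_pathjoin, cases)
```

In a real clean-room process this comparison is done by separate teams, so the implementer never sees the original's internals, only the behavioral spec.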

It's really impressive so far. Windows is a complicated beast that has been built up since the early '90s without much old functionality being thrown out.


I know some communities use the kernel code as a reference a lot.


In a former job, I was a sysadmin at a company that does process engineering for industrial plants, and sometimes the automation/SCADA people would turn to me for help. It was very exciting to me, because a) I got to leave the back office for a short while and actually engage with a customer and b) I got to see firsthand and learn (a little, at least) about industrial plants, which was fascinating.

I find history fascinating, and I have a soft spot for retro/vintage technology, so I got to see and touch some interesting stuff. One customer was doing an upgrade in 2015(!!!) from Windows NT 4.0 to Windows XP (!!!). They could not go further, because they were dependent on some piece of hardware whose vendor had gone out of business, leaving behind device drivers that would not work on any Windows after XP.

For this kind of customer, I think, the prospect of an operating system where those device drivers would continue to work while also supporting more recent hardware would be very attractive.

Industrial installations apparently have lifetimes far longer than what even "enterprise" operating systems offer, and replacing that Windows NT 4.0 box with a Windows 7 box (let's not even think of anything more recent) is a huge challenge just from the technical perspective. But it gets a lot hairier when things like certification and compliance with legal requirements come into play, where you cannot just upgrade a box from Windows XP to Windows 10, because then you lose your certification, and re-certifying the installation is a costly and (I assume) tedious procedure.

TL;DR: I believe there could be a pretty lucrative market for ReactOS in industrial applications. It would require a fair amount of up-front capital to get going, but if you can provide support for large-scale industrial customers to keep their systems running for twenty or forty years, there has got to be a lot of money in it.


At some point I'd imagine that requiring the source code for such critical components for a long-lived machine is going to be the only sustainable way forward. At this point you'd end up reverse engineering device drivers or using 'unsupported' configurations to keep going.

Of course the fact that a company that no longer exists can't support such a configuration doesn't matter because they aren't there anymore to care about any of it.


At work we have a glovebox running DOS. The software is written in MS/GW/Q BASIC and only works with a specific set of ISA I/O cards. The manufacturer is long gone, but one of their techs bought out the remainder and now gatekeeps the software. He has the source but refuses to update, sell, license, or release it. Last we spoke to him, he wanted $20,000 to write a new control program in Visual Basic using a PCI card and refused to release the source code. We flat out said no. The plan is, once there are no more spare parts on the shelf, the whole thing is getting gutted and a modern PLC system installed in-house.


Such stories are exactly the reason why Stallman created FSF.


I agree. I thought Stallman was a nut (and in most respects I still definitely do) but since working with proprietary systems of the 80s and 90s I've realized his hardline free software stance is a direct response to companies like SCO.

I got into free software relatively early in my life and it was like watching a parody film before the original.


I really don't think $20k is unreasonable to rewrite something for a manufacturer. Chances are they'd change the scope and make requests, he'd need to modernize it, and I can easily see something like that taking 1-2 months, at which point $20k could be very reasonable vs. the time and effort it'd take to find someone completely new to your system.

If I were the company, though, I would make sure that providing the new source is part of that $20k, or, failing upfront negotiation, an agreement that when he no longer offers his services personally, regardless of whether he sells the company, he will release the source code to them.

I think that would be fair and would address both parties' concerns.

I personally prefer open-source solutions as well, but in business that just isn't always practical.


So he should work for free so that some company can make money? If that is what the FSF is for, then it is not very useful for devs. He should provide the source code via third-party escrow, but considering the company doesn't want to negotiate at all, he should not just give them the source for free.


Free software is not about price, it's about freedom. It can and should be paid for.


I have a lot of experience in the industrial SCADA domain. I largely agree with you, but I will say that historically most installations have had all the relevant source code available to them, or owned by them.

In most cases, the limiting factors for lifespan were: 1) Inability to get replacement hardware 2) Inability to find anyone who can understand the source code.

There's really no way around issue #1. Having the software source doesn't help that much, because most of the time they'll use "migrations" every 10-15 years to rewrite the code using an updated understanding of how they want the plant to work. Kicking off a SCADA upgrade is a wonderfully convenient excuse to drive a lot of meetings/paperwork processes to define "How can we improve safety, improve reliability, make life easier for the human operators, etc.?"

Nowadays, what limits the lifetime of many SCADA installations is licensing for Windows LTSB and PLC/DCS vendor software. Oftentimes newer versions will require new Dell/HPE servers for compatibility. It's expensive, but also not expensive enough to focus on changing.

The main point is that while licensing artificially limits "longevity" of a machine, closed-source does not. Instead, unavailable replacement hardware limits "longevity" more than "closed source" does.


> Instead, unavailable replacement hardware limits "longevity" more than "closed source" does.

Just look at the price of used serial consoles with Sixel support.

Some hardware can be replaced by software... but not all of it.


My kingdom for a windows terminal emulator that supports both [xyz]modem and sixel and can connect to a physical serial port.


If/when Windows Terminal integrates sixel support, any cheap laptop should be able to do that OOB (some of them with a USB to serial adapter of course)


If TeraTerm would add sixel I'd be really happy since it does almost everything else I need perfectly.


From what little insight I could gain, I agree about the hardware problem. The customer I was talking about was nervously trying to find a reliable supplier of motherboards with ISA slots and parallel ports.

I really love the nostalgia this kind of hardware stirs in me, but I am glad I do not have to deal with that kind of trouble. (A few years ago, I read on another forum about an IT guy getting a call on the weekend from a desperate customer looking for an HDD using some interface that predates even pre-SATA drives... MFM, I think?)


Luckily for that guy, someone recently made an MFM-to-SD-card adapter with arbitrary command and geometry translation, and it has turned out to be great. There are MFM-to-SCSI and even SCSI-to-floppy interfaces. The great thing about SCSI in the middle is that it's the same protocol supported by systems today, so you could then just convert using generic hardware and software available right now. The system on the other end is none the wiser and thinks everything is still the old interface with the old parameters.


Does management really understand how long the equipment is going to be used when it is purchased? From what I've seen, upper management has done the cost analysis before purchasing, but maintenance is ultimately going to be responsible for keeping the equipment running. By that point in time, those who approved the purchase are likely retired.


I'm not sure what an acceptable time horizon for these types of things is in an industrial setting, but as a middle-aged person in management, I'd consider anything that lasts until after I retire a success. I don't deal a lot in mechanical systems, however, but I can certainly see someone who's not fully versed in a technology thinking 20-30 years is a longer lifespan than almost anything they interact with on a daily basis. What is the expected lifespan of a system with a microchip that presumably needs to be updated, secured, etc. on a regular basis?


I am currently working on a power plant control system that has a Modicon 984 PLC first released in 1991.

We expect minimum 20 years from PLCs, and unless parts become unavailable or failures too frequent we would generally go 30 years before replacement.

The PLCs are great, it’s the damned computers and their operating systems that keep needlessly changing our working system.

So we virtualize and air gap.


> The PLCs are great, it’s the damned computers and their operating systems that keep needlessly changing our working system.

That's my gripe these days; long-term used to be 20+ years. You can still get DL400 series PLCs from Automation Direct (rebranded Koyo), which are from the '90s. Now I see many embedded micros and processors advertised with a 10-year long-term supply. To me, long-term is around 20 or more years, basically someone's career span. And don't get me started on all this Industry 4.0 PC-based control crap. Not everything needs millions of lines of Linux kernel to turn a few I/O points on or off. And the bloat of the systems these platforms utilize is nauseating.


I would find it more likely that old device drivers get decompiled and reverse engineered.

Heck, I think there's a lot of value in an emulator for old device drivers. Who cares what the source was when you can just execute the black box in a highly regulated sandbox? (Note: this doesn't work for some medical software.) I think medicine and astro/aero are the few places that would require full decompilation of an original driver.


How does ReactOS try to avoid being susceptible to malware? Does it "fully" inherit the "attack surface" of the system it tries to be compatible with? Or are there improvements? Is one supposed to run anti-malware software on it (or keep it networkless)?


Mostly not. A lot of the attack surface has not been implemented yet. :-D

Where it has been implemented, fewer services are running at startup.

Generally, it doesn't copy the atrocious early security default configs that were later changed.

I don't believe it ships with an old browser or email client, so ~60% of potential issues are gone right there.


> Is one supposed to run anti-malware software over it (or keep it networkless)?

As the biggest vector for malware is not an insecure operating system but user negligence (e.g., opening malicious attachments in e-mails), it is advisable to have anti-malware software on every machine, regardless of the operating system.


> biggest vector for malware is not an insecure operating system but user negligency

This is victim blaming. Windows has been teaching users to install from third parties since the '90s, added auto-run features to removable media, hid file extensions (making it difficult to detect files that could do harm), took a decade to implement process isolation, never added a good package manager, and spent years making fun of FLOSS.

Windows users may have a twisted view of security. I have personally heard a few of them say things like "Linux is safe because nobody uses it" or "you MUST use an anti-virus". They may sound naive or negligent, but in fact, they were carefully trained for decades to behave that way.
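
The hidden-extensions point is concrete: with "hide extensions for known file types" enabled, a file named invoice.pdf.exe displays as invoice.pdf. A minimal sketch of flagging such double-extension decoys (a hypothetical heuristic, not any real scanner's logic; the extension lists are illustrative):

```python
# Flag filenames that hide an executable extension behind a document-looking
# one, e.g. "invoice.pdf.exe" rendered as "invoice.pdf" by the shell.

EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".pif", ".vbs"}
DOCUMENT_EXTS = {".pdf", ".doc", ".jpg", ".mp3", ".txt"}

def looks_like_decoy(filename: str) -> bool:
    """True if the name ends in an executable extension preceded by a
    document extension (the classic double-extension trick)."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    _, inner, outer = parts
    return "." + outer in EXECUTABLE_EXTS and "." + inner in DOCUMENT_EXTS

assert looks_like_decoy("invoice.pdf.exe")
assert not looks_like_decoy("report.pdf")
```

Of course, showing the real extension in the first place removes the need for heuristics like this, which is the commenter's point.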


> Windows has been teaching users to install from third parties since the '90s

In this instance, ReactOS is more secure than Windows from the era it's replicating, thanks to its software center.


> Windows has been teaching users to install from third parties

? Installing directly from the source instead of from an intermediary is good, not bad. Walled garden and 1P-only app stores are worse than the problems they fix.

> added auto-run features to removable media

Imagine computers just working. Who would want that.

Seriously, Microsoft has a lot to be criticized for, but none of the things in your comment make the list.


> ? Installing directly from the source instead of from an intermediary is good, not bad.

How many users actually installed directly from the source, and not from a highly visible third-party download page that collected and repackaged many programs, with some drive-by downloads involving adware? I remember falling for that a few times in my youth, and even SourceForge, which hosted the projects directly, ended up hijacking installers for a time.

> Imagine computers just working. Who would want that.

The fix was to pop up a window asking if you want it to run. Nothing broken about that. Auto-running software on an OS where everyone is a sysadmin by default is not a good idea.


I don't see anything wrong with auto-running a CD. I physically put it in the computer. And seriously, what security checks do you think a normal user is going to do between the pop-up appearing and clicking "run"?

Edit: three responses, zero examples of checks an actual user would do. Two references to the Sony rootkit, which was resolved after intense press by Sony removing the rootkit unconditionally, not by giving the users a choice, because everybody knows users would have clicked yes.


A mass storage device is supposed to contain data, and possibly software. When you connect a MSD, in general, you probably want to access the data inside it - its filesystem.

If an "autorun" system is implemented, and on by default, MSDs become a hefty vector for circulating malware - it takes little to foresee it. This happened - and out of a really bad idea: "connecting a device" is in general not to be interpreted as "wanting to run software".

As for the «normal/actual user» (though I do not understand how that is relevant): if said user connected an MSD, a data container, and were prompted that some code "wanted" to be executed, the user is supposed to react in terms of "WTF?!". Exceptional classes of cases can be managed - but really, the advantage over opening the device filesystem and starting an executable yourself is less than negligible. When such behaviour is desired, a system should be specialized for that whole framework (and should revolve around the design concept of "trusting software").


I would say bewilderment that a music/video/data CD wants to execute an auto-start exe on it. Other than that, you may want to check whether the CD actually contains the program you expect before running it.

Then you have Sony, which abused auto-run to install a rootkit on PCs as part of its copy protection.


These users don't know the difference between an exe and an mp3, come on. They'd figure it was auto-starting iTunes.

How do you expect a normal user to verify the contents of a CD?


Not if they are never given the choice.


You want examples of auto-run abuse… I saw many USB keychains with an autorun.inf and a lot of hidden files, combined with links to other hidden files to simulate the "regular" files after spreading the malware if you click them.

It exploits several weaknesses: auto-run, hidden extensions, no protection against running unsigned binaries, links that are not simple filesystem links…

Windows evolved in a time when solutions for usability problems did not consider security. Now, in the name of compatibility, these vulnerabilities have had to be maintained, and users were trained to believe that was the right way to do things.

This gave Windows users a reputation for being negligent, but most are not. They were trained like dogs to behave like that.
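
A rough sketch of the USB-keychain pattern described above, written as a hypothetical detector (the file contents and heuristic are illustrative, not a real AV signature):

```python
# The classic removable-media attack: an autorun.inf at the drive root
# pointing at a hidden executable payload. This sketch flags a drive whose
# autorun.inf launches an .exe. The sample content is illustrative only.
import os
import tempfile

AUTORUN_SAMPLE = "[autorun]\nopen=recycler\\payload.exe\n"

def suspicious_autorun(root: str) -> bool:
    """Flag a drive root whose autorun.inf launches an executable."""
    inf = os.path.join(root, "autorun.inf")
    if not os.path.exists(inf):
        return False
    with open(inf, encoding="ascii", errors="ignore") as f:
        text = f.read().lower()
    return "open=" in text and ".exe" in text

# Simulate an infected drive root in a temp directory.
with tempfile.TemporaryDirectory() as drive:
    with open(os.path.join(drive, "autorun.inf"), "w") as f:
        f.write(AUTORUN_SAMPLE)
    assert suspicious_autorun(drive)
```

The point of the comment stands either way: the OS executing `open=` targets automatically is what turned a plain data file into an infection vector.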


This is a huge security risk. Microsoft actually fixed it. The system shouldn't do anything that could compromise it without explicit user intervention.

If you want the computer to do something, tell it to. Inserting physical media doesn't mean you want adware automatically installed.


We had a big hullabaloo about this in 2005 when Sony put a rootkit on audio CDs.

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...


The sheer number of original sources and secondhand sources that each need to be individually vetted clearly requires a lot more work and skill than using a well-curated package manager.

And a lot of vendors that should have been trustworthy ended up taking advantage of it by sneaking unwanted software in with the good stuff. Off the top of my head, Adobe did this. There was a glorious time in the '90s and early oughts when seemingly everything was trying to install another toolbar into IE...

There is plenty here to be critical of.


There aren't any well curated package managers for normal users.

Linux distributions like Debian barely make the cut for technical users, folks who are used to going to forums and finding alternative packages. Even so, Linux packaging is filled with drama, contradictory standards, alternate sources required for specialized applications, and occasionally downright bad decisions (like shipping insecure CAs). The only way to scale such a model to normal users is the way Apple and Google have done it on their mobile platforms, and frankly, those stores still have a fair amount of malware in them (Android especially - it's sandboxing that is actually useful here, not a package manager) and come with pretty massive anti-competitive downsides.

And some users liked the toolbars. Just like some users like Facebook. Toolbars aren't actually the problem - it's the way they slurp up your data - it's not a problem that needs a technical solution, more of a user education solution (just like Facebook).


I didn't say that package managers are perfect. I think they are a lot better. They have downsides related to centralized control and contradictory interests, but I believe the good outweighs the bad and you can usually install software from somewhere else if you really need to, with one conspicuous exception.

Most Linux distributions fit for desktop use ship with an "app store" presentation so I think accessibility has been addressed.


> There aren't any well curated package managers for normal users.

GNOME software and similar software are quite close to that.

Flatpak looks reasonably well curated for now.


>> Windows have been teaching users to install from third parties

>? Installing directly from the source instead of from an intermediary is good, not bad. Walled garden and 1P-only app stores are worse than the problems they fix.

Consider package managers. Debian repos are full of wonderful useful ad-free FLOSS.

> > added auto-run features to removable media

> Imagine computers just working. Who would want that.

Answered on another reply.

> Seriously, Microsoft has a lot to be criticized for, but none of the things in your comment make the list.

People believing it is part of the reasons of Windows security flaws. As I said, you was carefully trained to believe it.


> you was [sic] carefully trained to believe it.

Please try not to be rude. I have decades of experience on Linux. I'm giving my good faith opinion. You won't get far in life by assuming those who disagree with you have been duped / trained into their disagreement.

See my other comment - Debian's packaging is just ok, not good, and it only manages to be sufficient because its users are highly technical and can work around breakage. It also is an ecosystem that is several orders of magnitude smaller and thus easier.


You're right. I crossed the line. That was disrespectful. Sorry for that. I'll try to better handle insistence the next time.


You have exactly the same issue in Mac users, though. Yes, Apple has an app store and added confirmation prompts and certificate checks in executables for everyone else, but people still download random stuff and ignore all security warnings.


The problems are nowhere near the same level. AFAIK macOS users have to, at least, move a package to a certain folder to install it. It is not something that happens by accident or by just clicking yes.

EDIT: For people enlightening me about the other ways to install or run binaries on macOS: thanks for the info! I have really little experience with macOS, but my GF uses a MacBook and I know it is not as easy to abuse in deceptive ways as Windows is. So, considering the other ways to install or run apps on macOS: do they run the app inside a sandbox? Do they need the user to type a password? Do they run with limited permissions? Do they need to explicitly work around notarization to run?


> AFAIK MacOS users have to, at least, move a package to a certain folder for installing it.

That's wrong. There are multiple ways to distribute/install/run arbitrary programs on a macOS machine:

- Opening a .dmg disk image file and moving the application inside to /Applications will "install" the application

- Opening a .dmg disk image file and directly running the application inside will immediately run it with the current user's permission

- Extracting a .zip archive will yield the application's directory in wherever the zip file is, ready to execute by clicking on it

- Clicking on a .pkg installer will install the program to the path the user chooses (usually /Applications)

- Clicking on a .pkg installer will allow the installer (after a confirmation prompt) to run a "pre-installation" script - Zoom infamously uses that to ease the installation process (https://www.reddit.com/r/programming/comments/ft3ai3/zoom_us...)

The last option is particularly dangerous since users in the admin group usually have passwordless sudo configured, which means that running the pre-installation script in a .pkg gives that script root permissions!


> users in the admin group usually have passwordless sudo configured

I don’t think that’s true. It’s not on by default in macOS, and to turn it on you have to edit /etc/sudoers which isn’t commonly done on macOS (since sudo permissions can be managed via the checkbox in System Preferences).


You don't actually need to install .app containers by moving them to the Applications folder; you can run them from anywhere. They are basically just folders with a binary file inside, so it's much the same as downloading an .exe on Windows and just launching it (of course, on modern macOS they run in a sandbox and require explicit permission to access any files outside of it).

Apps shipped in a .pkg do need to be installed, though. But from a user's standpoint the process is almost identical to a Windows installer wizard.


That is, historically, completely false. Windows before WinXP SP2 was a wide-open door for malware. I still remember the whole summer of malware where the IT had to do an emergency shutdown of the whole building network at the switch to stop Blaster from spreading one afternoon.

So hopefully ReactOS, while implementing Win2k, includes the XPSP2 mitigations?


Blaster was a worm (self-transmitting and replicating without user interaction.) I was in IT when it came out.

XP SP2 (released in 2004) had the firewall enabled by default, which blocked incoming SMB protocol requests and other related ports by default ("file and printer sharing" exception checkbox.)

Additionally, a security patch for Blaster was released July 16 2003. Blaster itself showed up August 11 2003, so you had almost an entire month to evaluate the security patch.

So in order to be affected by Blaster they had to 1. enable sharing of folders on client machines (connecting to servers does not require this firewall exception.) and 2. fail to apply a security patch for a wormable exploit in a timely fashion.

That's not wide-open, that's (if they have control of client machines) IT department failure to act responsibly.


> That's not wide-open

I remember, around 2003, laptops getting infected just by getting connected to the Internet. It can be appropriate to use the expression «a wide-open door for malware».


1. We were a video game studio with a lot of graphic artists passing around a lot of game assets.

2. I’m pretty sure I’ve got it wrong and it was Sasser, not Blaster.


This does not answer the question. If you don't know the answer, it's best not to respond.


> e.g. by opening malicious attachments in e-mails

And how pray tell do those malicious emails take over a system if an insecure OS isn’t at fault too?


This no longer seems true; take Apple's iMessage zero-click exploit [0], for instance. Perhaps you could say that using an iPhone and iOS, which has such a crazy bug, is user negligence.

- [0] - https://9to5mac.com/2021/07/19/zero-click-imessage-exploit/


This is nothing new, back in Windows 95 days you could run arbitrary code on a machine just by sending network data.

It is also the kind of bugs that tend to get fixed fast once they are discovered.

The biggest attack vector has for many years been user negligence: randomly opening email attachments, following strange links, or just clicking yes on any pop-up.


> following strange links or just click yes on any pop up

If a system security can be compromised just by clicking strange links and clicking "yes" to pop-ups, then the system is to blame, not the user.


From the About page:

> ReactOS looks-like Windows

But even Windows does not look like Windows anymore.


Windows BFP (before Fisher-Price) ;)


I’m disappointed that ReactOS seems to have adopted the flat UI design now.


Flat? Where? I think it's just like Server 2003 or XP Classic theme. Is there a flat UI theme?


Look at the gallery, most screenshots use a flat theme: https://reactos.org/gallery/


Oh, you mean these themes, Mizu and Lunar: https://www.omgubuntu.co.uk/2019/09/react-0-4-12-released-do... They look like flat UI but not exactly a Windows 8 or 10 way. Well, at least they are optional.


It is just a theme, it is switchable ^)


It's great to see ReactOS still going.

I wonder how much of the newer Wine gaming stuff could be used for ReactOS.


I think I recall reading something about the two projects cooperating in some form, although I am very fuzzy on the details. But it would create ... synergy! ;-) There is an obvious overlap in what both projects try to achieve, and it's not small.


With the amount of pirated XP in the wild in China, China should be funding the hell out of this to be independent of Microsoft. So should Russia, the EU, Africa, South America, Japan, etc. Well, I keep saying the same thing about Linux as well. I think Linux should be getting billions in yearly support; heck, this should be getting 100 million.


I guess native GPU drivers are the biggest thing that's missing to enable gaming.


Well, one of the main goals of ReactOS is in fact to be able to use native Windows drivers.


That's what seems to get a bit weird as time passes. IIRC, they aim at being NT5.2 aka Windows 2003 compatible, but if they don't support newer driver models, that would rule out using any recent GPU, as nVidia doesn't exactly ship XP drivers for the RTX 3080. Curious what their current plans are for this issue.


Windows still has the same NT core, as you most probably know already. Yet, to be compatible with NT 10, the ReactOS devs need to follow the NT 5.2 > 6.0 > 6.1 > 6.2 > 6.3 > 10 path. Without 5.2, it's not possible or feasible to target anything further.


Implementing the XDDM driver model in 5.2 gets you nowhere towards supporting drivers in NT 10, as XDDM was completely removed in NT 8.2. This is true of a lot of things in the kernel; future versions are far from being a superset of the previous ones, the way the userspace side of Windows often is. Particularly after nearly two decades.


Well, I am not the one who should talk about the goals, since I am not a ReactOS developer. But I can tell you that NT 8.2 does not exist. You probably meant NT 6.2, at which point WDDM took over the business.

I assume it's because of backward compatibility. But it may also be because, to implement the 5.2 APIs, they need XDDM. Skipping right from 5.2 to 6.2 would cause a non-operational state for a long time due to the lack of implemented APIs and WDDM. Windows 2000 had about 1400 full-time devs and 800 full-time testers, and ReactOS probably has maybe 10-20 active contributors. I don't know the numbers, but there is a huge difference. And I can understand the requirements of an incremental development approach.


That's going to be an uphill battle. Graphics drivers of the kind old enough to run on ReactOS do real nasty things to the kernel like binary patching core components at load time. Raymond Chen sort of talks about this here: https://devblogs.microsoft.com/oldnewthing/20040305-00/?p=40...


I love watching Druaga1 install these on ancient hardware.


How is this different from ReactJS? </s>


One cannot be too careful there. Lindows was sued into oblivion, and so would “Freendows” or “FreeNT” or anything in that vein…



