At some point, the efficiency of new hardware completely outweighs the benefits of keeping old hardware running. Obviously there's a valid need to preserve hardware for historical purposes, but for any other reason I think it's a false economy.
The CPU on a Raspberry Pi 4 is nearly three times faster than a 3.2GHz Pentium 4. A Raspberry Pi draws no more than 10W (excluding USB devices), but the Pentium 4 has a TDP of 100W. We could draw a parallel with incandescent vs LED lightbulbs, but I think that would be too kind to the old computer - an old motherboard and a discrete GPU will add substantially to the idle power consumption.
I think that new software bloat far outweighs the benefits brought in by new hardware as well.
You would also need to ask how many people here actually use a Raspberry Pi as their daily driver before claiming the power savings they should supposedly be realizing today.
I think that viewing this as a pure power-consumption exercise is reductive at best. In that spirit, you should also be going after the people running H100s today and asking why they aren't doing their model training on an RPi 4 yet.
> I think that new software bloat far outweighs the benefits brought in by new hardware as well.
While software bloat is undeniably a problem, in my experience it's not that clear-cut.
E.g., a messenger app built on Electron is pure bloat. At least from a technical perspective -- maybe Electron brings in business value that outweighs the technical downsides.
But an IDE from around 2005 is nothing more than a dumb notepad in comparison to a modern JetBrains IDE. Even though I often find their IDEs extremely frustrating in terms of the computational resources they consume, I'd rather upgrade my hardware than go back to something less "smart". After all, hardware is just a means to the end of solving whatever task is at hand.
An IDE from even 2001 isn't nearly as dumb as you would think. In fact, I was one of the early adopters of WebSphere Application Studio (Eclipse) and I thought it was bloated. 20 years later, it's the same argument, same stack (Java).
Cut to 20 years from now: you may remember JetBrains as the best thing you ever worked with, while some other JavaScript IDE in 2043 - requiring a bare minimum of 128GB of RAM to run - will be the means to that end for those developers.
> But an IDE from around 2005 is nothing more than a dumb notepad in comparison to a modern JetBrains IDE.
Which IDE is this?
I've used IDEs from the mid 90s (Watcom C/C++, Visual Studio 5.5, Delphi, C++ Builder) and from 2005, and they did a lot more than just let you edit code and run make.
In 2005, all major IDEs had autocomplete, for example. They also had jump to definition, jump to help (remember help pages? .chm files?), syntax highlighting, source-level debugging, inline with the source code ...
I mean, what exactly do you think they are missing that makes them a rudimentary text editing tool?
That's a good question. Admittedly, my original comment is informed by the general impressions IDEs gave me back then and give me now, so I'll try to be more specific this time. Of course, I haven't used every IDE that existed in the mid 90s, so some of them might actually have had these features. If you're aware of any, I'm curious to hear about them.
# Syntax highlighting
Something seemingly as basic as syntax highlighting can be implemented based on syntactic or semantic analysis of the code.
There's an even more primitive way of implementing it, where an editor just highlights the language's keywords and not much else. If memory serves, back then most editors and IDEs did the latter -- highlight the keywords and be done with it.
Nowadays, semantic highlighting is usually the default. Each identifier gets colored according to the symbol it resolves to: a constant name is rendered as a constant throughout the file, not just in its declaration, and the same applies to parameter, property, class and function names.
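To make the difference concrete, here's a toy Python sketch (purely illustrative, not how any real IDE does it; the sample names are made up): keyword highlighting only needs a token stream, while semantic coloring needs to know what kind of symbol each name was introduced as.

```python
import ast
import keyword

SOURCE = """
LIMIT = 10

def clamp(value):
    return min(value, LIMIT)
"""

# Old-style: color a token if it happens to be a language keyword.
def keyword_classes(source):
    return {tok: "keyword" for tok in source.split() if keyword.iskeyword(tok)}

# "Semantic" style (very simplified): classify each name by the kind of
# symbol it was introduced as, so every later use gets the same color.
def semantic_classes(source):
    classes = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            classes[node.name] = "function"
            for arg in node.args.args:
                classes[arg.arg] = "parameter"
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id.isupper():
                    classes[target.id] = "constant"
    return classes

print(keyword_classes(SOURCE))   # {'def': 'keyword', 'return': 'keyword'}
print(semantic_classes(SOURCE))  # {'LIMIT': 'constant', 'clamp': 'function', 'value': 'parameter'}
```

The point is just that the second version needs an actual parse of the program, which is why it was rare back then and is table stakes now.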
# Semantic "grep"
As in, find all usages of, for example, a function across the project:
- If the function is passed as an argument somewhere (to be invoked as a callback later) this is going to be included in the search results.
- If another class has a function with the same name, this isn't going to end up in the search results.
- If the function is overloaded, it's possible to search for usages of a specific overload or all of them.
- And so on.
I don't remember this being a thing in old IDEs at all, or if it was, it didn't work for anything but the most trivial code. Software like GNU Global did aim to implement this functionality, but my memory is that it was very limited in practice.
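As a rough sketch of what I mean by "semantic grep" (toy Python using the standard ast module; the names are invented for the example, and a real IDE also resolves scopes, imports and overloads, which this deliberately skips):

```python
import ast

SOURCE = """
def log(msg):
    print(msg)

class Other:
    def log(self, msg):      # same name, different symbol: must NOT match
        pass

log("direct call")           # should match
register_handler(log)        # passed as a callback: should match
Other().log("method call")   # attribute access, different symbol: no match
"""

def find_usages(source, name):
    lines = []
    for node in ast.walk(ast.parse(source)):
        # A bare name in "load" context is either a call target or a value
        # being passed around (e.g. as a callback).
        if isinstance(node, ast.Name) and node.id == name and isinstance(node.ctx, ast.Load):
            lines.append(node.lineno)
    return lines

print(find_usages(SOURCE, "log"))  # [9, 10] -- the direct call and the callback use
```

A plain text grep for "log" would return every line above, including the unrelated method; that's the gap I'm talking about.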
# Semantic code completion
While IDEs from the mid 90s did have code completion, a modern IDE improves on that too. E.g., if you invoke auto-completion while writing a bit of code that passes an argument to a function, a modern IDE is able to build a list of suggestions containing only the symbols that are currently in scope and match the function's parameter type.
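A toy illustration of the idea (Python, with made-up names; it cheats by inspecting live objects where a real IDE would do this statically from the parse tree):

```python
import inspect

class Shape: ...
class Circle(Shape): ...

def area(shape: Shape) -> float:
    return 0.0  # body doesn't matter for the example

# When completing the argument of `area(<cursor>)`, only offer in-scope names
# whose value matches the parameter's annotated type.
def completions_for(func, param_name, scope):
    wanted = inspect.signature(func).parameters[param_name].annotation
    return [name for name, value in scope.items() if isinstance(value, wanted)]

scope = {"unit_circle": Circle(), "label": "hello", "count": 3}
print(completions_for(area, "shape", scope))  # ['unit_circle']
```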
# Refactoring
I think this goes without explanation, would you agree? Modern IDEs offer more refactorings, and these refactorings are far more complex and work much more reliably.
# Dependencies source code browsing/decompilation
Editing to add in this one. How could I forget? Basically, if I want to go to the definition of a symbol that comes from a library, the IDE will either download the source code, if available, or decompile the binary, and take me to the definition. To me personally, this is huge.
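For anyone curious what that looks like mechanically, Python's standard library can already do a crude version of the "jump to a dependency's source" part for pure-Python packages (decompiling compiled binaries is obviously a whole different beast that this doesn't attempt):

```python
import inspect
import json

# Where does json.dumps actually live, and what does its definition look like?
# Roughly what an IDE does when you ctrl-click a library symbol.
path = inspect.getsourcefile(json.dumps)
source_lines, first_line = inspect.getsourcelines(json.dumps)

print(f"{path}:{first_line}")
print("".join(source_lines[:3]))  # first few lines of the definition
```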
I agree broadly that the QoL improvements in modern IDEs are real, but not big enough to make those old methods even close to useless!
Syntax-highlighting - sure, it's nice to have it based on the semantic meaning of the code, but keyword syntax highlighting works fine 90+% of the time when you are reading code.
Semantic search - once again, now it's better than simple grep, but ctags provides 90% of that functionality too (I still use ctags daily in Vim).
Semantic code-completion - nice, but code completion in VS circa 2005 (and other IDEs, like Eclipse) was partially semantic as well. It did not offer a `#define` constant when you typed '.' after an instance variable; the completion was still limited to the fields that were relevant to that instance. Once again, I feel that the partially semantic autocompletion did maybe 90% of what JetBrains does today.
As far as decompilation/third-party library support goes, in 2005 most languages did not have package management, so not automatically downloading a third-party library is not a fault of the IDE - where would it download it from?
In cases like VS, where you were using the Windows SDK, the IDE knew where to get those libraries from (because they were stored locally), and I have many memories of the VS debugger in 2005 stepping into the Microsoft SDK libraries.
To me, calling 2005 IDEs the equivalent of a dumb text editor, when they had 90% of what you have now, is an inaccurate comparison.
(Of course, all of the above is just my opinion, so feel free to disregard it with prejudice :-))
I'd respectfully disagree on VS 6. It was OK for its time, but hardly a piece of art, in my experience.
Please excuse me copying the relevant portion from my other comment.
VS 6's support of C++ back in 2005 wasn't that great, at least the way I remember it now.
Code navigation was very primitive, and you were lucky if it didn't consider the code too complex to offer any navigation around it at all.
Its built-in debugger often wouldn't let you inspect a string's content because it was just another pointer from the debugger's perspective.
And there was a bug where the editor would slow down so much it would be literally unusable -- e.g., it'd take a couple of seconds to react to a keystroke. The reason was that it kept a file with the workspace's (solution, in today's terms) code metadata, and that file grew too big over time. So you had to remember to delete it regularly.
But VS 6 had a great plugin -- Visual Tomato, if memory serves -- that made things so much better in terms of code navigation/refactoring/etc.
Compared to modern IDEs it won't do very well, but do you remember better alternatives back then, at least if you wanted a "friendly" UI instead of a command-line one? Would you choose something different if you went back to 2005? How about 1998?
Oh, back at the time it was a good IDE. There was also C++ Builder, but it had its own quirks, of course. Picking one of them was, I guess, a matter of personal preference, the project's requirements, etc.
Anyway, I sure see how my comment may sound that way, but I didn't really mean to contrast JB vs others. Comparing a modern IDE to its old version in the context of software bloat is the whole point I'm trying to make.
I think bloat is not the only reason for increased hw requirements. Modern software is often way more capable than its old versions and adding capabilities seems like a good use of added hw power.
I see how my comment may sound that way, but I didn't really mean to contrast JB vs others.
A better idea would've been comparing a modern version of VS with VS 6, for example.
Anyway, my point is bloat is not the only reason for increased hw requirements. Modern software is often way more capable than the old and adding capabilities seems like a good use of added hw power.
This is a bit similar to the concept of accidental vs inherent complexity in sw engineering. There's accidental bloat and inherent "bloat", so to speak :)
My impression is people don't usually acknowledge the existence of inherent "bloat" in discussions like this one.
Can't comment on Java support in Eclipse back then.
VS 6's support of C++ back in 2005 wasn't that great, at least the way I remember it now.
Code navigation was very primitive, and you were lucky if it didn't consider the code too complex to offer any navigation around it at all.
Its built-in debugger often wouldn't let you inspect a string's content because it was just another pointer from the debugger's perspective.
But VS 6 had a great plugin -- Visual Tomato, if memory serves -- that made things so much better in terms of code navigation/refactoring/etc.
Writing C++ in Eclipse CDT or KDevelop was simply a pain.
Actually, a plain editor with syntax highlighting was probably a better option, because it wasn't even pretending to have "smart" features that would inevitably break on any non-trivial piece of code -- better no promises than false promises.
Delphi, and C++ Builder for that matter, were great as a package: fast GUI builder, editor, debugger, etc. That said, I don't remember them offering much in terms of working with code: finding an identifier's usages, refactoring, etc.
As to praising JB, it wasn't my intent to single them out; surely there are other great IDEs. Their products are just something I'm personally familiar with. If something better happens to cross my path, I'll have no problem switching.
There were 2 releases between VS6 and VS2005, so slightly slower than the current cadence of a new release every couple of years but not by much. I think VS6 remained popular for a long time because it was the last release before they switched to the heavier .NET-based IDE.
> I think VS6 remained popular for a long time because it was the last release before they switched to the heavier .NET-based IDE.
The, in my opinion, much more important reason was that Visual Studio 6 was the last version of Visual Studio capable of running under Windows 9x (every version of Visual Studio released after it required some Windows-NT-based version of Windows; Visual Studio .NET 2002 was the last version to run under Windows NT 4.0, Visual Studio .NET 2003 required at least Windows 2000; keep in mind that Windows XP only came out at the end of October 2001).
> there is a lot of old software that is still updated and not bloated.
The only things that really need more resources are video capabilities and web technologies.
For example, there are window managers that were considered bloat 20-25 years ago and that are now considered lightweight. IceWM takes something like 16MB on a 64-bit distro nowadays. It would have been considered bloat back in the day, but it is very far from what you get with a default KDE Plasma or GNOME desktop running idle without apps. There are still lightweight CLI or GUI music players, image viewers, word processors, spreadsheet tools, file managers, etc. Until you start a web browser, and as long as you are using a lightweight DE, you can run many modern Linux distros with only 256MB of RAM.
> The CPU on a Raspberry Pi 4 is nearly three times faster than a 3.2GHz Pentium 4.
It sure doesn't feel like it. It was never questioned whether a Pentium 4 would make an OK desktop - of course it did. The Raspberry Pi 4, though... It feels like it should be completely fine for email, Internet browsing, word processing and pretty much any office work, but it's just too slow. People are even maxing out a Raspberry Pi 3 (or 4) running Home Assistant... Yet an old DOS machine can run an entire factory production line (or multiple).
The problem is that we don't optimize the software (most desktop apps), or we don't understand the processing power required to complete a task (e.g. Home Assistant).
I have successfully run DNSMasq, WebFS, qBittorrent, VSFTPD and Syncthing at the same time on an OrangePi Zero with 4 cores and 512MB of RAM. That thing is barely bigger than a postage stamp.
I'll switch to a much more powerful device, because I need more hardware acceleration for video encoding and such, but other than that, the same Pi Zero can still handle the tasks needed.
Back in 1999, two roommates and I ran ipchains, Samba and an FTP server on an old PC with a 120MHz Pentium and 32MB of RAM. The only reason it didn't run more services was: what other services would there even have been?
Looks like we did similar things in similar time frames. I was doing live broadcasting from a simple webcam over Apache, for example.
I think you can still run modern versions of this software on that hardware, albeit with somewhat lower performance due to all the added features. The kernel would require a bit more resources too, possibly.
The biggest problem on the OrangePi Zero was running SSH at full speed, due to the required encryption/decryption on the fly. DNSMasq and VSFTPD were essentially invisible, with qBittorrent requiring 85MB of RAM to handle all the state data.
While the applications are tied down with cgroups to prevent them from pushing each other into the arms of the OOM killer, none of them has died because of insufficient memory, which is telling, in a good way.
It shows that my point still stands: basic functionality in software is very cheap, but when you add cryptography or a couple of compute-intensive features, old hardware breaks down instantly. It's of course possible to optimize up to a point, but dedicated cryptography accelerators and newer instruction sets really help.
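If you want a feel for how much raw CPU "just add crypto" costs on a given box, a quick hack is to measure hash throughput (this is only hashing with the standard library, not the full SSH cipher path, but it's the same class of per-byte work):

```python
import hashlib
import time

payload = b"\0" * (64 * 1024 * 1024)  # 64 MiB of zeros

start = time.perf_counter()
hashlib.sha256(payload).hexdigest()
elapsed = time.perf_counter() - start

# On a modern x86 CPU this is hundreds of MB/s or more; on a little
# ARM board like that OrangePi it will be a small fraction of that.
print(f"SHA-256: {len(payload) / elapsed / 1e6:.0f} MB/s")
```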
>If you run a P4 with modern distributions of linux, you're likely in for a bad time.
What universe are you operating in? I can run multiple domains' worth of services just fine. Yeah, compiling at -j4 at the same time can be a PITA. Running the desktop over X with compression over ssh works just fine as well.
The key to outpacing a Pentium with a Pi 4 is entirely dependent on structuring your workloads correctly. Given a heavily serialized load, a Pentium'll smoke a Pi 4 before it even gets its britches on. Throw in enough parallelizable workload, however, and the Pi 4'll pull ahead promptly.
Isn't most of this because people are running their OS from an SD card, with a heavy-ish desktop environment? Are people using LibreOffice Writer or AbiWord for word processing? LibreOffice Calc or Gnumeric for a spreadsheet? Thunderbird or Claws Mail for email? What about the DE they are using?
Have you tried running Home Assistant on an old Pentium 4?
For many people in the world, possibly the majority, whatever old hardware they happen to have, that is what they have, and there is no replacing it.
There is also something to be said for treating any functioning hardware as worth preserving, just because it already exists, just like a human's life is worth preserving, just because they're already alive.
Also, for environmental reasons, because the environmental impact of manufacturing an entire new computer from scratch is much higher than the impact of the power you save by running a more efficient one.
Finally, of course, there is simply personal preference, whether due to unique features (as mentioned in a sibling comment), a lack of features you do not want, or sentimental value.
That's not how the real world works. A lot of manufacturing equipment and test stands were developed using LabView and used NI cards to control equipment and take data. The people who developed all that stuff are long gone and the new people neither understand the old stuff nor do they have time to spend many months spelunking into the guts of the old software to maintain it. Their managers are also very reluctant to change anything since it means very expensive downtime for an entire assembly line. Downtime can cost tens or hundreds of thousands of dollars per month. Everybody wants to keep the old stuff running as long as possible -- generally several decades.
I have no experience in the banking industry but I do hear the same considerations I see in manufacturing exist in banking software. Any change is fraught with major risk (millions of dollars in loss) so change is avoided as much as possible. I see lots of people on HN talk about how change and churn is avoided in banking software ... the same applies to manufacturing.
The Raspberry Pi comment is laughable. I'm not talking about toys for hobbyists. I'm talking about computers running software controlling cards costing tens of thousands of dollars, running physical equipment of equal or greater expense. Nobody will hire a hobby hacker to light a few LEDs with a Pi when doing factory automation.
Also, it's laughable to talk about power consumption when the PC is controlling an industrial system that's using many kW or even MW of power. At that level, 10W vs 100W is noise.
>At some point, the efficiency of new hardware completely outweighs the benefits of keeping old hardware running.
Agreed. That's why the best solution is to run old software on new hardware.
Vernor Vinge figured this out 25 years ago. A Deepness in the Sky depicts a human society thousands of years in the future, in which pretty much all software has already been written; it's just a matter of finding it. So programmer-archaeologists search archives and run code on emulators in emulators in emulators as far back as needed. <https://garethrees.org/2013/06/12/archaeology/>
(This week I migrated a VM to its third hypervisor. It has been a VM for 15 years, and began as a physical machine more than two decades ago.)
OTOH you can get a complete computer with a Pentium 4 or Core 2 Duo / Athlon 64, complete with keyboard, mouse and sometimes even a screen, for free or pennies, because people want to offload that big tower and 19" screen they don't use anymore, while an 80€ Raspberry Pi 4 doesn't get you anywhere.
Discounting storage[1], you'd have to buy a keyboard, mouse, screen and USB power supply, and possibly a case, to have a comparable Raspberry Pi 4[2]. I see the Pi 400 being sold at 130€ with a mouse these days.
[1] because you would also want to upgrade said vintage computer with an SSD anyway
[2] and on the second-hand market Raspberry Pi 4s aren't going for much less; you just usually get an additional case and/or SD card for the price of a single new board.
The problem is that, while the RasPi may be 3x as fast, it doesn't necessarily have the software ecosystem you want.
If all you have is a closed-source Win32 program that barely runs on XP, then at best you can emulate the P4, likely eating up much if not all of the RasPi's performance advantage and diving deep into the realms of undocumented behaviour and incompatibilities.
I'm surprised we don't see more moonshot efforts into improving decompilation/reverse-engineering toolkits. There's a huge and obvious need for "I have this software, the developer's dead/bankrupt/disinterested, can you get me something close enough to human-readable C that I could compile it against winelib and get it running on a more modern machine?"
Yet at some point we tend to throw efficiency at the wall for raw compute.
It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU, yet today it is not uncommon to hear of 850W PSUs. Of course such PSUs are undoubtedly more efficient at doing the conversion. What I mean is that the argument of (power) efficiency doesn't hold when we actually end up using significantly more power anyway.
Also, to play devil's advocate a bit: that's a 20-year-old PC. How many times would you have upgraded in 20 years - I'd guess once every 3-5 years? - and what is the carbon cost of all of those upgrades? Some PC manufacturing is pretty bad for the environment.
According to "8 Billion Trees":
> the Carbon Footprint of a Computer? A desktop computer emits 778 kgs of CO2e annually. Of this, 85% results from emissions during manufacture and shipping, and 15% results from electricity consumption when in use.
So you really have to ask yourself: when is that upfront manufacturing cost paid back by the energy saved (energy which can be sourced sustainably, unlike many parts of PC manufacturing)?
20 years? Sure, but it's still very questionable at 10 years.
If only we didn't force people to upgrade their computers by making more and more features non-optional and slowing down the computing experience for the sake of making features easier for developers to produce.
---
EDIT: I can't stop thinking about this; I think I've been nerd sniped:
I can't find good data on this, though; it seems like a new laptop costs the planet 422.5kg of CO2e on average.
The actual running cost of the Pentium 4 itself (mobo, RAM, etc.) is somewhere around 40W for this generation (not including peripherals, discrete GPU or screen) - it does not have SpeedStep so it can't power down. A high-end dGPU from the era draws about the same as the entire PC, so let's double that to 80W. We ignore peripherals and screens, as those could be the same for both new and old PCs.
80W used for roughly 8hrs per day? That comes to about 234kWh a year.
In the US, energy generation emits about 0.3712kg of CO2e per kWh, so something like 87kg of CO2e for running this PC for a year.
Meaning even if a new PC halved that draw (40W instead of 80W), you'd save roughly 43kg of CO2e a year, and it would take close to a decade to offset the carbon cost of the new machine. And that's not discounting that energy production can be carbon neutral.
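Here's the same back-of-the-envelope in code, so it's easy to plug in your own numbers (all of the inputs are the rough assumptions above, not measured data):

```python
# Rough assumptions from above: 80W old PC, 8h/day, US grid at 0.3712kg CO2e/kWh,
# ~422.5kg CO2e embodied in a new machine that halves the draw to 40W.
OLD_WATTS, NEW_WATTS = 80, 40
HOURS_PER_DAY = 8
GRID_KG_PER_KWH = 0.3712
EMBODIED_KG_NEW = 422.5

def annual_kg(watts):
    kwh_per_year = watts * HOURS_PER_DAY * 365 / 1000
    return kwh_per_year * GRID_KG_PER_KWH

saved_per_year = annual_kg(OLD_WATTS) - annual_kg(NEW_WATTS)
print(f"old PC: {annual_kg(OLD_WATTS):.0f} kg CO2e/year")        # ~87
print(f"payback: {EMBODIED_KG_NEW / saved_per_year:.1f} years")  # ~9.7
```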
That said, currently: The most efficient x86 computer will consume 10W to 25W at idle.
>It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU, yet today it is not uncommon to hear of 850W PSUs
Exactly that. I just had to upgrade my 850W PSU to 1000W because I'm upgrading my GPU to an RTX 3090.
This is why people use P4-era hardware if they want the absolute most power-efficient NAS/home server and don't need a lot of CPU/RAM. The lowest wattage I saw on a German user group about this was around 6W for an idling system.
I myself am currently running a P4-era 1U server (a Fujitsu Primergy RX100 S7) with a 4-core Xeon (2.4GHz or so), 4GB of RAM, Google's Edge TPU accelerator (dual Edge TPU on PCIe) and two 1TB spinning disks in a mirror, as a CCTV server (the TPU does object recognition). This box runs at 20%~90% CPU utilisation all the time and draws 35W on average at the wall. It can peak at 75W or go down to 25W (it never really idles). This server (excluding the TPU) cost me under $100. And if I want, I can replace the disks with bigger ones or add 2 more external drives, as there are 2 unused SATA ports. Also, this server comes with a remote management card, so I can log in anytime through a separate network port and do a power down, set temperature alerts, see power usage, or access the console/BIOS.
Good luck finding a similar option with similar features, low power consumption and price with modern hardware.
Yes, I'm a fan of retro hw. I have a Commodore 64, a 386SX, and Win98 and WinXP PCs. Some have only sentimental value, but not all old hw is impractical to use seriously today.
Your 1U server seems to be Ivy Bridge-based, which is about a decade or so newer than the Pentium 4 and much, much more energy efficient - and there's a lot of cheap second hand hardware out there from that era as well.
I had to check, because I wasn't aware there was such a time gap between the two. I seem to remember having P4s in many an office desktop while E3 Xeons were present in the new servers we installed.
My Xeon CPU is an Intel Xeon E3-1220L v2 with a 17W TDP - wow! It was released in 2012, while the last P4s were sold new in 2008 (but the first were released back in 2000). So yes, you're right, there is a decade between these two. In my mind (incorrectly) they were of the same "era", as many of my small business clients still had lots of P4 PCs at the time of the Ivy Bridge Xeons.
> This is why people use P4-era hardware if they want the absolute most power-efficient NAS/home server and don't need a lot of CPU/RAM
Calling bullshit on this one: my P4 NAS chewed through power. I upgraded to one of the low-power modern Celeron boards, which drew around a quarter of the power.
My P4 had a big heatsink and fan; my quad-core Celeron is passively cooled with a comparatively tiny heatsink.
> It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU. Yet today it is not uncommon to hear of 850W PSUs.
Sure, but today we're pushing absolutely crazy levels of performance. At the efficiency end of the spectrum, we have something like the Intel N100, which delivers something like 14x the performance of a Pentium 4 on a TDP of 6 watts. An i9-13900F might have twice the peak TDP of a Pentium 4, but it has 150x more throughput.
Big PSUs are mostly driven by GPU power consumption, but it's easy to forget just how insanely performant modern GPUs are. An RTX 4090 has a TGP of 450W, which is admittedly very high, but it delivers 73 TFLOPS at single precision without sparsity. That's the same level of performance as the fastest supercomputer in the world circa 2004. It's kind of goofy that we'll often use all that performance for playing video games, but we can also do important work with that power.
>A desktop computer emits 778 kgs of CO2e annually. Of this, 85% results from emissions during manufacture and shipping
I don't really buy this figure. Using standard methodologies, Dell cite a figure of 454kgCO2e for the full tower version of the Optiplex 7090, with 46.7% of those emissions created during use. For the micro form factor version, they state 267kgCO2e, with 30.8% of emissions created during use.
There's absolutely a valid case for extending beyond the usual 3-4 year lifecycle, there's absolutely a valid case for buying a refurbished computer if you don't need high performance, but at some point an old computer is just e-waste and the best fate for it is responsible recycling.
>A desktop computer emits 778 kgs of CO2e annually.
It's patently false; a desktop computer emits barely any emissions at all; maybe a tiny amount of off gassing as it heats up.
The power to run the device may emit CO2 during generation, but that's a separate question and isn't from the device itself.
The gist is still correct, reduce and reuse are very important overall.
What I haven't really worked out (and the math would be involved and difficult) is whether, all things considered, running old enterprise hardware is better than buying new power-efficient hardware for the same use case. A quick cost-benefit analysis says buying new is 'better' because the energy savings pay for it, but that doesn't correctly account for all the costs of manufacture, and if you're in a climate where you mainly heat the house, you get "free heat" from the older equipment.
Disclaimer: I don't mean to derail the discussion and I don't intend this tangent to go far in this thread:
I really wish more systems- and particularly tech-minded folks would think like you more often, especially when it comes to manufacturing/logistics as well as end-of-life costs.
In nearly all cases in my life I've found no real benefit to buying new over buying used; maybe it takes a little more research and patience to find the right deal, but the effort is always worth it. I find the discussions around electric cars and massive ML models especially problematic because of the complete lack of consideration for these factors on display.
> It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU
I'd say that 250-350 watt PSUs were very common in white-box/home-built PCs in the P4/Athlon XP era.
400W and above, yes, that was much less common. I can recall at that point in time, Enermax made their name selling some of the largest PSUs on the mainstream market, and that line tended to be like 450-600W units.
I do sort of agree: it's almost completely about GPUs these days. The spinning rust is now low-power flash. A top Ryzen 9 or Core i9 might top out at 150-250W, but that's not that much higher than Prescott on a bad day, and plenty of people are picking models in the <100 watt power classes. However, the idea of a 400W-by-itself GPU was completely off the field in the early 2000s.
> I'd say that 250-350 watt PSUs were very common in white-box/home-built PCs in the P4/Athlon XP era.
...and the vast majority of those PSUs were cheap off-brand units actually rated in unrealistic "Chinese watts", with 150-200 being all they could honestly deliver for anything short of brief spikes.
> The most efficient x86 computer will consume 10W to 25W at idle
Totally untrue. I've got a box sitting in my closet right now that has a full system power of only ~4.5W measured by a kill-a-watt.
It's based around a Supermicro X11SBA-LN4F, a small SATA SSD, and an 80 PLUS Gold power supply.
My personal laptop based around an AMD 3200U sitting idle right now is pulling ~6W at the wall with the 14" screen on. According to HWMonitor the CPU package is using 0.65-0.9W.
Both of these are x86 computers and they're well under even your 10W idle estimate.
The cost of a Pentium III and the electricity to run it might be much, much less than the cost of labor to move one application over to a Raspberry Pi.
Then the cost of debugging the application on a Pi 4 might be more than the value of the company trying to run it.
You were sold a silly idea: that the reason software isn't portable has to do with work and resources. That's simply not true in a vast majority of instances. Companies intentionally lock software to certain platforms for sales reasons much, much more often than for technical reasons.
OTOH, Windows isn't portable, so if you want to run old Windows software, you don't move it - you emulate it.