Oh my god. It's astounding what becomes possible as we move further into this era when most digital activity gets archived for future mashups and other archaeology.
My favorite (mis)use of Dosbox was using it to drive Symate for PLC programming. That let me switch to using a Linux netbook instead of dragging a laptop booted in DOS around a factory.
Same here: we had some buggy PLC controller software that would only run properly on increasingly hard-to-find old PCs, or on more expensive embedded kits, due to timing issues. This was in an electroplating plant, so the machines corroded to the point of needing replacement every year or so. Instead, with DOSBox set to an appropriate clock speed, we could use any commodity PC.
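For anyone curious, the fixed clock speed lives in the [cpu] section of dosbox.conf. A minimal sketch, leaving defaults elsewhere; the cycle count is an invented example, not a recommendation, since you'd tune it to the particular controller software:

    [cpu]
    core=normal
    cputype=auto
    # "fixed" pins the emulated CPU to a constant cycle count instead of
    # letting DOSBox scale it automatically; 3000 is purely illustrative.
    cycles=fixed 3000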
Did a similar thing with engraver software (which had a hardware license dongle on the parallel port that DOSBox allowed access to) and a dBASE IV program at a client's factory. An amazing piece of emulation that enables "ancient" code to run forever in small businesses around the world.
I used Dosbox to run a program to upload source code from old GE PLCs in order to convert to new systems. Couldn't have done it otherwise. Long live Dosbox!
Does anyone know why they haven't released a new version for going on 8 years now, even though it looks like it's been continually worked on, according to http://source.dosbox.com/dosboxsvn.txt?
Basically, it's what happens when a bunch of bad but seemingly innocuous design decisions accumulate and compound each other multiplicatively over time, to the point that any change to the code is far more likely to break something than the new feature is worth.
Or, as I put it at a previous job where I complained about the 40-minute test-suite runs that were bogging everyone down: "the road to a 40-minute test suite is paved with a couple of extra seconds here and there with every new code commit."
A few reasons. If you're an employee building a product for a company, management's always going to push for features, features, features, to keep the customers happy, and to keep up with the competition. Time spent refactoring is an investment in the future, but management tends to be more shortsighted than that. By the time that you really can't add more to the program, it's years down the line, and they've probably been considering pulling the plug anyhow.
In my own projects, it's often fun to keep adding things until I hit a roadblock, then spend a couple weeks reworking stuff. It's satisfying to add new features, but architectural maintenance feels like treading water.
So, I'd guess that the answer is somewhere between "outside factors dictate that there isn't time" and "we haven't decided to make the time, even though we could".
No; I was speaking more generally about some potential reasons that software wouldn't be kept in a low-debt state. The same concepts could apply to Dosbox with some remapping.
What company? The Dosbox project itself.
What management? The devs making overall decisions for the direction of the project.
What customers? Anyone requesting new features and fixes in the software.
What features? Built-in support for Munt, new networking support, support for more varieties of peripheral hardware...
It made sense to me to give a more general explanation, since this thread started off with a question about what technical debt is in the first place. I also did my best to address Dosbox's specific reasons by posting the interview with their devs and providing my own interpretation of it.
As others have hinted, the best path to better code is (a rough sketch of step 1 follows the list):
1) have a good test suite with enough coverage to ensure that refactoring hasn't broken existing or expected functionality
2) spend the time to rethink the design of one piece and refactor it, then deploy and watch for unexpected bugs
3) repeat until the code is up to date
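A minimal sketch of what step 1's safety net can look like, as a characterization test that freezes current behavior before anything is touched. Everything here (the routine, the captured values) is invented for illustration:

    // Hypothetical characterization test: record what the legacy code does
    // today, quirks included, so the refactor in step 2 can be checked
    // against the exact same expectations.
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    // Imagine this is the crufty routine slated for a rewrite.
    uint32_t legacy_checksum(const uint8_t* data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; ++i)
            sum = (sum << 1) ^ data[i]; // quirky, but it's the shipped behavior
        return sum;
    }

    int main() {
        const uint8_t input[] = {0x01, 0x02, 0x03};
        // Expected values captured from a run of the current code, not derived
        // from a spec -- the point is to pin down behavior, bugs and all.
        assert(legacy_checksum(input, sizeof(input)) == 3);
        assert(legacy_checksum(nullptr, 0) == 0);
        return 0;
    }

Once a rewritten version passes the same asserts, there's at least some evidence that step 2 didn't silently change anything.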
The problem is nontechnical management that doesn’t even understand why it’s important to revisit old code in this way instead of constantly coming out with new features. It is just seen as an unnecessary cost, even though the crufty state of the code has been slowing the whole team down. (Disclaimer: I have been in this exact situation. The founders ended up selling out and leaving.)
Could be various reasons: the original people aren't there any more, or, more likely, the amount of free time people have to work on Dosbox just isn't very big.
Then, when people do try to develop, the tech debt makes everything slow, even the work of paying off that same debt.
It's a term people use with hindsight to describe bad decision-making in programming that needs a lot of fixing and refactoring later on, and to insinuate that they themselves would have been more cautious.
It's very hard to avoid tech debt if you're under constant and severe deadline pressure. But I don't consider it at all hard to avoid otherwise. To me, avoiding debt can be summarized as not taking shortcuts. This shortcut would save time right now, but you know it's a hack that makes an abstraction leak? Don't do it. You know an approach is going to cause problems down the line, but you just can't think of a better way? Again, don't do it. See if something comes to mind later. Wrote and tested a bunch of code that seems to work, but there's an itch in the back of your mind saying you haven't thought the problem all the way through? Don't check it in. One might think this sort of approach is paralyzing or something, but in my experience it's extremely empowering over the lifetime of a software project.
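To make the "abstraction leak" kind of shortcut concrete, here's a small hypothetical C++ sketch (the class and its methods are invented for illustration):

    #include <cstddef>
    #include <vector>

    class Playlist {
        std::vector<int> track_ids_;
    public:
        // The shortcut: hand out the raw vector so a caller can "just" poke
        // at it directly. Saves ten minutes today, but every caller now
        // depends on the internal representation, freezing it forever.
        std::vector<int>& raw_tracks() { return track_ids_; }

        // The non-shortcut: express the operations callers actually need,
        // keeping the storage choice private and free to change later.
        void add_track(int id) { track_ids_.push_back(id); }
        std::size_t size() const { return track_ids_.size(); }
    };

    int main() {
        Playlist p;
        p.add_track(42);
        return p.size() == 1 ? 0 : 1;
    }

The leaky version works fine right now, which is exactly why it's tempting; the cost only shows up later, when the internals need to change.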
So true. Often technical debt is intentionally taken on as a calculated risk to get to market faster with the intention of fixing it later. Fixing it later rarely happens though. In my experience it either ends up being someone else's problem or you just live with it.
"The Night kernel is a 32-bit drop-in replacement for the original 16-bit kernel of the FreeDOS operating system. It uses linear memory addressing and operates in protected mode on the Intel x86 architecture. The typical user will retain compatibility with their DOS applications and gain protected mode abilities such as task switching between applications..."
It's in its infancy, but you can add the RSS feed[1] to follow the discussion.
Emulation's difficult, even for well-known platforms. Dosbox emulates a large variety of hardware and quirks in various DOS versions, and it runs on a ton of platforms, but it doesn't do any of those things absolutely perfectly, and there's room for improvement in its hardware support. Plus, compiler and environment changes need occasional maintenance.
Just this month I had to update some (very old) production code for a client, and when we pulled the project out of cold storage we discovered that the compiler wouldn't run on the Windows 64-bit command line.
DOSBox saved my bacon, and it wasn't the first time.
*Which is fine, because after all, Hacker News is all about the comments. I don't mind re-discussing fine pieces of software or topics like these every once in a while.
Not sure myself... It looks like the latest release (0.74) was nearly eight years ago. That's not to say it isn't an extremely useful and well-made tool; perhaps it just doesn't need a whole lot of updating.
My favorite use (I mean, aside from the more obvious uses, like running my GOG and Steam games, along with disks from my childhood) is compiling it with heavy debugging enabled and using it in conjunction with IDA to debug game issues.
It's actually got a fairly nice debug environment.
Before dosbox there were dosemu (which let you use the real hardware but was a pain to configure) and things like qemu (which could emulate any hardware but was even more of a pain to configure). You still had to install an OS into the VM. Dosbox just changed the rules of the emulation game. It was unbelievably good even before it supported 32-bit code.
I seem to remember using dosemu on early Linux on the console and being amazed at seeing my graphics card perform its POST BIOS run on demand. The first few times, I thought Linux had spontaneously rebooted, but it was just dosemu getting ready to run VESA graphics, IIRC. Or maybe I'm just misremembering?
How many different early-90s display adapters, audio adapters, and such does it emulate? It also means that you'd have to source the OS and drivers. If the goal is to run old DOS games, Dosbox is generally more convenient.
All that does is lie to the guest over firmware (device tree, ACPI, etc.). All of the memory accesses are still uniform as far as CPU emulation is concerned.
That is not actually true. Distributions are not consistent in such things even across a single distribution, and there is certainly no universal rule about only packaging the latest release versions of everything.
I love the fact that people are still making games for DOS. I've been playing a bit of this: http://www.doshaven.eu/game/ptakovina/ (from 2017!)