basically it's when a bunch of bad but seemingly innocuous design decisions accumulate and compound multiplicatively over time, to the point where any change to the code is more likely to break something than the new feature is worth
Or, as I put it at a previous job when complaining about the 40-minute test-suite runs that were bogging everyone down: "the road to a 40-minute test suite is paved with a couple of extra seconds here and there on every new commit."
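The arithmetic is mundane but relentless: two extra seconds per commit, over twelve hundred commits, is your forty minutes. pytest's built-in --durations=10 flag is the cheapest way to see where a suite's time actually goes; if you'd rather have the creep flagged as it happens, a small hook like this works (a minimal sketch assuming pytest; the half-second budget is an arbitrary number I picked for illustration):

```python
# conftest.py -- warn when an individual test blows its time budget.
import time

import pytest

SLOW_BUDGET = 0.5  # seconds per test; illustrative, tune to taste

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    start = time.monotonic()
    yield  # run the actual test
    elapsed = time.monotonic() - start
    if elapsed > SLOW_BUDGET:
        # Warn rather than fail, so nobody's commit is blocked but the
        # trend stays visible in the test output.
        item.warn(pytest.PytestWarning(
            f"{item.nodeid} took {elapsed:.2f}s (budget {SLOW_BUDGET}s)"
        ))
```

Warning instead of failing keeps the check advisory; the point is that the slow creep gets noticed commit by commit instead of years later.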
A few reasons. If you're an employee building a product for a company, management will always push for features, features, features, to keep the customers happy and to keep up with the competition. Time spent refactoring is an investment in the future, but management tends to be more shortsighted than that. By the time you really can't add anything more to the program, you're years down the line, and they've probably been considering pulling the plug anyhow.
In my own projects, it's often fun to keep adding things until I hit a roadblock, then spend a couple weeks reworking stuff. It's satisfying to add new features, but architectural maintenance feels like treading water.
So, I'd guess that the answer is somewhere between "outside factors dictate that there isn't time" and "we haven't decided to make the time, even though we could".
No; I was speaking more generally about some potential reasons that software wouldn't be kept in a low-debt state. The same concepts could apply to Dosbox with some remapping.
What company? The Dosbox project itself.
What management? The devs making overall decisions for the direction of the project.
What customers? Anyone requesting new features and fixes in the software.
What features? Built-in support for Munt, new networking support, support for more varieties of peripheral hardware...
It made sense to me to give a more general explanation, since this thread started off with a question about what technical debt is in the first place. I also did my best to address Dosbox's specific reasons by posting the interview with their devs and providing my own interpretation of it.
As others have hinted, the best path to better code is
1) have a good test suite and good coverage, so that any refactoring that breaks existing, expected functionality shows up immediately
2) spend the time to rethink the design of some piece, refactor it, deploy, and watch for unexpected bugs (see the sketch after this list)
3) repeat until the code is up to date
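To make steps 1 and 2 concrete, here's a minimal sketch (the function, its behavior, and the test cases are all invented for illustration, loosely shaped like a Dosbox-style config parser): pin down what the crufty code currently does with characterization tests, then refactor under their protection.

```python
import pytest

# Hypothetical legacy function -- imagine years of accumulated special
# cases inside. The goal is to freeze its observable behavior before
# touching it.
def parse_config_line(line):
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

# Characterization tests: they assert what the code *does*, not what
# we wish it did.
@pytest.mark.parametrize("line,expected", [
    ("cycles=3000", ("cycles", "3000")),
    ("  output = overlay ", ("output", "overlay")),
    # Even odd behavior on malformed input gets locked in, so a
    # refactor that changes it fails loudly instead of silently.
    ("noequalsign", ("noequalsign", "")),
])
def test_parse_config_line(line, expected):
    assert parse_config_line(line) == expected
```

The last case is the important one: behavior you suspect is wrong gets captured first, so changing it becomes a deliberate decision in step 2 rather than a refactoring accident.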
The problem is nontechnical management that doesn’t even understand why it’s important to revisit old code in this way instead of constantly coming out with new features. It is just seen as an unnecessary cost, even though the crufty state of the code has been slowing the whole team down. (Disclaimer: I have been in this exact situation. The founders ended up selling out and leaving.)
Could be various reasons: the original people aren't around anymore, or, more likely, the amount of free time people have to work on Dosbox just isn't very big.
Then, when people do try to develop, the tech debt makes everything slow, even the work of paying off that same debt.