Write templates or interfaces. Integrate existing code against those, but link the correct module implementation to a dedicated binary for each approach.
C++ templates and shared objects or Dagger multibinding in Java are good ways to do this.
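As a minimal sketch of the interface half of this (names like Renderer, LegacyRenderer, NewRenderer and CreateRenderer are hypothetical, not from any real engine): the rest of the program integrates against the interface, and each dedicated binary would link its own definition of the factory. Both factory bodies are shown in one file here only so it compiles standalone; in practice they would live in separate translation units, one per build target.

```cpp
// Minimal sketch, hypothetical names throughout.
#include <cstdio>
#include <memory>

struct Frame { int index; };

// Shared interface the rest of the code integrates against.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void Draw(const Frame& frame) = 0;
};

class LegacyRenderer : public Renderer {
public:
    void Draw(const Frame& frame) override {
        std::printf("legacy path, frame %d\n", frame.index);
    }
};

class NewRenderer : public Renderer {
public:
    void Draw(const Frame& frame) override {
        std::printf("new path, frame %d\n", frame.index);
    }
};

// Each dedicated binary links exactly one definition of this factory
// (e.g. legacy_main.cpp returns LegacyRenderer, new_main.cpp returns
// NewRenderer). Shown inline here for brevity.
std::unique_ptr<Renderer> CreateRenderer() {
    return std::make_unique<NewRenderer>();
}

int main() {
    auto renderer = CreateRenderer();
    renderer->Draw(Frame{1});
}
```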
He’s arguing for changing the behaviour via a console variable, not recompiling/linking at all:
> The difference between changing a console variable to get a different behavior versus running an old exe, let alone reverting code changes and rebuilding, is significant.
I’ve not tried his way, but he’s pretty explicit about what The Way is.
The context here is a game loop. A console variable, in Carmack's games, wouldn't require a restart, never mind rebuilding a module. Switching the variable for the renderer (or some subset thereof) would replace the renderer for the next frame, so the switching time between codebases is measured in milliseconds. That makes for easy visual diffs.
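For concreteness, a minimal sketch of that pattern (not Carmack's actual code: the cvar here is just a plain int, and r_useNewRenderer is a made-up name; a real engine would register it with its console so it can be flipped mid-session):

```cpp
#include <cstdio>

// Hypothetical console variable: 0 = old renderer, 1 = new renderer.
// In a real engine this would be adjustable from the in-game console
// while the game is running.
static int r_useNewRenderer = 0;

void DrawFrameOld(int frame) { std::printf("old renderer, frame %d\n", frame); }
void DrawFrameNew(int frame) { std::printf("new renderer, frame %d\n", frame); }

void RunFrame(int frame) {
    // Checked every frame, so flipping the cvar swaps code paths for
    // the very next frame -- no restart, no rebuild.
    if (r_useNewRenderer) {
        DrawFrameNew(frame);
    } else {
        DrawFrameOld(frame);
    }
}

int main() {
    RunFrame(1);
    r_useNewRenderer = 1;  // e.g. typed into the console mid-session
    RunFrame(2);
}
```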
At the expense of adding a hacky conditional to your execution path. Fine for a one-off, but not for any larger workload.
Instead, bind different modules to different binary build targets. If you're smart about your module size, compilation time is a non-issue. It will be way more maintainable. And by forcing yourself to do this, you force yourself to modularize your code in an extensible way instead of placing flag hacks everywhere.
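One way the per-target binding could look in C++, assuming a build system that defines USE_NEW_RENDERER only for the "new" binary target (the define and class names are hypothetical):

```cpp
#include <cstdio>

struct LegacyRenderer {
    void Draw(int frame) { std::printf("legacy, frame %d\n", frame); }
};

struct NewRenderer {
    void Draw(int frame) { std::printf("new, frame %d\n", frame); }
};

// Resolved at compile time per build target, so each binary contains
// only the code path it was built for -- no runtime flag in the hot loop.
#ifdef USE_NEW_RENDERER
using ActiveRenderer = NewRenderer;
#else
using ActiveRenderer = LegacyRenderer;
#endif

int main() {
    ActiveRenderer renderer;
    renderer.Draw(1);
}
```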
> I disagree then. No one does this anymore. The big tech companies certainly don't.
In a way they do. Game and engine development lend themselves to this especially well, but it happens in all software, particularly widely successful software that has to keep existing implementations available while iterating on new ones.
A big tech version might be old versus new Reddit running against the same data store to verify how things work, letting people switch over gradually and on demand via a URL switch. Digg famously did not do this and alienated everyone.
Or when Facebook or others roll out a new version of their API: the old versions keep running for a time, selected by a version switch, along with the apps and libraries built against them.
Or some A/B testing in terms of software flow or usability/presentation.
Or Unity, for instance: they kept their old GUI available while the new Unity UI was rolled out, allowing people to switch. Same with the move from the legacy animation system to Mecanim, and same with their particle systems, where they shipped two. They have to roll out all new features like this over time now that so much is built on the engine.
When you use Unity, an example of a parallel implementation might be which particle system you use at runtime: maybe both are integrated and you flip between the two to see what looks or works best. Or flipping between the legacy animation system and Mecanim. We have the ability to switch between UI libraries, particle systems, and animation libraries on the fly because of all the different games/implementations we have to support; it is needed, and those systems have to be baseline support across all apps/games. Same with utilities: they can flip between these systems at runtime to check differences.
Doing parallel implementations can lead to cleaner internals that are easier to plug into, and it can prevent the 'version 2' disease of hard-cutting over from legacy and ending up missing a bunch of features.
This is why I’m curious about when Carmack published this. TFA is dated 2018, but says it’s republishing Carmack’s post before it’s lost. The date could provide some interesting context.