> Desynchronisation is still possible; when that happens the simulation needs to backtrack (I think?)
Sounds more like client-side prediction to smooth things over than actual simulation desync. I have a hard time believing a deterministic game with such a large state was able to backtrack and resync the sim back to determinism. I did not think that lockstep RTS games would need client-side prediction (the indirect and long-term commands in RTS help hide latency), but I guess if your gameplay lends itself to high APMs then it becomes necessary.
Well, you could just have state snapshots taken at regular intervals and then verify that both sides' hashes agree. It's only a couple of thousand entities, so it's really not so bad. And you can probably ignore those that haven't deviated since the previous snapshot (which would also cut down the time a state reconstruction takes).
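A minimal sketch of that idea. The entity layout here (position, hitpoints, order) is hypothetical, not any actual game's format; both peers would compute this every N ticks and exchange just the digest:

```python
import hashlib
import struct

def entity_digest(ent):
    # Pack position, hitpoints and current order into a fixed little-endian layout
    x, y, hp, order = ent
    return struct.pack("<ffhB", x, y, hp, order)

def snapshot_hash(entities):
    # Deterministic iteration order matters: sort by entity id before hashing
    h = hashlib.sha1()
    for eid in sorted(entities):
        h.update(struct.pack("<I", eid))
        h.update(entity_digest(entities[eid]))
    return h.hexdigest()

# Two peers with identical state agree; any divergence changes the digest
world = {1: (10.0, 20.0, 45, 3), 2: (99.5, 12.0, 100, 0)}
print(snapshot_hash(world))
```

Exchanging only the 20-byte digest (instead of the state itself) is what makes this cheap enough to do over a modem.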
RTS games have had replay mechanisms at least as far back as StarCraft: Brood War, so a journal of player inputs is likely being recorded anyway.
Yes, the journal of player inputs, sure, but the intermediate states are a different matter. AoE2's state size is peanuts for a modern machine, but I would say that at the time it was quite significant, and it would have been too costly to store on the fly. I certainly did not dare try that in the RTS-like deterministic games I worked on (Commandos and Praetorians).
We're still talking about a few dozen kilobytes of data here: a dozen or so bytes of global state per player, up to 200 units per player with a few bytes of state each (position, order, action, action target, hitpoints), maybe a hundred projectiles, and on the order of 1000 static entities with just hitpoints.
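Rough arithmetic on those numbers (the per-entity byte counts are guesses for illustration, not AoE2's actual layout):

```python
# Back-of-envelope world-state size for a 2-player game
players = 2
global_state = players * 16        # ~a dozen bytes of global state per player
units = players * 200 * 16         # up to 200 units each, ~16 bytes of state
projectiles = 100 * 12             # position, velocity, type
statics = 1000 * 2                 # static entities with just hitpoints
total = global_state + units + projectiles + statics
print(total)  # 9632 bytes, i.e. under 10 KB
```

With these (generous) guesses it comes out even smaller than "a few dozen kilobytes".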
Gotta keep in mind that these games were written in languages without modern garbage collection, so they almost certainly stored entity information in arrays to avoid heap fragmentation and malloc costs.
A few dozen KB is far beyond what you can push over a modem in real time, for sure, but memcpy'ing a couple of kilobytes' worth of arrays was still plenty fast in the late '90s/early 2000s.
It's actually a lot more: an uncompressed world state from a recorded game is 1.6 MB. Compressed (AoE uses deflate) it's only 153 KB, but that's still a lot.
I think they compute checksums every once in a while over their world state, fog-of-war state, etc., and if these checksums don't match it desyncs. Then it creates an out-of-sync save, probably from just before the desync occurred.
Nah, the desync is when two floating-point operations do not produce the same outcome; the checksum detects when that butterfly has caused a measurable thunderstorm of diverging game states. That can happen fairly late, depending on what is hashed.
The strategy of occasional save-game storage and backtracking only works if the cause is rare and not deterministic.
I'd need to look into it again, but I think pretty much everything object-state-wise gets hashed.
Edit: The checksum for the player includes the content of each attribute of the player, the object state of each object owned by the player, the master object id of that object, the number of attributes they carry (which I think is, for example, the resources that villagers carry), and the world x/y/z position.
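Based on that description, the per-player checksum might be composed roughly like this. All the field names and the choice of hash are hypothetical, just to show the shape of it:

```python
import hashlib
import struct

def player_checksum(player):
    # Fold the fields described above into one digest, in a fixed order
    h = hashlib.md5()
    for attr in player["attributes"]:                     # e.g. stockpiled resources
        h.update(struct.pack("<f", attr))
    for obj in player["objects"]:
        h.update(struct.pack("<I", obj["master_id"]))     # master object id
        h.update(struct.pack("<B", obj["state"]))         # object state
        h.update(struct.pack("<B", len(obj["carried"])))  # attributes carried
        h.update(struct.pack("<fff", *obj["position"]))   # world x/y/z
    return h.hexdigest()

# Hypothetical player with one villager carrying wood
p = {"attributes": [200.0, 200.0, 100.0, 0.0],
     "objects": [{"master_id": 83, "state": 2,
                  "carried": [10.0], "position": (12.5, 0.0, 40.25)}]}
print(player_checksum(p))
```

Any single diverging field (say, a villager's x position off by one ULP after a floating-point mismatch) flips the digest, which is what makes this usable as a desync detector.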