How fighting games use delay-based and rollback netcode (2019) (infil.net)
297 points by Kinrany on Feb 28, 2021 | 106 comments



This piece was really interesting. The idea is that you can hide network delays by building a “fake” model of the other player that replicates what a real player would do, rolling back when this prediction fails. So you could train some neural nets on what players do in such situations and get a certain accuracy. And yet you can keep making this “fake” player better and better until it’s indistinguishable from fighting a human, and we arrive at a nice little paradox. These kinds of games and algorithms are really good proving grounds for AI. And I begin to get why the author is so passionate about this stuff and its possibilities.


Where did you get the thing about neural networks? I read your comment before reading the article and was really disappointed that the prediction "algorithm" (pioneered by GGPO in 2006 [0] and still used today) is literally "assume nothing changed", i.e. the opponent is still holding down the same keys as the last frame.

[0]: magazine article by the author of GGPO [pdf] https://drive.google.com/file/d/1cV0fY8e_SC1hIFF5E1rT8XRVRzP...
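For what it's worth, the "assume nothing changed" predictor really is that small. A minimal sketch in Python (hypothetical names, not GGPO's actual code):

```python
# Sketch of the "assume nothing changed" input predictor: when a remote
# frame hasn't arrived yet, reuse the last confirmed input; when the real
# input arrives, report whether a rollback is needed.

class InputPredictor:
    def __init__(self):
        self.last_confirmed = 0   # bitmask of buttons held on the last confirmed frame
        self.predictions = {}     # frame -> predicted bitmask

    def predict(self, frame):
        """Guess the remote input for a frame we haven't received yet."""
        self.predictions[frame] = self.last_confirmed
        return self.last_confirmed

    def confirm(self, frame, actual):
        """Real input arrived; return True if our guess was wrong (rollback needed)."""
        mispredicted = self.predictions.pop(frame, actual) != actual
        self.last_confirmed = actual
        return mispredicted
```

The whole "algorithm" is the single line in `predict` that returns `last_confirmed`; everything interesting lives in what the engine does when `confirm` returns `True`.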


You shouldn't be disappointed. "Nothing has changed" is overwhelmingly the correct answer.

It is incredibly jarring to assume a remote player takes an action, display them taking that action, then roll back when you realize they didn't. From your local perspective, it looks like they blocked for a few frames, which makes you assume they're going to block, then they flash back to being defenseless, and your attack weirdly goes through even though you anticipated that it did not.

Even if you can get your fancy neural net to figure out that the enemy is likely to block you - which is a feat worth writing some papers about - you're still going to be wrong about the frame on which they do it. Was their reaction time 160ms? 176ms? 182ms?

If you're right about the action they take and wrong about the frame, that's going to cause each action in the game to have weird timing. You anticipate a block, then it doesn't come so you roll it back; then wait, it actually has come, it was just late! The remote player flails around like they don't know what the hell they're doing, and it's not clear to you when you land your hit whether the timer started from when they first telegraphed their block or when the glitch occurred. Your punch appears to land at random.

And blocking is an insignificant action. What if you're playing something like DayZ and the neural net decides that some random other neutral player is likely to try to attack you, say because they happened to mouse over you quickly?

It looks like they just shot you for a few frames; weirdly, your health goes down and springs back up again, but you're not going to figure out that it's the netcode playing tricks on you. Instead you unload your magazine at the other player that's clearly trying to kill you.

And since you are actually shooting now, of course they're going to return fire. Your prediction algorithm just caused two peaceful players to fight to the death.

Just because the algorithm is simple doesn't mean it's possible to do better.


Yeah it probably is an optimal strategy (and certainly relative to return on investment).

I didn't mean disappointment that the tech hadn't advanced, I meant that my expectations were set really high by the grandparent comment ("Neural networks? In 2006? Surely not! But it must be something really fancy, judging by all these flowcharts!" [0]) and by how they kept hyping up the "prediction algorithm" for half the article, when it's just

1. take the data you already had

2. (there is no step 2)

[0]: https://drive.google.com/file/d/1cV0fY8e_SC1hIFF5E1rT8XRVRzP...


I think you're missing the point of the article (and the material you included). It's not that the method of inference is that interesting; it's the fact that the game is able to make use of that inference at all. Everything else about the algorithm is interesting: rollback, reconciliation, (de)synchronization, choice of delay, the separation of game logic from the rest of the game loop, etc. As the article details, it's extremely complex to do this right, to the point where many games just don't bother trying.

Think about the time scale under which this prediction is made: 60Hz. Even the best players do not change input at nearly that rate. So it's clear that the current value is going to be the best estimate for the next value. That realization doesn't even begin to solve the problem though!
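To make that surrounding machinery concrete, here's a toy sketch of the save/predict/rollback/resimulate cycle, with a stand-in arithmetic "simulation" (not any engine's real code). The property worth noting: because the simulation is deterministic, rolling back and resimulating with corrected inputs lands on exactly the state you'd get from simulating the true inputs directly.

```python
# Toy rollback sketch. `advance` stands in for a deterministic game
# simulation step; real engines snapshot full game state the same way.

def advance(state, local_input, remote_input):
    return state + local_input + 2 * remote_input

def simulate_with_rollback(local, predicted, actual):
    snapshots = [0]                  # state *before* each frame; initial state 0
    state = 0
    # optimistic pass: run ahead using predicted remote inputs
    for f in range(len(local)):
        state = advance(state, local[f], predicted[f])
        snapshots.append(state)
    # the true remote inputs arrive; find the first misprediction
    for f in range(len(local)):
        if predicted[f] != actual[f]:
            state = snapshots[f]             # roll back to before frame f
            for g in range(f, len(local)):   # resimulate up to the present
                state = advance(state, local[g], actual[g])
            break
    return state
```

For example, `simulate_with_rollback([1, 1, 1], [0, 0, 0], [0, 1, 0])` ends in the same state as simulating the true remote inputs `[0, 1, 0]` from the start.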


Oh right, that makes sense. I was actually going to write a similar reply to the parent comment but got distracted.


This is a lovely illustration of the fact that "not very powerful but highly predictable" is often far better than "powerful but unpredictable" when it comes to tools.


As in this article, the base assumption is that a lot of the lag happens at moments where the input doesn’t matter that much. In fighting games, when characters are moving left and right or locked in a motion, in FPS when just moving around or shooting at hard to hit targets.

Predicting right is only important in short bursts at critical moments, and those are also the hardest-to-predict and least forgiving moments, so I'd assume being conservative is the more cost-effective and pragmatic choice.


Not sure you would want the prediction to take non-dumb actions. You need to maintain the hypothesis of least surprise for the local player; otherwise the local player could start to act based on the wrongly predicted actions of the remote player, and that's even worse than nothing.

For instance, say the local player tries to hit the remote one. If the prediction for the remote player is to evade, the local player can choose to chase them. However, if you now roll back and the remote player did not evade but charged in, the local player has been fooled.

Also, don't forget that in these games, input could be polled every 1ms. So a player pressing down "left" for 1s is in fact considered to have 1000 down inputs on left. Since players don't change inputs very fast, just replicating the last input is in fact 99.9% accurate.


> Since players don't change inputs very fast, just replicating the last input is in fact 99.9% accurate.

Sadly, fighting games, and to some extent FPSes, casually break that assumption. 1s is an eternity in a close fight, and players don’t just react; they also read ahead and align inputs based on the situation they expect, regardless of the speed of the game.

Commands will be entered in as low as one to three frames depending on the players, and it will be common to train to do some combos to input them faster. Basically “shooting twice” could actually be “shoot once, go left, go right, shoot again” if doing that has any advantage (canceling the shooting cooldown time for instance). And players don’t do these consistently, or succeed every time.

It’s really complicated :)


>1s is an eternity in a close fight, and players don’t just react, they also read ahead and align inputs based on the situation they expect, regardless of the speed of the game.

This was a pretty obvious result when LinusTechTips did their different frame rate testing in first person shooter games. Higher frame rate benefited worse players more than skilled players. My assumption is that skilled players have learned the pattern. Kind of like martial arts - you practice a flow of moves so that you can execute them without having to think about the next move. (Perhaps this is also how people type very quickly.)


> Sadly, fighting games, and to the same extent FPS casually break that assumption. 1s is an eternity in a close fight.

Sure, but that's nothing compared to the speed of just polling inputs.

I would assume a pro player in a fighting game to have what, say 180 APM at peak ?

That's 3 actions per second, so if we assume a uniform holding time and a 60 FPS game that's 1 input change every 20 polled inputs. Assuming repeated inputs does seem like a good strategy in this situation.

Another way of seeing it: if a player with 180 APM realistically can only change inputs every 333ms, then with a remote input lag of 25ms (50ms ping / 2) there is just a 1/13 chance that an input change occurs in this time slice.
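The arithmetic, spelled out under the same assumptions (180 APM, 50ms round-trip ping):

```python
# Back-of-envelope: chance that an input change falls inside the one-way
# network delay window, assuming a uniform 180 actions per minute.
apm = 180
ms_per_change = 60_000 / apm          # one input change every ~333 ms
one_way_lag_ms = 25                   # half of a 50 ms round-trip ping
p_change_in_window = one_way_lag_ms / ms_per_change   # ~0.075, about 1 in 13
```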


I actually agree that assuming the input didn’t change is the most pragmatic course of action, as even if the input changed I’d assume there’s just no way to efficiently anticipate it at this point. With that strategy the best-case scenario is optimal, and the worst-case scenario is no worse than for any other option.

On the 1-change-every-20-polls calculation: it’s true locally, but for a ping of 200ms, for instance, 333ms is ‘only’ about 3 times the one-way trip time. I think momentarily lagging by 3 times the usual latency happens often enough, and of course the bar for losing an actual action due to lag is yet lower for intercontinental games.


I don't think those predictions are that sophisticated, maybe there is some Bayesian probability, but I can't see a neural net fitting inside a single frame. However, lots of progress has been made in using neural nets for training fully autonomous NPCs:

https://cns.utexas.edu/news/game-bots-pass-turing-test

"In order to most convincingly mimic as much of the range of human behavior as possible, the team takes a two-pronged approach. Some behavior is modeled directly on previously observed human behavior, while the central battle behaviors are developed through a process called neuroevolution, which runs artificially intelligent neural networks through a survival-of-the-fittest gauntlet that is modeled on the biological process of evolution."


Why would you bother training neural nets? They already found a solution that can be computed in literally zero CPU time and works for >90% of cases (their theoretical model was an above-average active player moving 5 times in a second, which is 5 frames of input you can't predict as being the same as the previous, i.e. about 8% of the time).

Why would you waste time trying to shove neural nets into a solution which has such amazing properties? It really terrifies me that that's the first place you went to.


You also have the issue of performance. If you have a NN churning every frame to predict what your remote player will do, it adds even more load to that 16ms loop that's calculating and rendering everything else. At best maybe some very basic ML might help, but "assume no input changed" seems to be the best guess, from the sound of the empirical testing in the article.


Makes me even more curious about Stadia’s “button anticipation” code


A human can learn what the AI will think they'll do and act differently to maximize the reality dissonance for the other player.

This is not fundamentally different from what we already do in these games, though, which is to choose tactics that work to your advantage given how the game deals with lag.

Client-side hit detection? Be the aggressor, do the moving into view, and get a milliseconds advantage to start shooting.


Relevant: GGPO, the age-old gold standard of rollback netcode, was recently open-sourced!

https://github.com/pond3r/ggpo

I haven't used it myself yet - mostly because I'd want Haskell bindings first hah


If your game state is entirely immutable, you're actually most of the way there.

I built a rollback multiplayer system in C#, and most of the work goes towards having classes that are mutable, but that remember their previous state so they can be rewound to any frame in the past second or so.

This involves enforcing invariants, persistent collection types, generating code to efficiently walk your entire game state, etc.

In Haskell, you just have a list of game states at the end of every frame, pick what you want and go.

Of course, you're not going to get C# performance this way, but you should still be able to build something pretty advanced if you give it a dedicated CPU core.
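The "list of past states" idea translates to any language with cheap immutable values. A Python sketch, with tuples standing in for a persistent game state (illustrative, not the commenter's actual C# system):

```python
# Rollback via immutable history: each frame appends a new state, and a
# rollback is just truncating the list and resimulating.

def step(state, inputs):
    x, y = state
    dx, dy = inputs
    return (x + dx, y + dy)          # returns a fresh state; never mutates

history = [(0, 0)]
for inputs in [(1, 0), (0, 1), (1, 1)]:
    history.append(step(history[-1], inputs))

# a correction for frame 1 arrives: roll back by slicing, then
# resimulate with the corrected inputs
history = history[:2]
for inputs in [(0, 0), (1, 1)]:
    history.append(step(history[-1], inputs))
```

Since `step` never mutates, old entries in `history` stay valid forever; the slice is the entire "rewind" machinery.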

There are over a thousand people at this moment playing my indie web games built with rollback networking - if anybody has any questions, happy to answer them!


Did you follow a pattern for recording and tracking those class states? I stick to the business realm of our industry, and I've used a pattern called event sourcing that sounds kind of similar. Every update to an entity is an event, and you can rewind/fast-forward any entity to a moment in time. It consumes a lot of memory/DB space, and if you don't snapshot occasionally it's time-consuming to build the events into a model, so I'm curious what you use. Might give me some ideas.


Event sourcing is exactly what this is. The game state is built from a combination of initial setup parameters (random seed, level etc), and a list of player inputs.

I snapshot after every recent frame. The snapshots are stored in the game entities themselves via generated code. Each user accessible entity has a bunch of shadow slots it can copy from or write to. Then a higher level system sends each entity commands such as "store your state in slot 5" or "copy slot 3 over your current state".

Old snapshots aren't useful and are discarded. The server itself runs a few seconds behind the client states, does no rewinding, and any player that falls behind that will receive a fresh copy of the server's state.


Haha! That's exciting to see something like that shared across industries. I use it for medical health records to ensure we track every single change and know when something changed. Pairs really nicely with dynamodb in aws.


Could you talk a little about that or point to something that details the overall strategy? I'm considering a similar setup for event sourcing in AWS and DynamoDb with change streams crossed my mind.


We use event sourcing in a business application.

Event sourcing, at its core, is making sure your source of truth is a log of events and your app state is derived from that.

Beyond those constraints, there are many variations of how it can be done, each with very different tradeoffs.
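In miniature (an illustration of the idea, not anyone's production schema), the log is the source of truth and state is a pure fold over it:

```python
# Event sourcing in miniature: state is derived by replaying the log.

def apply(state, event):
    kind, amount = event
    if kind == "deposit":
        return state + amount
    if kind == "withdraw":
        return state - amount
    return state                     # unknown events are ignored

log = [("deposit", 100), ("withdraw", 30), ("deposit", 5)]

balance = 0
for event in log:
    balance = apply(balance, event)
# replaying the same log from scratch always yields the same balance
```

Snapshots, as mentioned in the thread, are just a cached fold-so-far so you don't have to replay from the beginning every time.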


Sure, if you don't mind I'll get back to you tomorrow if that's OK.


Hope you don't mind if I listen in - fascinated by this area too :-)


I've actually used Datomic to do backend development before, and it's really interesting to contrast two frameworks that are built around very similar ideas, but with vast differences in requirements and implementation.


Around the time GGPO's source was published, I wrote a very, very simple fighting game to test how to use it. Maybe of interest to those in here: https://gitlab.com/DixieDev/ggpo_example/-/blob/master/main....


This is a great example! Thanks for sharing.


Super Smash Bros. Melee is having a bit of an online renaissance due to a combination of the COVID-19 pandemic and the recent implementation of rollback netcode by a community member who quit his job.


The fact that Fizzi was able to implement rollback netcode for a game by directly editing the assembly code, without any access to the game's source code, is insane. The delay-based netcode for the newest Smash game, Smash Ultimate (2018), is so bad that it went from being the biggest fighting game of all time in terms of competitive playerbase to basically having no scene at all once COVID killed LAN tournaments. No one wants to play Ultimate online. I wonder if this comparison is what caused Nintendo to cease-and-desist an online Melee tournament that intended to use rollback.


> I wonder if this comparison is what caused Nintendo to cease-and-desist an online Melee tournament that intended to use rollback.

No, Nintendo famously doesn't bother to understand or pay attention to details like this. Their higher-ups can't be embarrassed because they barely understand the very concept of networking, let alone how it affects their own online multiplayer, let alone the distinction between delay-based and rollback, let alone how such a distinction might manifest in a foreign tournament on the far side of the world. Nintendo's lawyers, on the other hand, are predictable: advertise anything about a modded game of theirs and the C&Ds come out. It's the same reason they went after Project M.


But they really aren't that predictable. There have been other events after The Big House that didn't get a nastygram. Who knows why Pokémon Showdown is allowed to exist. The Melee community is pretty crazy, though; I would imagine if Nintendo keeps coming at them they might just decompile the whole game and move to a new version sanitized of Nintendo IP.


Was it done in-game? I figured it was done with the Dolphin (Gamecube) emulator, the same way GGPO was first implemented for arcade emulators.


I'm sure a lot of the work is done through the Dolphin fork, but a lot of code also has to be injected into the Melee ISO as well, which is why Project Slippi doesn't allow rollback for all Dolphin games (sorry, Project M).


He's got an interview somewhere where he notes the trick comes down to the fact that lightning mode exists in Melee.

It's all fun stuff - the dev discord is also open for people who enjoy this kind of stuff, some really knowledgeable people in there.


The rollback is truly something else. Although - and I may be crazy - I swear it isn't quite as good as playing local people with delay-based netplay (such that the network delay can "hide" behind the inherent lag of the Gamecube/game itself)

But outside of that edge case, it's outstanding.


The fact that they were able to optimize two frames of input lag away to make the online gameplay have the same latency as offline is itself incredible.

As far as I know, even other rollback-equipped fighting games still add a little bit of latency in order to play online without requiring constant rollbacks.

That said, Melee is also a game that is very hard on the rollback, because movement and attack startup are far faster than in many fighting games.

Dashdancing with rollback leads to some annoying camera judder.


I play Fox and worry that I jump all over the place.

I play Melee on CRT with my wife all the time. Fox vs Sheik.

Rollback sucks in comparison. Sheik teleports all over the place.


Depends on the ping. I would say once you start pushing 80 you can definitely feel it. But under 60 ping, rollback feels very good if you are playing on a good monitor.


AFAIK they didn't do anything special to save two frames; it's just that the CRT setup (which is what's used on LAN) has 2 frames of lag.


CRTs don’t traditionally introduce any latency. It’s digital displays like LCDs that use buffers to pre process images and that add latency.


Melee itself has a frame (or two?) of lag due to poor input processing or something.


Yes, we know. And melee is almost always played on CRTs locally.


Melee has 3 frames of lag on a CRT, none of which come from the CRT.

They managed to pare it down to only one frame.


In a way, Melee shows that a couple of artificial frames of lag can be beneficial and allow for seamless online play on par with LAN, since the game is already one of the fastest and most reaction-oriented out there.


Prediction code for rollback is somewhat akin to branch prediction, in that the dumbest solution works surprisingly well but there are incremental efficiency gains to be had.

I wonder if any fighting games have thought to train a neural network per player to try and predict the player's actions N frames ahead. The neural nets could be used for smoother netcode but if the accuracy got high enough, they could, eg: allow for play after one player disconnects, or used to estimate ELO by having the neural nets play each other before the match or be AIs you could play against in offline mode.


You probably don't want to do this. Players will get quite reasonably upset if the AI predicts that the opponent will use an attack, so on their screen they hit the opponent out of the attack, then a rollback occurs and the opponent has actually blocked.

Some games like Killer Instinct have AIs that learn to play like a certain player. It's pretty cool!


Could be accounted for by having different cost functions for each type of misprediction and heavily penalizing the ones that decrease enjoyment in the game.


That's an amazing idea. I wonder how long a player would have to play before you could train a neural net to play like them.

In a single-player game you could also create an enemy NPC that uses that same neural net, for sort of a "Dark Link" effect where you have to play against yourself. Would be awesome for chess also.

Lots of possibilities.


> I wonder if any fighting games have thought to train a neural network per player to try and predict the player's actions N frames ahead.

The entire point of playing a fighting game is to attempt to solve this problem. A good player, by necessity, can't be accurately predicted; if they could, they'd be a bad player.


A good player can't be accurately predicted by a human


First, a good player can't be accurately predicted at all; the conclusion from game theory is direct and clear. This is a case where a strategy involving picking moves at random is superior to any deterministic strategy.

Second, your rebuttal is not especially good support for the idea that we should be trying to solve the problem with a technology specifically designed to imitate humans.


How good are human players, by that metric?


That's a fair question. I know of related research showing that chimpanzees are much better at achieving the correct distribution of strategies in asymmetrical-payoff games than humans are. The obvious implication is that a typical human isn't that good at being unpredictable.

The distribution of people who enjoy playing fighting games will probably look somewhat different, though.


There's only a few key moments where players need to be unpredictable to win a game. Almost all the rest of the time they are executing predictable consequences of those unpredictable choices.

e.g., imagine a player running to a ledge spanning a gap. The "naive" interpolation would be that they continue running, fall off the ledge, and die. A smarter system would realize that almost all the times they've run to the edge of a ledge, they've jumped, and the AI could jump for you and then later confirm that the prediction was correct. They could even jump at the median of all of your previous jumping choices and then lerp your position over time so you land at the correct point based on your actual jump.
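A sketch of that correction step (hypothetical; a real engine would blend full position vectors and pick the blend duration carefully): once the true jump arrives, ease the displayed position toward it over a few frames rather than snapping.

```python
# Ease a mispredicted displayed position toward the true one over a
# handful of frames, ending exactly on the true position.

def lerp(a, b, t):
    return a + (b - a) * t

def blend_positions(displayed, true_pos, frames=6):
    """Yield one corrected position per frame, ending exactly on true_pos."""
    for i in range(1, frames + 1):
        yield lerp(displayed, true_pos, i / frames)
```

The last value is always exactly `true_pos` (since `t` reaches 1.0), so the local and remote views reconverge.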


> They could even jump at the median of all of your previous jumping choices and then lerp your position over time so you land at the correct point based on your actual jump.

I assume the interpolation relates to something displayed on the screen? The idea makes me kind of uncomfortable, because it seems like it would confuse players by causing identical jumps to display different results. If you only learn about jumping by watching the departure point and the landing point, fine, but if part of how you get used to jumping is by watching the animation, this sounds like it could make things a lot harder.

(If the player sees position data calculated locally, and the interpolation is just a process for bringing the remote idea of where the player is into line with the local idea of where he is, that sounds much better.)


This is intended for viewing some other (remote) player's jump (during a disconnect). It wouldn't touch your own (local) jump.

It's the equivalent of letting an AI take over the player when the player drops out, with the AI intended to replicate the dropped-player's playstyle until he rejoins. In short enough time-spans (disconnect-duration) you have some hope of being exactly correct.

And if you were 100% correct at predicting the remote player all of the time, you wouldn't even need the other player: you could just run the AI, stay offline, and "pretend" there's another player.


There are many situations where a good player can be predicted because there is a clear best option (or a good option that advance knowledge won't invalidate).


Definitely possible, but I doubt it could be both trained and performant within a single frame, which is what would be required. The other option would be to save all your replays, have an option to train it on a server over several days, and then share your AI with others who can download it, though probably just as a fancy training AI, which can still be useful.


Well over ten years ago I read a research report that claimed to support many dozens of players in Quake III using a variety of techniques, including replacing linear dead reckoning with a traditional AI for each player.


It's amazing how many comments suggest training a neural net for prediction. Additional fitting through Elo ratings is suggested too, I believe, as is dynamic tuning.

To me, having played games with rollback, this takes away one of the key benefits of simplistic rollback: predictability and consistency. Sure, it's not the same as offline, but your brain gets pretty good at understanding when it happens and even at predicting it. If an opponent is stationary a few more frames than feels right, or keeps moving, you know to correct for actions they've already sent, and it's a good estimation most of the time.

Overfitting with neural nets IMO removes this consistency without providing much benefit; plus, if you've got a strong neural net, you might as well train locally against it first.


This is an amazing writeup, and something I've often wondered about!

For those who want to skip to the code, the article links to these resources:

https://www.ggpo.net/

https://drive.google.com/file/d/1nRa3cRBQmKj0-SEyrT_1VNOkPOJ... - a writeup on the GGPO library including code samples of how it works

For an example of what GGPO feels like when implemented in a game from day one, this Reddit thread illuminates what's possible: https://www.reddit.com/r/Games/comments/lolukg/guilty_gear_s... - "I was playing matches with a friend in Denmark from the east coast of the US without any issues whatsoever. It's like black magic. If this kind of netcode is what we have to look forward to with GGS, I literally won't play any other fighting game that doesn't have it from here on."

Taking a step back from gaming applications though: I'd love to see a convergence between those researching CRDTs and those who have implemented rollback netcode in the wild. This article from September 2020, by one of the contributors to Google Wave back in the day, is a great overview: https://josephg.com/blog/crdts-are-the-future/

There's a fascinating confluence of talents needed to generally solve the problem of "how do you keep people in a flow-state when collaborating in the presence of network delay" - which is more applicable now than ever. There's psychology, user experience design, user interface design, deep domain knowledge, fundamental CRDT research, the types of artistry that go into really good gaming netcode, people who have implemented undo stacks in massive desktop applications and know all the warts that arise there, security researchers, distributed systems researchers (the latter two because this will enable decentralized applications in a huge way)... All these people will come together in the coming years to make computing seemingly defy the laws of physics. It's an exciting time to be a software engineer.


Here's another great video on the subject:

8 Frames in 16ms: Rollback Networking in Mortal Kombat and Injustice 2 https://www.youtube.com/watch?v=7jb0FOcImdg

One of my favorite topics, partially because there's not that much literature on the subject but it's so important


Thanks for the share. Very interesting video!


If you prefer the same info in a video format, here's an ~8 min video by the owner of a lan center in Korea.

https://www.youtube.com/watch?v=0NLe4IpdS1w


I was linked here by a friend. There's a lot of thoughts in the discussion about how and if input prediction could be made more sophisticated, maybe using machine learning.

This is actually the topic of a degree project I've been working on for the past year, and I hope to finalize the report next month and publish the code on GitHub soon. The TL;DR is that it seems to take a lot of additional complexity for the improvement you get, and it is currently unknown how "false positive" predictions (i.e. when the model predicts a new button press that doesn't happen) affect the user experience (though intuitively it seems like it would be worse than the false negatives we currently experience). In other words, we don't know if it would actually feel any better to play even if, say, the prediction accuracy is slightly higher. That said, this is only a first attempt, and who knows how much it could be improved.

I'll probably post more about it as @zeknife on twitter next month.


I'm glad this has been done more recently, and even expanded to other genres, like the upcoming Knockout City, which will apparently feature some sort of rollback netcode [1]. I've been playing Lethal League Blaze, and playing against people with 80 to 150 ping is possible and doesn't feel horrible, which is wild, since in other genres that rely on dedicated servers to be somewhat decent, like CS:GO or Valorant, anything above 50 or 60 ping (to the server, not the other players) is pretty jarring to experience.

[1]https://www.nintendoenthusiast.com/knockout-city-is-the-wild... (only mentioned in one sentence)


The article didn’t mention conflicts with the rollback approach, but aren’t there cases where a history rewrite causes the local player’s past input to be illegal?

For example, the game rules state a player can only make a move when the opponent is in state A. The local player sees a prediction of the remote player in state A and executes the move, but really the remote player was in state B. When the rollback happens, there is an illegal game state: the local player executing the move against an opponent in state B.

The situation is similar to a write-write conflict in a multi-leader DB.


I think the inputs should be sent at a level independent of context: "player pressed SPACE", not "player jumped".
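That resolves the illegal-state worry: if the log contains raw inputs, a rollback just re-runs the rules, and an input that turns out to be illegal against the opponent's true state simply has no effect. A toy sketch (hypothetical state names):

```python
# Raw inputs, not derived actions: legality is re-decided on every
# (re)simulation, so a rollback can never bake in an illegal move.

def step(player_state, raw_input, opponent_state):
    if raw_input == "SPACE" and opponent_state == "A":
        return "attacking"           # the move is legal only against state A
    return player_state              # otherwise the press simply does nothing
```

Resimulating the same SPACE press against the opponent's true state B leaves the player idle instead of producing an illegal move.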


It sounds like rollback could be used to cheat by artificially delaying your outgoing network traffic: the other player has less time to respond. But I doubt I’m the first to get that idea.


Not rolling back can be used to cheat too, by artificially delaying traffic while you're getting comboed to throw people's timing off.


If your input is delayed then you have less time to respond too.


Does that article really end with:

> Good netcode matters, period. So let’s talk about it.

Because at that point I didn't read anything about netcode, except how important it is.


There are more pages, try clicking the next button at the bottom.


Allow 3rd party resources, it has visual aids and buttons.


Does anyone know if rollback netcode works well in a 20+ player shooter game? I'm curious because, with the current craze of battle royales, the whole re-simulation of multiple frames within the window of a single frame seems really taxing.

To put it less subtly, games like Apex Legends are known for really bad server performance and latency, among other things.


I'm a pretty avid Call of Duty fan and have been playing their battle royale, Warzone, a lot lately.

Prior to this article, I've suspected there was some sort of predictive algorithm that helps make things smooth overall. I've noticed on a few occasions, I'll get shot by someone pre-firing a corner I haven't gone around yet (like sprinting to a corner to make a play). Several times, the kill cam has shown me fully around the corner for the other player.

I also have poor latency (my ISP routes poorly) and playing with friends that are geographically far - with servers even further away from me (though relatively close to them). It's a bummer that it happens, but I'm up to 200ms behind realtime. I suspect the game is predicting my movements - some of which put me in very bad positions, unintentionally.


Most FPS games suffer from “peekers advantage” due to the client side simulation of the player moving. Basically they can pop out or move around a corner and shoot before the movement has time to replicate from the server to you. The worse your latency the more advantage another player has.


I didn't know the term rollback before, but it seems to be exactly the same thing as lag compensation, which has been the standard in shooter games since Quake. These are usually not peer-to-peer, but use a central server instead; the server retroactively applies inputs from all players who are not dead (within a capped latency window, for obvious anti-cheating reasons), though some games don't have the latter restriction, which lets players trade kills even with instantaneous hitscan.

It results in the usual problems (peeker's advantage and being teleported backwards when a high-ping player kills you while you are moving) though it is not obvious how you can work around these.
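A rough sketch of the lag compensation described above (simplified; names and numbers are illustrative): the server keeps a short history of player positions and rewinds to what the shooter saw before testing a shot.

```python
# Server-side lag compensation sketch, assuming a fixed 60 Hz tick.
from collections import deque

HISTORY_FRAMES = 60  # one second of history at 60 Hz

class PlayerHistory:
    def __init__(self):
        # Ring buffer of (frame, position) snapshots.
        self.snapshots = deque(maxlen=HISTORY_FRAMES)

    def record(self, frame, position):
        self.snapshots.append((frame, position))

    def position_at(self, frame):
        # Most recent snapshot at or before `frame`.
        best = self.snapshots[0][1]
        for f, pos in self.snapshots:
            if f <= frame:
                best = pos
        return best

def resolve_hitscan(server_frame, shooter_latency_frames, target, aim_pos):
    # Rewind the target to where the shooter saw them, capped so a
    # high-ping client can't shoot arbitrarily far into the past.
    rewind = min(shooter_latency_frames, HISTORY_FRAMES - 1)
    past_pos = target.position_at(server_frame - rewind)
    return past_pos == aim_pos  # stand-in for a real hitbox test
```

The cap on the rewind window is the anti-cheating restriction mentioned above: without it, a client could delay packets and claim hits against long-gone positions.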


There's generally not all that much point to rollback-style netcode in first person shooters, because you can just run asynchronously and do client-side prediction. If the prediction was wrong, it doesn't matter that much: you might have missed a shot and they're not quite where you expected they were. In fighting games, that's completely infeasible, because each player's actions are completely determined by the previous ones: either I hit you, and I'm going to continue into a combo that's very timing specific, comparatively, or you blocked the hit, and I'm doing something completely different.
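The fighting-game rollback loop the parent alludes to can be sketched like this, assuming a fully deterministic `step(state, inputs)` function (everything here is illustrative, not GGPO's actual API): save state every frame, predict the remote input as "nothing changed", and when the real input arrives late and differs, restore and resimulate.

```python
# Toy rollback session for two players, "p1" local and "p2" remote.
import copy

def step(state, inputs):
    # Deterministic game update; here we just accumulate inputs.
    for player, pressed in inputs.items():
        state[player] += pressed
    return state

class RollbackSession:
    def __init__(self, initial_state):
        self.states = {0: copy.deepcopy(initial_state)}  # frame -> state
        self.inputs = {}  # frame -> {player: input}
        self.frame = 0

    def advance(self, local_input, predicted_remote):
        # Simulate immediately using a predicted remote input.
        self.inputs[self.frame] = {"p1": local_input, "p2": predicted_remote}
        new_state = step(copy.deepcopy(self.states[self.frame]),
                         self.inputs[self.frame])
        self.frame += 1
        self.states[self.frame] = new_state

    def confirm_remote(self, frame, actual_remote):
        # A late remote input arrived; if the prediction was wrong,
        # restore the saved state and resimulate every frame since.
        if self.inputs[frame]["p2"] == actual_remote:
            return
        self.inputs[frame]["p2"] = actual_remote
        state = copy.deepcopy(self.states[frame])
        for f in range(frame, self.frame):
            state = step(state, self.inputs[f])
            self.states[f + 1] = copy.deepcopy(state)
```

Because `step` is deterministic, resimulating with corrected inputs always converges both players to the same state.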


Overwatch uses a rollback model but it’s not inherently or obviously better than other games.


It uses fairly standard netcode where the state on the server is the only correct one. Battlefield 4, for example, prefers the client's state over the server's in hit registration.


Yes but the game is deterministic and trades inputs which are rewound and resimulated. I don’t think the interesting part of rollback net code in fighting games is that it’s usually peer-to-peer. There’s a GDC talk on how it works here:

https://www.youtube.com/watch?v=W3aieHjyNvw


I know how it works. The only major difference for FPS is that there are multiple inputs inside a frame which have to be precisely timed. All the prediction models either deceive players into a false reaction or fall back to the same "keep doing the previous action".


What you described is how sensitive a fighting game is, which needs to run at 60 FPS because moves are defined within that limit by startup frames, active frames, and recovery on block vs. whiff. The only FPSs that come close to FG levels of reaction/inputs are Quake, Unreal, and maybe CSGO.


All these comments and this article rely on things that will actively break the user experience. Why not just limit players to 'good' connections and use somewhat simple netcode? Having another player 'fast forward' with variable animations sounds like a recipe for frustration.


Does anyone know how Fall Guys' netcode was implemented? It's very smooth with 100 participants.


I don't know anything about their netcode, but I suspect this is more about the fact that the game is less dependent on precise timing than something like a shooter or Smash.


Based on Fall Guys' hiring page, which asked for experience with Photon, it's probably this: https://www.photonengine.com/en/pun


Could be using Photon but I don't think Photon solves this particular problem as far as I know. Did you see something in particular that mentions that as a feature?


Fall Guys doesn't heavily use server authority, which is why various hacks exist for it.


60*


I didn’t understand what the author meant by “fighting game” until the end of the first page. Maybe a definition of that term would have helped those of us who aren’t so familiar with different types of gameplay.


The author of any piece should be permitted a certain set of assumptions about their audience.

I think as long as a term has a relatively unambiguous Google result (which is true in this case), it’s fair game to be used in public texts.


The arguments the author made make no sense without some sort of a baseline though. 'The majority always play online' for instance points to a fairly tight definition, and may even be tautological (No True fighting game is designed for offline play).


The author isn't writing for a hypothetical future HN audience. They're writing for their blog audience who, presumably, 99.9% know what it means.


Every fighting game is designed for offline play, and has been for 25+ years. But that doesn't mean that's where the majority of players are, and that's the point the author is making.


The issue with this is if you don't know what a fighting game is, explaining the genre would take up a lot of space in an article targeted at gamers that know the genre. Sure they could have opened up with "A fighting game, like Street Fighter or Mortal Kombat", but you might not know what those are either. This is like an article about React getting posted and complaining that you don't know what a UI or Javascript is. At some point Google is probably better at filling gaps in assumed information than the article itself.


This "Infil" dude just stole the entirety of the article from Ars Technica (https://arstechnica.com/gaming/2019/10/explaining-how-fighti...)


> This article has been cross-posted on Ars Technica. - The infil site

> Ricky "Infil" Pusch is a long-time fighting game fan and content creator. He wrote The Complete Killer Instinct Guide, an interactive and comprehensive website for learning about Killer Instinct. This article was originally published there. - ArsTechnica

Both sites clearly reference where the original source came from. Nothing has been stolen.


Thanks. Don't know how I missed the reference to Ars Technica in the article.


> Ricky "Infil" Pusch is a long-time fighting game fan and content creator.


Turns out one plus one can be one! https://youtu.be/rOOmQEsdaxg


This is sarcasm, right?



