Then I will make my plea a lot more urgent:

Please do not implement protocols on the internet which respond to congestion by increasing the amount of traffic they send, if there is any chance that they might be used by a large number of people. This is poor internet citizenship and causes real problems. It means your code is a DoS attack against the internet transit links, which can be triggered by anybody capable of inducing low levels of packet loss, and the only effective mechanism to stop it is for ISPs to block the traffic entirely.

Packet loss is how the internet signals that you're sending more traffic than it has capacity to forward. Internet protocols need to respond politely to this signal.




Not in this case - packets can be dropped along the route and the game player doesn't care, as long as his connection is good enough to play. When it gets really bad the player will likely quit and the packets will stop.

If it is really desired you could implement throttling according to packet loss, but not in the way that TCP does it - by buffering and waiting - instead you'd just send packets every N frames. You can't do that if you're just using TCP since you don't know when packets are dropped.
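
Roughly something like this, in C - completely illustrative, the loss thresholds, the 16-frame cap and the one-second window are made up rather than taken from any shipping engine:

    /* Hypothetical loss-aware throttle: send one update every N frames,
       and grow N when the acks coming back show packet loss.            */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t sent;             /* packets sent this measurement window      */
        uint32_t acked;            /* packets the peer acknowledged this window */
        uint32_t frames_per_send;  /* current N: send one update every N frames */
    } Throttle;

    /* Call once per measurement window (say, once a second). */
    void throttle_update(Throttle *t)
    {
        float loss = t->sent ? 1.0f - (float)t->acked / (float)t->sent : 0.0f;

        if (loss > 0.05f) {
            /* Back off multiplicatively, up to a cap. */
            if (t->frames_per_send < 16) t->frames_per_send *= 2;
        } else if (loss < 0.01f && t->frames_per_send > 1) {
            /* Recover slowly once the loss clears. */
            t->frames_per_send -= 1;
        }
        t->sent = 0;
        t->acked = 0;
    }

    /* Call once per simulation frame; nonzero means "send an update now". */
    int throttle_should_send(const Throttle *t, uint64_t frame)
    {
        return (frame % t->frames_per_send) == 0;
    }

    int main(void)
    {
        Throttle t = { 100, 80, 1 };   /* pretend we just saw 20% loss */
        throttle_update(&t);           /* N doubles from 1 to 2        */
        printf("sending every %u frames, send this frame: %d\n",
               (unsigned)t.frames_per_send, throttle_should_send(&t, 0));
        return 0;
    }

The shape matters more than the exact numbers: back off sharply when loss appears and creep back up when it clears, which gives you TCP-style politeness without its buffering and waiting.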


> If it is really desired you could implement throttling according to packet loss, but not in the way that TCP does it - by buffering and waiting - instead you'd just send packets every N frames. You can't do that if you're just using TCP since you don't know when packets are dropped.

It is really desired (and there are a bunch of ways to do it, and plenty of libraries that already implement them). Please, please implement protocols in this way, and not in the way described in the article.


You're going to be really disappointed when you learn how most games have been networked over the past 10 years.


I have good news for you. We game designers design these protocols to have a (low) maximum rate and a (low) maximum fixed length, because our games are even more performance dependent than the internet and we frequently have very poor or limited networks.

For example, one of Glenn's earlier protocols kept a maximum of 32 previous message IDs, and operated at a rate of less than 30 UDP packets per second. Unacknowledged mandatory messages were given up on after 1 second of loss.

And lastly: the amount of extra bandwidth that Glenn proposes consuming in order to provide quasi-reliability on top of UDP is about as big as the TCP header. So you could consider his protocol a degenerate case of TCP that just usually doesn't send headers.
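
For anyone who hasn't seen that kind of protocol, the reliability header really is tiny. This isn't Glenn's actual code, just a sketch of the usual sequence / ack / ack-bitfield shape implied above:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only. A typical per-packet header for ack-bitfield
       reliability over UDP; the bitfield covers the 32 previous packets. */
    typedef struct {
        uint16_t sequence;   /* sequence number of this packet                 */
        uint16_t ack;        /* most recent sequence number heard from peer    */
        uint32_t ack_bits;   /* bit i set => packet (ack - 1 - i) also arrived */
    } PacketHeader;

    int main(void)
    {
        /* 8 bytes of reliability header; with the rest of the per-packet
           state it lands in the same ballpark as TCP's 20-byte header.   */
        printf("reliability header: %zu bytes\n", sizeof(PacketHeader));
        return 0;
    }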


> For example, one of Glenn's earlier protocols kept a maximum of 32 previous message IDs, and operated at a rate of less than 30 UDP packets per second. Unacknowledged mandatory messages were given up on after 1 second of loss.

That seems far more reasonable than what is described in this article. The protocol described in this article has terrible worst-case behaviour and should not be used.

> I have good news for you. We game designers design these protocols to have a (low) maximum rate and a (low) maximum fixed length

That doesn't help at all. You can pick any arbitrarily large or small per-stream data rate - it makes no difference to the outcome. Let's suppose the arbitrary internet link under consideration has a capacity of X bits/sec, and your per-stream data rate is D bits/sec under zero packet loss. The number of users that this link can support is X/D. As the number of users approaches X/D, packet loss will begin to occur due to congestion. Suddenly this protocol causes the per-stream data rate to increase towards 120D. This means that X/D users are each sending 120D bits/sec towards the link, for a total of 120*X bits/sec.

Now, recall that X is the capacity of the link, so we have just flooded it with 120 times its capacity. This link is now dropping approximately (1 - 1/120) ≈ 99.2% of the packets sent towards it.

Consider: at no point did it matter whether X or D was large or small. The fatal behaviour is that when congestion is detected, the bandwidth usage increases by a significant factor relative to the norm. The factor of this increase will control the steady-state level of packet loss that the protocol converges on, when the network becomes congested.
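
To spell the arithmetic out - X and D below are arbitrary placeholders, only the factor of 120 comes from the protocol under discussion:

    #include <stdio.h>

    int main(void)
    {
        double X = 10e6;    /* link capacity, bits/sec (arbitrary)            */
        double D = 10e3;    /* per-stream rate under zero loss (arbitrary)    */
        double k = 120.0;   /* factor the per-stream rate grows to under loss */

        double users   = X / D;          /* streams the link supports at rate D */
        double offered = users * k * D;  /* load once every stream hits loss    */
        double dropped = 1.0 - X / offered;

        /* X and D cancel: dropped == 1 - 1/k no matter how big or small they are. */
        printf("offered load = %.0f x capacity, steady-state loss = %.1f%%\n",
               offered / X, dropped * 100.0);
        return 0;
    }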


I don't think you understand what I wrote. Let me try to rephrase. All of these protocols are tightly bounded in size. There is no way for them to get to 120D over time. What we're talking about is, under packet loss conditions, moving from 20 (IP header) + 8 (UDP header) + 1 byte of command data + N bytes of state/overhead, to 20 + 8 + X bytes of command data + N bytes of state/overhead. N depends on the application, but for my implementation, generally it's around 20. So we're going from 49 bytes to, usually, around 50 bytes, and the absolute max for X is, say, 32; which would be 48 + 32 = 80 bytes, or a little less than 2D.

Now, in a fictional universe where a game protocol operating at 20 updates per second and putting down 50 bytes is causing network congestion, I happen to agree with you. That would be terrible! Never, ever, ever happens, ever, not even .0001% of the time, ever; not ever. And further, never will. But I've only been doing this for 30 years.
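
For the record, here is that arithmetic using only the numbers already quoted in this thread (49 to 80 bytes per packet, 20 updates per second):

    #include <stdio.h>

    int main(void)
    {
        int normal_bytes = 20 + 8 + 1 + 20;   /* IP + UDP + 1 command byte + ~20 state   */
        int worst_bytes  = 20 + 8 + 32 + 20;  /* IP + UDP + 32 command bytes + ~20 state */
        int rate_hz      = 20;                /* updates per second                      */

        printf("normal: %d bytes/packet -> %d bit/s\n",
               normal_bytes, normal_bytes * 8 * rate_hz);
        printf("worst : %d bytes/packet -> %d bit/s\n",
               worst_bytes, worst_bytes * 8 * rate_hz);
        return 0;
    }

Even the degraded worst case works out to about 12.8 kbit/s per player.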


This is not always true. Some amount of packet loss exists over WiFi and 3G, and in those cases the loss is not indicative of congestion.


Even if there are spurious signals from other sources, it's still true that packet loss is how the internet signals congestion (absent ECN, which, as I understand it, is basically undeployed on the internet proper).


In the presence of packet loss that does not signal congestion, treating it as if it were congestion produces a lower-quality result. http://1024monkeys.wordpress.com/2014/04/01/game-servers-udp...


I don't disagree, but the converse is typically even more true. The strongest defense of your approach here is "it's a tiny amount of data, period" (which - paraphrasing - you've said elsewhere). I am 100% behind the notion that one should be careful when responding to packet loss with increased traffic.


Yes. With great power comes great responsibility.


This looks promising: http://modong.github.io/pcc-page/



