Hacker News
Your Game Doesn't Need UDP Yet (2013) (thoughtstreams.io)
71 points by joshbaptiste on April 1, 2014 | hide | past | favorite | 53 comments



Reliable UDP is the best of both worlds (http://tools.ietf.org/html/draft-ietf-sigtran-reliable-udp-0...), and most decent networking libraries use it rather than TCP. You can make reliable and unreliable requests as needed, which is pretty much required for anything close to a real-time game.

ENet is an open source implementation (http://enet.bespin.org/), and lots of industry platforms use the same approach, like Photon if you're using Unity, or RakNet for native code. A network library not built on both reliable and unreliable capabilities (RUDP, since running TCP and UDP at the same time is overkill) is going to be slow, and you'll have to rewrite it (unless you are doing a turn-based game). HTTP/S access is also needed for other basic messages, for state and setup etc.

RUDP is the best base for multiuser virtual environments that we have so far, until protocols like SCTP (http://tools.ietf.org/html/rfc2960) come along, but those probably won't see widespread deployment, so RUDP on top of UDP it is. And you don't need to write a library/toolkit/server yourself; there are plenty. It is like writing for TCP, only you can pick and choose which messages are important/reliable/prioritized.
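For readers who haven't used such a library, here is a minimal sketch of the reliable/unreliable split (a hypothetical `RudpSender`, not ENet's actual API; real libraries also handle ordering, fragmentation, and congestion):

```python
class RudpSender:
    """Toy bookkeeping for a reliable-over-UDP channel: unreliable
    messages are fire-and-forget; reliable ones are resent until acked.
    Illustrative sketch only -- a real library like ENet does far more."""

    def __init__(self, resend_after=0.1):
        self.seq = 0
        self.pending = {}            # seq -> (payload, last_send_time)
        self.resend_after = resend_after

    def send(self, payload, reliable, now):
        self.seq += 1
        if reliable:
            self.pending[self.seq] = (payload, now)
        return (self.seq, payload)   # datagram to put on the wire

    def on_ack(self, seq):
        self.pending.pop(seq, None)  # stop resending once acked

    def due_for_resend(self, now):
        return [(s, p) for s, (p, t) in self.pending.items()
                if now - t >= self.resend_after]
```

The point of the split: position updates go out unreliable (stale ones are worthless), while "player joined" goes out reliable.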


This article is from last year; it isn't a response to the other UDP/TCP article on the front page published today. I don't know why it was submitted to HN in reply to that article; the first one already does a good job describing the trade-offs and the right role for UDP. It could possibly use one more line: "If it's easier to do so, use TCP to prototype."


Personally it seems to me that much of this is going to be game protocol dependent. I'd like to read some articles by game developers giving examples of their protocol and where UDP was useful and where multiple tuned TCP connections wouldn't have been enough.

Also, my understanding is that we live in a buffer-bloated world where packet loss due to congestion is avoided at all costs, at the expense of latency. In such cases non-TCP traffic still suffers horrific latency rather than being sanely dropped.

Thirdly, I'm not sure I buy the "Not all networks pass UDP" line. I'd like to think most players will be behind home routers that will NAT UDP traffic correctly, and with UDP you get the added advantage of being able to do hole-punching. Afaik you can't do hole-punching for TCP streams, which means requiring users to set up forwarding rules themselves (assuming their router doesn't support UPnP... sigh).


> I'd like to read some articles by game developers giving examples of their protocol and where UDP was useful and where multiple tuned TCP connections wouldn't have been enough.

Here are some articles from professional game developers that contain war stories about networked game programming. Unfortunately they are quite old and you may have already read them. I would love to see some newer ones.

This is the article referenced in the OP. The author of this wrote network code for some MechWarrior games. You can find more of his articles/talks with some war stories: http://gafferongames.com/networking-for-game-programmers/

This is pretty old but talks about real world networking issues that are still relevant today. In particular, they mention having TCP retransmission causing about one minute of latency. That figure has improved but not gone away since those times. "The Internet Sucks: Or, What I Learned Coding X-Wing vs. TIE Fighter": http://www.gamasutra.com/view/feature/131781/the_internet_su...

The two articles above both recommend using UDP (or at least regret using TCP). For a contrast, here's one about using TCP in a "real time" strategy game. "1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond": http://www.gamasutra.com/view/feature/3094/1500_archers_on_a...

The best conclusion we can draw is that the decision between TCP and UDP is very application specific, and advocating one over the other for all use cases is harmful.


That was 13 years ago... networks / algo / network programming changed a lot since that time.


> That was 13 years ago... networks / algo / network programming changed a lot since that time.

Yes, a lot has changed, but a lot has stayed the same. In particular, the speed of light is still the same as it used to be, and the optimal round-trip time across the world hasn't improved. Not a lot has changed in the fundamental nature of TCP or UDP either.
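For concreteness, a back-of-the-envelope calculation of that optimal round trip, assuming light in fibre travels at roughly two-thirds of c and a path of about half the Earth's circumference (both rough assumptions; real routes are longer):

```python
# Best-case round trip halfway around the world, ignoring all queueing.
C_FIBRE_KM_S = 200_000              # ~2/3 of c, typical for fibre
half_circumference_km = 20_000      # ~half of Earth's ~40,000 km

one_way_ms = half_circumference_km / C_FIBRE_KM_S * 1000
rtt_ms = 2 * one_way_ms
print(rtt_ms)   # 200.0 -- physics alone gives you ~200 ms RTT
```

No protocol choice gets you under that floor; the question is only how much each protocol adds on top of it.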

All of the articles I linked have valuable lessons that are still valid today. Also some stuff that isn't really relevant today.

But yeah - I'd love to see more recent articles on the subject.


Mauve wrote some interesting articles [1] back in 2012 on the roll-back principle being used in GGPO, which is a (semi) popular netcode used for fighting games.

In the likely event that you don't play fighting games, I'll break down briefly the needs of the genre. Fighting games are almost always animated at 60 frames per second. Different moves will come out in a certain number of frames, be active for a certain number of frames, and have recovery animation for a certain number of frames. When you have all of that information for a character, it's called (unsurprisingly) "frame data".

Let's say you and your opponent are standing right next to each other. You press light punch, and he presses heavy punch, and it turns out that he was just a little faster and pressed his punch button a tenth of a second before you did; that's six frames. HOWEVER, his heavy punch is a really slow button; it has ten frames of startup animation before the punch even comes out! Your light punch is much faster; it comes out in only 3 frames. So he hits punch, six frames pass, and then you press punch as well. Three more frames pass, and your punch is active, but he's still in the startup animation. So even though he hit a button well before you did, you're actually hitting him first.
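The arithmetic above can be written out directly (frame numbers as in the example; `first_active_frame` is just an illustrative helper, not anything from a real engine):

```python
FPS = 60

def first_active_frame(press_frame, startup_frames):
    """Frame on which an attack becomes active: the frame the button
    was pressed plus the move's startup frames (its 'frame data')."""
    return press_frame + startup_frames

# Opponent presses heavy punch (10f startup) at frame 0; you press
# light punch (3f startup) a tenth of a second -- 6 frames -- later.
heavy_active = first_active_frame(0, 10)   # active on frame 10
light_active = first_active_frame(6, 3)    # active on frame 9
print(light_active < heavy_active)         # True: your punch lands first
```

A single frame is 1/60 s ≈ 16.7 ms, which is why netcode that slips even a frame or two changes who wins this exchange.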

There's a lot more to the strategy (if your punch hits him, he's going to go into some kind of hit animation; how long does that last? If you can get another attack to come out and hit him before that animation ends, then there's no way for him to block the second attack; this is the basis behind the concept of a combo in fighting games. Interestingly, it was initially a bug in Street Fighter 2.) but for the purposes of networking, all you need to understand is that frames are crucially important to a fighting game, there need to be sixty of them a second, and delaying even one or two frames can make the difference between counterhitting your opponent with a light, or getting counterhit yourself with a heavy. It's worth clicking around on a couple of Mauve's posts, because they tend to be pretty interesting.

[1]: http://mauve.mizuumi.net/2012/07/05/understanding-fighting-g...


I don't think UDP should be regarded as an optimization you add later. If you leave it to add after you've written the rest of your networking code, changing from ordered reliable delivery to unordered unreliable delivery is going to have a pretty profound effect, which you probably will want to have thought about from the start.


It's not a "premature" optimization if you know you're going to need it at some point. The thing that makes an optimization premature is if you're not sure you need it, but when it comes to networking you can generally calculate out pretty quickly how much data you're going to need to transfer and what latency you're going to need.

There are a lot of optimizations that you KNOW you'll need in a game upfront, and it's easier if you get them into your architecture early. (For instance, spatial partitioning trees and sprite packing.)

Not to mention that retrofitting networking into a project later on is a tremendous pain. It's one of those things you have to build in from the start; so if you know you're going to need fast, packet-based, unreliable networking you'll save yourself a lot of pain by doing it from the start.

Realistically though, you should just use a library for networking instead of writing it yourself, if you can.


I'm interested in the author's thoughts on using solutions like ENet, which offer both reliable (TCP-like) and unreliable (UDP-like) "channels".

I live somewhere where bandwidth is decent (12Mbps) but dropped packets are frequent (copper lines). Games that use UDP (TF2, CoD, NS2) work perfectly, in contrast to TCP games (WoW, Minecraft) which stutter to the point of being unplayable.

In my hobby game's engine (Source Engine-inspired), I use ENet. Even with high packet loss, variable latency, and limited bandwidth (simulated) the game remains playable; whereas when all the channels are set to 'reliable', it's the opposite.

Another consideration is NAT-punching. From the small amount I've read, it seems this is much easier to accomplish with UDP.


Your network sounds odd: If you get 12 Mbps when blasting UDP but you experience packet loss high enough to bog down TCP, it actually sounds like your network is congested and TCP is working as designed.

Your copper last-mile modem (e.g. ADSL2 or DOCSIS) has error-correction DSP magic going on that is designed to adjust to the observed signal quality and trains the link parameters so that packet loss doesn't happen, at the expense of bandwidth. And you get a 12 Mbps link speed, which sounds like your modem hasn't had to slow down to a crawl to prevent packet loss. So congestion or some other fault/misconfiguration in your ISP's access network?

Custom protocols on top of UDP are also supposed to respond to congestion by slowing down. See eg http://tools.ietf.org/html/rfc5405#section-3.1


How about router buffer overflow? UDP can survive that (because routers feel free to throw excess in the bit bucket) but TCP cannot.


That could mean two things:

a) packets are arriving from a fast link and don't fit onto a slow link. Dropping packets signals TCP or your UDP protocol to adjust to the slower link speed, exactly the desired behaviour.

b) the router buffer is too small to handle uneven bursts of packets even when the destination link has spare capacity. This is a misbehaviour and will cause needless packet loss.

Then there's the "too much buffering" failure mode:

c) the router buffer is too big, so that there is constantly a long queue of packets in the buffer at the upper limit of the link speed, causing excess delay. This is also called "buffer bloat" and is a sin often committed by consumer ADSL modems and such.
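A quick illustration of why c) hurts: the worst-case delay a full buffer adds is just its size divided by the link rate (the numbers below are illustrative assumptions, not measurements):

```python
# Queueing delay added by a full router buffer draining into a link.
def queue_delay_ms(buffer_bytes, link_bits_per_s):
    return buffer_bytes * 8 / link_bits_per_s * 1000

# A 256 KB buffer in front of a 1 Mbit/s ADSL upstream:
print(queue_delay_ms(256 * 1024, 1_000_000))   # ~2097 ms of added delay
```

Two seconds of queueing delay with zero packet loss: that is buffer bloat, and it is why "low packet loss" on its own tells you little about game latency.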

So, a UDP protocol with retransmission capability could possibly overcome b), at the cost of behaving counterproductively in the presence of real congestion. The remaining problems would be estimating the actual link bandwidth and figuring out how to retransmit the large portion of lost packets while retaining good app-layer latency.

But that's a rare problem to have in the network because the failure mode is so obvious (TCP stops working properly even though your network is idle). c) is much more common.


But, a) is a buffer issue too. Of course no buffer is big enough if the stream is continuous. But the backoff works exactly because it doesn't overwhelm the router buffer, leaving it time between bursts to empty into the slower network.

When TCP 'backs off', that's failure for a streaming protocol - now we get to spend seconds retransmitting and performing 'slow start' probing to determine an ideal rate, which blows the latency of time-critical packets to the point of uselessness (as covered elsewhere in this thread).

And this is almost always the upstream link - your local home or business network is probably gigabit. Your ISP link is very likely slower than that.


TCP would really suck if it went to multi-second coma on every bandwidth adjustment!

Buffer getting "overwhelmed" is exactly what causes TCP to adjust.

Fortunately TCP doesn't go to slow start immediately when it sees congestion in the network.

It first tries slowing down. There's a "congestion avoidance" phase as long as not too many packets are lost, "slow start" happens only when congestion avoidance gives up.
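That congestion-avoidance behaviour is, at its core, additive increase / multiplicative decrease (AIMD); a toy simulation of the window size, not a real TCP stack:

```python
# Toy AIMD loop: grow the congestion window by one segment per
# loss-free round trip, halve it when a round trip sees loss.
def aimd(rounds_with_loss, cwnd=10):
    history = []
    for loss in rounds_with_loss:
        if loss:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += 1                  # additive increase per RTT
        history.append(cwnd)
    return history

print(aimd([False, False, True, False, False]))  # [11, 12, 6, 7, 8]
```

Note the asymmetry: one lossy round trip undoes many round trips of growth, which is exactly the "slowing down" being described.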

Excess buffering can in fact mess up congestion avoidance, in addition to causing needless extra latency, since it in effect postpones the timely signaling of link capacity. See the bits about congestion control at https://www.bufferbloat.net/projects/bloat/wiki/TechnicalInt...

After this probing initially happens, TCP will remember the link bandwidth, window size etc even after periods of inactivity.

This is also how UDP based protocols are expected to behave (specced by IETF/IAB).


All cool if you have one connection and don't share it. But real computers have multiple apps running - while you're trying to communicate, email fires up and scans a dozen servers; your progeny upstairs clicks on a video; the Roku decides to update.

Bandwidth and window size are a fiction over time. TCP believes they will be stable (for many seconds); it's tuned for old server-to-server connections, not the extremely dynamic network conditions in the average home/business today.

So you end up in slow start and congestion avoidance, and only after packets are dropped and resent, pointlessly adding to congestion (as you point out).

I've spent years now examining traces of TCP catastrophes. It's because TCP is NOT dynamic enough that folks like my company are driven to finding alternatives.

So yes TCP really sucks. For many applications.


Since TCP packets contain more network state information, why would NAT-punching be easier on UDP?


NAT-punching requires 1) knowing a public address for the peer, and 2) both peers able to send traffic to the other.

TCP is directed - one is the server, the other the client. Not true peers. That's an extra thing to sort out with TCP. AND the server cannot attempt to talk to the client before the client talks to the server, which is a problem in 2) above.


TCP gives you a superset of the information you get with UDP, so consequently you get a superset of options.

There is no method of nat punching that will work on UDP that will not work on TCP. The reverse is not true.


It works; it's just far more complicated. Because it's not about the IP stack at all; it's about the routers/firewalls and NAT. It goes something like this.

Something called 'pinholes' are created in your router - a map between an internal address-triplet (IP:Port:Protocol) and an external address-triplet. Messages arriving from one are mapped to the other and forwarded.

In addition, a third address-triplet can be used to guard against port-scanning and malicious bots etc. It's used on external (public) packets to test the source of the packet. It's built from the first packet sent out that created the NAT entry, using the public address that packet was sent to.

In other words, incoming traffic is only translated and forwarded to your computer/port if it came from somebody you previously SENT a packet to, such that your NAT entry has that address as its 'filter' triplet.

UDP can send arbitrary packets to anybody. TCP MUST start with a connection exchange where a client first tries to contact a server. But in our case we want them to be peers; both 'clients'.

So, TCP 'pinhole' creation involves both clients simultaneously attempting to open a socket to each other. Both clients must be prepared to accept (listen) for that attempt. Initially each outgoing client connection attempt may be rejected by the peer's router - if it hasn't previously attempted to talk back to that peer yet, there is no NAT entry with a matching filter triplet.

It has to be nearly simultaneous because TCP connection retry intervals can be long, and it takes a minimum of two tries (one to create the NAT entry but fail to arrive at the peer; the second MAY succeed if the peer has also created a NAT entry at exactly the same time).

Add to that, learning your public address requires contacting some STUN/ICE-style public server, which creates your NAT entry but marks it with the STUN server's filter triplet. So the peer connection is actually a second attempt to create a pinhole. If your router doesn't overwrite that NAT entry's STUN triplet with the peer triplet, or create a multi-way associative table entry, or incompletely updates the triplet (IP but not port, or IP/port but not protocol), then the peer's incoming client connect will be rejected anyway.

Since port-scanning and malicious bots are not in any way an official protocol, every router manufacturer invents their own algorithm for defending against them. There is no guarantee that any particular NAT-pinhole-creation technique will work every time with any particular router.

So no, TCP isn't really a superset; in a real way it's a much more restricted set, and P2P TCP works far less often in practice than UDP. Because with UDP you control every packet, and send to exactly the addresses in exactly the order you wish. So you have far more control over the initial exchange content and timing, and thus more control over pinhole creation.


It is not more complicated.

Let me repeat: Anything you can do to make UDP work will make TCP work.

The problem with NATs is that if you are behind one, you are not at your supposed public IP address. This means effectively you have two options:

1. Forward all packets that come in on a specific port to a specific machine, or

2. Forward all packets that come in that are on an established connection to a specific machine.

Both work with TCP. The latter works with TCP but not UDP because, as you point out, UDP doesn't have the concept of an established connection.

So basically you can look at host/port, or you can look at host/port/status. The former works for both. The latter works only for TCP, or by a NAT which understands your connection syntax (e.g. an H.323 gatekeeper, though this is iirc TCP).

> So no, TCP isn't really a superset; in a real way its a much more restricted set

My point is the router has a superset of information to address things in TCP, so anything you do to make UDP work will work if you do the same for TCP. You have a superset of options on a pure networking level. I think what you are arguing is something else, because you say:

"Since port-scanning and malicious bots are not in any way an official protocol, every router manufacturer invents their own algorithm for defending against them."

In other words, it is not that TCP NATting doesn't give you a superset of options, it is that routers are typically configured to address these issues by being a lot more lenient on UDP packets than TCP packets. However, this in no way addresses the question as to how things work on a pure network level.

In other words, as I understand your complaint, it is not that TCP doesn't support a superset of natting options relative to UDP, but rather that NATs/routers are more permissive regarding UDP packets because they try to restrain TCP packets to a much larger extent.

If that is indeed your position then I think it is worth bringing up with routers as well.

If routers and NATs are NATting purely based on the port on the router, and not the IP address/port of the router, then there is significant room for better security there.


I was being entirely practical, not theoretical. On a theoretical (non-existent) internet, TCP and UDP can work similarly. But on the internet as deployed in the world, TCP is very hard to get through a P2P connection, and works infrequently. Because of firewalls and routers.


Most MMORPGs I know use TCP. We're talking about low-latency messages and a lot of people in the same zone, so yes, TCP is fine for most cases outside of FPS/RTS.


I'm pretty sure the original Everquest used UDP. I remember during beta a friend of mine would log into my account while I was still logged in, I could click on buttons to start elevators and such, and he could see the elevators move when I did that. With TCP, my connection would have been dropped and I wouldn't have been able to do that.


No, that's not right. Both TCP and UDP are single-source, single-destination protocols; they both have a definite IP/port specified in their headers. A packet sent directly to your friend cannot get to you or vice versa. So what you experienced was simply the Everquest server sending the same duplicate information to both you and your friend, which could be done with either UDP or TCP.


Not sure about EQ, but I'm almost positive UO did. I remember having to set up port forwarding on our home router.


I'd have to agree with the "yet" part, but most of the article wasn't very good advice.

If you're getting started with multiplayer game code, you should be fine with TCP for starters. Early in the project, most of the connections for testing will happen over a loopback interface and LAN anyway.

However, if you're doing a real time multiplayer game, you can't ship with TCP or your reputation (and sales, etc) will be ruined. No-one likes laggy gameplay.

In the above, I use the term "real time" as in a "soft real time software system", not real time as opposed to a turn based game. In other words, "real time" in the sense that if significant latency occurs, the game will produce incorrect results.

Arguably, most games do not fall under this category. E.g. a real time strategy game server has the option to "stop the world" and wait for all players to get back in sync. They are essentially very fast turn based games when it comes to analyzing it as a software system.

This means that most games can probably use TCP.

If you can't survive one second of latency, you must use UDP. You can bootstrap your project on TCP, but you can't use it over real networks.

> When everything is working perfectly and there is no packet loss (assuming Nagle is disabled), UDP and TCP will perform approximately equivalently; data gets through immediately, it's delivered to the application, there's no need to retransmit.

This is important, both ways. When everything works correctly, either one will do. But in the real world, everything doesn't always work smoothly. You won't know if your network code works well before you've tried it with several players across the world. The other side of town is not far enough.

> And on most modern consumer networks, packet loss is very low.

This is true only if you measure using "ping" in a network with no contention. That is not a real world measurement.

In most consumer networks, the WAN throughput is smaller than LAN, so if you have several computers hooked up, the router will have to drop some packets if everyone attempts to send or receive at their maximum capacity.

Go play Counter-Strike and have your little brother turn on a BitTorrent client or your wife watch movies on Netflix. Packet loss will occur.

> You may never even see the conditions where UDP should be faster.

It is not about being faster. The throughput of TCP should be pretty much equal to UDP when you average over time. The pings should be similar most of the time.

It's all about latency. A single lost packet on TCP will inhibit all the packets sent after it until the situation is resolved. This can take one second or more, regardless of how good the connection is under ideal conditions.
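The head-of-line blocking being described is easy to see in miniature: even if later segments arrive, an in-order transport can hand nothing to the application until the gap is filled (a sketch of the receive-side logic, not a real TCP stack):

```python
# In-order delivery: release only a contiguous prefix of sequence
# numbers to the application; everything past a gap waits in the buffer.
def inorder_deliver(arrivals, expected=1):
    delivered, held = [], set()
    for seq in arrivals:
        held.add(seq)
        while expected in held:
            delivered.append(expected)
            expected += 1
    return delivered

# Segment 2 is lost and retransmitted last: 3, 4 and 5 sit in the
# receive buffer, invisible to the game, until 2 finally arrives.
print(inorder_deliver([1, 3, 4, 5, 2]))   # [1, 2, 3, 4, 5]
```

With UDP, the game would have seen 3, 4 and 5 immediately and could simply ignore the stale segment 2.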

The bottom line is: know whether you will need UDP or not. Do not guess, measure and test it in real world networks. But you can still use TCP to bootstrap your project.

Regardless whether you use TCP or UDP your network game must have a mechanism for dealing with latency, like stopping the world in an RTS game (over TCP) or predicting the movements of characters in an FPS game (over UDP). TCP does not magically solve all the problems in networking, it also creates some.


You say the article is bad advice and then proceed to give extremely nitpicky but ultimately similar advice: UDP is _this_ kind of optimization (latency) and not _that_ kind of optimization (throughput). Which is really attempting to strawman the article based on some notes tucked into its closing paragraphs, when the central point is passages like:

> In testing, on your LAN, on lo0, when there's not a lot going on in the game, delivery is totally reliable, so the code path for recovering from missed datagrams is never covered, the usability where users are dealing with lost traffic is never stressed.

That is some damn sound engineering advice right there: build when you have the test case. Don't whip yourself into a frenzy over it prematurely.


I like to ask "why is UDP perfect for VoIP calls?"

A packet showing up a second late is completely worthless. You don't want it at all. If the connection gets interrupted, well, it gets interrupted, and the people using it have to repeat themselves. But this beats hearing words (as if you would be so lucky as to have each word be its own packet) out of order.


Because voice and language have so much redundancy built in that dropping a few packets will not degrade perceived call quality. On the other hand, waiting a quarter of a second to receive a lost packet definitely will degrade call quality.


TCP and UDP use the same underlying wire and the same hardware. If UDP loses a packet or it arrives late, the same could happen to TCP. TCP will retransmit and you lose a little bandwidth. But in both cases, you can't use the late data anymore - audio has about the hardest timing restrictions you can have when dealing with humans. Lateness means forced loss. TCP's ACK packets don't help there either; you want to keep latency down because your data can no longer be used if it's dropped or late. Reliability becomes increasingly irrelevant; latency is everything.


Late packets aren't always completely worthless. A "late" rocket_fired type of packet can be very useful if it still shows up before the next packet that refers to said rocket.

It really depends.


VoIP protocols are designed to expect the exact condition described here; i.e., packets out of order and variation in the rate at which packets are received. A buffer is used to account for volatility in network latency. The size of this buffer is one of the more important problems that must be solved when implementing VoIP.

See also:

http://en.wikipedia.org/wiki/Jitter

http://en.wikipedia.org/wiki/Jitter#Jitter_buffers
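A fixed-delay jitter buffer of the kind those pages describe can be sketched in a few lines (the 60 ms figure and function shape are illustrative assumptions, not any codec's actual policy):

```python
# Fixed playout-delay jitter buffer: hold each packet until its
# deadline so variable network delay becomes one constant delay.
def playout_times(send_times, arrival_times, buffer_ms=60):
    out = []
    for sent, arrived in zip(send_times, arrival_times):
        deadline = sent + buffer_ms          # when we intend to play it
        out.append(max(arrived, deadline))   # late packets slip (or are dropped)
    return out

# 20 ms packet spacing with jittery arrivals: playout stays regular
# as long as no packet is more than 60 ms late.
print(playout_times([0, 20, 40], [35, 90, 55]))  # [60, 90, 100]
```

Sizing that buffer is the trade-off mentioned above: too small and jitter causes dropouts, too large and the call feels laggy.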


So WebSockets (TCP) isn't good enough for real-time (twitch) games?

Edit: One would think you would need client/server prediction in both scenarios - to deal with latency inherent in a network transmission.


> So WebSockets (TCP) isn't good enough for real-time (twitch) games?

Depending on your definition of real-time, no, WebSockets probably isn't good enough. In the near future, WebRTC and other future standards may bring some kind of low latency networking to the browser.

It still may be used for different kinds of games, though.


It really depends on the type of game, in my opinion (which sounds like your opinion too). I've found that an RTS can get away with TCP. We used Apple's Game Center matchmaking for our iPad game (Stratosphere: Multiplayer Defense), and used "GKSendDataReliable", aka TCP.

It's real-time and works fine over 3G even... the real problem is having enough people online at once to even play multiplayer lol...


>Go play Counter Strike and have your little brother turn on a BitTorrent service or your wife watching movies on NetFlix. Packet loss will occur.

Interestingly enough, it seems like FiOS has the bandwidth to handle these kinds of concurrent demands. I never noticed any packet loss playing FPS games while streaming Netflix on FiOS (definitely noticed it on Time Warner Cable though).


Why UDP isn't good for real-time applications:

  * lack of a connection
  * lack of order
  * lack of retransmission
  * lack of error checking
  * lack of flow and congestion control
  * lack of integrity/security
Why TCP isn't good for real-time applications:

  * latency
  * latency
  * latency
  * latency
  * latency
  * latency
SCTP's only advantages over TCP for real-time applications are unordered data delivery and selective acknowledgement. This way we can still use the most recently arrived message instead of waiting for the correctly ordered one, which will reduce latency in one area.

But the 'association' is still affected by congestion control, especially with multiple failures in the same window. Using Reno SCTP, this may be mitigated, and it is argued that some congestion control is much better than none at all. But without evidence specific to an application's implementation, UDP is still the superior real-time protocol if you don't care about what data you're getting.


UDP has error checking in the form of a checksum. It's one of the few guarantees UDP has: your datagram either arrives complete, or not at all. TCP also has no real security to speak of, either.


>If you re-invent TCP on top of UDP without a thorough prior understanding of TCP, you will get it wrong.

I don't know why this is in there twice for emphasis. The use case for a custom TCP on top of UDP is pretty rare. More likely you will not want the kind of flow control and packet ordering that TCP spends its cycles doing.


And this isn't the only article that suggests that "re-inventing TCP" is what UDP network game code does.

Yes, you need to do some of the same things that TCP does, but not all of them and the ones you do should work differently.

Connection management is similar in TCP and UDP game protocols, but you also need to keep track of ping, and you probably don't need a fancy three-way handshake either.

Reliability and in-order reception are things you don't need in UDP game code, especially reliability through retransmission. You need only semi-reliability, i.e. knowing when a packet was lost. How to handle a lost packet is application specific, but most likely you'd rather send the most up-to-date data instead of retransmitting state from 100 ms or one second ago.
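One common way to get that semi-reliability is the ack-bitfield scheme Gaffer on Games describes: each outgoing packet reports the newest remote sequence number seen plus a bitfield covering the 32 before it, so the sender learns which packets were lost without any retransmission. A sketch (field sizes and helpers here are assumptions, not any particular library's wire format):

```python
# Build the ack header from the set of sequence numbers received so far.
def ack_info(received_seqs):
    latest = max(received_seqs)
    bitfield = 0
    for i in range(1, 33):               # bit i-1 <=> seq (latest - i)
        if latest - i in received_seqs:
            bitfield |= 1 << (i - 1)
    return latest, bitfield

# On the sending side: which of my recent packets never arrived?
def lost_packets(latest, bitfield, window=33):
    return [latest - i for i in range(1, window)
            if not (bitfield >> (i - 1)) & 1 and latest - i >= 0]

latest, bits = ack_info({0, 1, 2, 4, 5})
print(lost_packets(latest, bits))   # [3]: send fresh state, don't resend old
```

Knowing packet 3 was lost, a game resends the *current* position, not the position as it was when packet 3 went out.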

Flow control needs to make sure that no more packets are sent than can be received but the goal is to minimize latency, not maximize throughput.

The whole idea of "implementing TCP over UDP" is just silly - why would you want to do that? There are very good reasons to use UDP and you should know when that is the case.


I think the point is that most of the time what you are doing is reinventing the basic idea of a TCP connection over UDP, with a few optimizations for your environment.

Now, the implementation may be different, but basically you have to deal with connection startup, connection teardown, packet loss, packet loss detection, and so forth.

In other words, what you are doing is taking the basic idea of TCP, and reimplementing a different take on that basic idea optimized for your application. Doing this benefits from a great deal of in-depth knowledge of current solutions, starting with TCP.

In other words, the danger is that you will try to do something TCP does without a real understanding of why it solves a problem in a particular way and come up with a very bad solution.


> I think the point is that most of the time what you are doing is reinventing the basic idea of a TCP connection over UDP, with a few optimizations for your environment.

The idea of TCP is to have a reliable, ordered stream of bytes over a connection. The idea of a UDP game protocol is to have a semi-reliable, out-of-order, low latency packet-oriented connection. In my opinion that is not the same basic idea.

They both operate over the internet using IP and face similar issues so similarities exist.

> In other words, the danger is that you will try to do something TCP does without a real understanding of why it solves a problem in a particular way and come up with a very bad solution.

This I will have to agree with. Before choosing UDP, it does make sense to try TCP first and figure out if and why it does or does not work for you. And then fix those issues with a UDP protocol if needed.


>connection startup

In some situations you do, but if the game has a matchmaking server you can just go.

>connection teardown

Packets stop when the game ends; there's no need to do anything special.

>packet loss

You have to deal with this but you don't have to use a similar method to what TCP does.

A minimal UDP protocol that meets a game's requirements might do almost nothing TCP does, yet work perfectly fine. A fancier protocol might have a TCP-like channel, but that's not a given. And it's not hard to use UDP and TCP at the same time.
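For illustration, a minimal packet header of the kind often described for game protocols might carry just a sequence number, the latest acknowledged sequence, and a bitfield acking earlier packets. This is a hypothetical sketch; the field sizes and layout are assumptions, not a standard:

```python
import struct

# seq: this packet's number; ack: highest seq seen from the peer;
# ack_bits: bit i set means packet (ack - 1 - i) was also received.
HEADER = struct.Struct("!III")

def pack_packet(seq, ack, ack_bits, payload):
    return HEADER.pack(seq, ack, ack_bits) + payload

def unpack_packet(datagram):
    seq, ack, ack_bits = HEADER.unpack_from(datagram)
    return seq, ack, ack_bits, datagram[HEADER.size:]
```

Twelve bytes of header is enough for the sender to learn which packets got through, without any of TCP's stream, ordering, or retransmission machinery.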


> You have to deal with this but you don't have to use a similar method to what TCP does.

Particularly if you are not using similar methods, a thorough understanding of TCP is helpful.

Otherwise you run into the possibility of reinventing TCP badly.


>If you re-invent TCP on top of UDP without a thorough prior understanding of TCP, you will get it wrong.

I don't know why this is in there twice for emphasis.

Because far too many people do it? I'll admit, my net-programming-fu is very weak, but every time I've seen someone say "we'll use UDP, it will be faster!", they then start trying to make it reliable and ordered, and they end up with a stinking, unreliable, slow pile of crap, when they could have just started with TCP and optimized later.

By all means, if you don't need the guarantees that TCP gives, don't use it. But if all you're going to do is implement the features of TCP on top of UDP, why bother? You probably won't get it right the first time, and even if you do, you will have wasted time reinventing the wheel.


You say 'faster', danielweber says 'efficiency', I guess I've just never heard such ridiculous things said about UDP. I've only seen it suggested to avoid lag from the forced in-order delivery.

And there are scenarios where you want to reimplement TCP, after thoroughly understanding TCP. TCP assumes that all packet loss signals congestion and responds by slowing down, which can lead to very poor behavior, especially once it backs off exponentially. With a real-time connection it's better never to stop sending entirely: back off less aggressively, and give up the connection only if it truly can't get enough bandwidth.
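That send-rate behavior can be sketched as an adjustment that backs off by a bounded factor on loss and never drops below a floor, instead of TCP's multiplicative decrease toward zero. The constants here are illustrative assumptions, not tuned values:

```python
def adjust_rate(packets_per_sec, loss_detected,
                floor=10.0, ceiling=60.0):
    # Back off gently on loss, but never below the floor the
    # game needs to stay playable; otherwise creep back up.
    if loss_detected:
        return max(floor, packets_per_sec * 0.75)
    return min(ceiling, packets_per_sec + 1.0)
```

If the connection can't sustain even the floor rate, that's the point at which the game drops the player rather than stalling them.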


> And there are scenarios where you want to reimplement TCP, after thoroughly understanding TCP.

Yes, precisely; thorough understanding first, then have a whack at picking only the pieces of TCP you want. In my (limited) experience, I've almost always seen people with less knowledge of network programming than I have (!) try to reimplement most of TCP on top of UDP once they realize UDP was not what they wanted.


It's a pretty common anecdote that someone will start out using UDP for efficiency, and then end up, over time, re-implementing all the things TCP provides at some other layer where it's more expensive.


For reference, it took me about a month and a half to get a UDP system working for a game. I had zero netcode experience.


One time to seriously consider UDP is when you find yourself implementing "UDP on top of TCP", i.e. what's essentially a small-packet messaging protocol that uses TCP underneath. I find it disturbing when I have to reassemble messages read from a TCP socket, and if the messages are small enough to fit into a UDP datagram, I feel I'm better off with UDP plus a lightweight retry mechanism.
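The message-reassembly chore described above usually amounts to length-prefix framing, roughly like this generic sketch (the 4-byte big-endian prefix is an assumption, not any particular protocol):

```python
import struct

def frame(msg: bytes) -> bytes:
    # Prefix each message with a 4-byte big-endian length.
    return struct.pack("!I", len(msg)) + msg

def deframe(buf: bytes):
    # Extract complete messages; return leftover partial bytes
    # to prepend to the next recv() on the stream.
    msgs = []
    while len(buf) >= 4:
        (n,) = struct.unpack_from("!I", buf)
        if len(buf) < 4 + n:
            break  # rest of the message hasn't arrived yet
        msgs.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return msgs, buf
```

With UDP each datagram is already a complete message, so all of this buffering disappears; only loss handling remains.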


> In testing, on your LAN, on lo0, when there's not a lot going on in the game, delivery is totally reliable, so the code path for recovering from missed datagrams is never covered, the usability where users are dealing with lost traffic is never stressed

The advice for this point should be "turn on the Network Link Conditioner" (or whatever the equivalent is for your OS of choice)
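On Linux, a rough equivalent is `tc` with the `netem` qdisc; a sketch (the interface name and the numbers are placeholders, and this needs root):

```shell
# Simulate 100ms latency with 20ms jitter and 5% packet loss
# on eth0 -- substitute your actual interface.
tc qdisc add dev eth0 root netem delay 100ms 20ms loss 5%

# Remove the impairment when done testing.
tc qdisc del dev eth0 root netem
```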


A relevant paper from INET 97: Characteristics of UDP Packet Loss: Effect of TCP Traffic

http://www.isoc.org/inet97/proceedings/F3/F3_1.HTM



