Hacker News
Understanding IP, TCP, and HTTP (objc.io)
237 points by danieleggert on March 7, 2014 | hide | past | favorite | 61 comments



How I love it when people without deep knowledge of some subject write authoritative-sounding articles.

Without guarantee of completeness, to avoid the spread of misinformation:

- IPv6 fragmentation has nothing to do with some "minimum payload size" (whatever that is) - there simply is no fragmentation being done by routers, the sender still can fragment however it pleases, and presumably will do so whenever it has to send a packet that doesn't fit through the path MTU.

- The end points use Packet Too Big ICMP6 messages to determine the _path_ MTU, which is different from just "the MTU".

- With IPv4, the sender chooses whether a router will fragment when the packet exceeds the next-hop MTU or whether the router should drop the packet and send a Fragmentation Needed ICMP message - where the latter again is used for path MTU discovery.

- Path MTU discovery is useful because it allows the sending IP implementation to push the chunking higher up the stack when the sending higher-level protocol has the capability (as is the case with TCP, but not with UDP, for example), which tends to produce lower overhead. Unfortunately, some clueless firewall administrators, such as those responsible for AWS EC2, do filter all ICMP because they for unknown reasons consider it to be bad, thus breaking PMTUD, which can lead to hanging TCP connections.

- TCP sequence numbers are for bytes, always, with the special case of SYN and FIN also counting as "bytes" in the sequence, but never for segments.
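A tiny Python sketch of that last point (the function and its arguments are made up purely for illustration):

```python
def next_seq(isn, payloads, syn=True, fin=True):
    """Toy model: TCP sequence numbers count bytes, never segments.

    SYN and FIN each consume one sequence number, as if they were a
    byte in the stream; every data byte advances the number by one.
    """
    seq = isn
    if syn:
        seq += 1              # SYN occupies one sequence number
    for p in payloads:
        seq += len(p)         # each payload byte counts; segment count doesn't
    if fin:
        seq += 1              # FIN occupies one sequence number too
    return seq
```

So next_seq(100, [b"abc", b"de"]) is 107 regardless of whether those five bytes traveled in two segments or five.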


I love it when people without deep knowledge of a subject try to learn about it and explain themselves to others.


This is an important part of the way I learn. I will read something and then explain it to someone else. It makes me think deeper about the issue as I form the words and it gives me a great chance to get corrected when I am making unfair assumptions. I always preface these conversations with "as I understand it" or "from what I read" or some other disclaimer. I used to have a coworker who would give me soooo much guff about these disclaimers since I'd drop so many of them in one of these conversations. I just felt it was important to make it clear I wasn't coming from a place of authority and more from the perspective of a guy who is bumbling through it and trying to figure out what the hell is going on.


It's the way I learn, too.. But it doesn't mean I write things as if I were the knowledgeable one.

I'm a total noob, yet the first few paragraphs made me cringe because I felt there were some odd things. I had a weird feeling about it. It wouldn't have bothered me if there wasn't this "A periodical about best practices and advanced techniques in Objective-C"..

Or using the words "great contributors", etc. I mean, one has to be humble; unless one really knows his stuff, he shouldn't talk that way.

If the writing style was more in the "I'm learning and journaling my progress", it would've been more than okay, and knowledgeable people wouldn't have a problem with it.

I was in forums learning to design my PCBs; I'd post my design and ask for feedback, and people who'd spent 30+ years doing this would point out a thousand flaws in what I thought was a nice design. And I got back to work, iteration after iteration.. until these really great guys who do that for a living would say "Beautiful work".

Had I posted something like "advanced PCB design" in the "this is how it's done" way, they'd have ignored me and I would've stayed more ignorant than I still am.

There was a question on the Python mailing list asking how long it takes to say that one knows how to program. People with 40+ years actively programming said: I'll let you know when I'm there.

Humility goes a long way. Heck even when I read things on the nmap mailing list, I don't feel that tone that they consider they know more than you do even though they really, really know their stuff.


Could you post an example of the "I'm learning and journaling my progress" writing style? I'd like to start doing this and I don't want to come off as an expert on things I'm just learning.


One option is to not publish it -- a learning journal is probably much more important for you to write than for anyone else to read. Then give yourself a couple of years or decades of learning time, and if you still want to write about it, what you wrote as a beginner will give you valuable insights into the beginner's mind, things you have probably forgotten.

And of course you can publish it (might be good for feedback), just state that it's a learning journal, not "best practices".


Great idea. I have a notebook where I write down ideas for companies, things I think about. I think it is a really, really good practice to write it down..

The reason I'm saying that is that human beings have selective memory. They tend to remember things they did the right way, they remember their good ideas, times they were right, etc.

I used to note my ideas that would seem genius.. And then I'd look at them a couple months later and it's humbling. How stupid could I be.

But there is a good thing about this: It taught me a valuable lesson.. It taught me to focus on real needs, and not some fancy thoughts I have at 3AM. Like real needs.

And I know that at an early stage, one needs to let go of critical things and be open and not dismiss ideas, etc.. But it's just that some ideas are plain stupid and I had plenty of those.

I write them down, then cross things out. Not a real need, not a problem. Now I'm thinking about an idea that I'd use if it were available. And I'm not the only one.


Hey,

This is a project I did a couple of years ago

http://www.electro-tech-online.com/threads/pcb-etching-tank-...

This is another project :

http://www.electro-tech-online.com/threads/first-pcb-stepper...

There was an update on the site, so images are not available there.. Here they are:

http://www.mediafire.com/view/uu4vsqq8e1yq8/PICTURES#6qj25tt...

Bear in mind that this was my first attempt at PCB design [PDF, in French], but you can see how ugly it was:

http://docs.com/GH41

I was on the forum chat, and I'd send pictures and they'd help me see, they'd open my eyes and explain things. Why 90° tracks are a no-no, etc.

People are tremendously generous with their time as long as your attitude is okay. I learned orders of magnitude more on the internet than in college, and still do every minute I'm online.




Julia Evans at http://jvns.ca has a distinctive style that I enjoy.


It's nice to put disclaimers in there, but if it's the first time a person has heard the information, the disclaimer is basically ignored. Because what are you going to do when you have to troubleshoot a tcp connection or write an application? Go back and find a book on tcp and learn the whole thing from the beginning? Unlikely, as you already have what you consider to be knowledge about tcp. Even if you don't consider it to be authoritative, you probably have just enough to get in trouble.

That's why I find the whole "blind leading the blind" way of teaching to be counter-productive. Not that it's really serious or anything; nobody's going to lose a leg if you screw up your tcp connection. But when extended to other more serious topics, it can be dangerous to teach things to people if you're not sure about the subject matter. For example, something as simple as jumping a car battery actually isn't simple at all (when done correctly).


I think there are arguments for both sides. If you know something very well, you might overlook pitfalls when explaining it to someone who doesn't have your background, while someone who has only just grasped it may be well aware of what might confuse a newbie. So I think there is some value in inexperienced people writing about what they learn and how they learn it, with appropriate disclaimers, assuming they still try not to state as fact anything they aren't actually sure about.

And I also think, to a degree, it actually is the responsibility of the reader to judge what to use that supposed new knowledge for. Trying to make sense of tcpdump output when debugging some application software bug? Why not? Writing an IP stack? I hope any sensible person would pick up a book and some RFCs first. Unreliable knowledge can still be useful and harmless in figuring things out, you just shouldn't use it to try and build things.

Then again, practically, we can observe that people do build systems without ever having looked into the relevant standards, and I would argue the effects are worse than one might superficially think. Look at how ridiculously insecure the web and web browsers are, for example - how did that come to be? One major factor is exactly that people didn't (and often still don't) read the relevant standards, something as apparently uncritical as the HTML spec or the HTTP spec. Instead they just wrote what they thought was HTML, and wrote books about what they thought was HTML, and so on - resulting in a need for browser vendors to accommodate all this crap out there that isn't HTML or HTTP but that people still expect to be rendered by their browser in some way or another. And so, due to end-user market share pressure, we now have security vulnerabilities in browsers that are there only because fixing them would break stuff that no one ever had any right to expect to work - but people thought they were just creating some totally uncritical website using the "knowledge" they learned from some other clueless person, and those security problems can have quite serious consequences.


I think it's more likely that mistakes will happen if someone believes they know what they are doing. But how do they know if they know what they are doing or not? That's where "sensible person" becomes subjective to me.

I wrote an IP stack, of sorts, and used Wikipedia to do it. I'm aware that it's probably crappy, but only because it was basically designed to be. If I had tried to design it well, I might lead myself to believe I had done it correctly, for example because I found no problems with it in my testing. But as you're aware, there are plenty of problems with tcp/ip stacks that only come up as edge cases. So even if I was being sensible I might end up with shitty code and push it into a product, and then we're screwed. But if I had learned the stack correctly I wouldn't be in that mess.

A kind of solution lies in forums like HN, though. Sure, the posts are fallible and are often upvoted merely because they are perceived as authoritative. But we have the comments section, and knowledgeable persons who can speak up and educate. So it may not matter at all who's teaching, as long as somebody picks up the slack.


I guess my point is: The reasonable (and responsible) thing to do when you actually build something (rather than just learn about something out of curiosity or to be able to use the understanding in troubleshooting) is to read the primary sources, the standard documents, and in particular to be aware that whatever you learned from hearsay is not reliable enough to actually build a product on if there is an option to get your hands on the primary source. Especially with internet technology, we are in the great position that W3C recommendations and RFCs are freely available for everyone, so there isn't really much of a reason not to read them.

That might not be quite enough for a really good implementation, but overall software quality would be a hell of a lot better if everyone did that. It's just amazing, when you look at websites and also emails, how many people just make up how they think things work rather than reading the standards that are only a Google search away.


When it comes to complicated subjects like the internals of TCP/IP, they may do more harm than good. Volume 1 of "TCP/IP Illustrated" is, I think, 700 pages long, not without reason. When one writes condensed articles like this it makes sense to stay high-level, because the moment you get into discussing the SYN/ACK handshake, you are in danger of leaving large gaps in your explanation or making unreasonable stretches to complete the picture.

On a side note, it's funny to see that most of the paragraphs in the original article end with a link to Wikipedia — as a reader, I can go there myself; what good do those links do me?


From the editorial page of this issue:

"We’ve created a new public repository on GitHub that contains all current and past objc.io articles. If you find any mistakes or have suggestions for improvements, please don’t hesitate to file issues, or even better: submit a pull request!"

Make a pull request so that people like me can learn about networking too.


Anybody writing routing code would be foolish to use this, or any other "simplified" article as a protocol reference. But, I'll agree that it is presented in such a way (and with enough technical detail) that any technical errors should be corrected.

If I had deep knowledge in this area, I'd probably applaud the effort and send corrections, rather than criticize.


> "minimum payload size" (whatever that is)

I'll give them the benefit of the doubt and say he got his terms wrong. The IPv6 RFC states that IPv6 requires a minimum MTU of 1280 bytes. I guess that's what he meant.

https://www.ietf.org/rfc/rfc2460.txt

Packet Size Issues

IPv6 requires that every link in the internet have an MTU of 1280 octets or greater. On any link that cannot convey a 1280-octet packet in one piece, link-specific fragmentation and reassembly must be provided at a layer below IPv6.


Which wouldn't really make me any more confident in the reliability of the whole thing?! Confusing lower-level fragmentation and reassembly with IPv6 fragmentation is not exactly a mistake you'd be likely to make when you understand what that actually means, I would think.


Let's not guess, and instead help correct the article, shall we? Documentation, manpages, textbooks and programming books can all contain errors.

On the flip side, I'd like to see more people critiquing these articles so newbies like me can get the most out of them (though I already took computer networks...).


It would be one you would make as you were learning it, which is honestly what half the blogs that cross here are. It just happens that the neat thing this person learned today was about some networking protocols they use every day.


Thanks for taking the time to comment on the article. I'll update the article later today to correct these. Your help is appreciated.


Just out of curiosity, what do you do? Is this knowledge germane to where you work? I've just recently become interested in this stuff, so I'm curious to get a lay of the land.


I do ... software development? ;-) There isn't really any particular category for what I do, though I tend to do more of the lower-level/backend stuff of projects, and knowing how the stuff that you build on works internally certainly is useful in optimizing and debugging.

As for getting an understanding of how TCP/IP works, I think Stevens' classic TCP/IP Illustrated still is a good book to get started, even if somewhat dated in some details (no IPv6, in particular), but the general principles still apply. Though maybe there are newer equally good books around that I just don't know about?


The networking section of Unix and Linux System Administration Handbook has some excellent explanations of TCP/IP and many other networking topics.


TCP/IP Illustrated has a new edition (volume 1 only for now, AFAICT) which has been updated with much more modern content, including lots of info about IPv6: http://www.informit.com/store/tcp-ip-illustrated-volume-1-th...


Well, yeah, it's a book by the same name, but is it the same quality (well, it's not just the same name, of course, but a new author obviously can make a big change in quality, in either direction ;-)?

In any case, my recommendation was referring to the old edition by Stevens alone, no clue about the new one, though at least the newly covered material seems appropriate to me.

(For anyone who might not be aware: W. Richard Stevens died in 1999, so the new edition is by a different author, though apparently based on the old material.)


I have not worked through the entirety of the original, which I also own, but the new one seems pretty good to me so far. But you may have a different opinion as somebody who knows much more than me about TCP/IP.


I'm also going to cut him some slack with the bit about giving each segment a unique number. While formally the sequence number identifies each byte of data, it really is about providing heuristics to identify and correct for out of order, fragmented, missing & duplicate segments. It is important that it be about bytes, particularly for things like SACK, but if you are trying to simplify things you might describe it as being about the segments.


Sure, nothing wrong with simplifying things, but "Both ends are sending sackOK. This will enable Selective Acknowledgement. It switches the sequence numbers and acknowledgment number to use byte range instead of TCP segment numbers." is just flat-out wrong, and in particular suggests that "numbered segments" is not a simplification but an actual fact about how the thing works.


Thanks for taking the time to point this out. I'll update the article.


And regarding dropping the ICMP message about fragmentation... good firewall implementations have the firewall discover the MTU behind it and express THAT.. even better they might hide the hops behind it.


There is no such thing as an "MTU behind it", there is a separate path MTU for each and every ordered address pair, more or less (and that's not even static, obviously).

And obviously I was talking about packet filters, not about some kind of application firewall, which obviously doesn't have anything to do with filtering of packets anyhow.


What an unpleasant attitude. The author clearly made a lot of effort and as far as I know it's all accurate. If you know better then say so pleasantly.


I've heard that a good way to gauge a person's general technological literacy is to simply ask "what happens when I type a URL in a browser and hit Enter?" Obviously, the question is deliberately open-ended, and any step in the process can be broken down into more detailed steps (up to a point). I'd like to see an article that initially shows high-level steps (e.g. DNS request, HTTP request, server processing, HTTP response, parsing and rendering), but allows each step to be expanded progressively with increasing detail.
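As a rough sketch of just the first few of those steps in Python (the function name and the offline shortcuts are mine, not any browser's internals):

```python
from urllib.parse import urlsplit

def first_steps(url):
    """Sketch the first protocol steps behind 'type a URL and hit Enter'."""
    parts = urlsplit(url)                        # 1. parse the URL
    host = parts.hostname
    port = parts.port or (443 if parts.scheme == "https" else 80)
    path = parts.path or "/"
    # 2. DNS: the hostname would be resolved to one or more addresses,
    #    e.g. via getaddrinfo(host, port) -- skipped here to stay offline.
    # 3. TCP: connect() to one of those addresses, then send the request:
    request = ("GET {} HTTP/1.1\r\n"
               "Host: {}\r\n"
               "Connection: close\r\n\r\n").format(path, host)
    return host, port, request
```

Each of those comments could itself be expanded a level deeper (resolver caches, the three-way handshake, and so on), which is exactly the progressive-disclosure article being asked for.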


[Deeper]

The Pauli exclusion principle prevents electrons with the same quantum characteristics from entering the same space, this interaction occurs across the amassed copper atoms which form the majority of the metallic wires that approach one another in the internal structure of the keyboard ...


If you want to further your understanding of network protocols, there's an excellent open textbook available here: http://cnp3book.info.ucl.ac.be/


At Uni we had a book called "Computer Networking - A Top-Down Approach". One of the best teaching books I've ever read. The amount of detail is very nicely balanced, and as the title says it's a top-down approach where one layer at a time is discussed. Very interesting.


We're using that same book, sixth edition, in my networks course right now. It's overly verbose like every other textbook, but the content is solid.


Also https://www.coursera.org/course/comnetworks is helpful. I use it to prepare for interviews, just so I don't have to dig out my textbook.


Wow...what a great resource! I just downloaded it and will start reading it tonight. Thanks!


> There’s a misconception that restarting the (HTTP) request will fix the problem. That is not the case. Again, TCP will resend those packets that need resending on its own.

But that's not true if the connection is interrupted at the socket level, right?

For example, if the device switches from 3G to Wi-Fi, or from Wi-Fi to wire, then I believe, its hardware address changes, its IP address changes and the socket becomes stale. But the TCP connection, would it be closed right away or would it hang until some timeout? (And does it depend on the OS?)


The layers are conceptually independent, and in a way even the concept of "switches from 3G to WiFi" is a misconception.

The TCP socket doesn't know anything about any "interfaces" or "links" or anything like that; it only knows about its own and the remote IP address (and port), and the IP stack will deliver to it any packets it receives that are addressed to that port on that address, coming from the corresponding remote address and port, no matter which link they were received through (possibly subject to reverse path filtering on end hosts as a security measure). Similarly, each outbound packet is routed independently, so if the routing table changes half-way through a TCP connection, packets simply will be routed via a different link (the end host really just does the same as any other router does, and the fundamental idea of packet switched networks is that routers do not know about connections; they simply forward each packet independently, potentially switching links as needed at any time).

It's perfectly possible, for example, to bridge between WiFi and wired Ethernet, and have a gateway that routes some IP network onto that Ethernet/WiFi, then, while connected to the WiFi, establish a TCP connection, disconnect from the WiFi, connect to the Ethernet via cable, using the same IP address on the Ethernet interface as you previously used on the WiFi interface, and the TCP connection will survive that just fine (it might take a moment for the router to time out its neighbour cache entry and re-resolve your IP address into the new hardware address, but that's just a matter of a few seconds). You could even connect to both, configure things such that the kernel only replied to ARP/ICMP6 ND on WiFi, say, and route outbound packets through the cable; then the outbound packets of the TCP connection would go through the cable while the inbound packets would go through the air.

The only thing that actually breaks a connection is when packets addressed to the address that your TCP connection is using cannot reach you anymore, or when packets you send using that address cannot reach the other side anymore, for example because you send them through a link that does not allow you to use that address. The latter really is mostly what kills TCP connections on mobile phones: the default route gets changed from WiFi to 3G, say, and your mobile provider won't allow you to continue sending through their network packets using the address you got assigned by the WiFi - so the connection hangs even if the WiFi interface might actually still be up and able to receive packets addressed to that address.

One important thing to notice in this: there isn't really any way a TCP implementation could detect right away that any of this has happened, as it cannot know what the filtering policies of your provider(s)/network(s) are, or whether you disconnected only temporarily, or whether you will reconnect to a different access point on the same network ... So, when some mobile platform kills TCP sockets when you "change from 3G to WiFi", that really is a dirty hack that makes a load of assumptions about some typical setups that don't necessarily hold true.
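One practical consequence: if you want such a silently hung connection to error out in bounded time rather than linger, TCP keepalive helps. A sketch, assuming Linux option names (the helper function is hypothetical):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Make a hung TCP connection fail after roughly
    idle + interval * probes seconds instead of hanging forever."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The tuning knobs below are Linux-specific, hence the guard.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock
```

This doesn't detect the network change itself (per the above, nothing reliably can); it just bounds how long the socket stays stuck.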


If you want to learn about IP, TCP, UDP and some of the protocols below this I would highly recommend reading Richard Stevens book TCP/IP Illustrated, Volume 1: The Protocols.

For two reasons: It's probably one of the best introductions to the subject that has ever been written, and it's a model example of how a technical book should be written.

I'd be hard pressed to find a reason not to go this route at least once in your life. I know the material pretty well but I still re-read Stevens books every few years just because it is so good.


"I'd be hard pressed to find a reason not to go this route at least once in your life. I know the material pretty well but I still re-read Stevens books every few years just because it is so good."

Then again, that's a lot of effort to spend on something that the vast majority of us don't need to know in much detail. The main reasons for knowing all the details are

- to write a new networking stack, or working on an existing one;

- to write or maintain server software or routers or caches or other software directly involved in networking;

- to break or exploit existing software.

(obviously 'because it's interesting' is a valid, but not practical reason to know)


If you write anything that communicates over a network (e.g. anything using HTTP), you need to know at least some of this stuff, otherwise you're not going to be able to explain why (for example) your service call latencies have a big spike around 200ms.


TCP/IP Illustrated books are super detailed. If you don't want to dive quite that deep I recommend "Computer Networking: A Top-Down Approach Featuring the Internet".


It's nice to see this recent increased emphasis on Web/mobile developers understanding the technologies that link it all together. The next thing I would add is a high level overview of the sockets API. While these topics aren't critical to most day-to-day lives of developers, they are certainly useful to understand.


This is a very readable online book on networking and sockets: http://beej.us/guide/bgnet/output/html/multipage/index.html

Talk about understanding the sockets API ;-) here's the content section for chapter 5:

  5.1. getaddrinfo() — Prepare to launch!
  5.2. socket() — Get the File Descriptor!
  5.3. bind() — What port am I on?
  5.4. connect() — Hey, you!
  5.5. listen() — Will somebody please call me?
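In Python the same sequence of calls lines up almost one-to-one with that chapter; here's a self-contained loopback sketch (function name mine):

```python
import socket

def echo_once(payload=b"hello"):
    """getaddrinfo() -> socket() -> bind() -> listen() on one side,
    connect() on the other, all over the loopback interface."""
    family, kind, proto, _, addr = socket.getaddrinfo(
        "127.0.0.1", 0, socket.AF_INET, socket.SOCK_STREAM)[0]
    srv = socket.socket(family, kind, proto)
    srv.bind(addr)                      # bind(): "What port am I on?"
    srv.listen(1)                       # listen(): wait for a caller
    port = srv.getsockname()[1]         # the kernel picked a free port

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))    # connect(): "Hey, you!"
    conn, _peer = srv.accept()

    cli.sendall(payload)
    data = conn.recv(1024)
    for s in (conn, cli, srv):
        s.close()
    return data
```

Binding to port 0 lets the OS choose an unused port, which keeps the sketch runnable anywhere.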


I love beej's guide! It's where I learned socket programming. It is an art to make such a dry subject as entertaining as he does.


Beej is where I learned socket programming. A great guide.


Beej's guide is great! Thanks!


I had one developer with over 6 years of server-side experience who made a server/client setup where the server would open a connection to the client, pass connection info to the client, close the connection, and then have the client open a connection back to the server to return results.

When I explained how TCP worked, in that the client could connect to the server and maintain an open socket to pass info continuously he was blown away. He had no idea this was possible. Explaining UDP was a lot harder.

So, I welcome any education on basic TCP/IP functionality!


I'm still curious for an explanation of why we have both TCP and UDP.

For example if you do peer to peer, you need low latency, and UDP is best for that.

I think it's because TCP is hardware optimized, but it's designed to transmit a file as a stream, so if a packet is corrupt, it waits and resends that packet. In that fashion, TCP tends to be slower, but on average it's more efficient for single files or webpages.

You don't have good granularity with TCP, but if you want to work with UDP, you need to add redundancy and other mechanisms to make sure all is good.

ENet is an example of using UDP for gaming, so the goal is to have the lowest latency possible.


Bittorrent is also peer to peer, and it doesn't need low latency. Really, it's about latency, nothing to do with peer to peer.

TCP has head-of-line blocking, as it guarantees complete and in-order delivery: when a packet gets lost in transit, everything behind it has to wait for a retransmit of the missing packet. UDP delivers packets to the application as they arrive, including duplicates and without any guarantee that a packet arrives at all, or in which order they arrive (it really is essentially IP with port numbers and an (optional) payload checksum added). But that is fine for telephony, for example, where it usually simply doesn't matter when a few milliseconds of audio are missing, whereas delay is very annoying. So you don't bother with retransmits; you just drop any duplicates, sort reordered packets into the right order within a few hundred milliseconds of jitter buffer, and if packets don't show up in time or at all, they are simply skipped, possibly interpolated where supported by the codec.

Also, a major part of TCP is congestion control, to make sure you get as much throughput as possible without overloading the network (which would be self-defeating anyway, as an overloaded network will drop your packets, forcing retransmits, which hurts throughput). UDP doesn't have any of that - which makes sense for applications like telephony: telephony with a given codec needs a certain amount of bandwidth, you cannot "slow it down", and additional bandwidth doesn't make the call go faster either.

In addition to realtime/low-latency applications, UDP makes sense for really small transactions, such as DNS lookups, simply because it doesn't have the TCP connection establishment and teardown overhead, both in terms of latency and in terms of bandwidth use. If your request is smaller than a typical MTU, and the response probably is too, you can be done in one round trip, with no need to keep any state at the server - and congestion control and ordering and all that probably isn't particularly useful for such uses either.

And then, you can use UDP to build your own TCP replacements, of course, but it's probably not a good idea without some deep understanding of network dynamics, modern TCP algorithms are pretty sophisticated.

Also, I guess it should be mentioned that there is more than UDP and TCP, such as SCTP and DCCP. The only problem currently is that the (IPv4) internet is full of NAT gateways which make it impossible to use protocols other than UDP and TCP in end-user applications.
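"IP with port numbers" is easy to see over loopback, where each sendto() is one datagram and each recvfrom() returns exactly one. (Loopback won't actually drop or reorder anything, so this sketch only demonstrates the boundary-preserving part; function name mine.)

```python
import socket

def udp_roundtrip(messages):
    """Send each message as one datagram over loopback, read them back."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # kernel picks a free port
    addr = rx.getsockname()
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for m in messages:
        tx.sendto(m, addr)             # one call, one datagram
    got = [rx.recvfrom(65535)[0] for _ in messages]  # one call, one datagram
    tx.close()
    rx.close()
    return got
```

Contrast with TCP, where the same bytes would arrive as an undifferentiated stream and the receiver would have to re-impose message boundaries itself.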


You made some very interesting posts. What I'm missing from this is 0MQ. I believe it introduces layering mechanisms, so that one can re-use patterns to build cool stuff (anything really) without knowing specific details. Do you have an email where I can reach you?


> The improvements of using HTTP pipelining can be quite dramatic over high-latency connections – which is what you have when your iPhone is not on Wi-Fi. In fact, there’s been some research that suggests that there’s no additional performance benefit to using SPDY over HTTP pipelining on mobile networks

Excellent summary, but I think pipelining has been oversimplified. HTTP pipelining is a FIFO queue: the responses have to be delivered in the same order as the requests. So if the first (or an early) response takes longer to generate, all other requests in the pipeline have to wait - something that SPDY is not susceptible to.
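That FIFO constraint is easy to model in a few lines of Python (a toy model, time units arbitrary):

```python
def pipelined_delivery(ready_times):
    """When each pipelined response becomes deliverable: since HTTP
    pipelining enforces strict FIFO order, it's the running maximum
    of the times at which the responses were ready at the server."""
    delivered, latest = [], 0
    for t in ready_times:
        latest = max(latest, t)   # can't ship before earlier responses
        delivered.append(latest)
    return delivered
```

A slow first response ready at t=90 holds back two fast ones: pipelined_delivery([90, 10, 20]) gives [90, 90, 90], whereas SPDY's multiplexed streams would deliver each as soon as it was ready.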


I prefer The Unix and Internet Fundamentals HOWTO:

http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWT...


David Wetherall teaches this course @ Coursera.

https://www.coursera.org/course/comnetworks

He pretty much wrote the book.


There's a minor typo below the HTTPS section. It should be TLS not TSL ;)

Edit: By the way, it was a nice article. I especially liked the tcpdump explanation.



