Hacker News
Linux Network Queues Overview (github.com/leandromoreira)
250 points by dreampeppers99 on April 22, 2019 | 8 comments



Nice post. Some more references:

1. AWS re:invent talk on Network Performance: https://youtu.be/LjeXZItav34

2. Twitter thread on network optimization by colmmacc, Principal at AWS Network and Edge Engineering orgs: https://threadreaderapp.com/thread/1099086415671877633.html

3. drewg123 on Jim Roskind's QUIC vs TCP: https://news.ycombinator.com/item?id=19461777

4. Google's BBR congestion algorithm for TCP and QUIC: https://ai.google/research/pubs/pub45646

5. The sophisticated HFSC (hierarchical fair service curve) qdisc: https://www.cs.cmu.edu/~hzhang/HFSC/main.html


This is very informative and nicely presented too.

And this article was linked to: https://blog.cloudflare.com/the-story-of-one-latency-spike/

I think that sort of thing should be mandatory reading for junior sysadmins and developers.


Nice writeup and diagrams. Coincidentally, I was just cleaning up some notes I keep on using the iproute2 netem qdisc to simulate network conditions, which I use periodically.

Here's a good overview (not written by me): http://wiki.linuxwall.info/doku.php/en:ressources:dossiers:n... I think it ties in nicely with OP's page.
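For anyone who wants to script netem experiments rather than type the tc invocations by hand, here's a minimal sketch. The interface name "eth0" and the delay/loss values are placeholder assumptions; actually applying the qdisc requires CAP_NET_ADMIN (typically root).

```python
import shlex
import subprocess

def build_netem_cmd(dev, delay_ms=None, loss_pct=None, action="add"):
    """Compose a `tc qdisc ... netem` command for the given device.

    netem emulates WAN conditions (added latency, packet loss) on an
    egress interface's root qdisc.
    """
    cmd = ["tc", "qdisc", action, "dev", dev, "root", "netem"]
    if delay_ms is not None:
        cmd += ["delay", f"{delay_ms}ms"]
    if loss_pct is not None:
        cmd += ["loss", f"{loss_pct}%"]
    return cmd

def apply_netem(dev, **kwargs):
    # Needs root; undo later with: tc qdisc del dev <dev> root netem
    subprocess.run(build_netem_cmd(dev, **kwargs), check=True)

# Example: 100 ms delay plus 1% loss on a hypothetical eth0
print(shlex.join(build_netem_cmd("eth0", delay_ms=100, loss_pct=1)))
```

Wrapping the command construction in a function makes it easy to sweep a grid of delay/loss combinations in a test harness and always tear down with the matching `del` action.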


This is a solid guide, but it might benefit from explaining why these parameters are important. I bet if you asked 100 people who consider themselves professional Linux sysadmins to explain softirq squeeze, you would get blank stares. But squeeze is in fact an important indicator of system overload, despite its complete obscurity.


I think the mere word "squeeze" might be unfamiliar. But if you were to describe what "squeeze" summarizes then perhaps you might not get as many blank stares.

I'd never heard of squeeze before. But after reading about it, I think I've got the gist of the performance implications.
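For the curious: the squeeze counter lives in /proc/net/softnet_stat, one line of hex columns per CPU, where column 0 is packets processed, column 1 is drops (backlog queue full), and column 2 is time_squeeze (net_rx_action bailed out with work still pending because its budget or time slice ran out). A small parsing sketch; the two-CPU sample below is made up for illustration:

```python
def parse_softnet_stat(text):
    """Parse /proc/net/softnet_stat: one line per CPU, hex columns."""
    stats = []
    for cpu, line in enumerate(text.splitlines()):
        fields = [int(f, 16) for f in line.split()]
        stats.append({"cpu": cpu,
                      "processed": fields[0],   # packets handled
                      "dropped": fields[1],     # backlog full
                      "squeezed": fields[2]})   # time_squeeze events
    return stats

# Hypothetical sample; on a real box read the file instead:
#   with open("/proc/net/softnet_stat") as f: sample = f.read()
sample = ("00048edc 00000000 0000001a 00000000 00000000 00000000 "
          "00000000 00000000 00000000 00000000 00000000\n"
          "00002bd3 00000000 00000000 00000000 00000000 00000000 "
          "00000000 00000000 00000000 00000000 00000000\n")

for row in parse_softnet_stat(sample):
    if row["squeezed"]:
        print(f"cpu{row['cpu']}: {row['squeezed']} squeeze events")
```

A steadily climbing squeeze column is the overload signal the parent comment is talking about: the kernel is deferring RX work it didn't have time to finish.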


"Sometimes people are looking for sysctl cargo cult values that bring high throughput and low latency with no trade-off and that works on every occasion."

That behavior was somewhat self-inflicted by Linux distros that, for a long time, shipped with lousy defaults. Not just for kernel settings, but for web servers, database servers, etc.

Such that changing the defaults did indeed work magic.


And on the scale from "echo xxx > /proc/net/thing" to "spend a month understanding what /proc/net/thing does (mostly) for kernel versions in [3.y, 4.z]",

I'm guilty of choosing the easier end from time to time.
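A small middle ground between the two extremes is at least recording what the default was before overwriting it. A sketch of reading a sysctl through procfs (the `base` parameter is an assumption added here so the helper can be pointed at a test directory; the example sysctl name is just illustrative):

```python
from pathlib import Path

def read_sysctl(name, base="/proc/sys"):
    """Read a sysctl value via procfs.

    "net.core.somaxconn" maps to /proc/sys/net/core/somaxconn.
    Logging this before you echo a new value in means you can always
    answer "what did we change, and from what?"
    """
    return Path(base, *name.split(".")).read_text().strip()

# e.g. on a Linux box:
#   old = read_sysctl("net.core.somaxconn")
#   print(f"net.core.somaxconn default was {old}")
```

It won't replace a month of reading kernel source, but it keeps the cargo-cult echo at least auditable.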


Great overview!



