Nice writeup and diagrams. Coincidentally, I was just cleaning up some notes I periodically use for simulating network conditions with the iproute2 netem qdisc.
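For anyone curious, a minimal netem sketch along those lines (the interface name and numbers are just placeholders; adjust for your setup and remember to remove the qdisc afterwards):

    # add artificial delay, jitter, loss and a rate cap on eth0
    tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1% rate 10mbit
    # inspect what's applied, then clean up when done
    tc qdisc show dev eth0
    tc qdisc del dev eth0 root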
This is a solid guide, but it might benefit from explaining why these parameters can be important. I bet if you asked 100 people who consider themselves professional Linux sysadmins to explain softirq squeeze, you would get blank stares. Yet squeeze is in fact an important indicator of system overload, despite its complete obscurity.
I think the mere word "squeeze" is what's unfamiliar. If you described what "squeeze" actually summarizes, you probably wouldn't get as many blank stares.
I'd never heard of squeeze before. But after reading the article, I think I've got the gist of the performance implications.
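To make it concrete, a rough way to peek at squeeze counts (assuming a mainline kernel, where the third hex column of /proc/net/softnet_stat is the per-CPU time_squeeze counter, i.e. the softirq ran out of budget or time with packets still pending):

    # print the time_squeeze counter per CPU (values are hex)
    awk '{ print "cpu" NR-1 ": 0x" $3 }' /proc/net/softnet_stat
    # the usual knobs to look at if it keeps climbing
    sysctl net.core.netdev_budget net.core.netdev_budget_usecs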
"Sometimes people are looking for sysctl cargo cult values that bring high throughput and low latency with no trade-off and that works on every occasion."
That behavior was somewhat self-inflicted by Linux distros, which for a long time shipped with lousy defaults. Not just for kernel settings, but for web servers, database servers, etc.
So changing the defaults did indeed work magic.
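As a small illustration of the kind of tuning being discussed (the particular parameter is just an example, whether it helps is entirely workload-dependent, and anything you keep should be persisted via /etc/sysctl.d/):

    # check a shipped default, then override it at runtime
    sysctl net.ipv4.tcp_congestion_control
    sysctl -w net.ipv4.tcp_congestion_control=bbr   # requires the tcp_bbr module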
1. AWS re:Invent talk on Network Performance: https://youtu.be/LjeXZItav34
2. Twitter thread on network optimization by colmmacc, Principal at AWS Network and Edge Engineering orgs: https://threadreaderapp.com/thread/1099086415671877633.html
3. drewg123 on Jim Roskind's QUIC vs TCP: https://news.ycombinator.com/item?id=19461777
4. Google's BBR congestion control algorithm for TCP and QUIC: https://ai.google/research/pubs/pub45646
5. The sophisticated HFSC (hierarchical fair service curve) qdisc: https://www.cs.cmu.edu/~hzhang/HFSC/main.html