Retransmits and congestion are not inherently bad; they're a symptom that the network is being heavily used.

Edit: never mind, I misread the test; his tools clearly show goodput (useful, delivered throughput) going down under congestion. Lowering initcwnd (as Chrome 29 did) eliminates this on slower connections, improving user experience. I would still like to see page load time as a proxy for time to screen; it's intriguing that the 6s total page load time did not seem to change.
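For intuition on why a large initial window hurts slower connections, here's a toy model: the first flight of packets arrives as a burst, and anything beyond the bottleneck's drop-tail queue is lost and has to be retransmitted. It ignores queue draining during the burst, and the window sizes and queue length are made up, not from the article's tests:

    # Toy model: an initial burst of `initcwnd` packets hits a bottleneck
    # with a drop-tail queue of `queue_len` packets; the excess is dropped.
    # All numbers are illustrative.
    def burst_losses(initcwnd: int, queue_len: int) -> int:
        """Packets dropped when the initial burst exceeds queue space."""
        return max(0, initcwnd - queue_len)

    for cwnd in (10, 32):
        drops = burst_losses(cwnd, queue_len=16)
        print(f"initcwnd={cwnd}: {drops} packets dropped from the first flight")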

I wanted to support the big-picture idea of your comment: packet loss is not inherently bad. Too many strategies treat every packet as precious rather than optimizing for total system goodput. Indeed, TCP congestion control is premised on loss happening: it keeps inching the sending rate up until a loss is induced, then backs off a bit.
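That probing loop is the classic AIMD (additive increase, multiplicative decrease) behavior. A minimal sketch, with a made-up loss threshold standing in for the path's capacity:

    # One round trip of additive-increase / multiplicative-decrease.
    # The increase step, decrease factor, and loss threshold are illustrative.
    def aimd_step(cwnd: float, loss: bool,
                  incr: float = 1.0, decr: float = 0.5) -> float:
        return cwnd * decr if loss else cwnd + incr

    cwnd = 10.0
    for rtt in range(20):
        loss = cwnd > 16  # pretend the path drops above 16 packets in flight
        cwnd = aimd_step(cwnd, loss)
        print(f"rtt={rtt:2d} cwnd={cwnd:5.2f}")

This prints the familiar sawtooth: the window climbs until a drop, halves, and climbs again.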

OTOH, TCP performs poorly in the face of significant levels of loss. So high levels of loss specifically for HTTP really are a bad sign, at least as HTTP is currently constructed on top of TCP.
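The standard back-of-the-envelope for this is the Mathis et al. approximation, throughput ~ (MSS/RTT) * (C/sqrt(p)): steady-state TCP throughput falls with the square root of the loss rate p. A quick sketch (the MSS and RTT values are illustrative):

    # Mathis et al. steady-state TCP throughput approximation.
    from math import sqrt

    C = sqrt(3 / 2)  # ~1.22, the constant from the model

    def mathis_throughput(mss_bytes: int, rtt_s: float, loss: float) -> float:
        """Approximate steady-state throughput in bits per second."""
        return C * (mss_bytes * 8) / (rtt_s * sqrt(loss))

    for p in (0.0001, 0.001, 0.01, 0.05):
        bps = mathis_throughput(mss_bytes=1460, rtt_s=0.1, loss=p)
        print(f"loss={p:.4f}: ~{bps / 1e6:5.1f} Mbit/s")

Going from 0.01% to 5% loss cuts the achievable rate by more than an order of magnitude, which is why heavy loss is so punishing even though occasional loss is expected.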

Also worth being concerned about: losses that occur late in the path waste a lot of resources getting the packet that far, resources that could instead have been used by other streams sharing only part of the path. (E.g., if a stream from NYC to LAX experiences losses in SFO, it is wasting bandwidth that could be used by someone else's PHI-to-Denver stream.) A packet-switched network has to be sensitive to total system goodput, not just that of one stream.
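A crude way to quantify that waste, treating each hop's forwarding work as equal (the hop counts and drop locations are made up):

    # A packet dropped at hop k of an n-hop path has already consumed
    # k hops' worth of forwarding capacity that delivered nothing.
    def wasted_path_work(drop_hop: int, total_hops: int) -> float:
        """Fraction of the path's per-packet work wasted by a drop at drop_hop."""
        return drop_hop / total_hops

    # A drop near the far end (SFO on an NYC->LAX path) wastes most of the
    # work already done; a drop at the first hop wastes almost none.
    for hop in (1, 5, 9):
        pct = wasted_path_work(hop, 10)
        print(f"drop at hop {hop}/10: {pct:.0%} of path work wasted")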
