This thing is cool. I saw a live demo at IETF 118 in Prague last month, and my reaction was "whoa... I didn't think this would ever be possible." It essentially eliminates bufferbloat, which makes it awesome for video chat.
It requires an additional bit to be inserted into IP packets, to carry information about when buffers are full (I think?), but it actually works. It feels like living in the future!
Actually it's somewhat better than that, even: you can let it control the rate factor of your video encoder directly, getting perceptual fairness instead of naive bandwidth fairness.
More specifically, L4S is an advancement of the existing ECN (Explicit Congestion Notification) extension to TCP/IP, allowing more advanced algorithms to cut latency down further.
The main problem with ECN was the remarkably widespread behaviour by middleboxes that either cleared that bit or straight up dropped the packets. Maybe that situation has improved now?
The simple answer is that there is more than one flag. From what I gather there are three: one flag that the sender sets to inform routers that it can handle ECN; a second flag used by a router to tell the recipient that the router was congested; and a third flag set by the recipient when it sends an ACK packet back to the sender.
For more details, here is the relevant section (a toy sketch of the full exchange follows the list):
* An ECT codepoint is set in packets transmitted by the sender to indicate that ECN is supported by the transport entities for these packets.
* An ECN-capable router detects impending congestion and detects that an ECT codepoint is set in the packet it is about to drop. Instead of dropping the packet, the router chooses to set the CE codepoint in the IP header and forwards the packet.
* The receiver receives the packet with the CE codepoint set, and sets the ECN-Echo flag in its next TCP ACK sent to the sender.
* The sender receives the TCP ACK with ECN-Echo set, and reacts to the congestion as if a packet had been dropped.
* The sender sets the CWR flag in the TCP header of the next packet sent to the receiver to acknowledge its receipt of and reaction to the ECN-Echo flag.
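Since the question was how the flags fit together, here's a toy Python sketch of that whole exchange. The ECN codepoint values are the real ones from RFC 3168 (the ECN field is the low two bits of the old TOS byte); the Packet class, queue depth, and marking threshold are invented for illustration.

```python
# ECN field values (low two bits of the IP TOS / traffic-class byte), per RFC 3168.
NOT_ECT = 0b00  # sender does not support ECN
ECT_1   = 0b01  # ECN-Capable Transport (L4S traffic opts in with this codepoint)
ECT_0   = 0b10  # ECN-Capable Transport (classic)
CE      = 0b11  # Congestion Experienced, set by a congested router

class Packet:
    def __init__(self, ecn=ECT_0):
        self.ecn = ecn    # IP-layer ECN field
        self.ece = False  # TCP ECN-Echo flag (receiver -> sender)
        self.cwr = False  # TCP Congestion Window Reduced flag (sender -> receiver)

def router_forward(pkt, queue_depth, mark_threshold=10):
    """Instead of dropping an ECN-capable packet, mark it CE when congested."""
    if queue_depth > mark_threshold and pkt.ecn in (ECT_0, ECT_1):
        pkt.ecn = CE
    return pkt

def receiver_ack(pkt):
    """The receiver echoes congestion back to the sender in its next ACK."""
    ack = Packet()
    ack.ece = (pkt.ecn == CE)
    return ack

def sender_on_ack(ack, cwnd):
    """The sender reacts as if a packet had been dropped, then sets CWR."""
    next_pkt = Packet()
    if ack.ece:
        cwnd = max(1, cwnd // 2)  # classic multiplicative decrease
        next_pkt.cwr = True       # acknowledge receipt of the ECN-Echo
    return cwnd, next_pkt

# One round trip through a congested router:
marked = router_forward(Packet(), queue_depth=42)
cwnd, _ = sender_on_ack(receiver_ack(marked), cwnd=64)
print(cwnd)  # 32 -- the sender backed off without a single packet being lost
```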
L4S is not really an express lane. It is a way for applications to know when their traffic is congested, enabling them to scale DOWN their traffic to alleviate the congestion. Less congestion means less latency.
TCP congestion control relies on packets being dropped to signal that a link is congested.
L4S actually includes an extra bit of information in IP packets that routers can mutate to explicitly say when they are congested.
This means that you (a) don't need to play exponential backoff games, (b) don't need to re-send redundant packets, and (c) don't need big buffers in routers.
You need big buffers in routers because otherwise exponential backoff goes crazy. But when you add big buffers, you get latency, which is another kind of suck.
In order to avoid latency, you need to avoid buffers, which is hard unless you avoid exponential backoff. To avoid exponential backoff, you need routers to actually communicate their congestion, by sending more information. L4S does that by using an unallocated bit in IP packets.
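To make the "no exponential backoff games" point concrete, here's a rough sketch of the scalable congestion response this enables, loosely modeled on DCTCP (RFC 8257), which TCP Prague, the L4S reference congestion controller, builds on. The gain constant and per-RTT loop are illustrative, not actual TCP Prague code.

```python
# A scalable sender cuts its window in proportion to the FRACTION of packets
# marked CE each round trip, instead of halving on any single loss or mark.
G = 1.0 / 16  # EWMA gain for the marking estimate (illustrative)

def per_rtt_update(cwnd, alpha, acked, marked):
    frac = marked / acked if acked else 0.0
    alpha = (1 - G) * alpha + G * frac           # smoothed marking fraction
    if marked:
        cwnd = max(1.0, cwnd * (1 - alpha / 2))  # gentle, proportional decrease
    else:
        cwnd += 1.0  # no marks: resume pushing up immediately, no slow ramp
    return cwnd, alpha

cwnd, alpha = 100.0, 0.0
for marked in (0, 0, 5, 12, 3, 0, 0):  # CE marks seen per 100-packet RTT
    cwnd, alpha = per_rtt_update(cwnd, alpha, acked=100, marked=marked)
    print(f"marked={marked:2d}  alpha={alpha:.3f}  cwnd={cwnd:6.1f}")
```

The shallow sawtooth this produces is why routers running L4S can keep their queues nearly empty instead of buffering against the deep oscillations of loss-based TCP.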
I'll need to read up on this, but one potential misuse of this is to just always/often set that bit on traffic you want to suppress.
That feels much easier and much less heavy-handed than what you can do today. Technically it's a great thing, but I'm just wondering about the misuse aspect.
This signals congestion explicitly, by a device declaring the link congested and asking others to slow down. TCP congestion control works by detecting when packets are dropped because devices can't keep up.
Also, when the congestion signal disappears you can try to push the transfer speed up immediately, rather than slowly ramping back up like with TCP.
The slowest part is likely to be the network component, and there you can look at the experimental deployment by Comcast that others have linked to in the comments. So far I have not heard of moves by other network providers. Apple already has experimental L4S support for QUIC and TCP: https://developer.apple.com/documentation/network/testing_an...
TCP congestion control will eventually still send the same number of bytes down the pipe, just with added latency. After a while an application could notice and adapt, but that delay would be long enough for a user to notice poor service.
They already did that, L4S or no. "Fast lanes" usually come in the form of peering links or colocated cache servers, both of which involve actual new capacity. Prioritizing individual flows of traffic over ordinary transit links based on monetary value is something IP is uniquely ill-suited to do.
> "Not sure where this leads but I guess ISPs will start charging toll for express lanes"
Doubtful IMO. I think latency becomes another competitive differentiator, much like throughput/speed is today. (this is a personal comment but I work at Comcast)
If you are interested in learning more about L4S, there is a webinar series starting today on understandinglatency.com. Some of the authors of L4S, the head of Comcast's L4S field trial, and some critical voices are speaking.
While it's a step in the right direction, there's a problem if there's at least one 'malicious' actor who ignores the congestion feedback and just wants a larger share of bandwidth. Then all the other actors back off and the unfair actor gets what it wants. Unfortunately it is hard for a good actor to know whether the other actors are playing nicely or not. Only if a good actor knows that there's fair queuing can they trust L4S to treat them fairly.
To clarify this - most ISPs implement per customer bandwidth allocation, so a malicious actor should not be able to take share from other customers.
The FQ thing is part of a larger dispute. Without FQ it is already the case that, irrespective of L4S, fairness is implemented by end hosts, and an end host (e.g. a server) can ignore congestion signals and take more than a fair share. This is not an issue which L4S introduces, but some argue that L4S "makes it easier" to take a larger share.
The people behind FQ argue that the network should guarantee fair sharing, but not everyone believes they have chosen the right fairness metric. In particular one of the main proponents of L4S does not, as can be seen from his paper linked here: https://news.ycombinator.com/item?id=38598023
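For anyone unfamiliar, "FQ" here means schedulers like fq_codel that give every flow its own queue and serve them round-robin, so a greedy flow mostly just builds up its own backlog. A toy deficit-round-robin sketch (the flow names, packet sizes, and quantum are invented for illustration):

```python
from collections import deque

QUANTUM = 1500  # bytes each flow may send per round (illustrative)

def drr_round(flows, deficits):
    """One deficit-round-robin pass: each flow spends its quantum or stops."""
    sent = []
    for fid, queue in flows.items():
        deficits[fid] += QUANTUM
        while queue and queue[0] <= deficits[fid]:
            size = queue.popleft()
            deficits[fid] -= size
            sent.append((fid, size))
    return sent

flows = {
    "polite": deque([1200, 1200]),
    "greedy": deque([1500] * 50),  # ignores congestion signals entirely
}
deficits = {fid: 0 for fid in flows}
for _ in range(3):
    print(drr_round(flows, deficits))
# The greedy flow never gets more than its quantum per round, no matter
# how it responds (or doesn't) to congestion signals.
```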
> most ISPs implement per customer bandwidth allocation
This really should have an asterisk (*). There is generally a limit on what an ISP will advertise, and what they will provide (usually ~110% of advertised).
However, it's also extremely common that they overprovision segments on their network.
In the case of a Coax network like Comcast, or Spectrum, they will overprovision the actual last-mile capacity so that _most_ times of the day, you'll receive your ~110% of advertised speeds, but during peak (mid-evening), it's extremely unlikely that you're going to receive even your advertised speeds, usually only ~70%.
In the case of L4S, it would absolutely help "perceptively" resolve these kinds of congestion points, but the "evil take" is that it lets ISPs stretch out their network upgrades even further.
That's a different issue though. I should have been more precise with my terminology. The way it usually works is that there is a scheduler at the bottleneck. For simplicity, assume the customers at a particular bottleneck all have the same advertised rate, but the bottleneck is less than the sum of these. Say there are 10 customers with 100Mbps each but the bottleneck is only 500Mbps. Then if all 10 customers are maxing out their usage, they will each only get 50Mbps, which is less than the advertised rate on their service. What I meant was: playing games with congestion control won't reduce any other customer below that. (There are different options for how the scheduler could work if the customers have different limits; it could just cap them at their limit, or weight their share according to it.)
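That scheduler logic is just a max-min fair share computation. Here's a toy version using the numbers from the example above (the function and its interface are mine, not any real scheduler's):

```python
def fair_shares(capacity, plans):
    """Max-min fair split of a bottleneck: nobody exceeds their plan rate,
    and headroom left by light users is re-split among the rest."""
    shares = {c: 0.0 for c in plans}
    active = set(plans)
    while active and capacity > 1e-9:
        per_head = capacity / len(active)
        # customers whose remaining plan headroom fits under the even split
        capped = {c for c in active if plans[c] - shares[c] <= per_head}
        if not capped:
            for c in active:
                shares[c] += per_head
            capacity = 0
        else:
            for c in capped:
                capacity -= plans[c] - shares[c]
                shares[c] = plans[c]
            active -= capped
    return shares

# Ten 100 Mbps customers behind a 500 Mbps bottleneck:
print(fair_shares(500, {f"cust{i}": 100 for i in range(10)}))
# Everyone gets 50 Mbps, and congestion-control games can't change that.
```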
I guess you are right that bufferbloat problems could pressure ISPs to avoid overprovisioning, and any solution to bufferbloat could take that pressure off. But you can also get bufferbloat and other latency issues without overprovisioning, so that doesn't seem to me like a good reason to hold off on implementing solutions.
What does this mean in practicality as a user? Will e.g. video calls be closer to real-time? There's usually about 0.5-1 second delay which leads to a lot of hiccups and interruptions when speaking with each other. What other application uses will be significantly improved?
This only resolves one source of delay in one ISP's network. Internet video chat is a mess because it's "best effort" at every level.
The need for <3 Mbps bitrates means tough trade-offs between quality, bitrate, CPU time, and latency. Bitrate is the hardest constraint. Commodity laptops have slower CPUs, or if they have 6-core CPUs they keep them clocked down on battery. Hardware-accelerated video encoding is not universal. So quality and latency are sacrificed.
Wi-Fi adds latency, especially when a laptop is on battery.
To deal with NAT, many video chat services relay through cloud servers, adding latency.
It makes new cloud-based apps realistically & reliably workable - think cloud gaming and cloud AR. It also makes interactive stuff like gaming and video conferencing perform a lot better w/o lag. But really anything interactive (user & device) should be better, given how many round trips it currently takes to paint a web page, stream video, or handle an AI assistant (Alexa) interaction.
It is independent of that. The L4S standards change the IP layer to provide a more accurate ECN congestion signal, and any transport protocol can then take advantage of it. There are versions of TCP and QUIC that do so; in theory a version of uTP could be made to do so as well.
However, from a brief look, uTP is designed for background transfers for which latency is not important, so there is no particular need to do so.
I'm confused: everyone here is talking about improvements to video conferencing and streaming, but those applications use UDP instead of TCP, so I don't understand how this will change anything.
I think the key point is the bottleneck link is a shared resource. Many TCP flows traversing the link will drive it to a relatively high queue occupancy which causes higher delay for all traffic regardless of protocol.
Only skimmed the proposal but looks like it isolates traffic using the new protocol by giving it a dedicated buffer, and the explicit congestion notification protocol would then keep the size of this queue much smaller at steady state when the link is saturated.
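Here's a minimal sketch of that dual-queue idea as I read it from the DualQ spec (RFC 9332): packets carrying the L4S codepoint ECT(1) go to a shallow queue that gets CE-marked early and often, everything else goes to the classic queue. The thresholds and queue interface are invented for illustration.

```python
from collections import deque

ECT_1, CE = 0b01, 0b11  # L4S traffic identifies itself with ECT(1)

class DualQueue:
    """Two queues sharing one link: a shallow low-latency queue for L4S
    traffic, and a classic queue with a much deeper drop/mark threshold."""
    def __init__(self, l4s_mark_depth=5):
        self.l4s, self.classic = deque(), deque()
        self.l4s_mark_depth = l4s_mark_depth

    def enqueue(self, pkt):
        if pkt["ecn"] in (ECT_1, CE):
            if len(self.l4s) > self.l4s_mark_depth:
                pkt["ecn"] = CE  # mark aggressively instead of queueing deep
            self.l4s.append(pkt)
        else:
            self.classic.append(pkt)  # a classic AQM would manage this queue

dq = DualQueue()
dq.enqueue({"ecn": ECT_1})  # lands in the low-latency queue
dq.enqueue({"ecn": 0b10})   # classic ECT(0) traffic, classic queue
```

The real DualQ also couples the two queues' marking probabilities so classic and L4S flows still share the link roughly fairly; this sketch skips that part.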
This standard is a change to IP, which TCP and UDP (and transports implemented on top of UDP) are both implemented on. So it applies to all of them. Each transport has to implement its own way of using it.
How does the feedback loop work? I.e. the routers need to tell the source (upstream) to back off, but this uses an IP header bit, so there is no guaranteed return stream...
With TCP the receiver has to send an ACK back to the sender. If the receiver sees that the congestion bit is set on a packet it gets from the sender, it will set the corresponding flag on the ACK packet it sends back to acknowledge that the packet was received. This ACK is sent anyway, since it's part of how TCP's sliding window works.
There are built-in ways for TCP to handle congestion, but they don't let a router signal congestion. The router just has to hope the sender detects the congestion fast enough.
BBR is a congestion control algorithm; L4S provides a congestion signal. So BBR can be updated to take advantage of the L4S signal. Apparently there are plans to do so.
BBRv1 doesn't take ECN into account. BBRv2/v3 do, and it's mentioned in the RFC:
> Scalable variants are under consideration for more recent transport protocols (e.g., QUIC), and the L4S ECN part of BBRv2 [BBRv2] [BBR-CC] is a Scalable congestion control intended for the TCP and QUIC transports, amongst others.
I’m having trouble determining if my 3.1 cable modem supports the draft spec. Is there a way to tell based on serial number? Are there hardware limitations that would prevent older 3.1 modems from receiving a software update to enable support?
Several D3.1 modems support it now but most will need to be updated. Many of the vendors have been testing at quarterly L4S interop events, so I would expect them all to have production grade s/w next year.
I would assume the downvotes are because the comment is GPT-generated. People come here for the community's comments and insights, not for GPT's summarisations, even if you yourself find them useful.
I personally agree with the downvotes - I don't want to see every HN post littered with "this is what GPT-4 has to say about this".
Nothing's stopping users who want an AI summary from feeding the content into their favourite GPT. But it's not contributing anything meaningful to a HN discussion.
The archive link is useful, because it provides you a way to access the page itself. Nobody here has subscriptions to every single service, so if something is paywalled, that link is useful to almost everyone. (And I wish HN did it automatically.) A lazy GPT summary is not that.
There's literally an abstract at the top of this document which provides a summary. If you want more, you can feed it into chatgpt (or many other services) yourself, same as everyone here. There's no reason to post a summary as an unsolicited comment.
I'm pretty sure we often see the abstract posted as a comment for a quick summary, and I don't see it downvoted as hard. Not everyone goes through the link.
But sure, they could have just posted that instead of going through GPT. Doesn't really matter much imo.
IMO, an LLM-generated summary is almost never a useful post without further comment.
I don't trust current LLMs to correctly summarize complicated and nuanced text. Now, if someone with the relevant expertise wanted to carefully read an article, feed it into an LLM for a summary, verify its correctness, and post that, I'd be alright with it.
Or if the summary is interesting in some other way - like is it super wrong? or does it make interesting leaps? or maybe it is startlingly correct? - then sure, share it, but also share why it's interesting.