You do know that this is not how TCP works, right?
It's not that a node "says how many packets it wants to send" and the endpoint decides how many it's going to consume.
In reality, each node just sends its packets and waits for ACKs from the other endpoint. So if A has 1000 packets, it's going to try to send them, all of them. So will B and C.
The problem is that if the endpoint can only handle 1000 packets per unit of time, some of them will be dropped. But there's no way for an endpoint to say "Oh, A is clogging the pipe, I'll drop from it". The endpoint just gets what it can and ACKs it.
As the article makes pretty clear, the problem is not bandwidth, it's congestion volume.
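A toy simulation (not real TCP, which is window-based) of the point above: three senders each offer 1000 packets, the receiver can only process 1000 per tick, and it simply ACKs whatever arrives in order. The names `simulate`, A/B/C, and the random interleaving are all illustrative assumptions; the only numbers taken from the comment are the 1000-packet figures.

```python
# Toy model, NOT real TCP: the receiver has no per-sender policy.
# It ACKs whatever it can handle; drops fall on whichever packets
# happen to overflow, roughly in proportion to each sender's share.
import random

def simulate(offered, capacity, seed=0):
    rng = random.Random(seed)
    # Interleave every offered packet, as if they arrive mixed on the wire.
    arrivals = [sender for sender, n in offered.items() for _ in range(n)]
    rng.shuffle(arrivals)
    acked = {s: 0 for s in offered}
    for sender in arrivals[:capacity]:  # receiver handles only `capacity`
        acked[sender] += 1              # ...and ACKs each one it got
    dropped = {s: offered[s] - acked[s] for s in offered}
    return acked, dropped

# A, B and C each try to send all 1000 of their packets.
acked, dropped = simulate({"A": 1000, "B": 1000, "C": 1000}, capacity=1000)
```

The point of the sketch is that every sender loses packets; the receiver never singles out "the one clogging the pipe".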
Generally speaking, TCP is an endpoint protocol that runs over IP. Switches don't need to know anything about TCP to route TCP traffic. Anyway, I wasn't talking about TCP; I was referring to the simplest way for a high-bandwidth ISP to deal with oversubscribing their network without pissing people off. They don't need to know anything about the IP packets you're sending, be they UDP, TCP, or whatever.
His suggestion is that people who use less bandwidth on average should get better bandwidth when things get congested, but your network is only affected by the last few seconds of traffic, so letting people have all the bandwidth for new connections seems like a bad idea. (Comcast was doing something like this and it ended up messing with a lot of low-bandwidth apps.)
PS: Networking equipment has internal buffers, so it can buffer 20 and only 20 packets from each user / network and then round-robin transmission of those packets down the line. The advantage is that if you're sending a small stream of data you're going to have low latency, and if you're trying to flood the pipe, you and only you will see a high number of packets dropped.
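A minimal sketch of that per-user round-robin buffering. The 20-packet depth comes from the comment above; the class name `FairQueue`, the tail-drop policy, and the "small stream vs. flooder" scenario are my illustrative assumptions, not a description of any particular piece of equipment.

```python
from collections import deque

BUF = 20  # per-user buffer depth (the "20 and only 20" above; assumed tail drop)

class FairQueue:
    """Toy round-robin scheduler: one small FIFO per user, overflow is dropped."""
    def __init__(self):
        self.queues = {}   # user -> deque of buffered packets
        self.dropped = {}  # user -> count of packets tail-dropped

    def enqueue(self, user, packet):
        q = self.queues.setdefault(user, deque())
        if len(q) >= BUF:
            self.dropped[user] = self.dropped.get(user, 0) + 1  # buffer full
        else:
            q.append(packet)

    def transmit_round(self):
        """Send at most one packet per user, visiting users round-robin."""
        sent = []
        for user, q in self.queues.items():
            if q:
                sent.append((user, q.popleft()))
        return sent

fq = FairQueue()
for i in range(5):       # a small, polite stream
    fq.enqueue("small", f"s{i}")
for i in range(100):     # someone trying to flood the pipe
    fq.enqueue("flood", f"f{i}")
```

With these numbers the flooder alone eats the drops (100 offered, 20 buffered), while the small stream keeps all 5 packets and still gets a slot every round, which is the low-latency property the comment describes.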