Hacker News
A Fairer, Faster Internet Protocol (ieee.org)
23 points by naish on Dec 4, 2008 | hide | past | favorite | 9 comments



Seems to me the simplest solution at the local level, with an oversubscribed pipe, is round-robin transmission per IP address. If A wants to send 1000 packets, B wants 100, and C wants 10, then A gets 10 while B gets 10 and C gets 10 (C is done); then A gets 90 while B gets 90 (B is done); then A gets the remaining 900. This ignores how the users at A are using the network and still gives low-latency bandwidth to those applications that need it.
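A rough Python sketch of that per-IP round-robin idea (the function name and the 10-packet quantum are my own invention, not how any real ISP box works):

```python
from collections import deque

def round_robin(demands, quantum=10):
    """Serve each IP `quantum` packets per pass until its demand is met.
    Returns the transmission order as (ip, packets_sent) bursts."""
    queue = deque(demands.items())  # (ip, packets still wanted)
    schedule = []
    while queue:
        ip, remaining = queue.popleft()
        sent = min(quantum, remaining)
        schedule.append((ip, sent))
        if remaining > sent:
            queue.append((ip, remaining - sent))  # back of the line
    return schedule

# A wants 1000 packets, B wants 100, C wants 10:
bursts = round_robin({"A": 1000, "B": 100, "C": 10})
# First pass: A, B, and C each get 10, so C's small burst finishes
# immediately instead of waiting behind A's flood.
```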

PS: I don't think this is how cable companies handle their local loop.


You can't really do this on a packet-switched network. But in essence, what you have just described is called time-division multiplexing (TDM). This is how phone networks have operated for over 40 years. In fact, before the growth of the internet, this is what everyone thought of as 'switching.'

There are economic reasons this doesn't work for the consumer internet. Providers are able to hit a consumer price point by over-subscription, but they compete and advertise based on the top burstable speed they can deliver to an average customer with a low overall level of usage. Happily this corresponds to the desires of most consumers who want their short bursts of usage to be delivered quickly.


That's called fair queuing; it's pretty cool: http://en.wikipedia.org/wiki/Fair_queuing

It's actually possible to implement fair queuing without knowing bandwidth demands or even the lengths of the queues using deficit round robin.

AFAIK there hasn't been good fair queuing equipment for ISPs to use until very recently, and many ISPs have been suckered into buying much more expensive DPI equipment instead.


You do know that this is not how TCP works, right?

It's not that a node announces how many packets it wants to send and the endpoint decides how many it will accept.

In reality, each node just sends the packet and waits for the ACK from the other endpoint. So, if A has 1000 packets, it's going to try to send them, all of them. So will B and C.

The problem is, if the endpoint is only able to handle 1000 packets per unit of time, some of them will be dropped. But there is no way for an endpoint to say "Oh, A is clogging the pipe, I'll drop from it". The endpoint just gets what it can and ACKs.
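A toy simulation of that point (the function name, capacity, and seed are invented for illustration): every sender offers everything at once, and the bottleneck forwards what it can and drops the rest, blind to who sent what.

```python
import random

def tail_drop(offered, capacity=1000, seed=1):
    """Every sender offers all of its packets at once; the link
    forwards up to `capacity` of them and drops the overflow, with
    no idea (or care) which sender any given packet came from."""
    rng = random.Random(seed)
    packets = [ip for ip, n in offered.items() for _ in range(n)]
    rng.shuffle(packets)  # arrival order at the bottleneck is arbitrary
    return packets[:capacity], packets[capacity:]

delivered, dropped = tail_drop({"A": 1000, "B": 100, "C": 10})
# 110 packets are dropped, and they can belong to any sender --
# even C, who only offered 10 in the first place.
```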

As the article makes pretty clear, the problem is not bandwidth, it's congestion volume.


Generally speaking, TCP is an endpoint protocol that runs over IP; switches don't need to know anything about TCP to route TCP traffic. Anyway, I was not talking about TCP. I was referring to the simplest way for a high-bandwidth ISP to deal with oversubscribing their network without pissing people off. They don't need to know anything about the IP packets you're sending, be they UDP, TCP, or whatever.

His suggestion is that people who use less bandwidth on average should get better bandwidth when things get congested, but your network is only affected by the last few seconds of traffic, so giving new connections all the bandwidth seems like a bad idea. (Comcast was doing something like this, and it ended up messing with a lot of low-bandwidth apps.)

PS: Networking equipment has internal buffers, so it can buffer 20 and only 20 packets from each user / network and then round-robin transmission of those packets down the line. The advantage of this is that if you're sending a small stream of data you're going to get low latency, and if you're trying to flood the pipe, you and only you will see a high number of packets dropped.
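A sketch of that per-user buffering (the 20-packet cap comes from the comment above; the function and names are invented): the flooder's overflow is dropped at its own queue, so it alone pays for the flood.

```python
from collections import deque

BUFFER_LIMIT = 20  # per user, as described above

def enqueue(buffers, user, n_packets):
    """Accept up to BUFFER_LIMIT packets into `user`'s queue;
    anything beyond the cap is dropped. Returns the drop count."""
    q = buffers.setdefault(user, deque())
    accepted = min(n_packets, BUFFER_LIMIT - len(q))
    q.extend(range(accepted))
    return n_packets - accepted

buffers = {}
flood_drops = enqueue(buffers, "flooder", 100)  # 80 of 100 dropped
voip_drops = enqueue(buffers, "voip", 5)        # all 5 accepted
```

The drain side would then round-robin one packet per user per slot, so the small stream never waits behind more than one round of other users' packets.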


People love high numbers. They don't want to hear that they bought cable but their packets get the same priority as everyone else's. They will sue, saying "You promised me 1 Mbps and I can't reach that."

Remember how Time Warner doubled (or more) their up/down speeds, but put major caps in place, so users could hit their monthly quota in a matter of hours if downloading at the advertised rates?

Prioritize by pay. If I pay more, my packets get higher priority than someone else's, but as long as no one starves, the internet will be fine. If the server is 50% free and I have enough bandwidth to use the other 50%, why not? HOWEVER, if someone needs it, they can bump down my usage.

What I never understood is why people like computers to be IDLE. We have a server that runs at 50% capacity, and we want 50% capacity free just in case... No: we want a system where every bit of capacity is used, and low-priority tasks get bumped down when resources are needed. It makes no sense; we are not dealing with water buckets, where filling one prevents more water from going in. HOWEVER, upper management likes to see that they are always "safe", even if "safe" is stupid, expensive, pointless, and not actually safe.
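That "use everything, preempt the low-priority stuff" model is easy to sketch (task names and numbers are made up): strict priority, but work-conserving, so nothing sits idle while anyone has demand.

```python
def allocate(tasks, capacity):
    """Fill `capacity` from the highest-priority task down (lower
    number = higher priority). Leftover capacity always goes to
    someone, and low-priority work is squeezed out first when
    demand rises."""
    alloc = {}
    for _prio, name, demand in sorted(tasks):
        alloc[name] = min(demand, capacity)
        capacity -= alloc[name]
    return alloc

# Paying customer wants 60 units, best-effort wants 60, capacity is 100:
print(allocate([(0, "paid", 60), (1, "best_effort", 60)], capacity=100))
# {'paid': 60, 'best_effort': 40}
```

The paying customer always gets its full 60; best-effort soaks up whatever is left instead of that capacity sitting "safe" and idle.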


The article is too dense and the font too tiny. Anyone have a two sentence synopsis?

However, I love the pixel art. We need more pixel art in this world!


Check View -> Page Style if you're using Firefox.


hold down ctrl & scroll your middle mouse button upwards




