I'm a developer and not a networking guy, so I'm curious; is dropping packets like this a good idea or is there a better approach? It would seem to be a bad idea to me.
Traffic shaping implies dropping packets. You could try to buffer some of them to send them later when bandwidth is available, but you would run out of buffer space very quickly. Buffering and flow control are best left to the endpoints. Packet loss is common on the internet and protocols are designed to handle it well.
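To make that concrete: a userspace shaper is typically just a token bucket sitting in front of the send path, and when a packet doesn't fit the remaining budget it gets dropped rather than queued forever. A rough sketch (not from the article; the name bucket_t and the byte-based accounting are my own illustration):

    #include <stdbool.h>
    #include <stddef.h>
    #include <time.h>

    /* Hypothetical token bucket: tokens are bytes of allowed traffic. */
    typedef struct {
        double rate_bps;       /* refill rate, bytes per second */
        double burst;          /* bucket capacity, bytes */
        double tokens;         /* current budget */
        struct timespec last;  /* last refill time */
    } bucket_t;

    /* Returns true if the packet may be forwarded, false if it should be
     * dropped -- the endpoints' TCP will notice and retransmit/back off. */
    bool shaper_allow(bucket_t *b, size_t pkt_len)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        double dt = (now.tv_sec - b->last.tv_sec)
                  + (now.tv_nsec - b->last.tv_nsec) / 1e9;
        b->tokens += b->rate_bps * dt;
        if (b->tokens > b->burst)
            b->tokens = b->burst;
        b->last = now;

        if (b->tokens < (double)pkt_len)
            return false;          /* over budget: drop, don't buffer */
        b->tokens -= (double)pkt_len;
        return true;
    }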
In this example I'd be more worried about the additional latency/overhead of copying all the packets to user space and back.
I don't think latency is the primary issue here, as you're probably really far from your users and the hop is a mild cost compared to transiting the internet.
In my mind the bigger issue is throughput: packets per second are fundamentally limited by this approach (tun devices, at least in linux, generally return a single packet per read). Ted even alludes to a 25% reduction in traffic, though it's unclear to me what the link speed is (likely 100M or 1G). In my experience, even with the latest and greatest in linux, you're not going to see reasonable packet rates, and you're still better off doing this type of stuff in the kernel.
That said, tun devices are super cool, and a great tool for anyone to have if they need to do weird stuff like low-level network testing, so I think this is a really cool post. You can read more about it for linux here: https://www.kernel.org/doc/Documentation/networking/tuntap.t...
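For a concrete picture of the single-packet-per-read behaviour, the userspace side described in that doc boils down to something like this (a minimal sketch with error handling omitted; the interface name "tun0" is just a placeholder):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    int main(void)
    {
        int fd = open("/dev/net/tun", O_RDWR);

        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI;     /* raw IP packets, no extra header */
        strncpy(ifr.ifr_name, "tun0", IFNAMSIZ);
        ioctl(fd, TUNSETIFF, &ifr);

        unsigned char pkt[2048];
        for (;;) {
            /* One read() == one packet, so the packet rate is bounded by how
             * fast you can turn this syscall loop, unlike in-kernel shaping. */
            ssize_t n = read(fd, pkt, sizeof(pkt));
            if (n <= 0)
                break;
            /* ... inspect the packet, then forward, delay, or drop it ... */
        }
        return 0;
    }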
Thanks for the response. I knew that TCP handles dropped packets, but it seems wrong to rely on an error-recovery mechanism as the normal case. Is this also how more traditional traffic shaping works?
I presume that under circumstances such as this you end up generating a lot more network traffic than you need to, because of the overhead in retransmitting failed packets.
I know you couldn't buffer indefinitely because your buffer would overflow, but could you maybe delay the acknowledgements of packets, e.g. to simulate a slower link?
Not saying there's anything wrong with the approach or article, merely curious from the perspective of a non-networks guy.
In general TCP deals with dropped packets (and unstable links, be that due to traffic shaping or other causes) -- while e.g. UDP assumes some packets will just vanish -- you have to (should) deal with it yourself...
I've also always thought it would be cool to have a drag-and-drop connect-the-dots UI like OS X's Quartz Composer for network shaping stuff. It probably would be manageable to write one for this system.
Are there any similar systems that give you packet-level control instead of byte-level control?