
The only sensible prioritization you can do is to shift connections along the axis between (high latency, high throughput) and (low latency, low throughput). This basically reduces to a scheduling problem: ISPs could deduce the type of traffic simply by examining traffic volume patterns. That would constitute a "natural law" of networking that application writers could rely on.
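To make that concrete, here is a minimal sketch (Python, purely illustrative, not any real ISP's algorithm) of a scheduler that infers a flow's class from its recent sending rate and serves the lightest flows first, so sparse latency-sensitive traffic jumps ahead of bulk transfers:

    import time
    from collections import defaultdict, deque

    class RateAwareScheduler:
        def __init__(self, window_seconds=1.0):
            self.window = window_seconds
            self.samples = defaultdict(deque)  # flow_id -> (timestamp, size) pairs
            self.queues = defaultdict(deque)   # flow_id -> queued packets

        def enqueue(self, flow_id, packet, size):
            now = time.monotonic()
            history = self.samples[flow_id]
            history.append((now, size))
            # Forget samples that fall outside the measurement window.
            while history and now - history[0][0] > self.window:
                history.popleft()
            self.queues[flow_id].append(packet)

        def rate(self, flow_id):
            # Bytes per second observed for this flow over the window.
            return sum(size for _, size in self.samples[flow_id]) / self.window

        def dequeue(self):
            # Serve the flow with the lowest observed rate first: small,
            # bursty flows get low latency; heavy flows still drain, just later.
            candidates = [f for f, q in self.queues.items() if q]
            if not candidates:
                return None
            lightest = min(candidates, key=self.rate)
            return self.queues[lightest].popleft()

A real shaper would also need to bound how long a heavy flow can be starved, but the point stands: priority falls out of observed behaviour rather than declared intent.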

If you want your application to communicate something that needs to be fast, like VoIP or network gaming datagrams, the amount of data needs to be small. And if your application is downloading large files, it can expect to be prioritized below everything else on the network. Building a video service? Let the user choose whether they want to wait while the high-res video buffers or start watching immediately at a lower quality.
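For the video case, that choice can be surfaced directly to the user. A toy sketch (hypothetical function and renditions, illustrative bitrates only):

    # (label, bitrate in kbit/s) - illustrative numbers, not any real service's
    RENDITIONS = [("240p", 400), ("720p", 2500), ("1080p", 5000)]

    def choose_startup_plan(user_wants_to_wait: bool, buffer_target_s: int = 10):
        if user_wants_to_wait:
            label, bitrate = RENDITIONS[-1]             # highest quality
            return {"rendition": label,
                    "prebuffer_kbit": bitrate * buffer_target_s}  # buffer first, then play
        label, bitrate = RENDITIONS[0]                  # lowest quality
        return {"rendition": label, "prebuffer_kbit": 0}          # play immediately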

Any other kind of throttling or prioritization is going to skew the internet experience in a fundamental way.




Yeah, that's the only approach that actually makes sense. If QoS is just a matter of asking "How fast do you want your traffic to be?", then of course everyone is going to say "As fast as possible!", which means the ISP is forced into deciding the QoS itself, badly.

If you make it a trade-off, then end-users can decide.



