Generally speaking, centralization will be more efficient. But access to a centralized distribution channel can be prohibitive for many reasons, cost and censorship being the foremost in my mind.
I'm doubtful of this premise, especially in the context of peering disputes. Here in the States, Netflix had/has serious issues with Comcast/Verizon/CenturyLink not peering with their upstream providers sufficiently to handle the number of video streams that needed to be carried.
This was not an issue with the last-mile connection itself, but a middle-mile peering problem. From the testing I have done, on certain carriers that play games like this (Deutsche Telekom in Germany, CenturyLink, Verizon, and others in the US, and most Asian/South American ISPs), torrenting updates to users is significantly faster than an HTTPS download from Amazon, Google Cloud, and the like. Once you get a handful of seeders internal to the network, the other downloaders in that network start to get the torrent significantly faster than they would over traditional HTTPS.
If the content isn't extremely time-sensitive (e.g. live sports, video calls) and is of noteworthy size, using P2P tech to speed up downloads is not a bad idea.
Theoretically BitTorrent should be more efficient, since nearby peers can send data directly (e.g. within an ISP or regional node), whereas a central server always has to transfer data over a longer distance and involve more infrastructure.
E.g. with my ISP, the bottleneck isn't in the last mile (that's all GPON fiber) but rather in the national network and their transit to the internet. If more of the traffic were local-to-local, it would reduce the amount that has to go cross-country or out onto the internet.
Unless BitTorrent understands the topology of the network, it will never be more efficient than a structure that takes advantage of that topology.
E.g. Netflix makes servers that can be distributed to ISPs and serve content to those consumers directly from within the ISP, not over the ISP's outgoing link.
E.g. #2: multicast clients register to receive multicast streams, and the routers know how to propagate them.
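For concreteness, here's a minimal sketch of that registration step using the standard sockets API: the receiver issues an IGMP join for a group address, and from then on the routers replicate the stream so that each link carries only one copy. The group address and port below are made up for the example.

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004  # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# The IGMP join: tell the local router we want this group. Routers between
# us and the source then forward the stream, one copy per link, regardless
# of how many listeners sit behind each link.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(f"{len(data)} bytes from {addr}")
```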
I think it can be done simpler than that. When the client/server has a choice of multiple peers to connect to, it can pick the ones with the lowest latency. That way you end up picking peers that are closer, which helps lessen the load on the overall network. Not perfect, but a pretty good 90% solution.
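A rough sketch of what I mean, with a made-up RTT probe (a real client would measure latency on the connections it already has rather than timing fresh TCP connects):

```python
import socket
import time

def rtt_ms(host, port=6881, timeout=2.0):
    # Crude latency probe: time a TCP connect to the peer.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return float("inf")  # unreachable peers sort last

def pick_nearby_peers(peers, n=30):
    # Keep the n lowest-latency peers. Nearby peers (same ISP, same region)
    # tend to answer fastest, so this biases traffic toward local links.
    return sorted(peers, key=lambda p: rtt_ms(*p))[:n]

# peers would come from the tracker/DHT as (host, port) pairs, e.g.:
# nearby = pick_nearby_peers([("192.0.2.10", 6881), ("198.51.100.7", 6881)])
```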
I can make a very low latency connection that only delivers at most one packet every 10 seconds. It responds in 5 milliseconds, say, but it won't let you have another packet for 10 seconds.
Or I can have a high latency connection that delivers a huge amount of bandwidth. It might take 10 seconds to start the firehose, but once it starts, it's a firehose. This is, ironically, the same problem the internet has with "bufferbloat", and that high-bandwidth radio links in space have.
A topological solution would be counting the number of hops. There are internet protocols that do this; I just don't think that BT takes advantage of them.
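As a sketch of what using hop counts could look like: every IP packet carries a TTL that is decremented at each router, so if you assume the sender started from one of the common initial values (64, 128, or 255) you can estimate how many hops away it is and prefer the closest peers. This is just an illustration; as far as I know BitTorrent clients don't do this.

```python
def estimate_hops(received_ttl: int) -> int:
    # Senders typically start at TTL 64, 128, or 255; each router on the
    # path decrements it by one, so the gap to the nearest common initial
    # value is a decent guess at the hop count.
    for initial in (64, 128, 255):
        if received_ttl <= initial:
            return initial - received_ttl
    return 0

def pick_topologically_close(peers_with_ttl, n=30):
    # peers_with_ttl: list of ((host, port), ttl_seen_on_their_packets)
    ranked = sorted(peers_with_ttl, key=lambda x: estimate_hops(x[1]))
    return [peer for peer, _ in ranked][:n]
```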
When bandwidth was a big concern, BitTorrent trackers prioritized peers geographically, and ISPs were using retrackers to force peers within the same ISP to save bandwidth. It was very efficient: each torrent was downloaded pretty much just once per ISP, from the fastest sources. Good times.
But none of this is necessary: fast clients simply send more data and dominate download bandwidth. It doesn't matter why they are fast, whether they are simply very close or just on a path with more capacity. Everything is naturally efficient.
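A rough sketch of how that "natural" selection could look in client terms, with hypothetical bookkeeping and no topology data at all:

```python
def best_peers(bytes_delivered, window_seconds, n=4):
    # bytes_delivered: dict of peer -> bytes received in the last window.
    # Whoever delivered the most, for whatever reason (closer, fatter pipe),
    # gets the next piece requests.
    rate = {peer: b / window_seconds for peer, b in bytes_delivered.items()}
    return sorted(rate, key=rate.get, reverse=True)[:n]
```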
BitTorrent is not an effective means of serving data from a 4G device. In fact, if anything, bandwidth on 4G arguably costs more than bandwidth from your fiber ISP.
We use services like YouTube and Instagram to push our data to others. And the efficiency from the phone's perspective is the same as multicast, because the phone sends data exactly once over the upstream link.
You could say, "Well, duh, nobody should use BitTorrent over 4G because of bandwidth issues." And that's exactly the point. If BitTorrent were so efficient, we'd be using it on 4G instead of sending it to YouTube, etc.
You can't have it both ways; in other words, you can't say, "BitTorrent is the most efficient protocol man ever created, but only if you don't use it on 4G."
On the other hand, multicast with fountain codes (e.g. RaptorQ) sure seems like a super efficient approach that might work everywhere.
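To illustrate the fountain-code idea (this is a toy, not RaptorQ: it uses uniform-random XOR combinations and Gaussian elimination, whereas RaptorQ uses a structured code with near-linear decoding): the sender can emit an endless stream of encoded symbols, and any receiver that collects roughly k independent ones, in any order and from any mix of senders, can reconstruct the k source blocks. That's what makes it pair so well with multicast: no retransmission round-trips, and late joiners just keep listening.

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_symbol(blocks, rng):
    # One encoded symbol: the XOR of a random non-empty subset of the
    # source blocks, tagged with that subset as a bitmask.
    k = len(blocks)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(k)
    payload = bytes(len(blocks[0]))
    for i in range(k):
        if (mask >> i) & 1:
            payload = xor(payload, blocks[i])
    return mask, payload

def try_decode(symbols, k):
    # Gaussian elimination over GF(2): succeeds once the received symbols
    # span all k source blocks, no matter which symbols were lost.
    pivots = {}  # lowest set bit of a reduced row -> (mask, payload)
    for mask, payload in symbols:
        while mask:
            low = mask & -mask
            if low in pivots:
                pmask, ppayload = pivots[low]
                mask ^= pmask
                payload = xor(payload, ppayload)
            else:
                pivots[low] = (mask, payload)
                break
    if len(pivots) < k:
        return None  # not enough independent symbols yet
    # Back-substitute from the highest pivot down so each row is one block.
    solved = {}
    for low in sorted(pivots, reverse=True):
        mask, payload = pivots[low]
        rest = mask ^ low
        while rest:
            hi = rest & -rest
            payload = xor(payload, solved[hi])
            rest ^= hi
        solved[low] = payload
    return [solved[1 << i] for i in range(k)]

# Demo: the receiver just collects symbols until decoding succeeds.
rng = random.Random(0)
blocks = [bytes([i]) * 4 for i in range(8)]  # 8 source blocks of 4 bytes
received, decoded = [], None
while decoded is None:
    received.append(encode_symbol(blocks, rng))  # could come from anyone
    decoded = try_decode(received, len(blocks))
assert decoded == blocks
```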
> I can make a very low latency connection that only delivers at most one packet every 10 seconds. It responds in 5 milliseconds, say, but it won't let you have another packet for 10 seconds.
That's fine. So you connect to that peer and get one packet every ten seconds. You also connect to the other 1000 peers with the lowest latency, some of which are likely to be faster and in any event will be fast in aggregate, and none of which require traversing intercontinental fiber.
> Netflix makes servers that can be distributed to ISPs
Distributing servers to every ISP doesn't seem very efficient. It may work in the US where you can count the number of ISPs on one hand, but where I live there are 100+.
> Distributing servers to every ISP doesn't seem very efficient.
Actually, that's the way most CDNs work. Instead of having 40 hops to your customer, you may only need 10, because the first 30 hops are from you to your server.
Fewer congestion issues, higher uptime, lower latency, and usually more bandwidth.
> Has multicast ever worked?
In my experience with military networks, yes. Especially when one multicast packet can be received by 20 listeners without repeating the packet 20 times on the physical layer (particularly over radio links).
Sending data via BitTorrent would saturate links you wouldn't want saturated.
IPTV services are usually based on multicast and seem to work quite efficiently. But it's obviously useless for services like Netflix/YouTube/etc., because you have to be sending the same content to everyone at the same time.
No, multicast for IPTV is not cost-efficient for ISPs. I don't think there was ever a time when multicast was anything more than hardware vendors trying to profit off of ISPs with broken promises of efficiency.
The client can understand the topology and choose based on it. Back when we had different caps for international/national/in-ISP traffic, there was a fork of eMule that intentionally prioritized downloading from IP ranges in the same ISP, then the same country, before pulling from outside.
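A rough sketch of that kind of prioritization (the prefixes here are documentation ranges used as placeholders; a real client would load the ISP's published ranges or use an ASN/GeoIP database, with a second tier for same-country peers):

```python
import ipaddress

# Hypothetical "my ISP" prefixes; stand-ins for whatever the real client knew.
MY_ISP = [ipaddress.ip_network(p) for p in ("203.0.113.0/24", "198.51.100.0/24")]

def peer_priority(ip: str) -> int:
    # Lower sorts first: in-ISP peers before everyone else.
    addr = ipaddress.ip_address(ip)
    return 0 if any(addr in net for net in MY_ISP) else 1

peers = ["8.8.8.8", "198.51.100.7", "203.0.113.42"]
peers.sort(key=peer_priority)  # in-ISP peers get tried first
```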
That doesn't make it more efficient. It might make it faster at the expense of using your ISP's outgoing bandwidth multiple times over.
The most efficient network distribution is one where the data only travels each network link once for all destinations. This is how multicast works, for example.