
No, fuzz is 100% wrong. Multicast makes more effective use of fiber, where each new fiber link branching off provides its own private pool of bandwidth, added on to all the others.

Once you get to radio spectrum transmission, each pair of endpoints competes with every other pair in range for a strictly limited pool of bandwidth. Adding another terminal takes capacity away from all the others. A stream going up to your satellite and then distributed to 100 terminals by multicast burns 100x as much of the fixed available bandwidth as any one terminal gets.

The only way to get any benefit from batching delivery over radio spectrum is via broadcast. For a packet network, you have the extra chore of making those broadcast packets actually useful to as many terminals as possible. In practice that requires caching of broadcast content in the terminal, so that when it is actually asked for it is already there.




>Once you get to radio spectrum transmission, each pair of endpoints competes with every other pair in range for a strictly limited pool of bandwidth. Adding another terminal takes capacity away from all the others. A stream going up to your satellite and then distributed to 100 terminals by multicast burns 100x as much of the fixed available bandwidth as any one terminal gets.

Multicast is indeed problematic for wireless networks, even beyond what you describe (e.g. FEC, ACK storming, retransmits, etc). But if the PHY layer is using OFDMA or similar, it's entirely plausible to dedicate a subcarrier to multicast traffic. This doesn't solve the O(N) terminal problem you describe, but it blunts the impact on concurrent transmission of unrelated unicast traffic while the multicast delivery is happening.
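To make that concrete, here's a minimal Python sketch of the idea (the names and frame shapes are invented, not any real chipset API): pin multicast onto one reserved resource unit and round-robin unicast over the rest.

    # Sketch of an OFDMA-style scheduler that pins multicast onto a
    # reserved resource unit (RU) so unicast keeps the remaining RUs.
    MULTICAST_RU = 0             # RU index reserved for multicast frames
    UNICAST_RUS = [1, 2, 3]      # remaining RUs shared by unicast traffic

    def schedule(frames):
        """Assign each queued frame to an RU for the next TX window."""
        assignments, rr = [], 0
        for frame in frames:
            if frame["dst"] == "multicast":
                assignments.append((MULTICAST_RU, frame))
            else:
                assignments.append((UNICAST_RUS[rr % len(UNICAST_RUS)], frame))
                rr += 1
        return assignments

    frames = [{"dst": "multicast", "id": 1},
              {"dst": "sta-17", "id": 2},
              {"dst": "sta-42", "id": 3}]
    for ru, f in schedule(frames):
        print(f"RU {ru}: frame {f['id']} -> {f['dst']}")

A multicast burst then only ever costs one RU's worth of capacity, which is the mitigation described above.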

There's a fairly large body of academic work on OFDMA multicast specifically. And though I think your note on caching is a necessary component of this (à la Open Connect), I still think there's a role multicast can play here.


FEC is not a problem, but the solution to a problem. FEC is "forward error correction": sending a few percent of redundant data, enough that the receiver can fix up corrupted packets on its own. (FEC was once a patent problem, but those patents have mostly expired.)
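As a toy illustration of the principle (real deployments use stronger codes like Reed-Solomon or RaptorQ, not plain XOR): one parity packet per block lets the receiver repair any single lost packet with no resend.

    # Minimal FEC sketch: one XOR parity packet per block of equal-length
    # packets lets the receiver rebuild any single missing packet.
    def xor_parity(packets):
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    block = [b"pkt-one.", b"pkt-two.", b"pkt-3333"]   # equal-length packets
    parity = xor_parity(block)

    # Receiver got packets 0 and 2 plus the parity; packet 1 was lost.
    recovered = xor_parity([block[0], block[2], parity])
    assert recovered == block[1]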

ACK storms aren't a thing with multicast; if FEC doesn't get you what you need, you wait a random time and send a NAK up the line, and the sender re-sends the requested bit. If the bit you were about to request shows up again before you get to asking, you discard the NAK you would have sent. So, if a lot of receivers miss the same packet, only one NAK usually goes back.
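Here's a rough Python sketch of that suppression logic, in the spirit of NORM-style reliable multicast (RFC 5740); the backoff window is arbitrary.

    import random

    class Receiver:
        def __init__(self):
            self.pending_naks = {}          # seq -> time the NAK would fire

        def on_gap(self, seq, now):
            # Random backoff spreads NAKs out so usually only one is sent.
            self.pending_naks[seq] = now + random.uniform(0.0, 0.5)

        def on_repair(self, seq):
            # The re-send showed up before our timer fired: discard the NAK.
            self.pending_naks.pop(seq, None)

        def due_naks(self, now):
            due = [s for s, t in self.pending_naks.items() if t <= now]
            for s in due:
                del self.pending_naks[s]
            return due                      # these actually go up the line

    r = Receiver()
    r.on_gap(seq=41, now=0.0)
    r.on_repair(41)                         # repair arrived first
    print(r.due_naks(now=1.0))              # -> [] : NAK suppressed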

The Starlink transceivers are SDRs and extremely versatile. So, instead of playing games with modulation and subcarriers, the system probably uses its whole band on each packet, effectively broadcasting all packets, and receivers just pick off whichever packets are meaningful to them. The ones actually intended for more than one terminal are the "broadcast" packets, differing only in how they are processed by the terminal.

The actually tricky thing is that broadcast streams cannot be like the regular end-to-end packet streams, where identical content is delivered in completely different packets. You need a higher-level negotiation identifying streams of interest by name and playback position, so that the uplink terminal knows when it is sending (or will have sent) the same stuff twice. If it decides some of its traffic should be broadcast instead, it can substitute packets from an equivalent, shared stream. The receiving terminal, meanwhile, is told that the stream it was getting has ended, and that it should start extracting and caching content from the substituted broadcast stream.
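A hypothetical sketch of what that negotiation could look like (the message names and fields are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class StreamRequest:
        terminal: str
        content_name: str        # e.g. "show-s01e03/1080p"
        position: float          # playback offset in seconds

    active = {}                  # content_name -> stream state

    def handle_request(req):
        entry = active.get(req.content_name)
        if entry is None:
            # First viewer: serve an ordinary end-to-end stream.
            active[req.content_name] = {"mode": "unicast",
                                        "stream_id": 100 + len(active)}
            return ("UNICAST", active[req.content_name]["stream_id"])
        # Same content requested again: promote it to a shared broadcast
        # stream and tell viewers to cache from it at their own offset.
        entry["mode"] = "broadcast"
        return ("SUBSTITUTE", entry["stream_id"], req.position)

    print(handle_request(StreamRequest("dishA", "show-s01e03/1080p", 0.0)))
    print(handle_request(StreamRequest("dishB", "show-s01e03/1080p", 4.2)))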

The usual DRM nonsense will make this annoyingly more complicated: the participating broadcast receivers get a copy of a key to decrypt the broadcast traffic. Everybody else throws it away as indistinguishable from noise, like all traffic not meant for them.
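A minimal sketch of that key arrangement, assuming a single shared group key (real DRM layers on key rotation, per-device entitlements, and so on):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    group_key = AESGCM.generate_key(bit_length=128)  # given to subscribers only
    nonce = os.urandom(12)
    ciphertext = AESGCM(group_key).encrypt(nonce, b"broadcast segment", None)

    # An entitled terminal decrypts; everyone else lacks group_key, so the
    # payload is indistinguishable from noise and is thrown away like any
    # other traffic not meant for them.
    print(AESGCM(group_key).decrypt(nonce, ciphertext, None))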


> ACK storms aren't a thing with multicast; if FEC doesn't get you what you need, you wait a random time and send a NAK up the line, and the sender re-sends the requested bit

Interesting. That has not been my experience building an 802.11-over-mmWave ISP over the last 7 years. BUM (broadcast, unknown-unicast, multicast) traffic is routinely an issue, one we measure frequently, and multicast ACK storming is absolutely a side effect when things go bad. This is usually because multicast, at least with 802.11, has to be sent at the lowest possible MCS, since you don't know the rate capabilities of the STAs on the other side of the fence (also true for broadcast), and as such the delivery is inherently unreliable. When things are bad, ACK storms are the measurable hallmark of those events.
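For a sense of scale, a back-of-envelope airtime comparison (802.11n 20 MHz single-stream PHY rates; preamble and IFS overhead ignored):

    # The same 1500-byte frame costs ~10x the airtime at the multicast
    # rate floor as it does on a good unicast link.
    FRAME_BITS = 1500 * 8

    for label, mbps in [("MCS 0 (multicast floor)", 6.5),
                        ("MCS 7 (good unicast link)", 65.0)]:
        airtime_us = FRAME_BITS / mbps   # bits / (Mbit/s) == microseconds
        print(f"{label}: {airtime_us:.0f} us of airtime")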

It's possible that, for Starlink, this isn't an issue because it's not a conventional PMP infrastructure, as you point out. They know the rates of the terminals on the other side, and they're using their own proprietary modulation.

FEC (or even network coding) can certainly help in these situations, but it has been far from a silver bullet in most testing I've been a part of.

> The Starlink transceivers are SDRs and extremely versatile. So, instead of playing games with modulation and subcarriers, the system probably uses its whole band on each packet, effectively broadcasting all packets, and receivers just pick off whichever packets are meaningful to them.

Even if it's an SDR, and presumably using something like phase-shift keying for modulation at the PHY layer, there is still MAC-layer multiplexing it could use to offset the impact of multicast. E.g. in 802.11ax with MU-MIMO, you could dedicate an entire spatial stream to one kind of traffic, or you could dynamically put STAs into MU groups based on the bulk of the traffic they're sending, to blunt their impact on the shared spectrum. This is even more interesting in the SDR context, where you can build a more adaptive grouping algorithm (e.g. using geospatial context to ensure minimal interference, etc.).
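A hypothetical sketch of that grouping heuristic (a real scheduler would also weigh channel state and spatial separation, per the geospatial point):

    # Bucket stations into MU groups by their dominant traffic type so
    # heavy multicast consumers land together instead of dragging down
    # unrelated unicast flows. Field names are invented for illustration.
    def mu_groups(stations, group_size=4):
        by_class = {}
        for sta in stations:
            by_class.setdefault(sta["dominant_traffic"], []).append(sta["mac"])
        groups = []
        for members in by_class.values():
            for i in range(0, len(members), group_size):
                groups.append(members[i:i + group_size])
        return groups

    stations = [{"mac": "aa:01", "dominant_traffic": "multicast"},
                {"mac": "aa:02", "dominant_traffic": "unicast"},
                {"mac": "aa:03", "dominant_traffic": "multicast"},
                {"mac": "aa:04", "dominant_traffic": "unicast"}]
    print(mu_groups(stations))   # -> [['aa:01', 'aa:03'], ['aa:02', 'aa:04']]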

>The actually tricky thing is that broadcast streams cannot be like the regular end-to-end packet streams, where identical content is delivered in completely different packets. You need a higher-level negotiation identifying streams of interest by name and playback position, so that the uplink terminal knows when it is sending (or will have sent) the same stuff twice

I think this is where your point about the proprietary SDR is the most interesting (and maybe even for the DRM case as well). It would be interesting to see how they could handle this, being in the middle of it all, without causing a major performance impact to end users. But that's mostly an implementation detail.


I should have said ACK storms aren't a thing if whoever specified the protocol you are on was not an idiot. If you don't have any choice about the protocol you must use, you must endure whatever horrors have been inflicted upon you.

Starlink will not have that problem, anyway. They could invent their own horror story, and then have no one to blame but themselves.

A more complicated modulation scheme could enable cheap hardware in the terminals to see only a fraction of traffic, which might be worth the RF complication. That probably depends on how much of the protocol work they are willing to do in the terminal's FPGA. Certainly the CPU cores in the terminal won't be up to sorting through the whole pile.

An FPGA configured to watch for a small number of packet headers of interest, and decrypt and deliver just those packets, ought to be able to relieve the CPU core enough.
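A software model of what that filter would do (illustrative only; the header layout and stream IDs are made up):

    # Match table of headers of interest; only matching packets would be
    # decrypted and DMA'd up to the CPU, everything else dropped on chip.
    WATCH_HEADERS = {b"\x01\x00\x2a", b"\x01\x00\x7f"}   # hypothetical IDs
    HEADER_LEN = 3

    def fpga_filter(packets):
        """Yield only the packets whose header matches the watch table."""
        for pkt in packets:
            if pkt[:HEADER_LEN] in WATCH_HEADERS:
                yield pkt

    wire = [b"\x01\x00\x2apayload-A",
            b"\x02\x00\x00other-terminal",
            b"\x01\x00\x7fpayload-B"]
    print(list(fpga_filter(wire)))          # only the two watched packets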



