> ACK storms aren't a thing with multicast. if FEC doesn't get you what you need, you wait a random time and send a NAK up the line, and the sender re-sends the requested bit

Interesting. That has not been my experience building an 802.11-over-mmWave ISP for the last 7 years. BUM traffic is routinely an issue, one we measure frequently, and multicast ACK storming is absolutely a side effect when things go bad. The root cause is that multicast, at least in 802.11, has to be sent at the lowest possible MCS, because you don't know the rate capabilities of the STAs on the other side of the fence (the same goes for broadcast), so delivery is inherently unreliable. When things are bad, ACK storms are the measurable signature of those events.
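
For anyone who hasn't seen it, the scheme the quote describes is basically SRM/PGM-style NAK suppression. A minimal sketch, with every name invented for illustration:

    import random
    import threading

    NAK_WINDOW_S = 0.5  # max random delay before a receiver NAKs

    class Receiver:
        def __init__(self, send_nak):
            self.send_nak = send_nak   # callback that sends a NAK upstream
            self.pending = {}          # seq -> armed timer

        def on_gap_detected(self, seq):
            # Wait a random interval before NAKing, so when many receivers
            # miss the same packet only a few actually transmit.
            delay = random.uniform(0, NAK_WINDOW_S)
            timer = threading.Timer(delay, self._fire, args=(seq,))
            self.pending[seq] = timer
            timer.start()

        def on_nak_overheard(self, seq):
            # Someone else already asked for this packet: suppress ours.
            timer = self.pending.pop(seq, None)
            if timer:
                timer.cancel()

        def _fire(self, seq):
            self.pending.pop(seq, None)
            self.send_nak(seq)

The catch in 802.11 is that none of this helps with the airtime cost of the low-MCS multicast frames themselves.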

It's possible that this isn't an issue for Starlink because, as you point out, it's not a conventional PMP infrastructure: they know the rates of the terminals on the other side, and they're using their own proprietary modulation.

FEC (or even network coding) can certainly help in these situations, but it has been far from a silver bullet in most of the testing I've been a part of.
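
To make that concrete: the simplest packet-level FEC is one XOR parity packet per block of k data packets, which repairs exactly one loss per block (assuming equal-length, padded packets). Real deployments use Reed-Solomon or fountain codes, but a toy sketch shows the shape of the limitation:

    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(block: list[bytes]) -> bytes:
        # One parity packet protecting the whole block.
        return reduce(xor, block)

    def recover(received: list[bytes | None], parity: bytes) -> list[bytes]:
        # 'received' holds None where a packet was lost.
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) > 1:
            raise ValueError("one parity packet can only repair one loss")
        if missing:
            present = [p for p in received if p is not None]
            received[missing[0]] = reduce(xor, present, parity)
        return received

Burst losses blow past the repair budget, which is exactly what happens on an 802.11 link when conditions degrade; hence "far from a silver bullet."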

> The Starlink transceivers are SDRs and extremely versatile. So, instead of playing games with modulation and subcarriers, it probably uses its whole band on each packet, effectively broadcasting all packets, and receivers just pick off whichever packets are meaningful to them.

Even if it's an SDR, presumably using something like phase-shift keying at the PHY layer, there is still MAC-layer multiplexing it could use to offset the impact of multicast. E.g., in 802.11ax with MU-MIMO you could dedicate an entire spatial stream to one kind of traffic, or dynamically put STAs into MU groups based on the bulk of the traffic they're sending, to blunt their impact on the shared spectrum. This is even more interesting in the SDR context, where you can build a more adaptive grouping algorithm (e.g. using geospatial context to minimize interference); a toy version is sketched below.
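
Here is that toy version, with invented field names (a sketch of the idea, not any chipset's API): bucket light STAs into MU groups by bearing from the AP, and quarantine multicast-heavy STAs into their own group so their low-MCS frames don't drag down everyone else's airtime:

    import math

    def mu_groups(stations, groups=4, mcast_threshold=0.5):
        # stations: list of (mac, x, y, mcast_frac) tuples
        heavy = [s for s in stations if s[3] >= mcast_threshold]
        light = [s for s in stations if s[3] < mcast_threshold]

        buckets = {g: [] for g in range(groups)}
        for mac, x, y, _ in light:
            # Spread light STAs across groups by angular position.
            bearing = math.atan2(y, x) % (2 * math.pi)
            g = int(bearing / (2 * math.pi) * (groups - 1))
            buckets[g].append(mac)
        # Last group is reserved for the multicast-heavy STAs.
        buckets[groups - 1] = [mac for mac, *_ in heavy]
        return buckets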

> The actually tricky thing is that broadcast streams cannot be like the regular end-to-end packet streams, where identical images are delivered in absolutely unlike packets. You need a higher-level negotiation identifying streams of interest by name and playback position, so that the uplink terminal knows when it is sending (or will have sent) the same stuff twice

I think this is where your point about the proprietary SDR is most interesting (maybe even for the DRM case as well). It would be interesting to see how they could handle that negotiation from the middle of it all without a major performance impact on end users. But that's mostly an implementation detail.
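
One way that negotiation could look (purely hypothetical message shape): terminals ask for chunks by (stream name, playback position) rather than opening opaque end-to-end flows, and the uplink keeps a recency table so it can skip re-broadcasting a chunk it just sent:

    class UplinkDedup:
        def __init__(self, window=4096):
            self.window = window   # sends back we still count as "fresh"
            self.sent = {}         # (stream, chunk_idx) -> seq last sent at
            self.seq = 0

        def should_send(self, stream, chunk_idx):
            # Identify content by name + position, not by packet bytes, so
            # two subscribers to the same stream share one transmission.
            self.seq += 1
            key = (stream, chunk_idx)
            last = self.sent.get(key)
            self.sent[key] = self.seq
            return last is None or self.seq - last > self.window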




I should have said ACK storms aren't a thing if whoever specified the protocol you are on was not an idiot. If you don't have any choice about the protocol you must use, you must endure whatever horrors have been inflicted upon you.

Starlink will not have that problem, anyway. They could invent their own horror story, and then have no one to blame but themselves.

A more complicated modulation scheme could enable cheap hardware in the terminals to see only a fraction of the traffic, which might be worth the RF complication. That probably depends on how much of the protocol work they are willing to do in the terminal's FPGA. Certainly the CPU cores in the terminal won't be up to sorting through the whole pile.

An FPGA configured to watch for a small number of packet headers of interest, and decrypt and deliver just those packets, ought to be able to relieve the CPU cores enough.
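
A software model of that idea, with made-up offsets and stream IDs (the real thing would be match/action tables in fabric, not Python):

    RULES = [
        # Each rule: (byte offset, expected bytes) pairs that must all
        # match, e.g. a hypothetical 4-byte stream ID at offset 8.
        [(8, b"\x00\x00\x12\x34")],
        [(8, b"\x00\x00\xab\xcd")],
    ]

    def matches(pkt: bytes, rule) -> bool:
        return all(pkt[off:off + len(val)] == val for off, val in rule)

    def filter_packets(packets):
        # Only matching packets are handed to the CPU for decryption;
        # everything else is dropped before it ever touches software.
        for pkt in packets:
            if any(matches(pkt, r) for r in RULES):
                yield pkt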



