The reason I didn't talk about it in the post is that the basic ideas behind reducing the latency of any HLS/DASH implementation are the same: smaller segment sizes, chunked transport, and smart encoding and prefetch/buffering implementations.
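To make the chunked-transport idea concrete, here's a minimal sketch, assuming a browser player built on Media Source Extensions and a placeholder segment URL, of appending a segment to a SourceBuffer chunk by chunk as it downloads instead of waiting for the whole segment to arrive (this is an illustration of the general technique, not Twitch's actual player code):

```ts
// Hypothetical helper: stream one media segment into an MSE SourceBuffer
// as chunks arrive over the network, rather than buffering the full segment.
async function appendChunked(segmentUrl: string, sourceBuffer: SourceBuffer): Promise<void> {
  const response = await fetch(segmentUrl);
  if (!response.body) throw new Error("streaming response body not supported");

  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;

    // A SourceBuffer only accepts one append at a time, so wait out any
    // in-flight update before pushing the next chunk.
    if (sourceBuffer.updating) {
      await new Promise<void>((resolve) =>
        sourceBuffer.addEventListener("updateend", () => resolve(), { once: true })
      );
    }
    sourceBuffer.appendBuffer(value);
  }
}
```

The point of the sketch is just that playback can start on the first few chunks of a segment, which is where most of the latency win over plain segment-at-a-time HLS comes from.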
But ... while there's lots of terrific engineering underpinning the Twitch approach, there's no way around its real-world limits. On good network connections you'll usually see ~2s of latency; on the long tail of network connections, you won't do nearly as well. If your use case can gracefully accommodate a range of latencies for the same shared session across different clients, that's fine. If it can't, it's not fine. Plus, I don't think you can get much below ~2s with this approach.
(I don't have access to any Twitch/IVS internal data, so I don't know what latencies they see globally across their user base. But I've done a lot of testing of this kind of stuff in general.)
And, yeah, Apple. Agreed.