Having worked with AWS IVS, which uses the same infrastructure design as Twitch (or the other way around), I can say it only accepts RTMP (well, RTMPS to be fair) as an input.

To me it would seem that the industry is going the opposite way and doubling down on RTMP. An RTMP connector is planned for the Chime SDK as well (a framework & infrastructure based on Chime for creating your own custom meetings), to be able to stream a meeting directly to an ingest service (like IVS/Elemental). On the AWS side at least, there doesn't seem to be a plan to migrate away from RTMP, as services are being launched that only support it.

On a side note, the official AWS-recommended way to stream a Chime SDK meeting to IVS is to use this[0] Docker container, acting as a bridge between WebRTC & RTMP. I find the hackiness of it amazing, as it's mostly just an X11 framebuffer, an instance of Firefox loading the web view for the meeting, and ffmpeg capturing the framebuffer and sending it as an RTMP stream to the configured endpoint.
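For a rough idea of the pattern, here's a minimal sketch in Python. The display number, meeting URL and RTMP endpoint are made-up placeholders, and the real container also deals with audio routing, sizing and teardown far more carefully:

  import os
  import subprocess

  DISPLAY = ":99"  # virtual display number (arbitrary choice)
  MEETING_URL = "https://example.com/meeting"             # placeholder meeting web view
  RTMP_URL = "rtmps://ingest.example.com/live/streamkey"  # placeholder ingest endpoint

  env = {**os.environ, "DISPLAY": DISPLAY}

  # 1. Virtual framebuffer for the browser to render into.
  xvfb = subprocess.Popen(["Xvfb", DISPLAY, "-screen", "0", "1280x720x24"])

  # 2. Browser loading the meeting web view on that display.
  firefox = subprocess.Popen(["firefox", MEETING_URL], env=env)

  # 3. ffmpeg grabs the framebuffer (and PulseAudio) and pushes it out as RTMP(S).
  ffmpeg = subprocess.Popen([
      "ffmpeg",
      "-f", "x11grab", "-video_size", "1280x720", "-framerate", "30", "-i", DISPLAY,
      "-f", "pulse", "-i", "default",
      "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4500k",
      "-c:a", "aac", "-b:a", "128k",
      "-f", "flv", RTMP_URL,
  ])
  ffmpeg.wait()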

Also, AWS IVS as a streaming platform is really impressive. It's a fully managed service, and out of the box you get around 4-5s of latency between the source and the player, with an upper limit of 1080p60 @ 8.5Mbps. It will also downsample the stream to 720p, 480p, 360p & 160p renditions.

[0]: https://github.com/aws-samples/amazon-chime-meeting-broadcas...




> to use this[0] docker container

In a similar vein, Jitsi's video recording solution is pretty much the same thing[0].

>It works by launching a Chrome instance rendered in a virtual framebuffer and capturing and encoding the output with ffmpeg. It is intended to be run on a separate machine (or a VM), with no other applications using the display or audio devices. Only one recording at a time is supported on a single jibri.

We are severely lacking stand-alone WebRTC implementations - the best one is in the Chrome codebase.

[0]: https://github.com/jitsi/jibri


> out of the box you get around 4-5s of latency between the source and the player

During the 2018 World Cup I was watching an RTP feed direct from the IBC in Moscow on my desktop in VLC. The goal went in, and strangely I recognised it was a goal for England (I don't really do football). The window was open, and I cheered.

A few seconds later the neighbours cheered - they were watching it on TV.

I remember watching a 4K FA Cup final too, in parallel with an off-air TV feed. I watched the goal go in on the TV. By the time it had gone in on the 4K feed I'd actually forgotten it was a goal, so I guess that wasn't too bad.

That's a problem with streaming. If you're watching a popular live event like a big football match, and your neighbours are too, you need to be receiving it at the same time to avoid "spoilers".


How do you handle HEVC/H.265 and resolutions above 4K?

It's not in the RTMP spec, so it's not supported by either ffmpeg or GStreamer. But Chinese companies ship cameras that hack it in and run RTMP for it anyway, leading users to pester me about our software being broken :/
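For anyone curious what the hack looks like on the wire: FLV video tags carry a 4-bit CodecID, and the Adobe spec tops out at 7 (AVC). The de facto vendor extension reuses ID 12 for HEVC, which a spec-compliant demuxer rightly rejects as unknown. A toy sniffer in Python (CodecID 12 is the common convention, but not universal):

  FLV_CODEC_IDS = {
      2: "Sorenson H.263",
      4: "On2 VP6",
      7: "AVC / H.264",    # last CodecID in the official FLV spec
      12: "HEVC / H.265",  # non-standard vendor extension
  }

  def sniff_video_tag(first_byte: int) -> str:
      """Decode the first byte of an FLV video tag body."""
      frame_type = first_byte >> 4   # 1 = keyframe, 2 = inter frame, ...
      codec_id = first_byte & 0x0F
      codec = FLV_CODEC_IDS.get(codec_id, f"unknown (CodecID {codec_id})")
      return f"frame_type={frame_type}, codec={codec}"

  print(sniff_video_tag(0x17))  # keyframe + AVC: plays everywhere
  print(sniff_video_tag(0x1C))  # keyframe + CodecID 12: "broken" to standard demuxers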


Is 4-5s latency considered good? It’s an awful experience for video chat.


For live streaming distribution to loads of viewers, outside of a video meeting scenario? Then yeah, that is considered good. Plenty of commercial services run with latency 5-10 times that.


It's quite amusing when I see 2-3 seconds being declared "ultra low latency" on the distribution side.

In broadcast contribution, where you have someone in the studio talking to a person on screen, anything over a second brings complaints. You typically aim for under 500ms of processing delay for low-bitrate contributions, and at 25fps, with a bog-standard Blackmagic card going SDI-IP-SDI plus the timing added in, you're looking at 500ms of your budget eaten by the hardware framebuffer. OBE make a better capture card, one which allows access to the data on a line-by-line basis. If you go for something more hardware-based, say a J2K codec, you can get the latency down to a couple of frames.
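To make the budget concrete, here's a back-of-the-envelope tally in Python. The per-stage frame counts are illustrative assumptions, not measurements; the point is that anything buffering whole frames at 25fps (40ms each) burns through 500ms quickly:

  FPS = 25
  FRAME_MS = 1000 / FPS  # 40ms per frame at 25fps

  stages_frames = {
      "capture card framebuffer": 2,
      "encode buffer":            3,
      "IP timing buffer":         2,
      "decode buffer":            2,
      "playout card framebuffer": 2,
  }

  total_ms = sum(stages_frames.values()) * FRAME_MS
  for name, frames in stages_frames.items():
      print(f"{name:26s} {frames} frames = {frames * FRAME_MS:3.0f}ms")
  print(f"{'total':26s} ≈ {total_ms:.0f}ms of a ~500ms budget")
  # A line-granular capture card, or a hardware J2K codec, shrinks the
  # framebuffer terms from whole frames down to a couple of lines.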

I had a problem a few years ago with live feeds from Kiev: the ISP we had kept dropping our packets for 125ms at a time. It didn't matter if we sent 20Mbit (so ~2000 packets per second) or 2Mbit (~200pps); the number lost in a row always matched the 125ms outage.

Network people laugh when I complain about 125ms outages on the internet, but it meant that standard FEC wouldn't work: it protects against a maximum burst of 20 packets, so even if it recovered every lost packet, you'd still see errors at any transport stream rate above about 1.6Mbit.
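The arithmetic, as a quick Python sketch (assuming the usual 7 x 188-byte TS packets per RTP datagram):

  OUTAGE_S = 0.125
  MAX_FEC_BURST = 20          # longest burst a SMPTE 2022-1 style FEC column can repair
  PAYLOAD_BITS = 7 * 188 * 8  # 7 TS packets of 188 bytes per RTP datagram

  for rate_mbit in (2, 20):
      pps = rate_mbit * 1_000_000 / PAYLOAD_BITS
      lost = pps * OUTAGE_S
      print(f"{rate_mbit:2d}Mbit ≈ {pps:4.0f}pps → {lost:3.0f} packets lost per 125ms outage")

  # The rate at which a 125ms outage is exactly 20 packets:
  print(f"break-even ≈ {MAX_FEC_BURST / OUTAGE_S * PAYLOAD_BITS / 1e6:.2f}Mbit")  # ~1.68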

Now you can use RIST to dial in resends, but with a 100ms RTT you're looking at needing 300-500ms of buffer to cope with that type of outage: time to realise the packets are missing and not just delayed (say 50ms), to ask for the retransmit (50ms), and to get them back (50ms, and then smeared over time, as you can't do an instant retransmit).

Alternatively you can transmit twice with an offset, but that still adds 150ms of delay.
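Rough numbers for that retransmit budget (the 50ms steps are the figures from above; the smear term is an assumption pegged to the 125ms outage):

  DETECT_MS = 50   # realise packets are missing, not merely late
  NACK_MS   = 50   # the retransmit request reaches the sender (half the RTT)
  RESEND_MS = 50   # the retransmission arrives (the other half)
  SMEAR_MS  = 125  # resends are paced out rather than blasted instantly

  minimum = DETECT_MS + NACK_MS + RESEND_MS + SMEAR_MS
  print(f"minimum receive buffer ≈ {minimum}ms")  # ~275ms in the best case
  # Add headroom for a lost NACK or a second attempt and you land at 300-500ms.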


I might have mixed up Chime & IVS a bit.

To clarify, the 4-5s delay I'm talking about is for IVS, which is a live streaming solution and doesn't have anything to do with video chat. It has an RTMP input to which you send a video stream; it re-encodes it and then distributes it as an HLS stream. And getting sub-5s latency on an HLS stream is pretty fast.
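As a rough sanity check on why sub-5s HLS is quick: players typically buffer around three segments before starting, so latency floors at roughly three times the segment duration plus ingest/encode time. The numbers below are illustrative assumptions, not IVS internals:

  SEGMENT_S = 1.0        # short segments; classic HLS often used 6s
  PLAYER_SEGMENTS = 3    # typical player start-up buffer depth
  INGEST_ENCODE_S = 1.0  # RTMP ingest + transcode to the rendition ladder

  latency = PLAYER_SEGMENTS * SEGMENT_S + INGEST_ENCODE_S
  print(f"glass-to-glass ≈ {latency:.0f}s")  # ~4s with 1s segments
  print(f"same player with 6s segments ≈ {PLAYER_SEGMENTS * 6 + INGEST_ENCODE_S:.0f}s")  # ~19s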

I brought up Chime and video chat mostly as an example of how AWS is pushing RTMP to connect those services.



