What do they mean by WebSockets hole punch on port 443 and Let's Encrypt? The first thing that comes to mind is running a TLS web/WS server on the endpoints, but I didn't think you could do TCP+TLS servers with WebRTC. Or can you?
A new open source server side WebRTC implementation would be very welcome, though this does seem to use some of the old C++ libraries too. I wonder how much exposure of C++ to hostile bits they managed to reduce?
That entire section reads like nonsense to me (and I understand NAT hole punching); it seems like some kind of confusion. Why do you need special TLS certs for WebSockets, but not WebRTC? WebRTC is end-to-end encrypted like TLS, using a Diffie-Hellman exchange so that the signaling server doesn't see private keys.
To summarize: you are correct that WebRTC is end-to-end encrypted via DTLS, and WebRTC handles setting all of that up. WebSockets, however, are not encrypted by default, and when connecting to a WebSocket server from a secure origin, the browser requires a valid TLS certificate. Because each user runs their own WebSocket server from their Rainway instance, we need to create a unique, valid certificate for each user so that no private key is shared between installs. This is how we avoid needing a TURN server and maintain low latency.
I apologize that it wasn't clear in the blog; I was describing how we handle WebRTC failing.
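To make the fallback concrete, here's a rough browser-side sketch. The hostname scheme and endpoint below are made up for illustration (not our actual API); the point is that from an https:// page the browser only allows wss://, and the certificate has to be valid for whatever hostname you dial.

```typescript
// Hypothetical sketch: hostname, port, and path are illustrative only.
// From a secure origin, plain ws:// is blocked as mixed content, so the
// fallback must be wss:// with a certificate the browser trusts.
function connectToHost(hostId: string): Promise<WebSocket> {
  // Giving each host its own hostname lets a unique certificate be issued
  // per user, instead of shipping one shared private key to every install.
  const url = `wss://${hostId}.hosts.example.com:443/stream`;
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(url);
    ws.binaryType = "arraybuffer"; // encoded video arrives as binary frames
    ws.onopen = () => resolve(ws);
    ws.onerror = () => reject(new Error(`fallback to ${url} failed`));
  });
}
```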
My first guess is that in 2012 hosts lacked easily accessible hardware-accelerated H.264 encoding, and the cheaper clients lacked easily accessible hardware-accelerated H.264 decoding. Now you can find hardware-accelerated decode in a $20 IoT device.
Piggybacking off that: OnLive never seemed to know if it was a hardware company, a software company, or a cloud company, and depending on which exec was talking and the day of the week, it seemed to have a different focus in each press release.
Today the major players interested and active in the space (Microsoft and Sony) already cover all three bases. You also have more opportunity for minor players to enter with less of a need to be hardware companies (because more users already own capable hardware), or cloud companies (because more commodity streaming cloud options are available than ever on AWS/Azure/etc.), or even software companies, as more platforms, including the web platform, expose increasingly high-level APIs for many of the building blocks of game streaming.
Maybe pricing arbitrage? Every gamer I know has 20-30 titles in their queue they want to try out. Buying all of them just for a few moments' play each would be prohibitive; even at $30 a title, 25 games is $750 up front. Pay-per-use makes more sense if it falls below current GPU cloud cost (~$1/hr).
What's amazing is that if you can deliver video games at 4K 60fps, you can deliver all but the most intensive applications this way.
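To put rough numbers on that (typical ballpark figures, not measurements from the article):

```typescript
// Rough arithmetic: raw 4K60 video vs. a typical encoded stream.
const width = 3840, height = 2160, fps = 60, bytesPerPixel = 3; // 8-bit RGB
const rawGbps = (width * height * bytesPerPixel * 8 * fps) / 1e9;
console.log(`raw 4K60: ~${rawGbps.toFixed(1)} Gbps`); // ~11.9 Gbps
// A hardware encoder squeezes that down to tens of Mbps (the exact rate
// varies by codec and quality target), which fits a home connection.
const encodedMbps = 40; // ballpark H.264/HEVC target for 4K60
console.log(`encoded 4K60: ~${encodedMbps} Mbps`);
```

If the pipeline can encode, ship, and decode that firehose within a frame budget, a spreadsheet or IDE is comparatively easy.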
Even for servers? I would've guessed most game servers ran on Linux. Seems like the licensing costs for Windows servers would be fairly restrictive for that.
The author here; I apologize for any confusion from my writing. We do use WebRTC in a server-like fashion. The browser attempts to establish a P2P connection with the host computer, and the peer negotiation is similar to Chrome talking to Firefox for a video call. The network on either end can cause connectivity to fail, and that is when we fall back to direct WebSockets rather than a costly TURN server.
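In code, the flow looks roughly like this. This is a simplified sketch, not our production code: the offer/answer signaling is elided, and connectToHost() is the hypothetical WebSocket fallback from the sketch above.

```typescript
// Simplified sketch: WebRTC-first transport with a direct WebSocket fallback.
async function openTransport(hostId: string): Promise<RTCDataChannel | WebSocket> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // STUN only, no TURN
  });
  const channel = pc.createDataChannel("stream");

  // ...offer/answer exchange over the signaling server elided...

  return new Promise((resolve, reject) => {
    channel.onopen = () => resolve(channel); // hole punch worked, stay P2P
    pc.oniceconnectionstatechange = () => {
      if (pc.iceConnectionState === "failed") {
        // NAT traversal failed on one or both networks; rather than relay
        // through a costly TURN server, dial the host's WebSocket directly.
        pc.close();
        connectToHost(hostId).then(resolve, reject);
      }
    };
  });
}
```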
I will avoid getting into how our streaming performs vs. theirs, because we have a lot of resources on why our tech is so fast.[0]
That being said, as far as I know, Parsec's new browser-based client suffers from all the issues I mentioned in this blog, and at the end of the day our user experiences are completely different. Rainway being self-hostable, you can put it on everything from Azure[1] to your home computer. We have a very unique "Games First"[3] approach and view ourselves more as a gaming service than a piece of game streaming software. As we continue to flesh out our voice, I think you'll be pleased with the results.