Welcome to Project Lightspeed. This is a project that allows anyone to easily deploy their own sub-second latency live-streaming server. In its current state you can stream from OBS [1] and watch that stream back from any desktop browser.
This has been a super fun project which has taught me more than any other project I have done. It uses Rust, Go and React and can be deployed fairly easily on a very lightweight server. For example, I have been doing my test streams on a $5 Digital Ocean droplet and the CPU usage is at around 20%. Granted, not a lot of people are watching; however, it is more lightweight than a solution such as Janus.
The point of this project is twofold. First, I wanted to learn more about WebRTC and real-time communication in general. Second, I wanted to provide a platform where people can set up their own little live-stream environment. Maybe you just want a place where you and some friends can hang out, or you are sick of the mainstream platforms.
Anyhow as of writing this post it is v0.1.0 and considered an MVP release. In the coming months I (and hopefully some of you :)) will be adding more features and trying to flesh this out into as much of a full featured platform as possible. Feel free to take a look at the repo and let me know what you all think :)
Awesome! I was looking for something like this when trying to play a local multiplayer game via the Internet in an early lockdown.
There are, or were, no good turnkey solutions for this. Twitch and YouTube have 5-10s latency, which is often not good enough. Mixer promised (and presumably delivered) ~1s latency using the FTL protocol you use, but they had a wait list of a couple of days or weeks, and of course now they don't exist anymore. Even Steam Play Together, ostensibly built for this purpose, wasn't low latency enough in my limited experience (this really surprised me, so maybe I'm doing it wrong).
The easiest solution, using the share-desktop function of whatever video conference tool, almost works, but they universally seemed to reduce the frame rate, which is OK for presentations but unsuitable for games (also, no audio). My solution was to output OBS to a virtual webcam device and use Jitsi Meet. A bit roundabout, but it worked wonderfully.
Ideally, I'd forgo the DO droplet, and just run everything locally. 20% of a small droplet is even less of a modern desktop computer's CPU. Which leaves upload bandwidth for broadcast, which depends on your connection and how many people you need to be able to stream to.
Yes, I forgot about Parsec, that's a good suggestion. I remember trying it, and not getting it to do what I want, unfortunately I don't remember why. I think I was stuck in the "Arcade", when all I wanted was to share my desktop or one window. It certainly looks like exactly what I was looking for.
In home streaming with Parsec for me with a MoCA/Ethernet connection typically has 1-2ms of network latency. Over wifi in-home is more, closer to 20-30ms with a mediocre laptop wifi card. Playing online with my brother who lives 35 miles away using an Ethernet connection I typically see 15-25ms latency, not much worse than a 'meh' bluetooth controller. It's likely worth noting that my brother and I both have the same cable internet provider, but we also sometimes play with my brother-in-law who lives another 40 miles from me (~60 miles from my brother) and we can all play games like Streets of Rogue together from my brother's PC without issue.
Wow, that is better than I'd hoped! Thanks so much for your response! (I know it's just an anecdote, but because I'm looking to see what its performance limits might be, even one data point like this is very helpful.)
> Even Steam Play Together, ostensibly built for this purpose, wasn't low latency enough in my limited experience (this really surprised me, so maybe I'm doing it wrong).
I've had good experience with Steam Play Together, mostly playing Unrailed (a hectic game in the style of Overcooked). I definitely forget about the remote connection while playing. We were 1000 km apart, but had quite a good connection (100 Mbit, 15-20 ms ping).
I have a pretty low-latency setup for that, but it wasn't completely turnkey. First you set up nginx with the RTMP module[1]. Then you can use OBS to stream your desktop to the RTMP server. I set OBS to send a keyframe every 1 second.
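For reference, a minimal nginx-rtmp ingest block looks something like this (port and application name are just the conventional defaults, not taken from my actual config):

```nginx
rtmp {
    server {
        listen 1935;               # standard RTMP port
        application live {
            live on;
            # OBS then streams to rtmp://your-server/live/<stream-key>
        }
    }
}
```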
On the client side you have two options:
1. For low-latency game streaming, I would suggest watching the RTMP stream directly. The RTMP module for nginx will re-broadcast your RTMP stream to all the clients that connect. I was able to get a latency of around 1 second this way.
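For example, a low-buffering player invocation along these lines works (the server URL is illustrative, and the flags are the usual ffplay low-latency knobs; tune to taste):

```
ffplay -fflags nobuffer -flags low_delay -probesize 32 \
    -analyzeduration 0 rtmp://your-server/live/stream-key
```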
I would expect better latency from a webrtc solution like Lightspeed but 1 second latency is pretty good for only having to install nginx.
2. HLS/Dash. The nginx RTMP module will also expose the video stream as HLS/Dash, which is just cutting the stream up into files and serving them over HTTP. Personally, I set my segment size to 1 second and my playlist size to 4 seconds. Through this I get approximately a 4-second latency. Not great for competitive multiplayer games like Jackbox, but if you're playing something like a world-building game with friends then it's acceptable. The real benefit of HLS/Dash is you can easily watch it through an HTML5 web video player or even a Chromecast[2].
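As a sketch, the segment and playlist sizes described above map onto these nginx-rtmp directives (the application name and path are illustrative):

```nginx
application live {
    live on;
    hls on;
    hls_path /tmp/hls;        # consider pointing this at a tmpfs mount
    hls_fragment 1s;          # 1-second segments
    hls_playlist_length 4s;   # ~4 seconds of playlist => ~4 s latency
}
```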
Bits you can add on top:
- I put my HLS/Dash directories in a tmpfs mount for speed and reduced wear on the drives
- I put the nginx stream module in front of my RTMP module so that it can handle TLS (making it RTMPS)
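A minimal sketch of that TLS-terminating stream block (ports and certificate paths are illustrative):

```nginx
stream {
    server {
        listen 1936 ssl;                     # RTMPS entry point
        ssl_certificate     /etc/ssl/rtmp.crt;
        ssl_certificate_key /etc/ssl/rtmp.key;
        proxy_pass 127.0.0.1:1935;           # plain RTMP server behind it
    }
}
```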
On FreeBSD it was just a checkbox in the nginx port, so the work involved may vary by distro.
[2] I haven't attempted to play the RTMP stream through Chromecast, so for all I know that might be supported too. All I've tested so far on Chromecast is an HLS stream using the "castnow" CLI program. The Shaka player, which is a web player, will support chromecasting an HLS stream from your browser, but I've only tested their demo videos, not my personal streams. I had to use official Google Chrome, not Chromium, but it worked on both Android and Linux.
I confirm that Mixer delivered on the sub-second latency claims. As far as I know, FTL's performance is in line with WebRTC's performance. As long as the servers in-between are fast, a good WebRTC implementation should match it.
Unfortunately, with Mixer's death, I don't think there are any major turnkey players left with sub-second streaming. I'd probably use Discord as the primary alternative, which uses WebRTC with Discord's servers in between.
If your home connection has the bandwidth to support the load of multiple users, a service that does direct P2P like Parsec will probably give the best performance.
From my experience it largely depends on the stream - for some I can easily get <2 seconds, others will be >10s. I'm not sure what causes this difference (ingest server?).
We were playing the Jackbox series of games together, the other folks were participating in the game with their phones. There are various minigames with 5-50 second timers, so 10s latency is a lot. Some of the games have a special streaming mode which extends the timers, but not all of them and it's best played with regular timers anyway. Obviously for true action games, you absolutely need sub-second latency, preferably <100ms.
Discord mostly works for my friend group to play Jackbox games, though sometimes it's still noticeably slow, so OP's project is definitely an improvement.
Except that for Jackbox Party games it's by far the best fit. It's even the recommended way to play online by Jackbox themselves, and I've hosted Jackbox via both Zoom and MS Teams and it worked perfectly fine that way.
Other online games wouldn't fare so well, but the dropped frame rate in Jackbox Party games does not hamper the playability of their games at all.
They recommend doing it that way, because what else are they going to do, post a tutorial on how to do it via OBS? I don't think so.
Maybe Zoom and MS Teams offer(ed) a better fidelity. For one thing, Zoom lets you share desktop audio along with the screen. In fact, apparently these days, Jitsi can do that too, that definitely wasn't possible when I tried it early last year. At that time, at least, the experience OBS -> Jitsi was definitely much better than just Jitsi. (And note that all of this was in Linux.)
You scoff but they did link to a tutorial on how to do it via OBS in their guide[1]. They just made video conferencing the first suggestion.
Also in that guide was Discord and Steam Remote Play. It's a surprisingly technical guide (but in a good way) considering the average audience that might read it. It feels to me that some genuine thought did go into that document.
> Maybe Zoom and MS Teams offer(ed) a better fidelity.
Maybe. Anecdotally I've not had any issues with Zoom, whereas Google Meet often feels like it's both heavier on the CPU and the feeds seem worse. However, that's running Meet on Firefox (Linux); it might perform better in Chrome.
> They recommend doing it that way, because what else are they going to do, post a tutorial on how to do it via OBS? I don't think so.
I think you're being rather unfavourable there. The Jackbox developers have been pretty responsive to user feedback in the past. For example, Linux support was added after several requests on the Steam forums. They've also added other features, like subtitles, specifically for streaming via video conferencing solutions. So if Zoom / Teams / etc didn't work well then you can bet they'd have posted another workaround and/or a game patch, since the alternative is they'd lose a lot of potential business in 2020.
As I'd said, I'd used it fine over both Zoom and Teams (multiple times on both in fact) and the only reason I even bought Jackbox Party games was because several different work colleagues (I think it might have been 3 different people) recommended it to me after they had played their own games (individually) over Zoom and other conferencing solutions.
I don't have any experience with Jitsi so maybe the issues you were having were Jitsi specific? Maybe, being a techie, Jitsi was already "good enough" but you thought you could improve upon it a little and ended up over-engineering a solution? (we've all fallen into that trap -- when you spend your entire life building enterprise solutions it's sometimes hard to take a step back. Particularly when it's something as fun as OBS). Or maybe there was some issue with Linux? All I know is that myself and everyone I know has had zero issues hosting using Jackbox's recommended approach.
The best streaming experience for both streamer and viewers is when they can interact, and any latency over 500ms or so makes that a true challenge if you're trying to have a conversation where context is important.
Being an introvert that doesn't like it when people pay attention to them at all for any reason, I haven't really experienced this, but that's what everyone says.
There is no realtime interaction on stream anyway if you have more than a handful of active chatters. And lag is high by default if the streamer is doing more than just chatting, like playing a game, building some stuff or reacting to a video.
I don't think you know just how low latency WebRTC is.
CPU usage on the streaming PC does not increase latency unless the PC is severely under spec. CPU usage increases CPU usage. That's it. Encoding usually happens on a GPU, and scene composition happens on the CPU, which is either a zero-copy routine or a very fast memcpy.
My point is that from what you're saying, it seems clear to me that you are not aware of just how good WebRTC is at this kind of thing.
I hardly ever see use cases for live streaming where latency doesn't matter... the only one that comes to mind is non-interactive television? But this is the Internet and people usually want live responses and chat with the audience... the difference between even two seconds of latency and sub-second latency fundamentally changes how the audience interacts with you.
I'll bite. I play D&D remotely with my friends. I need to be able to have low latency voice and video communication, but I also need control over the audio codec and bit rate. Zoom and other video conferencing solutions use codecs optimized for voice, which makes music and sound effects sound like a Himalayan AM radio broadcast. Twitch and Youtube give me control over the video and audio quality, but the latency is 5+ seconds even on low latency mode. I tried running voice over zoom and video and music over youtube, but then drawing on the map is 5+ seconds out of sync with me saying "look here".
When you are in voice with (some of) your viewers having a 10s delay is shit for everyone. A lower delay is just a better experience, for any kind of viewer input. That aside, it's easily possible so this whole "why do you want this, you don't need this" smacks of apple tech support - it's nice that you don't need it, but evidently you are not the only use-case on earth.
This is patently gaslighting. It is on me to try to read you in a positive light and it is on you to write language that supports what you are thinking.
I think it was a matter of what libraries were available where. Namely, Lightspeed-webrtc uses the extremely popular & robust Go library Pion[1] for webrtc. It's a little over 500 lines.
The Rust Lightspeed-ingest[2] server is also ~500 lines of code, and primarily handshakes the FTL protocol used to communicate with OBS.
There is a Pion port to Rust[3] that is in progress. I am not sure of the state of this work. Pion is used quite extensively by many, many projects; I'm not sure if the Rust webrtc-rs port has any notable users yet. As I began by saying, I expect the trustworthiness & extensiveness of Pion is what led to lightspeed-webrtc being written in Go.
Hoping to see OP answer this. In particular, I would like to see them comment on how they divide the codebase between these two languages: which parts are implemented in which language, and why.
You should know the OBS team plans on deprecating FTL since Mixer was the only major player who ever used it, not to mention the fact that the server side of the tech is closed. The FTL implementation in OBS is buggy, and keeping it maintained is not worth the effort for a non-standard transport protocol.
Yes, I am aware, however there is a new service called Glimesh that is utilizing FTL, so I don't think it is going to disappear tomorrow. Also, I have implemented the server side, so it does not matter that it is closed.
Well, I think it's possible FTL will be gone from OBS by the end of 2021 so I guess they need to figure out what they are doing sooner rather than later. There will probably be a post about it on the OBS Github soon.
The ingress component is interesting to me. It takes the OBS stream, via the FTL protocol, and converts it into something for WebRTC to use, yes? What drove you to use the FTL protocol for ingestion? Did you consider alternatives like RTMP, which I believe OBS also supports?
I hadn't heard of FTL before. Apparently it was a protocol used in Microsoft's now-defunct game-streaming service, Mixer. I found some discussion of the various streaming protocols here[1], which included some description of FTL.
I guess it comes down to latency. That would still have left SRT on the table, yes?
Great project. Such a key area of connectivity for us all. So glad you did this. Thanks.
I went with FTL instead of RTMP for the sake of latency. FTL also gives me a stream of RTP packets which can go directly into WebRTC, meaning I have to do zero processing of the packets, whereas with RTMP I would have to convert them into RTP packets. SRT is interesting, but it is wildly complicated and does not use RTP, meaning I would have to figure out how it works and then convert whatever it gives me into RTP packets for WebRTC.
This project is pretty cool, I’ll tinker around with it tomorrow.
As an aside, I’ve noticed you’re building out your own stream protocol stack (FTL/LightSpeed). What’s the reasoning there? Seems slightly inconvenient to have to “hack” OBS to make the output stream work. Will FTL support be merged into OBS in the future?
If you’re just trying to avoid the latency of RTMP then I might suggest considering the existing SRT protocol[1]. It’s been open source for a while and is well-established (native support in OBS core and optional in FFmpeg). It seems to already solve a lot of the transport-level stuff that you’re working on with FTL.
So FTL is supported by OBS and was used by Mixer. I’m interested in moving to SRT in the future since you’re correct: FTL support will be going away eventually.
Also, the work required to adapt what I have to SRT is non-trivial, and I would rather have something that works right now and then build in SRT support in the future.
Got it. I’m fresh on FTL; this is the first time I’ve really dug into it, so apologies for the ignorance. I’m in the industry of stream transport and mostly work with SRT.
Yes to the first half of this at least (KCP is not something I've heard of).
I recently attempted to use SRT for an event's backend restreaming stack; the low latency was nice, but it's a pain in the ass. It's really not designed for links where latency isn't known and consistent. You have to bake an expected latency into the initial protocol negotiation or you'll end up with problems, and OBS's support is quite poor (failure to establish a connection for whatever reason is likely to freeze it up completely, it sucks up a lot of CPU vs. RTMP, etc).
And the other end of the stack is either pretty immature and kind of wonky (haivision's own software, srt-live-server) or requires you to pay to use it or is very closed source.
WebRTC of some sort is definitely the future of this, imo. Even if the stack kind of sucks right now, the results are fabulous (Discord's video streaming, for example, is WebRTC based and is easily the lowest-latency free screen sharing I've seen outside share-my-desktop stuff like Parsec or RDP).
It looks like this uses the already-built-into-obs support for the webrtc-based ftl protocol mixer used and microsoft killed. That's actually really clever and honestly I think this is far more appealing than SRT.
What sucked about the ‘WebRTC stack’? I think the situation is much better these days. We have multiple options. SRT only has one with lots of bindings. With WebRTC you could use any of these!
Any way to push non-OBS content to Lightspeed? I've been trying to run a sub-second latency game-livestream (ala twitch plays), and I'd rather run it on a headless server without OBS.
It should work, as long as you're sending it to the server over FTL[1], which is a pretty new and uncommon protocol developed by the now-defunct Mixer.
Could it be possible to use WebRTC to have listeners become seeders/repeaters, so you could have more listeners without impacting your own CPU too much? Something like AceStream.
A very interesting project! Can you elaborate more on how it gets sub-second latency, and why YouTube/Twitch seem to have more than a few seconds of delay?
HTTP video streams are split into segments, and those segments are delivered whole. Larger segments are easier to cache and scale to a bigger number of viewers. The other factor is that YouTube and Twitch spend more CPU time on compression to achieve lower bitrates.
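As a back-of-the-envelope sketch (a rule of thumb, not YouTube's or Twitch's actual pipeline), the latency floor of segmented delivery falls straight out of the segment size and player buffering:

```python
def segmented_latency_floor(segment_seconds: float, buffered_segments: int) -> float:
    """Rough lower bound on segmented-HTTP (HLS/DASH) glass-to-glass latency.

    A whole segment must be encoded before it can be published, and players
    typically buffer several segments before starting playback, so the
    latency floor scales with both numbers.
    """
    return segment_seconds * buffered_segments

# e.g. 4-second segments with a 3-segment player buffer
print(segmented_latency_floor(4.0, 3))  # -> 12.0 seconds behind live
```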
YouTube and Twitch use RTMP, which operates over TCP. This means that each time we send a packet we need to ensure that it’s been received, which adds latency overhead. Lightspeed uses the FTL protocol, which operates over UDP, thus reducing that overhead.
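To make the UDP point concrete, here is a hypothetical sketch (not Lightspeed's actual code) of relaying one datagram: the packet is forwarded the moment it arrives, with no acknowledgement round-trip, and a lost packet is simply never retransmitted:

```python
import socket

def fan_out_one(ingest_sock: socket.socket, viewer_addrs: list) -> bytes:
    """Forward a single UDP datagram from an ingest socket to every viewer.

    There is no ACK and no retransmission queue: whatever arrives is sent on
    immediately, which is why UDP-based transports like FTL avoid the
    head-of-line-blocking latency that TCP-based RTMP can incur.
    """
    data, _ = ingest_sock.recvfrom(2048)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in viewer_addrs:
            out.sendto(data, addr)
    finally:
        out.close()
    return data
```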
So assuming you have packet loss, you just get a paused/blank stream until the flow continues? How does it handle network issues?
Would it ever be possible to route two streams via different paths to the client and let the client just accept the first packet from either and drop the other, in order to add some redundancy to delivery?
This wouldn’t actually solve anything, since WebRTC can handle packet loss. The loss is going to be coming from OBS -> server, and unfortunately there isn’t much I can do about that, since it uses UDP.
Running multiple streams is BAU with RTP: using SMPTE 2022-7, or most of the time just firing the packets at different times over different routing tables.
Sometimes network paths die. This could be a dodgy router in a third party network that drops streams for 150ms at a time, or a bgp recalculation that knocks it out for maybe a minute or so.
In both cases you need to have multiple routes to keep your latency low.
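A hypothetical sketch of that receive-side merge (SMPTE 2022-7 style; RTP sequence-number wraparound is ignored for brevity, so this is nowhere near the standard's full logic): keep whichever copy of each packet arrives first and drop the duplicate from the slower path.

```python
def merge_redundant_paths(packets):
    """Yield each RTP packet once, keyed by sequence number.

    `packets` is an iterable of (seq, payload) tuples interleaved from two
    network paths carrying the same stream. The first copy of each sequence
    number wins, so a brief outage on one path never stalls playback as long
    as the other path stays up.
    """
    seen = set()
    for seq, payload in packets:
        if seq in seen:
            continue  # duplicate already delivered by the faster path
        seen.add(seq)
        yield seq, payload
```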
About the choice of license (MIT): could you share your considerations about it? (if any)
It's a topic I'm interested in. Mainly to see examples (and learn from them) of when people prefer lax licenses in the style of "do what you want with this, including a closed-source proprietary business" (MIT, Apache), which have in the past allowed big companies to benefit without giving back (lots of cases on HN), vs. strong copyleft, "do what you want except closed-source proprietary stuff" (generally speaking, the GPL family -- EDIT: I know that this, being mainly a server-side thing, would be more affected by AGPL than by GPL. Bear with me.).
If there are no business / commercialization intentions in the foreseeable future, would it make sense to use a GPL license (one of them) to build a strong open-source community?
Or does it not matter in practice? (i.e. does it not really affect the chances of building a good and strong developer community?)
Would I like a strong open source community? Absolutely. Could you fork this and make money off of it right now? No, not even close. Maybe in the future I would consider a different license, but at the end of the day, if you take this and make it profitable, good for you :) My goal wasn’t to make money.
Note that I oriented my question around how it might affect building an OSS community.
An MIT or Apache license might deter contributors, because essentially these licenses say "your contribution is OSS today, but it might end up resulting as unintended free labor that helps build a commercial product of which you won't be part of".
However, some see GPL licenses as a guarantee that external contributions are actually contributions to the open-source community as a whole, because they cannot be turned into a non-OSS product without the explicit permission of all the people who built it.
So yes, copyleft is more restrictive than non-copyleft. But only in the subtle sense of ensuring that no further restrictions will get added by any third party, ever. Which is a strong promise that should encourage contributions.
Ultimately, my objective is to learn if this theory matches the practice.
> because essentially these licenses say "your contribution is OSS today, but it might end up resulting as unintended free labor that helps build a commercial product of which you won't be part of".
Contributions to projects under non-copyleft OSS licenses _are_ actual contributions to the open source community as a whole.
There's nothing preventing your GPL'd project from being used to make a commercial product that you are not a part of either. You're talking about enforcing perpetual conditions on the future of the product/license for your free labor. This is the reason why projects like MAME have contributor agreements now.
Even if through some process the license changes in the future, your contributions were still to an open source license and that version of the code will remain open forever.
I haven't seen anyone discouraged from contributing to OSS-licensed projects that are not GPL unless they are an extreme ideologue. I myself generally prefer to license my work with something like MIT or Apache (read: the ISC license, generally) and that choice is absolutely fine.
Doesn't matter. Copyright, without the licensing exceptions, won't allow EvilCorp Inc. to use your code even if you publish it in the clear on the internet.
Also, more relevant to the discussion I was responding to, the GPL license doesn't allow re-licensing without consent (or a pre-arranged copyright assignment - a contributor's agreement). And it's copyright law which enforces this.
DCOs don't give copyright ownership to anyone else or even any more rights to anyone else. It is simply a declaration that you have the permission to license the code you wrote under the license you're giving it for.
True. Although in practice, the end-user freedoms that are guaranteed by copyleft destroy almost all viable business plans, save for a couple of exceptions.
So even if theoretically the most aggressive licenses still allow for commercialization, the practical effect is that commercializing a copyleft codebase won't travel too far on its own as a business plan.
This has been discussed many times on HN, and the conclusion is always something along the lines of "don't do copyleft if you want to have any aspirations of commercialization on the software itself as opposed to peripheral elements such as support services or similar".
> So even if theoretically the most aggressive licenses still allow for commercialization, the practical effect is that commercializing a copyleft codebase won't travel too far on its own as a business plan.
Red Hat was acquired for $34 Billion.
It really depends on what you're selling and how good your product is.
> "don't do copyleft if you want to have any aspirations of commercialization on the software itself as opposed to peripheral elements such as support services or similar"
Cop-out argument. This comes from the same people who do split licenses that spit in the face of the spirit of OSS. It's people who want all of the benefits of writing OSS (outside contributions, growth in use of your product, less investment in sales) but none of the risks (someone else using what you offer for free and having a better product than you). It's purely anti-competitive.
Canonical and SUSE are still around and have millions in revenue.
Turns out that Linux is a good product that needs a lot of stewardship and support that's worth paying for. Support was the most compelling feature of Rackspace's product (vs their competitors) as well until the ease of use and rapid-iteration of cloud services crushed them.
There's also hundreds of thousands of software developers and consultancies offering their services that make use of GPL'd (and other OSS-licensed) code. Probably 90% of this board is selling their services/labor of GPL'd projects. It's been my whole career.
No, what it sounds like is we're talking about people who want to write OSS but also want ownership and exclusive rights to make money off of it. My argument there is pound sand.
Note that Canonical was created by someone who was already a millionaire. He already managed a venture capital firm at that time, and I assume had other businesses around. FWIW, Canonical might as well be all lost money (millions in revenue, but how about profits?).
No idea about Suse.
In any case, even if they are great success examples, these can be counted on the fingers of one hand... so I'd say the sample size is not precisely "enough", not at least to prove a point.
EDIT: (reply to an edited part of the comment)
> what it sounds like is we're talking about people who want to write OSS but also want ownership and exclusive rights to make money off of it
I agree. On the other hand, most of the OSS that exists is created by such people, and what do we prefer? Idealistic but non-existing OSS software? or compromised but existing and useful OSS software? That's the question that I feel is behind all the conversations about this topic.
Wealthy people invest in things that are successful, not piss their money away for no reason. Canonical's revenues are around 110 million a year, taking in about a 10% profit margin since 2018 (which tracks with about how long it takes big tech companies to be in the black). SUSE's around 300 million.
You miscategorize Canonical like it's some vanity project. Even vanity projects can be profitable enterprises! Look at Koenigsegg cars! They're literally a millionaire's vanity project that is a profitable enterprise employing hundreds of people.
> I agree. On the other hand, most of the OSS that exists is created by such people, and what do we prefer? Idealistic but non-existing OSS software? or compromised but existing and useful OSS software? That's the question that I feel is behind all the conversations about this topic.
Free software has been around since the 80s, at this point. With or without the GPL. It turns out that there are many, many successful products and businesses that use other OSS licenses. I don't think we're in any danger of people with our skillsets not being able to eat. We're basically all potential millionaires.
The GPL is _not_ the only option available. Heck, Apache is absurdly popular and the bedrock of many enterprises...
> Even if through some process the license changes in the future, your contributions were still to an open source license and that version of the code will remain open forever.
Well, that's a good take. You're right of course: the exact version of the codebase you contributed to, was licensed as OSS and it will always stay like that.
So yes, copyleft is more about providing guarantees about the future, while non-copyleft can only provide guarantees about the present.
I think this way of putting it is clearer than my previous way of saying it. Thanks for bringing up this point of view.
> This is the reason why projects like MAME have contributor agreements now.
Hence articles such as [1] and [2], which have been posted in HN before, albeit with pretty much zero conversation.
Still, just like I'm interested in the topic of OSS licenses and the thought process of people who choose them for their projects, I find also interesting the matter about CLAs and whether to go on with one or instead opting to use a DCO [3], which some well known OSS projects have preferred in the past: [4], [5].
At least in the case of GPL, "user" refers to the end user, and yes, the license is all about some freedoms that are guaranteed to percolate towards the end users, indeed empowering them to become developers themselves, if they wanted:
> It is absolutely essential to permit users who wish to help each other to share their bug fixes and improvements with other users.
Restricting the freedom to use your work to restrict others' freedom is not nearly so onerous or hypocritical as GPL bemoaners try to characterize it.
> For me, "freedom" doesn't mean "restricting what users can do."
> GPL restricts what users can do, and thus restricts freedom in my eyes.
A few sibling commenters have already rebutted, but to add to that, a way of looking at this is that all licences necessarily restrict users (with the possible/arguable exception of things like CC0).
The difference between GPL and MIT is that MIT gives users the "freedom" to further restrict other users in the future, whereas GPL restricts users from ever further restricting other users.
As I said, I'm not trying to push for this or that license, just want to have a perspective of the choice for new OSS projects.
Mostly the usual lesson is that authors typically don't really think through much about the licensing choice when they are hyped about an upcoming first release. Technical concerns are, understandably, what gets most attention at the beginning. Your position is the most common one I've found for small-ish projects.
This is awesome! I wanted to make something like this too after Mixer shut down and I didn't have good options anymore to stream low latency gameplay to friends. But I was too lazy to do it properly.
Redirect the "Record" button of obs to make ffmpeg output to an http endpoint, which is a small server that just forwards any bytes it receives to connected consumers. They can then play it back with
I've found that most of the latency actually comes from the player, i.e. VLC has way more latency, even if you tell it not to buffer. TCP is a bit of an issue, since ffplay doesn't speed up playback when it falls behind, but it mostly works and you can just restart ffplay if you fall too far behind. When it works (good network conditions), it also achieves sub-second latency.
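The playback side of that hack might look like this (host and port are illustrative; the flags reduce ffplay's own buffering):

```
ffplay -fflags nobuffer -flags low_delay -probesize 32 \
    http://your-relay:8080/stream
```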
Will definitely have to give this a try over my hack.
If you want casual people to use this, I would look into simplifying the setup: either by providing compiled binaries so users don't have to set up Go, Rust, etc. themselves, or by containerizing it.
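For the container route, a minimal sketch might look like this (the Dockerfile, image name, and port numbers are all assumptions about the project's layout, not documented values):

```shell
# Hypothetical: build one image containing the Rust ingest, the Go WebRTC
# service, and the React frontend, assuming a Dockerfile in the repo root.
docker build -t lightspeed .

# Run it; the published ports here are placeholders for the web UI,
# signaling, and FTL ingest respectively.
docker run -d -p 80:80 -p 8080:8080 -p 8084:8084/udp lightspeed
```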
For anybody thinking about contributing to this: it is sadly as close to not open source as open source gets. I tried to contribute, but it's impossible to build without the webrtc binary, and the docs are deliberately opaque about how to build your own version of it. You are encouraged to pay the (almost sole) contributor of this repo for his build of webrtc. For that reason it's not really surprising that this version is both quite out of sync with mainline OBS and has only one or two contributors. Fortunately for me, most plugins built with mainline OBS do work with this.
I get that the maintainer has put a lot of work into this and wants to monetize some part of it, which is hard when OBS is GPL, but depending on a binary that is nigh-on impossible to build and then charging for it on your website just feels like a bit of a shitty way to do it.
Very cool - mind showcasing/telling us some use cases? I'm thinking... if you pair this with a tool to control input (mouse/keyboard) you could have a light-fast (pun intended) VNC server with HD resolution?
Wow that is a use case that I haven’t considered haha! That sounds awesome! This was made with the intent to be used as a traditional live stream server similar to Twitch or YouTube but honestly it can be used for whatever your heart desires :)
This looks really cool! Maybe we'll switch over our weekly Jackbox games to this.
What is the relationship between viewers and CPU load? I assume it is not linear? And does it do any sort of bandwidth optimization, or is the outbound bandwidth the stream size times the number of users?
In other words, how many viewers do you think a single DO droplet could handle?
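For a back-of-the-envelope answer: if the server just forwards the ingest stream to each viewer without transcoding (as a lightweight relay typically does), outbound bandwidth grows linearly with the audience. A sketch with assumed numbers:

```shell
# Assumptions for illustration: 6 Mbps ingest bitrate, 50 concurrent
# viewers, no transcoding, so each viewer gets a full copy of the stream.
bitrate_mbps=6
viewers=50
echo "$((bitrate_mbps * viewers)) Mbps outbound"  # prints: 300 Mbps outbound
```

On those assumptions, a single droplet's practical limit is set by its network cap long before its CPU.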
Could this mix multiple obs streams from different origins? My friends and I put on a NYE stream party but it was actually a massive faff finding something that could mix multiple remote streams into one. Would love to be able to self host something to do it
A simple switch is fine in this case, no overlays or anything
I've dabbled in OBS's code, and this should actually be possible. You might need to modify the "Media Source" element to add GUI controls for FTL-specific stuff, but all the protocol support seems to already be in there, and the actual media transport is RTP, which is already supported because the Media Source handles RTMP streams.
You'd have each of your different origins stream to their own private FTL paths, then in your OBS you'd create media sources from each of those paths and push your final, mixed stream to the public FTL path.
Do you mean on the broadcast side? Personally I'm using another instance of OBS to broadcast a few scenes as individual streams from my Linux workstation and my MacBook (right click scene or source -> add filter -> NDI Broadcast)
I'm sure Android etc have similar. NDI is (more closely related to?) a protocol rather than a program though, there's bound to be something that speaks it for what you need
OBS runs on Windows as well, but this only runs on Linux? Can I run this easily in a Linux VM, with all the deps there, including OBS? (I don't know anything about OBS just yet - I just looked at it and downloaded it now.)
Incidentally - I met the producer of the podcast "moneypot" on a plane a few days ago and we talked a ton. I asked how one would get into podcasting/streaming if they haven't yet, and she mentioned some great resources. I can't recall them off the top of my head - I'll have to ping her and get them again - but this is something I am going to look into learning during this pandowntime.
If you have resources to point at - that would be appreciated.
The idea is that OBS runs broadcaster-side and is thus what sends the stream. Lightspeed runs server-side (like YouTube, Twitch, Facebook, etc.). Your viewers then connect to the server running Lightspeed rather than to your home machine broadcasting the stream.
Lightspeed seems to build and run on Windows just fine. I haven't gotten a stream through quite yet, but it looks to be my networking configuration's fault.
We usually use Discord's screen-share feature, but having more options is always nice, and I assume I can up the resolution/bitrate as needed since it's based on an OBS stream.
Yeah! The bitrate shouldn't exceed 8k, though, and remember that the higher the bitrate, the more bandwidth the server uses, which can mean incurring costs with some providers.
This is fantastic! I needed this yesterday evening while trying to bring a remote family member into a Jackbox game, with no account or software needed on their end. Great project!
[1] Open Broadcast Software (OBS): https://obsproject.com/