I randomly learned OBS a while ago for doing some twitch streams in the evenings. I'm so glad I did.
I run a 6,000-person company during the day and have OBS set up to push into Google Meet. I've done town halls with live on-screen Q&A voting, hosted podcast discussions, and done PIP product reviews. I use its video record feature to react to Figma prototypes and post the MP4s in the respective channel for discussion.
OBS is an amazing tool and it's worth learning. Even simple things like adding a compressor to an audio stream can make a huge difference to the quality. As one of our coaches recently said, "Video quality is the new presence in 1:1s."
On Windows it's reasonably easy to output OBS to a virtual camera for video conferencing software through a plugin. I've posted a bounty of $10k recently to make this a native feature and it's getting lots of traction.
My daughter wanted to record some videos of her playing a web-based game. I found the interface to OBS unintuitive. I managed to figure out how to capture a specific area on the desktop, but it was unexpectedly difficult to resize the output to match the input. I found some way of doing it that I can't remember.
A few months later I had to do it again and that time I couldn't find the option to resize the output to match the input.
I'd love to find some resources to help her learn how to setup OBS for recording or streaming.
Osiris, there are a lot of tutorials around learning OBS. One of the best ones that I've come across is EposVox's OBS Studio Master Class 2018. It helps you figure out what you want to learn and covers a large swath of the various OBS functionalities.
Instead of streaming the entire desktop (or a portion of it), you'll actually want to set up the ["Game Capture" source](https://obsproject.com/wiki/Sources-Guide#game-capture), and configure it to automatically attach to any fullscreen application.
I'll say that StreamLabs was better before they changed their monetization model. I was lucky to download and install a few sets of overlays a few months ago that, today, would cost a minimum monthly fee to access.
Check out "Gaming Careers" on YouTube, it's a channel dedicated to teaching all the various aspects of streaming from the ground up, assuming little or no previous knowledge.
Are there any existing plug-ins for a virtual microphone?
I want to use OBS's realtime noise suppression and noise gating in another app (mainly the online lecture platform Echo360). I got it working using VoiceMeeter in what seems like a hacky way, but only with high latency so far.
I'm not aware of any plugins, but in case anyone is curious about how to replicate that setup on Linux systems with PulseAudio, you can create virtual outputs with:
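Something along these lines (a sketch; the sink name and description here are arbitrary):

```shell
# Create a virtual output ("null sink"). OBS plays audio into the sink,
# and other apps can then record from its monitor source, obs_out.monitor.
pactl load-module module-null-sink sink_name=obs_out sink_properties=device.description=OBS_Output
```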
If some program filters out output monitors from its input list, you can usually use pavucontrol to force-change it. Or, you can create a linked virtual source:
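For example (assuming a null sink named obs_out was already created; names are arbitrary):

```shell
# Re-expose the sink's monitor as a regular-looking capture source,
# so apps that filter out monitors will still list it as a "mic".
pactl load-module module-remap-source master=obs_out.monitor source_name=obs_mic source_properties=device.description=OBS_Mic
```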
Use Equalizer APO with a noise-reduction VST. This is by far the best option on Windows if you are concerned about latency×compatibility×underruns as a figure of merit. It is far superior to using "virtual audio cables" and a VST host (like Lighthost or SAVI) or voicemeeter.
It's not well known, but it really works spectacularly well compared to those other options. For me it has never had any audible buffer underruns (unlike Lighthost), no noticeable latency (unlike SAVI and Voicemeeter, even with small buffer sizes), no problems regarding exclusive mode (unlike Voicemeeter), and it works with every single application.
The UI is not terribly clear about this, but it can drive multiple devices independently, simply by adding several "Device" blocks to the configuration.
This was great advice! The key thing about Equalizer APO is that it does its processing of the sound before the Windows sound API, so any program recording from my mic gets the processed version, and there's no virtual microphone needed (well, the actual microphone becomes virtualised - there's no separate device). I have that all working, and mostly followed this tutorial: https://www.youtube.com/watch?v=J3fBx2ftaBs
I adjusted the numbers for my setup but his were a good starting point.
ReaFIR works quite well and has very low latency. It's a bit fiddly to auto-generate a noise profile due to the architecture of Equalizer APO (the entire audio processing runs inside the Windows audio stack, so the VST panels in the configuration editor don't have a signal). Basically you use another VST host (e.g. Lighthost or OBS), generate your noise profile there and then copy/paste the chunk data into the APO config file.
Some general EQ'ing on the mic also works wonders for how well it sounds, but that's very specific to your voice and mic.
--
Another use case of Equalizer APO where it is much better than everything else is compressing game audio. Some games simply have audio that was designed without regard for hearing safety (CS:GO is a strong contender for #1 here), and this helps immensely with it.
Please see my sibling comment. The video I linked uses ReaPlugs (https://www.reaper.fm/reaplugs/) and I ended up without needing any explicit noise reduction, just noise gating and some other adjustments.
I think there should be software that provides virtual audio devices, so you could configure it as the output for OBS and the input for you other application - something like this: https://www.vb-audio.com/Cable/
I was just thinking about this, but mostly to route the audio through OBS so I can play my teammates some sweet synth music while we wait for others to join the standup.
I've just started learning how to use it and it's blown my mind how bloody useful it is, while also being open source. I've got it set up to record part of my screen and my webcam at the same time, and then I can chop it up in post using Resolve, no problem.
Until I found OBS I was basically trying to record my screen and then narrate over it after the fact, but it just didn't jibe with me as there's basically no room for improv or failure at that point. And I personally prefer to leave my less egregious mistakes in the final cut to demonstrate that you don't always get things right the first time.
Thanks for posting the bounty. Awesome! I wonder if it would make sense to add a virtual mic output as well so that the video and audio can be synced together, e.g. when someone is using OBS with Zoom... I've gotten it to work via pulseaudio routing but the audio isn't automatically synced.
Poll Everywhere (YC S08) has Q&A question support with voting. I'm a developer at Poll Everywhere and we use it during our weekly townhalls. Our company was 40% remote before the coronavirus crisis, and my experience with the Q&A poll as a remote worker has been great.
You're replying to Tobi, the CEO and founder of Shopify, who recently and very publicly posted about 40-hour work weeks and how Shopify lives and breathes work-life balance, which multiple employees confirmed.
I don't know him from a bar of soap outside of what's shared publicly, but a cold, calculating executive is about as far from the mark as you could possibly be.
Hm, I will admit I didn't realize the particular executive I was addressing, and that Shopify is off my radar in terms of executive abuses, but CEOs are all of a class, and 40-hour work weeks are still an untenable arrangement, and the Overton window for commonly acceptable work-life balance is far, far toward the wage-slavery side of things–to say nothing of the work-life balance for millions upstream of the Shopify supply chain, e.g. computer-mineral miners in the Global South. So I am not ready to give Tobi any humanitarian awards.
In truth, I have long admired Shopify for their open culture, their tech blogs, and a product that empowers small businesses. None of these things are enough, however, to convince me that he's anything more than a wealthy Libertarian seeking (primarily) personal gain through economic exploitation, managerial coercion, and authoritarian hierarchies.
> a cold, calculating executive is about as far from the mark as you could possibly be
This is an indefensible exaggeration. You're telling me Tobi is a saint? A CEO? An absolute absurdity.
Liberals love democracy until it comes to the workplace. You disgust me.
"Presence" in terms of 1:1s should be about listening to, understanding, and empathizing with the person opposite you.
"presence" before remote working was about actively being in the room - if you've ever had a manager who checks their email in your 1:1 you've experienced someone not being "present".
Video quality in a remote 1:1 is the foundation of that presence, e.g. how the other person can see that you're understanding and empathising.
I'm one of the core contributors for OBS. Our website traffic has more than doubled over the last couple of weeks due to the COVID-19 situation - when we released the v25 update we accidentally killed our site due to a cache stampede after purging the CDN (oops).
We're seeing all kinds of new uses, especially users who are integrating the OBS Virtualcam plugin to do presentations and other content sharing with apps that only support webcam input.
Thanks for maintaining OBS! Any timelines for OBS Virtualcam to be available on macOS? It seems like several apps, such as Snap Camera, integrate with users in this mode; OBS could be really helpful this way in webinars that don't happen over Twitch/YouTube.
Woah, I was gonna ask if anyone knew when Display Capture was gonna be possible on macOS Catalina again, but when I opened OBS now, I see that it's actually possible in version 24.0.6 that I am running. Guess I must've overlooked it until now the last few times I looked for it, lol.
Anyway. Does anyone have any recommendations for settings to use in order to avoid lots of frame dropping and OBS making other applications sluggish when I try to stream to Twitch? I'm using a MacBook Air Retina 2018 model with an external monitor connected to it.
Check out CamTwist in the meantime - that has a virtual camera on Mac, and I enjoy using it to flip my video upside down (because I live in New Zealand). Also for typing large-text subtitles over the image, which was great for my late grandmother. I'm not affiliated, and it's free.
My problem is more like OBS does everything I need, but I need to use OBS as a camera in Zoom or Hangouts, but I suspect CamTwist isn't as feature-rich.
OBS is very good software, thanks for working on it!
Recording the screen works much better than any other screen recording software I tried. For this use case the preview can be a bit confusing. If the resolution does not match (because of OS 200% scaling for example) going to the settings each time to adjust it is a bit cumbersome; the interactive resizing handles in the preview somehow never helped me. Sometimes one of the reset zoom context menus helps.
Also the "Window source" would be awesome, but is a bit cumbersome to set up every time, and doesn't capture things like menus unfortunately.
It's probably really difficult to improve these things so they work automagically for dummies like me that know very little about OBS and use only a tiny feature set, without making things worse for power users, who are probably happy with things as they are.
OBS is awesome. I love being able to switch scenes on the fly. But I have one suggestion that would save a lot of embarrassment for a lot of people.
TL;DR: There needs to be a master audio level display, or at least some sort of master indication of whether a stream is getting an audio signal or not.
Or at the very least, audio sources should be muted by default in new scenes.
We have an Intro screen scene that just displays our logo and some background movement with a message that we will begin soon. We started the live stream and then muted the mic on the audio inputs and then, for good measure, muted the physical mic. We then proceeded to chat and get things ready for the presentation, etc.
Little did I know that OBS includes -all- audio sources on every scene, by default, unmuted.
And though I had muted our regular mic, the webcam's built-in mic was on and transmitting. We didn't see the green audio level animation or even the listing for that input either, because it was at the bottom of the list of audio input sources where you have to scroll down within that box to see it.
Luckily, we didn't say anything too embarrassing, but it was embarrassing nonetheless.
This is something we want to do, but there are some complications both in the design of the program and getting the UI balance between "normal" users and users familiar with professional equipment/DAWs. There's been some proposals, but we haven't reached a consensus yet.
Currently the lead developer (Jim) is the only paid developer, and he's able to work on the program full time thanks to a few large sponsorships. That said, we'd really like to be able to pay more people, as the program has many development needs that could really use attention. More detail about sponsorship/donation opportunities can be found here: https://obsproject.com/contribute
I haven't used it yet, so maybe this already exists, but maybe think about adding something to the app itself to remind people to contribute financially, similar to how Wikipedia is doing it.
Maybe count how many times someone is using the app (local count) and when the count is high then show a little something, short message and call to action to donate.
Fastly have kindly sponsored us with free CDN service. They have a tiered caching feature called Shielding that ended up being our solution - it just needed turning on.
I see! I didn't mean tiered caching. I meant, if a thousand clients simultaneously request a cacheable resource from the same colo that lacks the resource, only one request should make it to your origin, and the response should be streamed to every client. This is possible in Fastly but IIRC it depends on what combination of http and https your client and origin connections are using.
OBS is amazing. Half our faculty just went all-out and spent thousands of dollars on some commercial screen recording software.
Meanwhile I'm doing my online courses with OBS, and it works beautifully. I have multiple scenes set up in OBS that grab different parts of my screens, and I switch between them with simple key strokes, while narrating on my actions as I do them.
It's a very simple, and very effective setup, and my students love it.
To me, it is immensely powerful to be able to switch scenes and narrate live, instead of doing these things in post. This saves a ton of time, that I can instead spend on refining my content.
Love hearing stories like this! Often, we only hear the negative or when people are having issues (it's rare for folks to speak up when everything is working well!), so it's genuinely heartwarming to hear how much people are able to use our program to keep their livelihoods moving.
Probably any of the education-specific tools that are maintained similar to enterprise software platforms, where they add 'online video tutoring' capabilities to check off a feature/table-stakes box, but it has a painful UI that makes OBS look like a dream to use, and adds an extra few thousand dollars to the school's bill every year.
Not OP but I can say that on Windows it may require a lot of fiddling. Hardware drivers appear to be a big factor.
Another concern is the many overlays that accompany game launchers and drivers, including: Nvidia, Steam, GOG, etc. These add latency and sometimes private notifications.
Interface is similar across the platforms; I had to do a few test streams on macOS before I could find settings that didn't cause streaming hiccups or problems with other software. I also had to make sure the OBS app interface was on a monitor separate from the one I was streaming; otherwise sometimes it would have some weird stuttering issues.
No matter what, you'll probably need to spend a little time tweaking things to get it all working like you want. But the 'scenes' and preferences are pretty good about letting you lock things down once you do find something you like.
In OBS, a Scene is an arrangement of video inputs, images, text written on the screen, and so on.
If you're a teacher and you're going through a PDF exercise opened on your screen while drawing things on a whiteboard behind you, you may want to have 2 scenes:
* One with the opened PDF in full screen, with your camera feed small in the bottom-right of the video.
* One with the camera feed in full screen, where viewers can clearly see what you're writing
You'd then be able to switch between those 2 scenes at will depending on what you're currently doing. You'd show the first scene when you're reading the exercise out loud and then switch to the second scene when you're resolving it on the whiteboard.
I use OBS Studio with OBS-VirtualCam [0] to attend virtual lectures & hold meetings for my team. I've found it to be incredibly convenient because you can control nearly everything with scenes and the audio controls.
Before meetings start, I can broadcast music and display announcements, and then without having to hit a jarring "End Screenshare" can switch to my webcam and start a meeting. Live demos and presentations are another scene with the desktop/window/browser and webcam. 100% would recommend.
I'm doing something similar on Linux, although a bit more complicated.
I'm using v4l2loopback [0] to create a dummy video device, ffmpeg to create a stream endpoint that streams into the dummy video device, then setting up OBS to stream to localhost.
It is actually really nice to have the capability to fully control what is going in to the video input.
I haven't run into a need to also change the audio input yet but if it becomes necessary, it should be possible to set up loopback with ALSA.
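For the record, the ALSA loopback lives in the snd-aloop kernel module; a minimal sketch:

```shell
# Load the ALSA loopback driver. Audio played to hw:Loopback,0
# becomes available for capture on hw:Loopback,1 (and vice versa).
sudo modprobe snd-aloop
```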
Are you using this with Zoom, by any chance? I had no luck trying to capture my webcam with ffmpeg, add text to it with ffmpeg, and output everything to a fake webcam with video4linux. Actually, it works perfectly well, but this particular stream I can't open with Zoom, even though Zoom will accept it perfectly if, instead of my webcam, I add text to a video file.
I suppose Zoom detects that my actual webcam is in use, and therefore refuses to display... any webcam whatsoever, including the virtual one...? Makes little sense but maybe...
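For context, a typical v4l2loopback module load for this setup looks something like the following (the device number and label here are illustrative):

```shell
# Create /dev/video7 as a virtual camera; exclusive_caps=1 makes the
# device advertise itself as capture-only once a producer is attached.
sudo modprobe v4l2loopback video_nr=7 card_label="OBS Camera" exclusive_caps=1
```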
where exclusive_caps=1 is the work-around for Chrome (both video_nr for /dev/video7 and card_label should be able to be set to ~arbitrary values). You need to first start writing stream to the loopback device and then it would switch itself into a capture-only device and Chrome will recognize it.
# Replace `/dev/video2` with the dummy video device added by `v4l2loopback`.
ffmpeg -re -listen 1 -i rtmp://127.0.0.1:5050/ -c:v rawvideo -an -pix_fmt yuv420p -f v4l2 /dev/video2
After starting ffmpeg, you set up OBS to stream to a custom streaming server at `rtmp://127.0.0.1:5050` and start streaming.
It's not very efficient and there's a delay since OBS is encoding with h264 then ffmpeg is decoding that. It's not too bad for me because I can use the NVENC encoder but I'm sure there's a way to get OBS to stream raw video somehow.
how are you getting desktop audio (music or whatever) to get sent to your meetings? I didn't see how to expose the audio output from obs as a "microphone" or whatever to video conferencing software. I ended up hacking my setup together with voicemeeter but it's pretty sloppy and error prone.
Pipewire [0] (the successor to PulseAudio) attempts to streamline this process for Linux. I've been messing with wf-recorder [1] for my screen+audio recordings, and might try to get it to spoof a camera input so I can get any program attempting to connect to the webcam to instead turn into a screen-casting tool.
Tip: zoom doesn't list pulseaudio monitors in the available sources, and trying to set zoom's input to anything manually from pavucontrol fails silently (even if zoom is set to "use system default" or whatever it's called).
The only way I found to stream audio to zoom is to use a pulseaudio module that lets you use a named pipe as a source. You can then output your sound to said named pipe, and set it as the microphone in zoom. The sound is pretty bad of course.
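That module is module-pipe-source; a sketch of the setup (the pipe path, names, and sample format here are arbitrary):

```shell
# Create a virtual mic backed by a named pipe; raw PCM written to the
# pipe is what any app recording from the source will hear.
pactl load-module module-pipe-source source_name=zoom_mic file=/tmp/zoom_mic.fifo format=s16le rate=44100 channels=2

# Then feed it, e.g. decode an mp3 to raw PCM with ffmpeg:
ffmpeg -re -i music.mp3 -f s16le -ar 44100 -ac 2 - > /tmp/zoom_mic.fifo
```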
Are there any "virtual USB" devices, something like software USB gadgets that emulate a webcam, so that applications don't need special knowledge of these alternative sources?
OBS is commonly used for video game streaming, but it's a great tool for any scenario where you need to take live audio and video from different sources and display them at the same time or transition between them.
I've been using it to make a corny music interview show with my local musician friends during the coronavirus shelter in place. Whereas a lot of my fellow musicians are streaming from their phone, I'm able to connect a mixer to my computer and stream the show with really good audio quality.
The B in OBS hints at streaming, but it's also fantastic for purely recording. It's honestly surprising how lacking that space was before OBS. I remember using FRAPS/Taksi a bit, and stuff like Camtasia, but they were all pretty awful to be honest, and definitely not free or open source.
I really like using the recording feature to do sound checks. We go through all of our checks, then I watch the video locally in VLC. That way I'm certain when it goes live it'll sound the way it's supposed to.
I wish more people would do this. Or maybe OBS should have some (opt-out) warnings for when your audio is unbalanced, too low, or too loud. I've seen way too many videos or streams with bad audio levels.
It feels like OBS has been here forever but I remember the days when I had to use Camtasia. The software was actually surprisingly good and easy to use but for the amount you were paying, it wasn't worth it, not to mention the proprietary recording format wasn't doing it much good.
Agreed. I used it just this week to record my screen (have used it in the past for streaming) and was blown away at not only how easy it was to use, but how small the files were and how well it integrated the encoders my system supported.
I've been using OBS for a couple months for live streams on YouTube (see https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geer...), and I have had rock-solid reliability, using an external mic interface, an external camera (displayed PIP), and sharing one of two displays during some instructional sessions.
One thing to keep in mind, though—unless you have a dedicated video card, and it's supported by OBS (the list of supported cards on macOS is very thin), your CPU has to do all the compositing and compression, meaning you need a lot of CPU to be able to manage the streaming.
On my 2016 MacBook Pro 13", it barely has the horsepower to do a stream and also run processes that I'm explaining (e.g. manage some VMs, run some database operations, etc.). I had to turn down the compression method to 'ultrafast', which is lowest quality (but still pretty good with 1080p output), and I also use SwitchResX to set my shared display at 1080p 1x resolution (instead of 4K/2x resolution).
OBS core team member here, just a quick clarification on this post. OBS will not run without a supported GPU for compositing; that is always handled by the GPU, using OpenGL on macOS and Linux (we use Direct3D on Windows). The available encoders, however, might change based on the available hardware. Hardware encoders are, generally, much faster and lower impact on system resources, but may have lower quality per bitrate as a trade-off.
If I was doing Logic Pro X tutorials, or XCode tutorials, this would be a non-option - which are the only kinds of tutorials I personally have enough knowledge and drive to create.
PC is a non-option for a chunk of people in dev and media - I, for instance, do iOS dev for a living, and live by Logic Pro X for professional audio work. (I've been using Logic for about 15 years...)
The amount of time it would take for me to transition, to, say Ardour - or the amount of my career that would be lost if I swapped from, say, iOS to...I'm not sure what the FOSS equivalent is...(Android doesn't count, obviously, because Google) I'd lose years and years of training, experience, and wisdom.
From my experience real experience comes from learning concepts not applications.
Someone that knows the ins and outs of Microsoft Word can relatively easily switch to LibreOffice Writer.
Or someone that is a good modeler in 3dmax can also become a good modeler in blender.
Buttons in applications change position all the time. UI gets reworked, keyboard shortcuts change, etc. from each version to the next, but if the concept behind doing certain stuff is learned, it doesn't really matter which software product is used.
The only time when stuff is really hard is when you encounter a concept you haven't worked with before and have to change the way you think radically.
If you're running a modern Linux desktop you're probably running Wayland, and screencasts on that have long since been a complete pain in the neck with per-compositor "solutions" that mostly don't work quite right. Fortunately someone who works on Gnome wrote the obs-xdg-portal plugin that should fix this, at least for Gnome and hopefully soon for wlroots and KDE once they fully support the underlying portal API. Until then, the easiest way to get screencasting working is just to run in X11.
(Ask me about ffmpeg raw GPU buffer capture one day; running a bunch of codec code as root is always exciting.)
Does Wayland do anything better than Xorg? Every time I see it mentioned, it is about how it does not support this or that core feature of Xorg (e.g. multiple displays with custom pixel densities/scaling, screen sharing apps being broken, etc.). What is Wayland's reason for existing?
Same reason systemd exists, the previous solutions were old and clunky and some people got fed up and decided to update. Plus some good old NIH. People who were dealing just fine with previous solutions then start complaining about breaking things for the sake of breaking things and not being able to do things the way they were used to.
In the specific case of Xorg I find the situation strange because I'd gladly have made the switch 15 years ago back when messing with Xorg.conf was a common occurrence for me and it kept getting in the way (although a big portion of the blame was with the proprietary drivers, especially AMD's). Xorg was sometimes a bit of a pig too resource-wise, but that's when I was running a PC with 256MB of RAM. I remember being fairly optimistic when I first heard about Wayland and Mir, the prospect of ditching X11 was enticing.
But now? I haven't really had to wrestle with X in a long time. It just works for me. I'm definitely not looking forward to reworking my entire workflow for minor benefits although I suppose I'll have to one day. I also use X forwarding pretty extensively, but I'm probably a small minority these days.
I agree with this. Wayland has taken so long and is still lacking in a few areas, while Xorg somehow managed to reach a point where it actually "just works" first try, with some minor problem every other year. I don't need Wayland anymore.
I'm doing my classes online with Zoom. For whatever reason, under Wayland I can not choose individual windows to share--I can only share the whole desktop. I switched to Xorg and now I can share individual windows in Zoom. I honestly don't care to support proprietary software, but application writers must do extra work to support both.
Wayland by default prevents applications from accessing other applications' display, input and output, while on X it is basically a free-for-all. Any application can see what any other application is displaying, read its input & send it input events.
This might have been fine in the past, but is not really OK any more with efforts to make things more secure (e.g. to prevent a malicious application from reading your password entry, taking screenshots of sensitive data, injecting input events into your secure sessions, etc.).
The side effect is that new protocols need to be developed that applications can use to request access to the display/input/output of other applications in legitimate cases (such as screen sharing in your case) & not all is in place for that yet.
> The side effect is that new protocols need to be developed that applications can use to request access to the display/input/output of other applications in legitimate cases (such as screen sharing in your case) & not all is in place for that yet.
This is the main hurdle for most people. I would say that 99% of people agree that the Wayland way makes sense and is the better way of doing things, but without the needed access controls it's just not ready yet.
Like if Google said "Apps can't access [location|files|whatever] without permission" on Android with no way to grant those permissions.
>Wayland by default prevents applications from accessing other application display, input and output,
So it breaks the entire linux philosophy of using input and output streams to pipe data between different modular applications?
>This might have been fine in the past, but is not really OK any more
Says who? Personally I like my computer being able to access other things on my computer. It kind of makes it more useful that way. The ability for applications on linux to fairly seamlessly work together using a set of standard protocols is one of the primary reasons I use it.
> So it breaks the entire linux philosophy of using input and output streams to pipe data between different modular applications?
Not really. To use your analogy, the way that X works - every application being able to read the framebuffer of any other - is the equivalent of every application running as root and being able to read and modify any file on the system. When you consider that applications running under Wayland may include e.g. banking details, any app being able to read that is like anything being able to read /etc/shadow.
If your computer is perfectly secure, with no untrusted code running, that's great - and also far more secure than 90% of desktop computers out there.
On many systems, and by default, yes - but the other part of what's going on is that Wayland allows applications to be sandboxed like they couldn't be before, as they can no longer use your X server as a conduit to spawn an unsandboxed shell and run commands. You can, today, run e.g. Firefox in a sandboxed environment and be certain it can't access anything you don't want it to.
AFAIK graphical application distribution/sandboxing systems such as Flatpak pretty much require this to be available if they ever want to provide reasonably secure sandboxing & might already be making use of this on Wayland systems.
Arguably the end state planned for Wayland in this regard (having access to specific applications provided) is conceptually closer to streams than the current situation with X (one big shared ball of global state).
Not really - I should have been clearer - by input I mean keyboard input and its manipulation.
You can pipe stuff to other executables all you want under Wayland; you just might not be able to easily (i.e. without the user granting permission using the correct protocol) inject keyboard events from one application to another (say, malware masquerading as a game injecting code in the form of keyboard events into a running terminal emulator or ssh client).
Nitpick: This is not a meaningful comparison. Wayland is a wire protocol. Xorg is a display server, and the wire protocol it implements is called X11. There are several other display servers that implement the Wayland protocol. Some of these display servers do support those core features, and some of them don't (yet). It depends on which one you're using. The display server used by GNOME should support those features.
>What is Wayland's reason for existing?
From the website [0]:
>Wayland is intended as a simpler replacement for X, easier to develop and maintain.
Wayland may be easier to maintain than X. But from what I've seen, writing a compositor for Wayland is more difficult than writing an X window manager, because things you got for free with X have to be implemented by the compositor in Wayland.
And Wayland is more difficult for many applications to target, because there aren't (at least yet) standard protocols for things like screenshots, screen capture, etc. So they either have to pick one desktop environment to target, or maintain implementations for all of them.
Take a look at wlroots [0] for a library that massively simplifies the task of writing a wayland compositor. It also gives many of those lower level things "for free". For an even higher-level API built on top of wlroots, you can look at wltrunk [1].
There are standard APIs for screenshots and screencapture, implemented through the desktop portal and pipewire. Check the top-level post for more info about this -- it's part of why Wayland support for OBS has progressed.
I'm aware of wlroots. But KDE, GNOME, Enlightenment, etc. don't use it, so each of those have to implement things separately.
Concerning the desktop portal API: it's basically just a wrapper around the native custom APIs of the underlying compositor, and it's pretty limited in functionality. For example, the screenshot API just has a way to request a screenshot; it doesn't have a way to specify that you'd like to select a window, region, or display/screen/monitor. In the case of the wlr portal, from what I could tell, it just always gives you a screenshot of the full desktop.
>But KDE, GNOME, Enlightenment, etc. don't use it, so each of those have to implement things separately.
I am not sure how this is relevant if you're trying to write your own compositor. If those projects want to create extra work for themselves, that's on them.
>the screenshot API just has a way to request a screenshot, it doesn't have a way to specify that you would like to select a window, region, or display/screen/monitor.
Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.
> I am not sure how this is relevant if you're trying to write your own compositor. If those projects want to create extra work for themselves, that's on them.
I'm actually more concerned about the fact that wlroots has/had to duplicate work done by Gnome and KDE (wlroots is more recent than much of gnome and kde's wayland support).
> Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.
Yeah, the problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor. So on wlroots, currently, an app can only get a full-screen screenshot. And a tool like flameshot becomes awkward if the compositor opens its own dialog. In X, if you don't like GNOME's screenshot tool, you have a handful of other options. With Wayland, tough luck; the most you can get is a better editor/annotation tool.
>I'm actually more concerned about the fact that wlroots has/had to duplicate work done by Gnome and KDE
I don't think so, GNOME and KDE have never had the goal of making a reusable and generic compositor library like wlroots. You can try to build something with their internal compositor libraries (libmutter or kwayland) but they probably won't be as nice.
>The problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor.
This is on purpose and it's not the problem. It's the only way to do it securely. The problem is that you are trying to perform a privileged operation, which is the only way that something like flameshot can even work. Allowing random unprivileged programs to scrape your screen without confirmation is how you get trojans and other spyware. It's not worth adding more APIs to the portal just to support this because it's intended to be a secure API that can be accessed from within sandboxed applications.
Sure there are other tools on X but unfortunately none of those options are secure either.
> This is on purpose and it's not the problem. It's the only way to do it securely. The problem is that you are trying to perform a privileged operation, which is the only way that something like flameshot can even work.
That's not true. One way is to have secure protocols that can only be used by whitelisted programs in a secure context. sway has something like this (although by default I think it is pretty open), but there isn't any kind of standard mechanism for privileged protocols in wayland.
Also, I don't see why the screenshot API couldn't take a value for the type of screenshot to take. Like an enum with values for Region, Window, Screen, Full, and Any. To hint at what kind of screenshot to prefer.
Yes, one way to have a whitelist is to pop up a dialog asking to approve elevated permissions for a certain application. This is what mobile operating systems already do. The security implementation in sway is incomplete and has stalled, and is not going to work for all other types of desktop anyway. Pluggable security configuration should probably be added to wlroots at some point. This would allow any compositor to implement their preferred security policy and support whatever MACs or auditing they need.
It actually works in a way that makes sense for modern compositors and GPUs, which means rendering is much smoother, without tearing issues and so on. Problems getting this to work reliably in Xorg are what led the maintainers to abandon it and work on a replacement. It just turns out that shifting a bunch of software built around a core, complicated interface to another system is quite difficult.
I'm kinda new to Linux as a desktop, and thus went straight to Wayland, so these kinds of comments from ol'-timers are super interesting to me.
I run a Wayland desktop, and I start it by typing its executable from the TTY after I log in. No fuss, no muss.
Everything works great, except there was this one game I wanted to try out that's a Windows .exe and needs to run in Wine and I couldn't quite get it to run in Wayland. So I installed xorg-server and an X window manager. Tried to just run it from TTY and it complained that there was no X server running. Okay, turns out I need another program to start X, then start my window manager, as a kind of desktop chaperone. Finally get that worked out, try running my game, and the screen tearing is a nightmare. So now I have to run a compositor in there as well to be an intermediary in the already extremely complicated X protocol. And since X needs to run as root (I think?), half the time I try to start it, I get odd permissions errors, or it tries to use the wrong TTY. As someone going the _other_ direction, I can't fathom how anyone puts up with X.
The good news is that after it did its initial setup and install in X, the game now seems to run fine in Wayland. :D
X11/Xorg was the default on many distros, so often it was preconfigured in a working state. I started my Linux journey around...2004? And booting into Mandrake or Slackware (or a Knoppix Live CD, my true beginning), X would work fine. But as soon as I had to install it myself (minimal Fedora, minimal Debian, or my fave, stage 1 Gentoo), I'd hit all kinds of issues with configuration and starting the X server.
As a counterpoint, I tried to set up Wayland a couple years back on Ubuntu and Fedora before it was default anywhere, and that was also a nightmare.
It's easy to forget sometimes just how much the distro maintainers make our lives easier.
Xorg has definitely become easier to configure. Back in the old days, you had to write the XF86Config file, either manually or automatically during installation, or else it wouldn’t do anything. These days, Xorg auto-detects everything and you only need an xorg.conf if you’re doing something weird.
Yeah, back then I was using one of the glorious Trinitrons at 75 Hz and 1600x1200. To work at all, I had to manually look up the horizontal and vertical sync ranges and put 'em in the XF86Config.
Debian 11, AMD RX560, KDE on X, FreeSync on and working; screen tearing appears in landscape orientation and isn't there in portrait. If that's a driver problem, then explain why it works fine in portrait with the same buffer size.
I don't think any solution based on video streaming can ever match what X11 provides, which is, remote apps use the settings of the client computer for rendering. e.g. with ssh -X, if I set my dpi in my .Xresources, no matter the machine to which I'm ssh'ing to, I'm always getting a correct font size for my local screen.
I haven't tested but there is no reason that can't be done in waypipe. It works by intercepting certain protocol messages and proxying them over the network. The client just has to be given the output information from the remote machine.
See chapter 7 of the Unix haters handbook from 1994, which was linked here the other day: http://simson.net/ref/ugh.pdf
The amazing thing about Wayland is that it's taken over 25 years to happen. Over those 25 years, X has become less of a problem as CPU speed and RAM have grown exponentially, and we now have GPUs to help it too.
I had to turn on X11 for screen recording once (the screen recording was done by a windows only app running under wine). It didn't take a minute to see extreme tearing. I seriously don't understand how anyone can use X11 other than as a fallback.
Those will cover the capture plugins, but OBS still needs XWayland to run due to a dependency on GLX for rendering. For those interested, there is an open PR here to add native EGL/Wayland support: https://github.com/obsproject/obs-studio/pull/2484
For wlroots compositors there is also the wlrobs plugin, which can be used if you don't need pipewire: https://hg.sr.ht/~scoopta/wlrobs
I think this is a better way to go to get the same performance and low latency gaming capture as on Windows with gaming GPUs.
The guy who made that PR frequently streams coding sessions on Youtube. I think he made it because he wanted a better way to stream some cool live opengl coding sessions. And even though that code isn't production ready, he has used it for some time now and it seems to work great.
If there is some company that slightly cares about Linux desktop and gaming on Linux, I would suggest helping with that pull request and getting it merged. (Anyone from Steam, AMD or Nvidia here?)
Some of the EGL portions of that PR are actually included in the one I linked :) At some point my plan is to go through and merge these all together if no one else does it, but streaming has not been a priority for me at the moment.
Right, I figured capturing is the big ticket item, and that most people wouldn't care what OBS itself runs on. Is there a reason to care about it running on XWayland other than being able to say you don't need X at all anymore? Would you expect to see major improvements for apps that are already doing all their heavy lifting in GLX on X11?
>Is there a reason to care about it running on XWayland other than being able to say you don't need X at all anymore?
I personally don't on my setup but the reason to do it is so other plugins can make use of EGL extensions. Native Wayland support just comes along with that trivially. Future development on platform-integration extensions is expected to happen in EGL instead of in GLX. For a current example the other PR that does direct KMS capture needs EGL to work, even with the X11 backend.
> If you're running a modern Linux desktop you're probably running Wayland
I'm a single data point but I'm running Ubuntu 19.10 and I'm not running Wayland. I don't remember if I opted out during the installation or if I wasn't given the choice.
The top reason to stay on X11 is that no screen sharing applications work with Wayland (Meet, Slack, Skype), and I need them a few times per week to work with my customers.
This was more or less your point but from a different perspective.
> If you're running a modern Linux desktop you're probably running Wayland
I believe the big exception to this is Nvidia. It looks like things might be changing, but until quite recently, the Nvidia proprietary driver was X11 only, so anyone running Nvidia graphics would automatically fall back to X11.
Wayland won't be a thing until everyone agrees on a common API for real-time screen capture. I've read that some Wayland developers have said screen capture is not a priority, and I can't understand it. Demand for screen capture is higher than ever, now that we have ubiquitous live streaming sites and people earning money from them. Besides, the easiest way to explain how to use a GNU/Linux desktop to complete beginners is with videos.
The common API is the desktop portal and pipewire. The major projects (GNOME/KDE/wlroots) all agree on this one. Take a look at the links in the GP comment for more info.
The common API is whatever gains traction. Wayland has even less direction than Xorg development had (especially in its early days), because it's a spec with lots of holes that others have to fill in and implement. Even Keith Packard doesn't think Wayland is on a good track anymore.
The Linux ecosystem needs a standard and unified API or SDK for its desktop endeavour like macOS and Windows does.
This is why this whole thread, where the user has to find out whether an app like OBS is running on KDE or GNOME, with X11 or Wayland, risks losing traction with general users. I always recommend that people don't bother trying the other distros and use Ubuntu instead.
The Linux community is eternally stuck with its micro-ecosystem of alternatives to alternatives across the desktop stack, best described as a Howl's Moving Castle of components.
Also for future Linux app developers, never tell the user to 'compile' something as a way of distributing your app.
>The Linux ecosystem needs a standard and unified API or SDK for its desktop endeavour
In my opinion, this is incredibly unlikely to happen any time soon. The closest existing thing to that is building web apps targeting Chrome and Chrome OS. If that's not your thing, then I would advise against operating on the assumption that there will ever be a unified SDK. At least for me it's gotten easier to understand and work with the open source world after internalizing that. There are both upsides and downsides to it.
Ubuntu is a funny example because they were ready to drop both X and Wayland for a while. They came very close to shipping their own incompatible display server called Mir.
Maybe ChromeOS could do since it is the closest to this idea.
But distro-wise, if that's the case, then the second-to-last sentence in my previous reply is an unfortunate tautology, which doesn't look good for those who just want work done or need to reproduce/trace bugs in subsystems. :(
My point with web apps is that you can target both Chromebooks (technically a "Linux desktop") and any other system that has the Chrome browser installed.
If you're shipping a native B2B application the standard solution I see is to target a specific distro version (Latest RHEL/CentOS, Ubuntu LTS, etc) and tell customers you only support the default desktop. If they want support for some other weird configuration they can pay extra for that.
The desktop portal has gained traction. This is what we have right now, I don't know how to solve the problem of vendor- or desktop-specific features that need to be supported in extensions. X has experienced fragmentation from having to do this through its entire existence. I think the only thing a protocol designer really can do is make it easier to ship extensions. If Wayland does that for you, you probably know it already.
> If you're running a modern Linux desktop you're probably running Wayland
I missed the point when Wayland took over all the major modern distros. Did it supersede Xorg now? I've been using X11 forever and never thought about alternatives.
Debian Buster (which is "stable" now) defaults to Wayland in Gnome, but you can switch to Xorg at the login window, which I had to do this week for screencasting to work as expected.
That's mostly the case now, but that's a much more recent development than Wayland on the desktop becoming popular. Also, pipewire gives you the plumbing, but you still need the portal API. My understanding is everyone more or less agrees that's the way forward, but it's still not stable and ubiquitously implemented. Even then, that resolves capture but not control: if you want something closer to ssh -X, you need ways to forward input too, and IIRC right now the main answer for that is still compositor-specific, e.g. krfb relying on kwin/KDE.
There's still plenty of things which don't work with Wayland. And even Gnome Shell, which perhaps has the best support for Wayland, doesn't work very well with Wayland on some of my machines (Gnome Shell is extremely slow and jerky at least on my one machine if I try to run it with Wayland).
I've been using screen sharing on Plasma/Wayland for a while now and it works absolutely fine. With krfb remote desktop control is also fully available. The latter uses a KWin specific protocol though IIRC as virtual input isn't part of the portal API.
I am currently running Gnome in wayland and multiple displays with different fractional scaling settings, it works fine. On the other hand in Xorg I can't set different scaling factor for each monitor (at least gnome doesn't allow it), neither can I use fractional scaling.
If you are looking for a more robust solution Streamlabs OBS[1] is more popular in the livestreaming community, it's OBS on steroids. It is also open source, and just released a beta on Mac like this week.
I wouldn't say Streamlabs is more robust - it's the same core of OBS Studio, with Electron instead of Qt for the frontend and better Streamlabs integration.
For newbies, it also comes with a ton of demo content and setups (pre-roll, transition, etc), to ease the learning and spin-up curve. Agree that doesn't make it more robust (see my other comment). I have no affiliation with either project, just an Old Guy learning/doing some streaming projects.
The main issue I have with Streamlabs is that they _heavily_ push their "Prime" SaaS model.
I started to get myself set up on Streamlabs for the first time the other day, and accidentally deleted the free Theme/Scenes that I set up during install. So I went to their Store to re-find it (https://streamlabs.com/library#/), and it's almost impossible to filter through to find the non-Prime things -- none of their filters/sorts allow filtering by price or Prime.
I stumbled upon the fact that I could type "free" in the search bar to finally do it, but it was quite painful; without that I was having to filter through the first 20-30 pages to get past all the "Prime" addons.
If it's a fork, why not work on the OBS project to implement these enhancements there? Is there backlash to that sort of thing from the OBS maintainers?
Almost all of their changes are UI changes. They use our core OBS Studio code, with an Electron GUI instead of Qt. Not easy/basically impossible to port back over. We monitor any back end changes they make, and pull when appropriate. They do not collaborate with us, however, so it's rare they make changes that we can use.
That aside, I think it specializes OBS in a way that is too specific for OBS, which tries to be more generic. I think they both have their place, but personally I use OBS for recording videos and SLOBS adds no value for me.
I didn't know about SLOBS, but the idea of a "theme store" might be something worth exploring for OBS. It could actually be a revenue stream, just like WordPress, where the core is open source but there are several theme stores.
One example: keeping Preview and Program windows open (common for realtime stream prep/mgmt) on Mac with OBS will kill your frame rate (dropping 20%+ of frames) due to GPU rendering issues in Qt [1]. Streamlabs has no issues with this. I guess there's a question about whether that == "robust", but in terms of the app's features performing as they should, I would guess so.
Some simple/weird workarounds: literally move the Preview window offscreen, and/or open a Windowed Projector (popup) for Preview (but again keep Preview off screen).
[1] Can't find the thread on it now, but it's something about the way the two views are structured in a container
It's still in beta, so it's not actually better yet. I tried it out for a week and it didn't work out. After about 30 minutes of streaming, my dropped frames went up to 90%, despite my connection being strong. OBS never had a problem with this.
Streamlabs OBS has support for alerts (subs/followers, donations, etc), themes, overlays/widgets, etc. Most livestreamers use it for a more interactive experience. Just depends on what you need.
Love it. I've been using it on Linux with v4l2loopback to get it into things like Skype, zoom, jitsi, and teams. Really slick.
For quarantine levity, this combined with live audio effects possible with JACK rack like voice changers and echos is hilarious. Maybe today's a good day to try that out on the engineering managers meeting.
That sounds like a great setup. I use OBS for recording and PulseEffects for some features like a noise gate. However, the latter doesn’t work well for me.
Do you have some docs you could share on the setup you describe above, please?
In any case, thanks for sharing your setup so far!
I have a video about the JACK Rack setup at least from a while back but haven't written anything about the OBS/v4l2loopback stuff. It's probably a good time to write something like that up, eh?
I started using OBS when our church moved our services to live streaming due to the pandemic. Our mostly non-technical volunteer media team has had zero issues using it to stream to Facebook or a self-hosted Restreamer instance. Easy to use, straightforward interface. I'm sure we'll keep using it for streaming even after the pandemic is over.
Yep, same here. Went from zero to a fairly professional streaming experience in a few hours. I had a couple Logitech C920 webcams, and a Presonus Audiobox lying around. Combined with an older iMac, I was able to set up a pretty "fire and forget" rig.
I know I've been getting a number of questions around livestreaming for churches lately (since I used to do that a lot more in the past), and I've been gathering my thoughts in a blog post here: https://www.jeffgeerling.com/blog/2020/how-livestream-masses...
It depends mainly on the budget, but with Easter coming up (probably _the_ major day in many (if not most) Christian churches), it seems many groups are scrambling to find a way to get a decent quality stream set up in time.
Many groups on the lower end of the budget scale are using an iPhone on a tripod (but the audio is terrible). Medium range you have one or two cameras plugged into a laptop with OBS, and you can get audio from the church's sound system. High end many places already have PTZ camera systems installed, and they just need someone to control the video system during the event.
I've found OBS Studio to be brilliant. I needed to capture and livestream the screen of a small embedded Windows box. I purchased an HDMI-in/USB-out HD capture device. Plugged the HDMI side into the little Windows box, and the USB side into my linux box. OBS Studio recognized the new "HD Capture" virtual device, and captured the live video off the other system. I could save to a file, livestream, etc. No driver issues or problems. Just amazing.
If you are looking for self-hosted desktop streaming with OBS via nginx and RTMP, you might find some insights in my recent blog post: https://bitkeks.eu/blog/2020/03/desktop-video-streaming-serv...
The nginx module also supports DASH encoding, which can be delivered by dash.js - I have it in production, but not yet updated the article. Next I'll try setting up SRT.
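For anyone curious what that looks like, a minimal sketch (directive names are from nginx-rtmp-module; the paths, ports, and application name here are placeholder assumptions, not from my article):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # Repackage the incoming RTMP stream as DASH segments
            dash on;
            dash_path /tmp/dash;
            dash_fragment 3s;
        }
    }
}

http {
    server {
        listen 8080;
        # Serve the manifest and segments to a dash.js player
        location /dash {
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }
}
```

OBS then streams to rtmp://yourserver/live and the player fetches /dash/STREAM.mpd over HTTP.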
I'm looking for an actively-developed macOS virtual webcam tool, as CamTwist's website is showing a PHP error, and their Mac software doesn't seem to be notarized: http://camtwiststudio.com.
I didn't mean to suggest they're better developers per se, but there's very little malware that comes out of Canada so with limited information about the trustworthiness of this application it might be inherently less sketchy than camera/mic capture software that comes out of some other countries. That said, they are in Quebec. :D
This.
Just a few days ago I wasted a few hours playing with OBS, CamTwist, and a few other "chroma key capable apps"... all with no luck... Zoom's built in beats them all....
What I really would like to do is have my green screen capability regardless of the video conference app (e.g. FaceTime, FB, etc)...
I ran into the issue with syphon and Camtwist... and I gave up :)
I've been using it to create videos of an infrastructure provisioning product. One of the most useful things so far is being able to record a process that may take 15 minutes to fail but only has an error for a few seconds before clearing the screen, rebooting, hanging, etc. Much easier to rewind and grab the failure from a stream than to hang poised over a keyboard waiting to bang print screen at the precise moment needed.
Absolutely. However this isn't my code and it can fail in strange ways with an ephemeral error message. If I can't change the code, this is my workaround.
OBS is great! Huge thanks to everyone who's maintained it over the years.
As a side note, I'm trying to figure out how to do live streaming with less intense CPU requirements. My use case is effectively trying to use a Macbook Air to stream high quality video (720p, 30fps). Is there any way I could stream the video raw and encode it on a VPS somewhere? Or is there just a very real hurdle of needing a beefy CPU for any live streaming?
I've looked at WebRTC a bit, but can't seem to find much in terms of how to broadcast one-to-many (like Twitch, Vimeo, etc.) when using WebRTC. Mux.com at least allows you to do that if you have an RTMP source stream, but I can only find web-based libraries that require Flash to stream.
Is there some HTML5 camera broadcasting solution that I'm missing? Some kind of VPS software for turning WebRTC into RTMP? I'd appreciate any direction I can get on this!
I'm actually working on a blog post on the state of streaming live from a browser! WebRTC is...a bit of a monster, especially if you want to broadcast one-to-many.
I've been hacking around with using the MediaRecorder API and piping that through WebSockets to a server that publishes via RTMP. It's definitely rough, and browsers de-prioritizing requestAnimationFrame callbacks when the tab isn't in focus kills things, but it's promising. It runs shockingly well on a Glitch instance for what that's worth.
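The browser side of that pipeline is roughly this sketch (the ingest URL, codec string, and one-second timeslice are assumptions; the server end would pipe the received WebM chunks into an RTMP publisher):

```javascript
// Capture the webcam/mic, chunk it with MediaRecorder, ship chunks over a WebSocket
const ws = new WebSocket("wss://example.com/ingest"); // hypothetical ingest endpoint

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((stream) => {
    const recorder = new MediaRecorder(stream, {
      mimeType: "video/webm;codecs=vp8,opus",
    });
    recorder.ondataavailable = (event) => {
      // Each blob is a fragment of one continuous WebM stream,
      // so the server must concatenate them in order
      if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
        ws.send(event.data);
      }
    };
    recorder.start(1000); // emit a chunk roughly every second
  });
```

The catch I mentioned applies here: MediaRecorder timing degrades when the tab is backgrounded, so chunk pacing gets uneven.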
OBS is really great, but beware the latest version has some serious crash issues on Catalina. Seems related to password managers and HTML forms with password inputs. Had my stream crash over and over until I stopped using my browser during the session.
I use OBS to record training videos for our engineers. I used QuickTime for a while, but jumped ship when QuickTime crashed and ate an hour-long session, with no recoverable backup.
There’s one feature of OBS that’s missing for me, and that’s recording sources to separate files. I believe this is called “multicording”, and it’s something I’ve only found in paid software, like ScreenFlow.
There are many more details in those links than in the forum thread you posted, explaining why we haven't done it already. Most of the issue is UI, which is most of what's discussed in that RFC. We need to change the paradigm for how outputs are handled in the UI to support this, and UI is really, really hard. We'd basically have to throw out a large portion of the application UI and start over to accomplish it, which is a monumental undertaking, and we just haven't had the time or resources to devote to it with all the other higher-priority items that keep coming up.
We want it. Users want it. It's definitely not an if, it's a when. Unfortunately, I don't have any timeframe I can give you. OBS still only has one single full-time developer working on it, so resources are limited.
If you are using any of the many hardware-accelerated encoders available on desktop today the CPU is not at all limiting. Quicksync can run 9 HD h264 encodes, NVENC can run 4 to 17 depending on the card, etc.
Most OBS users don't configure software encode because of the resource issues.
I wasted a good chunk of a day looking into this. If you start multiple instances of OBS (you need to do this on the command line, otherwise clicking will just foreground the current instance), then "multicording" should work fine, though it may use lots of system resources (the fan goes nuts). This was on macOS.
Overall, I realized it's better to just use OBS and have it mixed into one file/stream. You trade off a bit of flexibility to do after-the-fact editing but save a lot of production/editing time.
It's not better--you're just not paying for tools that do it. When it works it solves entire categories of problem and it's one of the reasons I pay a lot of money for vMix. (In fairness, the OBS developers I know are super sharp and I'm pretty sure that this is somewhere on their to-do list.)
The main big one for me is similar - not recording but being able to stream to multiple servers.
We can do things like run nginx with nginx-rtmp-module as a proxy, but then you have to put stunnel or something in front of it for Facebook because nginx-rtmp-module doesn't support rtmps (and development seems completely stalled).
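To make that concrete, a sketch of the proxy setup (stream keys are placeholders, and the Facebook endpoint/port are assumptions you'd verify against their current docs):

```nginx
rtmp {
    server {
        listen 1935;
        application restream {
            live on;
            # Fan the single stream from OBS out to each platform
            push rtmp://a.rtmp.youtube.com/live2/YOUTUBE-KEY;
            push rtmp://live.twitch.tv/app/TWITCH-KEY;
            # nginx-rtmp-module can't speak rtmps, so push Facebook
            # through a local stunnel that adds the TLS layer
            push rtmp://127.0.0.1:19350/rtmp/FACEBOOK-KEY;
        }
    }
}
```

with a matching stunnel fragment:

```
[fb-live]
client = yes
accept = 127.0.0.1:19350
connect = live-api-s.facebook.com:443
```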
The protocol is one thing; the other, nearly more important one is the bandwidth/encoding for each platform. On YouTube you may want to put out a high-res/high-bitrate version and let YouTube handle churning it down to lower resolutions. On Twitch, unless you're a partner, you want to pick a low enough bitrate, otherwise your viewers will be hit with a high bitrate and some will end up unable to watch the stream.
So there's lots of re-encoding happening on the fly, which is why I know of some streamers using dedicated streaming computers that take the OBS stream as input, but I don't know what software they use to distribute it.
There are services that do this [0]. Of course, this goes against the TOS for affiliate and partnered streamers on Twitch. As of my last check, it's not an issue for other platforms yet.
Being able to do this is actually the number one reason why I strongly believe nobody should ever become a Twitch affiliate.
I've been looking into building a containerized FFMPEG stream routing solution but haven't been able to think of a good use-case. What is your alternative destination after Facebook RTMPS?
FFMPEG is a great tool for any IP video streaming workflow. You can easily define multiple outputs. Check out the live streaming guide: https://trac.ffmpeg.org/wiki/StreamingGuide
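The tee muxer is the relevant feature: encode once, then duplicate the encoded output to several destinations. A command sketch (input file, bitrates, and stream keys are placeholders):

```shell
# One encode, two RTMP outputs via the tee muxer
ffmpeg -re -i input.mp4 \
    -c:v libx264 -preset veryfast -b:v 3000k \
    -c:a aac -b:a 160k \
    -map 0:v -map 0:a -f tee \
    "[f=flv]rtmp://a.rtmp.youtube.com/live2/KEY|[f=flv]rtmp://live.twitch.tv/app/KEY"
```

The per-output [f=flv] prefix sets the container format for each destination, since tee can't infer it from an rtmp:// URL.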
I just haven't yet found a good way to export active stream status from the CLI out to other interfaces for the tool I'm working on.
Exporting to something like iMovie to manage chroma keys (video alpha channels), or manage the two videos separately: zooming in, highlighting in the background recording, effects for the foreground recording etc.
One thing I wish you could do is have two different copies of an input - in my case, I wanted a video stream input (webcam) where I apply one set of filters to it for one scene, and a different set of filters to another scene. Seems like filters are globally applied to the instance, and you can't have two different scene elements with the same video input device.
You can use groups to accomplish this currently, but it's not the most obvious thing. Just create two groups, and add the shared source to each group, then apply your filter to the group itself. Scene-specific filters and other scene-specific features is something that is on our to-do list, but very tricky at best.
Their latest Windows version has some weird DLL blacklist which blocks the ffmpeg DLLs used by the virtual camera. I made an inert version of the DLL[1] but this will break screen sharing due to a set of digital signature checks Zoom does before screen sharing.
Thank you for doing that! I'm using ManyCam for now (also Canadian) since it seems to be one of the only other mac apps that does camera switching easily right now. But, I'd much rather use OBS. Thanks again.
PS: I was going to build a shopify app a few years ago but I've got so many projects on the go. If you're interested in the short domain, let me know: shpfy.com
If you use the "preview to projector/monitor" feature with an extra monitor, you can share that screen in the teleconference. I've done that with Zoom while moving to online teaching these past weeks. I also got an idea from folks at the libre graphics track at SCaLE (Southern California Linux Expo) this year: you could use the preview feature but run the video out (e.g. over HDMI) to a video capture device and back into your computer. I haven't tried that yet, though.
On Linux I have used the v4l2loopback kernel module and an OBS plugin called v4l2sink. The OBS module has not had any development activity for two years but seems to work. This is such a useful feature that it should be upstreamed for all platforms.
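For anyone trying to reproduce that setup, here's a sketch of the module-loading step, scripted in Python; the device number and card label are arbitrary examples, and the option names come from v4l2loopback's documentation:

```python
import os
import subprocess

# exclusive_caps=1 is what makes Chrome/Zoom treat the loopback device as
# a real webcam; video_nr=10 pins the device node to /dev/video10 and
# card_label sets the name video apps will display.
MODPROBE = ["sudo", "modprobe", "v4l2loopback",
            "video_nr=10", "card_label=OBS-Camera", "exclusive_caps=1"]

def load_loopback(run=subprocess.run):
    """Load the module, then report whether the device node appeared
    (the node is what obs-v4l2sink writes frames into)."""
    run(MODPROBE, check=True)
    return os.path.exists("/dev/video10")
```

After this, pointing the v4l2sink plugin at `/dev/video10` makes the OBS output selectable as "OBS-Camera" in most conferencing apps.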
I wish they would have explored what some of the commenters noted: does it happen when you position OBS such that it has to recursively capture itself? (The feed contains a picture of the feed which contains a picture of the feed which …)
It seemed like it would handle 2-4 recursions okay in some positions, but the positions that forced infinite recursion were the ones where the frame rate plummeted.
It looks like whenever the scene, source, and audio panels on the bottom of the window are on screen but in the background the frame times become choppy.
It may be just me, but I don't find the project's main page clear and intuitive; the first screen capture looks scary and there is nothing in that page that suggests (shows) the software is easy to use. Maybe it is easy to use, it's just not showing it.
I don't think many people would describe OBS as 'easy to use'. I think that typically comes with the territory of very powerful software like OBS though. Steep learning curve and all that.
I'm using OBS to record and stream interactive tutorials for customers, teaching machine learning for audio. The v4l2sink plugin for Linux works wonderfully to emulate a webcam and feed it into any standard video chat software. Thank you!
I absolutely love OBS, and use it consistently three times a week for streaming my church’s worship services. It’s an amazing tool, and I’m really thankful for it. That said, if you want to stream on macOS, be ready for some friction. I believe they are working on some issues with the move to Metal, which should help quite a bit. Right now, studio mode will halve your FPS because OBS doesn’t disable v-sync properly, and window capture is really buggy. If you want to commit to streaming, you may have better luck on a PC right now.
Even though OBS was initially aimed at streamers, it's also what I use to record all the YouTube videos I make for https://alchemist.camp
It's surprisingly flexible and there are plugins for almost everything. If you want to display keypresses, like I did when recording a video on Vim, you can. If you want to add a layer and have both your video camera and screen incorporated, you can do that, too.
I love OBS, just started using it a couple weeks ago. I am using the v4l2loopback plugin on Linux to output the video to Zoom. Is there a way for it to output the sound similarly to be used as an input in Zoom for Linux? I'm using a blackmagic decklink mini recorder as input, and Zoom won't accept it as a mic, but OBS does. I've tried a bunch of hacks with pavucontrol, but nothing will let me accept the OBS output as an input into Zoom. Does anyone have any ideas?
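One approach that may work here, assuming PulseAudio (a sketch, not a tested recipe for this exact Decklink setup; the sink/source names are arbitrary): create a null sink, point OBS's audio monitoring at it, and remap the sink's monitor into a virtual source that Zoom will list as a microphone.

```python
import subprocess

# Routing idea: OBS monitors its mixed audio into a null sink, and a
# remap-source exposes that sink's monitor stream as a selectable "mic".
# Afterwards, in OBS set Advanced Audio Properties -> Audio Monitoring
# to the "OBS_Sink" device, and pick "OBS_Mic" as the mic in Zoom.
COMMANDS = [
    ["pactl", "load-module", "module-null-sink",
     "sink_name=obs_sink",
     "sink_properties=device.description=OBS_Sink"],
    ["pactl", "load-module", "module-remap-source",
     "master=obs_sink.monitor",
     "source_name=obs_mic",
     "source_properties=device.description=OBS_Mic"],
]

def create_virtual_mic(run=subprocess.run):
    for cmd in COMMANDS:
        run(cmd, check=True)
```

Zoom generally only enumerates PulseAudio *sources*, which is why the remap step matters: a sink's monitor alone often isn't offered as a mic, but the remapped source is.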
OBS is amazing. I've used it to record canned demos and presentations for a year or so now, and it just works great on Linux. I even use it to record demos of WIP or partially working features and attach the video to issues in our GitLab instance to request feedback from stakeholders.
Its feature list is really impressive, but it's great for this kind of simple use case too. Be sure to give it a try and many thanks to the maintainers!
I try to love OBS, but even the simplest thing (sharing a portion of your screen) requires tons of fiddling to get it to work (you need to select the right area with sliders instead of just drawing it on the screen).
I guess it’s once again the same old story: in theory, it is a wonderful piece of open-source software, but because of the lack of competent designers in the OSS scene, the UX is awful for casual and first-time users. For pro users, however, it is perfect.
You can hold alt+drag the red bounding box to crop easily. Screen-region select is on our to-do list, but hasn't been a high priority as it's technically already possible to get the same end result, just not as simple.
Some of my teammates at GitLab are coordinating the Cloud Native Summit online event. They intend to use OBS + Zoom (and possibly Restream). If there are any resources you recommend for setting up OBS + Zoom + Restream for (mostly) Mac users, I'd love to share with the team. For context, the event will feature a mix of keynote and panel-style conversations.
Just started using it last week, and we'll be using it for our first virtual meetup next Thursday!
It's a great project, but the area that would make it absolutely amazing would be a better layout UX. It requires lots of clicking around to get things lined up and sized properly. Some UX practices from existing layout systems like inkscape would go a long way.
I have a Sony A6000 going into it via an Elgato HD60 S, a Rode NT-USB microphone, and a green screen my brother made me (big, solid, wooden... it's awesome).
OBS has enabled me to create super high quality training videos and also provide excellent remote working video conferencing capabilities. I can basically do remote pair programming with this thing... it's amazing.
I have interacted with the guy that started OBS a few times because he's a member of a community I am in. He's an extremely intelligent person and definitely deserves some positive attention. If you use OBS and find value in it you should definitely consider donating, I know it goes a long way.
OBS is awesome and it’s helped with my company to grow. We build plugins and overlays for OBS to make it super easy for streamers to reward their audience. Shameless plug check out get.incent.com/Ingage and get free reward overlays.
I think the community effort to grow OBS has been awesome.
No joke, I was working with colleagues just yesterday on real-time closed captioning and started googling for ideas around Google's Cloud Speech recognition API. OBS was among the first few hits, so I started learning all about this amazing tool just yesterday.
I don't use it for streaming but for recording screencasts, and it does an excellent job there. I'm sure if I dug in a little, I'd find more features that would make my videos better, but even the default settings (on Mac and Linux) work very well.
I must say, I had a rather mixed experience with OBS. I've used it before to do some streaming too, but it definitely has some issues with instability.
The example (anecdote, if you will) I remember right now: there is apparently a way to mess up your settings so that OBS crashes, with no error or anything visible in the UI. But then your config is messed up, so when you restart it, it immediately crashes again. You cannot fix it unless you delete the settings files or reinstall the application.
It should never be possible to crash an application using only the settings it provides. It should warn about invalid settings, disable settings that can't be used because of other settings, reset them to something valid, or similar. But in OBS it's possible to basically break the whole application through its own settings UI.
I'm curious how large the development team is. There is an impressive community around OBS, but there seem to be many more requests for help than helpers.
As a seasoned user, I love the UI. But also I'm a seasoned user.
It is highly complicated software, but it's extremely flexible and surprisingly reliable for all things streaming and screen recording. A dead-simple mode might suffice for some, but if you do anything serious with streaming or recording, it's worth learning OBS's deeper idiosyncrasies.
> It also ships with its own 180MB copy of Chromium embedded because, you know, how could anyone write a desktop app without a copy of Chrome these days?
It has a browser because one of its features is a browser.
"Browser source is one of the most versatile sources available in OBS. It is, quite literally, a web browser that you can add directly to OBS. This allows you to perform all sorts of custom layout, image, video, and even audio tasks. Anything that you can program to run in a normal browser (within reason, of course), can be added directly to OBS."
Which must have been useful enough as a plugin that it got added to the standard install.
I just use QuickTime. Open up a new video recording, but don't start recording. The video recording window has an option to "float on top" — use that to put your video on your screen where you want it. Then you can either open up a new screen recording (for offline recordings) or just share your entire screen on your favorite conferencing platform.
I also use desktop backgrounds smartly in conjunction with multiple spaces.
Hey there, OBS team member here. Want to get some clarification on these issues, as they are not really things we hear commonly, outside the lack of desktop audio capture.
First, for clarity, the macOS version of OBS Studio is not a "port". It's a native application designed to run on macOS.
The mentioned audio issues are not unique to OBS. Apple does not feel that applications need to directly capture desktop audio, so third-party programs that inject themselves into the desktop audio path using kernel drivers (IIRC) and create a loopback device are required to capture desktop audio. We have a guide on how to do that here: https://obsproject.com/forum/resources/505/
Can you be more specific about how "none of the widgets work properly"? Which features are you referring to (outside the aforementioned audio limitations) that are not implemented?
System shortcuts like find don't really make sense in an application like OBS. There's very few things you'd need to run a find operation on. Communicating shortcuts is something we could do better, however. We are working on an undo function that will help here in case anything is accidentally changed, but with the complexity of operations available in OBS this is a bit of a challenge and will take time to get right.
> It's a native application designed to run on macOS.
It's a Qt app that happens to compile and run on macOS.
If you want a few really obvious examples of how the UI misbehaves, try scrolling any of the lists with a trackpad. It won't bounce properly. It also won't auto-hide the scrolling indicator correctly. Needing to repurpose cmd+F, ehm, OK, but why did you remove cmd+W to close a window entirely?
It gets much worse when you get into accessibility. Turn on keyboard navigation in the system (System Preferences -> Keyboard -> "Use keyboard navigation…"). Now open a dialog box, like, say, the Add Image source sheet. Hit Tab. It won't actually move between the widgets properly; it cycles between the radio buttons, text fields, and checkbox, but can't highlight the push buttons. This is catastrophic for people who depend on this functionality.
https://twitter.com/tobi/status/1242641154576965634 https://github.com/obsproject/obs-studio/issues/2568 https://github.com/obsproject/rfcs/pull/15