The README says you can pipe this "matrix feed" to Zoom by using OBS Studio's Virtual Camera feature. I've used this virtual camera on Zoom meetings before and, especially on macOS, it can be laggy.
I need a GStreamer pipeline that gives video feeds a "Tom Goes to the Mayor" effect, and then something to hook that into Zoom. Then I'll actually be able to focus when I'm on a video call.
Could anyone provide better instructions? I'm new to OBS and have spent almost an hour trying variations with no success. matrix-webcam runs in my terminal, but the virtual camera outputs whatever OBS sees, so I get an infinite video-feedback fractal. The README.md instructions leave too many steps out.
NICE! And way cooler than what I did; I too went the Python → OBS → virtual device route, but to have my randomly hued face ping-pong around the screen like the DVD logo screensaver.
I've been playing around with OBS and Zoom lately as well. Prior to a meeting, I pre-recorded a couple of minutes of me watching the screen. I then used OBS to switch to this looped recording whenever I wanted to eat, drink, or get out of my chair to stretch during the meeting.
It was mostly just to satisfy my curiosity about how difficult it is to do (it's easy) and whether anyone would notice the looping (no one did). What I did find interesting was how 'freeing' it was to participate in the meeting without being observed: there is something about knowing I'm on camera that I find taxing, and that I don't get in a face-to-face meeting.
If the meeting is big enough that you can get up without missing anything or your absence being noticed (an all hands), you can probably just turn the video off.
If it's small enough that people will notice/care that your video is off but not that you're not interacting (and you won't miss anything valuable by being absent), then your company really needs to re-evaluate its approach to meetings.
The "because I can" argument is perfectly reasonable, though.
Looks like a fun tool. On macOS, though, it always renders an 80x24 display, no matter the actual size of the terminal. That resolution is low enough that nothing useful is resolvable. tput itself has no issue determining the console size.
You might be able to refactor line 76 in __main__.py to use larger values than the default terminal window size? Caveat emptor and all, since I haven't tried this myself.
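For what it's worth, the fallback behaviour alone could explain the 80x24 symptom: Python's standard terminal-size query falls back to exactly 80x24 when the real size can't be determined. A rough sketch of the kind of override being suggested, assuming (I haven't checked the actual source) that the tool sizes its grid this way; grid_size and the MATRIX_COLS/MATRIX_ROWS variables are made up for illustration, not options the tool actually has:

```python
import os
import shutil

def grid_size(fallback=(80, 24)):
    """Return (columns, rows) for the rain grid.

    Hypothetical MATRIX_COLS / MATRIX_ROWS environment variables would let a
    user force larger values; otherwise ask the terminal, falling back to
    80x24 exactly as shutil.get_terminal_size does when stdout is not a tty.
    """
    cols = os.environ.get("MATRIX_COLS")
    rows = os.environ.get("MATRIX_ROWS")
    if cols and rows:
        return int(cols), int(rows)
    size = shutil.get_terminal_size(fallback=fallback)
    return size.columns, size.lines
```

If the query path fails on macOS for whatever reason, every run would silently land on the 80x24 fallback, which matches what was reported above.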
Thanks boston_clone, I decided to try it and no change was needed: at least for me, building from source made it work at a higher resolution, like the screenshot in the README. Really cool!
Ah, I see, there's no webcam selection, so it pulls up my laptop's built-in camera (I think) and not the one on top of my monitor, so I just see some noise from the closed laptop. Looks fun, though.
Awesome! It works now! Thanks. The device is selected by an index, not the device path, but easy enough to figure out. I guess the help actually says that, if I'd paid attention...
It'd be interesting if this could be embedded in the client/server, e.g. Jitsi. At work we turn off cameras to save bandwidth, but with this it seems you could just send a bit string, and a heavily compressed one at that.
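To put a rough number on the bit-string idea: if each character cell only needs an on/off bit, an 80x24 frame is 1920 bits, i.e. 240 bytes per frame before any entropy coding. A minimal sketch of that packing (pack_cells/unpack_cells are hypothetical names, not anything Jitsi or matrix-webcam actually provides):

```python
def pack_cells(cells):
    """Pack a flat list of on/off cells into bytes, 8 cells per byte."""
    out = bytearray((len(cells) + 7) // 8)
    for i, on in enumerate(cells):
        if on:
            out[i // 8] |= 1 << (7 - i % 8)
    return bytes(out)

def unpack_cells(data, n):
    """Inverse of pack_cells: recover the first n cells."""
    return [bool((data[i // 8] >> (7 - i % 8)) & 1) for i in range(n)]
```

Since consecutive frames differ in only a few cells, even naive run-length or delta coding on top of this would shrink it far below what any real video codec needs.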
Actually, that reminds me that there are neural-network-based codecs in the making, which I personally think are the future of video compression: https://developer.nvidia.com/ai-video-compression
At my work, I'm building a media player to rule them all, including live streaming via WebRTC. I only work on the front-end portion, but JavaScript and the browser now need absolutely no extras to do live streams.
Here is a simpler one I made 10 years ago in a few lines of Python using OpenCV (it runs very slowly, though): https://github.com/mustafaakin/terminal-webcam Not very Matrix-like, either.
You can follow the same steps on Linux as you did for macOS (OBS is also available for Linux; in fact, that's how I personally run it).
The reason the Linux instructions are more verbose is that Linux also supports writing directly to a virtual camera from the command line, something that's not so easy to do on macOS. You can save yourself a lot of lag and system resources by doing that instead of going through OBS.
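One way to sketch the direct-to-virtual-camera route: pipe raw frames into ffmpeg and let it feed a v4l2loopback device. This assumes the v4l2loopback kernel module is loaded and that /dev/video2 happens to be the loopback node (both system-specific assumptions; check the README for the actual commands it recommends):

```python
import subprocess

def ffmpeg_args(device="/dev/video2", width=640, height=480, fps=20):
    """Build an ffmpeg command that reads raw RGB frames on stdin and
    writes them to a V4L2 device (e.g. a v4l2loopback virtual camera)."""
    return ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-", "-f", "v4l2", device]

def open_sink(**kwargs):
    """Launch ffmpeg; each frame is then one write of width*height*3
    bytes to the returned process's stdin."""
    return subprocess.Popen(ffmpeg_args(**kwargs), stdin=subprocess.PIPE)
```

Zoom then sees the loopback device as just another camera, with no OBS in the loop at all.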
Is there any way you could make this into an installable OBS plugin for Windows users? I'm no programmer by any means, so making it work as-is is a non-starter for me.
Why does this have to be ASCII art? Is there a technical limitation to terminals that prevents the full color space of the video sensor from being rendered?