I want to use Resolve so badly, but every time I try to export an h264 mp4 I end up with video and audio artifacts. I tried every 16.x release as well. The source is fine and playback in the timeline is fine; the artifacts only appear after exporting with Resolve.
I'd get all of these little black pixelated boxes appearing, and the audio would end up with little pops and squeaks. Neither of these things ever happened using Camtasia with the same hardware (even after 500+ videos), but it happens in every single video in Resolve, with every export format and setting. And this is only 1080p.
These are just simple recordings of my desktop with a webcam on the bottom right. Recorded as an mkv with OBS (which I still do even when editing with Camtasia).
It's a shame because Resolve is really good for editing, and video editing is the only reason I stick with Windows.
> What did Resolve support say when you contacted them about this reproducible issue?
I was never able to get an answer from support.
The only way to get support is to make a public forum post but they protect the forum by not allowing anyone to post without first getting whitelisted by a moderator.
I spent over an hour writing a post, adding video links to DropBox, a full list of steps, settings, everything.
Except I found out I wasn't able to add URLs in the post because my account was brand new, so I had to remove the only important part of my post (the video proof of the artifacts, with exact timestamp references calling them out).
Also, it took multiple days for my original post to get whitelisted, and their forum sorts posts by date without treating the moderation time as a separate date. So by the time my post was public on the forums it was already on something like page 8 and got no replies.
So I asked them if I could bump my post and they said ok, but it took days for the moderator to whitelist the bumped post, and the same thing happened again: it was nearly on page 10 before it was visible.
They also have no way to trial the paid Studio version to see if its codec support is better.
I spent honestly about 10-15 hours over a week trying to solve the issue by Googling everything and trying many different version combos with no luck.
I've never seen a company work so hard at treating potential paying customers so poorly. I'd still use the software if I could, but I have no way to even get in contact with support, and after spending ~2 full working days trying to solve the problem with no resolution in sight, I gave up and went back to Camtasia.
Oh, it sadly requires `sudo` but only tells me it's for `/usr/bin/env`. Not that I'm security paranoid or anything, but I'm afraid it'll bork my system by overwriting random things on my filesystem as root. Gonna have to pass, sadly.
I second Resolve being excellent. I still run into crashes now and then and have had pains getting some OFX plugins not to fail -- mainly the deflicker one, which causes memory usage to skyrocket and cap out during rendering.
If you want to do fun things with ffmpeg as a cli, most roads end at filter_complex[0]. You can get some truly impressive results at the cost of maintaining really intricate commands. There's also added value when onboarding a new teammate and seeing their eyes open when they find some of the commands you maintain...
Once you grok filter_complex, it becomes a lot easier to understand how people get very complex results out of ffmpeg.
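To make that concrete, here's a minimal filter_complex sketch (file names are hypothetical): scale a webcam clip down, overlay it on a screen recording, and mix both audio tracks, all in one command.

```shell
ffmpeg -i screen.mkv -i webcam.mkv -filter_complex \
  "[1:v]scale=480:-1[cam]; \
   [0:v][cam]overlay=W-w-20:H-h-20[vout]; \
   [0:a][1:a]amix=inputs=2[aout]" \
  -map "[vout]" -map "[aout]" -c:v libx264 -c:a aac out.mp4
```

Each bracketed label ([cam], [vout], [aout]) names an intermediate stream, which is what lets these graphs grow arbitrarily intricate while staying composable.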
Of less renown but worthy of a mention is the tee[1] muxer. It lets you encode once and send the same output to multiple destinations. Incredibly handy, e.g. for outputting DASH, HLS, and a regular ol' mp4 from the same command WITHOUT having to redo the scaling or anything else that was in the filter_complex.
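A hedged sketch of the tee muxer (output names are hypothetical): encode once, then write both a plain mp4 and an HLS playlist from that single encode. Per-output options go in square brackets before each destination.

```shell
ffmpeg -i input.mkv \
  -c:v libx264 -c:a aac \
  -f tee -map 0:v -map 0:a \
  "out.mp4|[f=hls:hls_time=4]stream.m3u8"
```

The key saving is that the expensive encode happens once; only the muxing differs per destination.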
As "intricate" as some -filter_complex commands may be, it's still readable. Spending 2 hours reading and testing out commands should be all you need to master it.
I've been using many CLI tools for all kinds of purposes, and I think it can't be stressed enough how well designed the whole ffmpeg program is. I even made a whole music video montage with it.
the thing that took a bit of time to keep from getting confused is audio mapping. streams are indexed from 0 while channels are indexed from 1. knowing when something needs the stream index vs the channel index gets weird until that one day it finally clicks. i also argue with its choice of channel layout names[0]: there's no way to get 5.1+2.0 labeled like L,R,C,LFE,Ls,Rs + Lt,Rt; instead, it wants to call it some 7.1 monstrosity. also, the industry standard for 5.1 is L,R,C,LFE,Ls,Rs while ffmpeg calls it FL,FR,FC,LFE,BL,BR, and it forgoes Lt/Rt for DL/DR. Maybe I'm too video centric, and these make more sense to an audio centric person???
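As an illustration of those layout names in practice, here's a sketch (input file hypothetical) of a standard 5.1-to-stereo downmix with the pan filter. Note the channels have to be addressed by ffmpeg's labels (FL, FR, FC, LFE, BL, BR), not the broadcast-style L, R, C, LFE, Ls, Rs:

```shell
ffmpeg -i input.mkv -filter_complex \
  "[0:a:0]pan=stereo|FL=FC+0.707*FL+0.707*BL|FR=FC+0.707*FR+0.707*BR[aout]" \
  -map "[aout]" -c:a aac output.m4a
```

The `0:a:0` part is a stream specifier (first audio stream of the first input), while the names on the right-hand side of pan refer to individual channels within that stream -- exactly the stream-index vs channel-index distinction described above.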
I did some work on making CTF-style challenge videos using filter_complex to try to get folks to play around with using ffmpeg filters in scripts. I barely scratched the surface but it ended up being a lot of fun.
I wrote an Electron app (it started as just a local web server) to do basically this. The UI is kind of like iPhoto or any other photo viewer, except you can split the view as many times as you want and view something different in each pane.
You can also open multiple windows, so if you have multiple monitors you can easily run one or more videos per monitor until your computer can't keep up.
note: It's pretty alpha, with several bugs, but I've used it several times a week, living with its bugs, for a couple of years. Not sure I can justify fixing it up as I'm the only person using it AFAICT, but maybe it will give others inspiration. The code I'm sure is a mess; it was my first React app (a reason to give React a try).
It also works pretty well in VR: on an Oculus Rift I open my desktop and manipulate videos and splits with the touch controllers as a mouse. No particular VR support, just saying that most of the features have clickable buttons so it's easy to use in VR.
A big limit is that, being Electron, it can only play what Chromium plays (mp4, mkv, vp8, vp9). I've been wanting to look into FFmpeg in wasm to add support for other formats; I saw the post yesterday that someone had done something like that. I figured it might be too slow and that figuring out how to sync audio might be painful, but it's still on my magical "someday I might" to-do list. Since Chromium already uses ffmpeg, it might be easier to just patch Electron to use ffmpeg with all features enabled instead of just the few that ship with Chrome.
When I was making a Kickstarter video, I used Keynote to arrange videos side by side, mask them, and change landscape/portrait orientation. The output is surely not high quality, but if you are already experienced with Keynote and don't need 4k, it's actually a pretty handy tool.
It did the trick for me (my Kickstarter got funded!) and I later wrote a blog post about how to use Keynote to add special effects to video on the cheap.
I've been using ffmpeg's xstack filter (similar to hstack and vstack, but it allows for arbitrary grids) to produce virtual choir videos for my church choir during covid. Here's an example: https://www.youtube.com/watch?v=Oeg9w8X6hrA.
A lot of people are producing virtual choir videos right now, but I suspect few use a process similar to mine. I use Audacity to edit the audio separately, then crop the input videos using a face-aware cropping script (which uses https://github.com/ageitgey/face_recognition), then generate a video grid using ffmpeg + xstack.
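The grid step can be sketched like this (file names are hypothetical, and a real virtual choir would have many more tiles). It assumes the inputs have already been cropped to matching dimensions, since xstack does not resize:

```shell
# Arrange four same-sized clips in a 2x2 grid; audio is mixed separately
# in Audacity, so it's dropped here with -an.
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 -filter_complex \
  "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" \
  -map "[v]" -an -c:v libx264 grid.mp4
```

The layout string positions each tile relative to the widths/heights of earlier inputs (w0 = width of input 0, h0 = its height), which is what makes arbitrary grids possible.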
Nice, this was super timely as I was literally building the same thing. And I had just reached the part where I was annoyed that all the submitted videos had different shapes and sizes which as you know would be tedious to correct manually.
I want to love ffmpeg but I honestly can't stand that I have to spend ~15 minutes looking for the magical incantation to get the result I want. After I find it, it works quickly and has good results.
However, it really feels like video editing isn't something that should be done on the command line. Does anyone know of decent GUI frontends to ffmpeg?
The main trick is that ffmpeg is not really a command line tool. It's a set of libraries - libav* - built in C which let you do absolutely everything you want with video/media files (and, somewhat less successfully, with streaming data too, while for modern applications such as WebRTC, ffmpeg sucks).
The command line tool is just a (rather lame) wrapper around (some of) those features.
If you have one video to edit, I agree. But as soon as you are talking batch processing, having this massively powerful, free, self-contained executable that is compatible with pretty much any video format on earth is a godsend.
I usually only use a handful of features repeatedly, so I tend to create some UI for myself to generate the commands - for instance, for editing movies I download: removing audio streams I don't want, adding subtitles, changing defaults, overwriting metadata (title, etc).
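The kind of command such a UI would emit might look like this (stream indices and file names are assumptions, not from the original post): keep one video and one audio stream, mux in an external subtitle file, set the title, and mark the subtitle track as default.

```shell
ffmpeg -i movie.mkv -i movie.en.srt \
  -map 0:v:0 -map 0:a:0 -map 1:s:0 \
  -c copy -c:s srt \
  -metadata title="My Movie" \
  -metadata:s:s:0 language=eng \
  -disposition:s:0 default \
  out.mkv
```

Because `-c copy` avoids re-encoding, this kind of remux runs in seconds even on large files.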
But dealing with all features would be a challenge. Handbrake is one attempt I can think of, but it is still kind of specialised.
if you are trying to think of ffmpeg as a traditional editor, then yes it is the wrong tool. if you want to programmatically process video/audio/etc, then it is a godsend. if you have 100s, 1000s, or more videos that all have different encoding parameters (frame size, frame rate, etc) where all of them need a logo or two or three prepended/appended, it would take human ops in a GUI environment forever to sort them and apply the correct logos. this is a piece of cake script-o-matically with ffprobe and ffmpeg. that's just a simple task barely even making ffmpeg get its heartbeat up from idle. add text that comes up saying some version of "don't share this video" at timed intervals, add a studio bug, burn-in subtitles, etc. all in the same single command and hands-free. now we're talking time saving.
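A hedged sketch of that batch pattern (directory layout and logo naming are assumptions): probe each file's frame size with ffprobe, pick the matching logo clip, and prepend it with the concat filter.

```shell
# Assumes logos pre-rendered at each needed size, with matching
# codecs/parameters, e.g. logos/logo_1920x1080.mp4
for f in videos/*.mp4; do
  size=$(ffprobe -v error -select_streams v:0 \
         -show_entries stream=width,height -of csv=s=x:p=0 "$f")
  logo="logos/logo_${size}.mp4"
  ffmpeg -i "$logo" -i "$f" -filter_complex \
    "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
    -map "[v]" -map "[a]" "out/$(basename "$f")"
done
```

The same loop body could also take drawtext, subtitles, or overlay filters in the filter_complex, which is where the hands-free savings really add up.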
any new piece of software that doesn't get used frequently will have the same "what's the command again" situation. plus, the filter list is just a web page away, open next to your terminal.
I did something similar using filter_complex to create a 14x14 grid showing 196 days of earth full disc shots for my earthin24 Twitter bot. It's truly impressive what ffmpeg can do https://ryanseddon.com/javascript/an-earth-mosaic/
I was already doing this but recently I've started looking into obs (https://obsproject.com/) as a possible alternative.
I can, for instance, add and remove things and move them around the screen with a mouse, without having to restart things or do any math - it sounds way more convenient.
It also supports the OpenFX plugin format: http://openfx.sourceforge.net/