You affix a caption describing an unfortunate scenario onto the song for comical juxtaposition, since the Crab Rave song itself is jovial-sounding. For example, one might add the caption "I got fired".
When I switched to Linux and started using FFmpeg, the idea of automating video production (something that had always been so time-intensive in my mind) was very appealing. This is an awesome little example. What a damn good piece of software.
My experience with ffmpeg is that it's like a DSL embedded into another DSL. It's very powerful yet confusing at times. I've saved a ton of ffmpeg one-liners in a notebook just in case.
FWIW, when I asked on IRC about doing something (splitting a stream, processing one copy, and overlaying it over the original at 20%), they were quite helpful.
It was: `split[a][b];[b]lut3d=${path}[c];[a][c]blend=all_mode=overlay:all_opacity=${opacity}`
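Fleshed out into a full command, that answer looks something like this (input filename, LUT path, and opacity value are placeholders here, not from the original IRC answer):

```shell
# placeholders: input.mp4, mylut.cube, 20% opacity
path="mylut.cube"
opacity="0.2"
fg="split[a][b];[b]lut3d=${path}[c];[a][c]blend=all_mode=overlay:all_opacity=${opacity}"
# only run if ffmpeg and the input are actually present
if command -v ffmpeg >/dev/null && [ -f input.mp4 ]; then
  ffmpeg -i input.mp4 -filter_complex "$fg" -c:a copy output.mp4
fi
```

The `split` duplicates the decoded stream, the LUT is applied to one copy, and `blend` composites it back over the untouched copy at the given opacity.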
I find gstreamer (using gst-launch)[1] a more pleasant experience than ffmpeg.
You can't use some expressions like `x=(w-text_w)/2:y=(h-text_h)` on the command line, but this can be worked around with some scripting, using the Python bindings for example.
Some (simple) text overlay can be accomplished with:
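Something along these lines (a sketch on a test source; swap `fakesink` for `autovideosink` to actually watch the output):

```shell
# a minimal gst-launch pipeline: overlay centered text on a test pattern
pipeline='videotestsrc num-buffers=60 ! textoverlay text=Hello valignment=bottom halignment=center ! videoconvert ! fakesink'
# only run if GStreamer's launcher is installed
if command -v gst-launch-1.0 >/dev/null; then
  # the pipeline string is intentionally word-split into arguments
  gst-launch-1.0 $pipeline
fi
```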
Looks like BBC’s Brave project was the inspiration for this: https://github.com/bbc/brave (specifically the gst-WebRenderSrc) Brave is a real-time remote video/audio editing app. Looks neat.
Unrelated, but wow, BBC has quite a few interesting and relevant open source projects. Simorgh, their React SSR framework, caught my eye: “used on some of our biggest websites”. Encouraging for those looking to build out performant React/AMP platforms. https://github.com/bbc/simorgh
I used CasparCG[0] to do live HTML overlays with a major broadcaster out of Singapore about 5 years ago; it's still going strong. The actual on-air graphics that were used were rather tame compared with the sample ones I did to prove the system.
Why do you say you can't do expressions like that on the command line? I do it all of the time. You have to escape the parentheses, but other than that it is totally doable.
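For the record, with ffmpeg's `drawtext` it's usually enough to quote the whole filter so the shell leaves the parentheses alone (filenames here are placeholders):

```shell
# centered text; the double quotes protect the parentheses from the shell
vf="drawtext=text='hello':fontsize=48:x=(w-text_w)/2:y=(h-text_h)/2"
# only run if ffmpeg and the input are actually present
if command -v ffmpeg >/dev/null && [ -f input.mp4 ]; then
  ffmpeg -i input.mp4 -vf "$vf" -c:a copy out.mp4
fi
```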
Very true. And, the parameters change over time, but, the stackoverflow answers using the old parameters remain on top. This IMO adds to the confusion :)
Yes, the stackexchange (implicit) model contributes to this. It assumes once a Q has been asked and answered, the issue is resolved. Due to the non-duplicates policy, any corrections or updates are supposed to be added to the same thread. But the original accepted answer will have a tick mark next to it and likely a high score. So naive users don't read further and the obsolete answers remain heeded.
Some sort of deprecation or salience decay needs to be added.
Does anyone have experience with this use case: I have an MP4 file and an SRT file with subtitles. Is there a convenient way to re-render the video with the subtitles drawn on it, so I can use it in places where subtitles are not supported? Is it a super CPU-intensive process that takes a long time for, let's say, a typical 2-hour movie?
Burning subtitles into the video implies re-encoding. And if you want to maintain video quality, you'll have to choose a pretty good quality setting like crf=24, and this will take a lot of time even with H.264/AVC.
You are looking at multiple hours (>2) for a 2-hour movie, in my experience. One way to reduce this is to spin up a DigitalOcean droplet (an $80-per-month droplet will give you a lot of processing power), finish your encoding in 2–3 hours, and shut it down. It'll save you a lot of headache and time.
Not sure I answered your question, but, hope that helped!
Much harder than it sounds, because future macroblocks depend on present ones, and motion vectors mean that over a few seconds the transitive dependency can reach the entire image (and does, in panned and zoomed shots).
It can possibly work, but depending on the movie content it might not save all that much encoding.
What would be much better would be to encode an overlay raster stream, which the player can then composite at runtime, like they do with text subtitles.
Much faster to encode. Can be toggled. But players need to support it.
Yeah, but the whole point of the exercise is to see subtitles on players that don't even support subtitles; they're definitely not going to support an overlay stream.
Paralleling many of the other comments that give excellent test advice, it's also worth seeing if your CPU has Intel Quicksync support, which can give a hardware-accelerated rendering option to ffmpeg. See below link for reference of the flags.
Even the most low-end processors like Celerons have this encoding acceleration built in if they have embedded GPUs, and it makes a huge difference versus using the general-purpose CPU portion. The generation of the CPU determines which encoders/decoders are available.
I learned about this from the below post, and have been using the functionality wherever I can, as it applies to many of the popular transcoding tools out there, Handbrake for example.
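As a sketch of what the hardware path looks like (flag values are illustrative, and filenames are placeholders; check `ffmpeg -encoders` for what your build actually supports):

```shell
enc="h264_qsv"   # Quick Sync H.264 encoder; the NVENC equivalent would be h264_nvenc
# only run if ffmpeg exists, the build has the encoder, and the input is present
if command -v ffmpeg >/dev/null \
   && ffmpeg -hide_banner -encoders 2>/dev/null | grep -q "$enc" \
   && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -c:v "$enc" -global_quality 24 -c:a copy out.mp4
fi
```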
> I have a MP4 file and a SRT file with subtitles is there a convenient way to re render the video with subtitles drawn on it so I can use it in places where subtitles are not supported?
You're going to have to re-encode it, which is a lossy and CPU-intensive process (unless you manage to get hardware encoding working with your video card, which is possible but very fiddly IME).
> Is it a super cpu intensive process that takes a long time for let's say a typical 2 hour movie?
There are a lot of compute/filesize tradeoffs that you can make. If you don't mind a file that's 2-3x larger than your original then you can do it fairly quickly (say 1/3 of realtime). If you don't mind an effectively uncompressed video (so tens or hundreds of gigabytes) then you can do it as fast as your disks will write. If you want something similar to the original without sacrificing too much quality then it'll probably be slower than realtime.
If compression ratio or filesize isn't a priority, NVENC definitely is good enough to consider. The speed is just too good.
It can do 300fps+ (for simple transcoding; burning in subtitles will probably slow it down a bit) on my very dated GPU, and it would likely be even faster on newer ones.
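For what it's worth, the NVENC invocation is mostly a one-flag swap (a sketch with placeholder filenames; `-cq` sets a constant-quality target, and the available presets vary with ffmpeg version):

```shell
cq=24   # constant-quality target; lower means better quality, bigger file
# only run if ffmpeg exists, the build has NVENC, and the inputs are present
if command -v ffmpeg >/dev/null \
   && ffmpeg -hide_banner -encoders 2>/dev/null | grep -q h264_nvenc \
   && [ -f in.mp4 ] && [ -f subs.srt ]; then
  ffmpeg -i in.mp4 -vf "subtitles=subs.srt" -c:v h264_nvenc -cq "$cq" -c:a copy out.mp4
fi
```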
What some people might not know is that the transcoding results and quality are hardware dependent. You will get different results depending on whether you're using NVENC, intel's on chip stuff, actual encoding hardware, etc.
Edit: Obviously it's more complicated than this. But I think this is one of the reasons behind the need for things like Netflix's VMAF
I mean, obviously it would be the case for NVENC. Old(er) versions don't even support B frames.
But I guess you could call it "firmware"- or "software"-dependent too, because they don't really use/support the same version of NVENC to begin with.
And I doubt the result is significantly different, if different at all, for software encoders like x264, assuming of course you use the exact same version and parameters.
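Try a command along these lines first (filenames are placeholders; `-t 00:05:00` is what makes it stop early):

```shell
crf=23   # reasonable quality/size starting point for x264
# only run if ffmpeg and both inputs are actually present
if command -v ffmpeg >/dev/null && [ -f movie.mp4 ] && [ -f movie.srt ]; then
  ffmpeg -i movie.mp4 -vf "subtitles=movie.srt" \
    -c:v libx264 -crf "$crf" -preset medium -c:a copy \
    -t 00:05:00 test.mp4
fi
```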
The above line will hardcode the subtitles onto the video and stop after five minutes. This should give you an idea of how slow re-encoding is going to be on your machine, and what the quality of the result is going to look like.
What you ideally want is that your rendered video has approximately the same bitrate as the original, and the same visual quality. You can play with the -crf parameter to get the quality the same, and the -preset parameter to get the bitrate the same.
For reference, when I'm rendering 720p video I'm getting about 4x speed on my 10th gen Core i5. 1080p should take about twice the time.
I've played around a bit with the GPU-accelerated encoders in ffmpeg, but I could never get the same quality or bitrate as the CPU encoder. It seems to me that they're tuned for quickly turning raw capture into something that can realistically be streamed and handled.
Also, when ingesting SRT files, ffmpeg expects the file to be UTF-8 encoded, but it is sometimes in Windows-1252, the original format. So you might need to transcode it first.
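A quick way to do that conversion with `iconv`, demonstrated on a tiny Windows-1252 sample generated on the spot (filenames hypothetical):

```shell
# create a one-cue SRT in Windows-1252 (octal \351 is "é" in that code page)
printf '1\n00:00:01,000 --> 00:00:02,000\ncaf\351\n' > subs.srt
# convert to UTF-8 before feeding it to ffmpeg
iconv -f WINDOWS-1252 -t UTF-8 subs.srt > subs.utf8.srt
```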
Urghk. I had no idea, but then again I would never use .srt myself, it's a terrible format. I'm always using .ass, because then you can get proper left-aligned subtitles, proper font, and a proper box around them.
Is it that a separate subtitle file is not supported or subtitles are not supported at all? Otherwise you can use ffmpeg to merge the srt file and the mp4 file into either a single mp4 or mkv file without re-encoding the video (using the -c copy flag). Many players support subtitles embedded into a .mp4 file.
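Concretely, something like this (filenames hypothetical; MP4 wants the subtitles converted to `mov_text`, while MKV can take SRT as-is):

```shell
subcodec="mov_text"   # for an .mkv output you'd use -c:s srt instead
# only run if ffmpeg and both inputs are actually present
if command -v ffmpeg >/dev/null && [ -f movie.mp4 ] && [ -f movie.srt ]; then
  # -c copy leaves audio/video untouched, so this finishes in seconds
  ffmpeg -i movie.mp4 -i movie.srt -c copy -c:s "$subcodec" movie.subbed.mp4
fi
```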
^this. I'd be curious what player GP has found that can play back the MP4 but not read embedded subtitles. There are a few that have poor support for external (separate-file) subtitles.
I'm uncertain if there is some metadata you could toggle to hint at the default subtitle language and/or default to subtitles on (or off). I generally use VLC, so this isn't much of a problem for me.
It will probably take approximately 1:1 time. Maybe a bit more, but not much. As another sibling says, you need good-quality re-encoding. Obviously it depends on your CPU/GPU, but I guess that is a good approximation.
I'm sorry, I'm not knowledgeable enough in that area to tell you what it does, but it works regardless of resolution and quality. 4K or 1080p, it doesn't matter.
What's also impressive is that it can stream your media at a lower quality, say 4K down to 720p, to your phone in near real time, on a very old laptop that I repurposed into a NAS.
I am curious if you would like to share your experiences.
I use FFmpeg (almost) daily and I would go as far as saying it is one of the most reliable pieces of software I've ever used.
It all depends on what kind of data you're feeding it. I'd been using ffmpeg for years without ever seeing it segfault, until I started processing videos at an unusually small resolution, and suddenly it crashes a lot. (But nondeterministically, so I can put the command in a loop and it'll eventually finish successfully.) Well, I submitted a core dump via Ubuntu's Apport and hopefully someone will have a look at it.
Across a huge dataset from a huge number of backgrounds, usually incredibly crap, for about 18 months, now. Without downtime. And with decent quality output, or at least pretty close to equal the input (did I mention it is often incredibly bad? Including near-broken files, and even some files with less than 20,000 pixels, total).
I'm not sure exactly what "never worked" means without more detail, but I'd say it's a workhorse.
It's actually super powerful and useful for both simple one-off encodes and complex batch processes. I have worked for clients who required an end-to-end transcoding pipeline (ABR) with packaging, subtitles, multiple audio tracks, etc., and it took a 100-line shell script to get everything done.
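As a (heavily simplified, hypothetical) flavour of what such a script looks like, here's a two-rung ABR ladder; filenames and settings are placeholders:

```shell
rungs="720 480"   # output heights; scale=-2:h keeps the width divisible by 2
# only run if ffmpeg and the input are actually present
if command -v ffmpeg >/dev/null && [ -f in.mp4 ]; then
  for h in $rungs; do
    ffmpeg -i in.mp4 -vf "scale=-2:${h}" -c:v libx264 -crf 22 -c:a aac "out_${h}p.mp4"
  done
fi
```

A real pipeline would then hand these renditions to a packager for HLS/DASH segmenting.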
It's true that if you use it in its simplest form, such as `ffmpeg -i in.mp4 out.mov`, the default bitrate will favour creating small files with artefacts; as soon as you set the right options you get very nice results.
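i.e. the difference between the default and something like this (values illustrative, filenames placeholders):

```shell
crf=18   # lower CRF means higher quality; 18 is close to visually lossless for x264
# only run if ffmpeg and the input are actually present
if command -v ffmpeg >/dev/null && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -c:v libx264 -crf "$crf" -preset slow -c:a copy out.mov
fi
```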
You might also need to supply your own font path; I'm not sure how universal the fonts are.