The level of potential for messing things up. If you have a chat app, relax the deps and it crashes - oh well. But if you have a tool which can cause data loss, you really want to be sure everyone's running exactly what you tested.
It's similar to why restic (backup software) ships official, static, reproducible binaries in its releases.
Only in the userspace tools and tests. The kernel part does not include stdlib.h anywhere - nowhere that matters in the context of "and what if my kernel crashes?"
Filesystems are complicated and have nasty failure modes. Strict versioning reduces exposure to quality-of-implementation (QoI) problems in dependencies, i.e. the fact that basically all code is substantially wrong somewhere.
I would expect it to be the opposite (especially for a Linux file system): you want to test with as wide a variety of versions as possible, because that's what you're going to encounter in the real world. If your code only works in a limited set of cases, what happens when you mount the file system on a different system - is the file system going to be trashed?
These are all compile-time dependencies in this case. You're not going to encounter a variety of versions - preventing that is exactly what the strict version bounds are for. The app will be built with the tested versions and everyone gets the same results.
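To make that concrete, here's a sketch in Cargo.toml syntax (bcachefs-tools is Rust; the crate versions below are made up for illustration, not the project's actual pins):

    [dependencies]
    # Relaxed: "0.8" means ^0.8, so whichever 0.8.x is newest at build
    # time gets picked, and different builders can get different code
    udev = "0.8"

    # Strict: an exact pin, so every build resolves to the tested release
    # udev = "=0.8.2"

With the exact pin, anyone building from the same source compiles against the same dependency code that was tested.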
There are at least three different versions in play: the filesystem version, the kernel version (including stable backports), and the tooling version. I'm presuming udev being a dependency means bcachefs links against it (and that will differ across OS versions).
Those are completely different things from the dependencies you can make choices about. You have to handle different filesystem/kernel versions regardless - you don't get a choice there.
This is just bad dependency management, nothing more. Vendoring is a PITA.
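For reference, vendoring in a Cargo project is a built-in command, but it's an ongoing chore rather than a one-time fix - roughly:

    $ cargo vendor
    # copies every dependency into ./vendor and prints a snippet to put
    # in .cargo/config.toml so builds resolve against the local copies:

    [source.crates-io]
    replace-with = "vendored-sources"

    [source.vendored-sources]
    directory = "vendor"

The vendor directory then has to be checked in and re-synced on every dependency bump, which is where the pain comes from.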