> I've seen a lot of programs that use one or two functions from a library but nevertheless link the whole library.
Exactly. Because it's easier.
Of course it would be much better to split that library up into smaller libraries so you only linked the part you actually needed, but then people on HN will go crazy about "thousands of dependencies".
Why does this even matter? Executables and shared libraries are memory-mapped, so the system only ever pages into RAM the code the binary actually touches; everything else just sits on disk until it's needed.
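You can actually watch the kernel do this. Here's a minimal sketch, assuming Linux (mincore(2) isn't portable) and using /bin/ls as an arbitrary file to map; if the file is already in the page cache, the "before" count will start out high:

    /* Minimal demand-paging demo: map a file read-only, then ask the
       kernel (via mincore) how many of its pages are actually in RAM.
       /bin/ls is an arbitrary test file; counts may already be high if
       it is sitting in the page cache from earlier use. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/bin/ls", O_RDONLY);
        if (fd < 0) return 1;
        struct stat st;
        fstat(fd, &st);
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        size_t npages = ((size_t)st.st_size + pagesz - 1) / pagesz;
        unsigned char *vec = malloc(npages);
        char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED || !vec) return 1;

        mincore(p, (size_t)st.st_size, vec);
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++) resident += vec[i] & 1;
        printf("before touching: %zu of %zu pages resident\n", resident, npages);

        volatile char c = p[0];   /* fault in exactly one page */
        (void)c;
        mincore(p, (size_t)st.st_size, vec);
        resident = 0;
        for (size_t i = 0; i < npages; i++) resident += vec[i] & 1;
        printf("after touching:  %zu of %zu pages resident\n", resident, npages);
        return 0;
    }

The mmap call only reserves address space; pages get pulled into RAM when they're first touched (or when they were already cached for some other reason).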
That other code still takes up address space and makes the dynamic symbol table bigger (and therefore slower for the loader to search). And someone might want to use your program in a context where they don't have much disk available. More importantly, these giant libraries become hard to upgrade and therefore hard to change.
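To get a feel for that loader cost, here's a rough sketch, assuming Linux/glibc; libcrypto.so.3 is just a stand-in for whatever big library you have around. RTLD_NOW forces every relocation up front, so a bigger dynamic symbol table translates directly into more time spent here:

    /* Rough sketch: measure how long the dynamic loader spends binding a
       library eagerly. RTLD_NOW forces all symbol resolution up front.
       "libcrypto.so.3" is a placeholder; use any large library you have.
       Build with: gcc timeload.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        void *h = dlopen("libcrypto.so.3", RTLD_NOW);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("dlopen(RTLD_NOW) took %.3f ms\n", ms);
        dlclose(h);
        return 0;
    }

If you'd rather not write code, glibc's LD_DEBUG=statistics environment variable prints similar relocation-time numbers for a whole program's startup.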
Address space is abundant enough on 64-bit platforms that we deliberately burn large swaths of it on security mitigations like ASLR. Symbol tables are overhead, but loading many separate libraries carries its own overhead: more open and mmap calls, more relocations, more loader bookkeeping. The disk-space argument makes the most sense, but binary code doesn't take up much disk space these days; the vast majority is typically data, media, the latest large AI model, etc.
Having larger shared libraries also means you no longer have to statically link the same code into multiple separate executables; one copy on disk serves them all, and its read-only pages can be shared across processes.