
> understanding that they are different tools for different jobs
Right, but this goes against the dogma on both sides, and against the fact that much of Linux userspace is the wild west. Ideally, there should be a set of core system libraries (e.g. glibc, OpenSSL, Xlib, etc.) that have extremely stable API/ABI semantics and are rarely updated.

Then one dynamically links the core libraries and statically links everything else. This way, a bug/exploit found in something like OpenSSL doesn't require the entire system to be recompiled and updated, while libraries that are unstable, used by few packages, etc., can be statically linked into their users. Then, when lib_coolnew_pos has a bug, only the two apps linked against it need to be rebuilt, and not necessarily even then, if those applications don't expose the bug.
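
A minimal sketch of what that invocation could look like with a GNU toolchain (lib_coolnew_pos and myapp are of course placeholders):

  # unstable library linked statically, core libraries dynamically
  gcc -o myapp myapp.o \
      -Wl,-Bstatic -lcoolnew_pos \
      -Wl,-Bdynamic -lssl -lcrypto -lX11

A fix in lib_coolnew_pos then means rebuilding only its two users, while a fix in OpenSSL reaches every program through the shared libssl.so without any rebuild.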




> Then one dynamically links the core libraries and statically links everything else.

Agreed, and that is already totally possible.

- If you split your project in libraries (there are reasons to do that), then by all means link them statically.

- If you depend on a third party library that is so unstable that nobody maintains a package for it, then the first question should be: do you really want to depend on it? If yes, you have to understand that you are now the maintainer of that library. Link it dynamically or statically, whichever you want, but you are responsible for its updates in any case.

The current fashion for statically linking everything shows, to me, that people generally don't know how to handle dependencies. "It's simpler" to copy-paste the library code into your project, build it as part of it, and call that "statically linking". And then probably never update it, or try to update it and give up after 10 minutes the first time the update fails ("well, the old version works for now, I don't have time for an update").

I am fine with people who know how to do both and choose to statically link. I don't like the arguments coming from those who statically link because they don't know better, but still try to justify themselves.


> Agreed, and that is already totally possible

How? Take for instance OpenSSL, mentioned above. I have software to distribute for multiple Debian versions, starting from Bullseye, which uses OpenSSL 1.x and libicu67. Bookworm, the most recent, has icu72 and OpenSSL 3.x, which are binary-incompatible. My requirement is that I do only one build, not one per distro, as I do not have the manpower or CI availability for this. What's your recommendation?
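
(For anyone wanting to see the mismatch concretely: the SONAMEs a binary demands are visible with readelf; myapp is a placeholder here.)

  readelf -d myapp | grep NEEDED
  # built on Bullseye it needs libssl.so.1.1 / libicuuc.so.67,
  # built on Bookworm it needs libssl.so.3 / libicuuc.so.72,
  # and neither binary loads on the other distro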


> How?

Well, you build OpenSSL as a static library, and you use that...
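
Roughly like this - a sketch, assuming a Unix-ish build; the prefix and the trailing -ldl -lpthread (needed by static libcrypto on Linux) are illustrative:

  # build OpenSSL as static-only libraries into a local prefix
  ./Configure no-shared --prefix=$HOME/openssl-static
  make && make install_sw
  # then link against the archives instead of libssl.so
  gcc -o myapp myapp.o \
      $HOME/openssl-static/lib/libssl.a \
      $HOME/openssl-static/lib/libcrypto.a \
      -ldl -lpthread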

> Take for instance OpenSSL mentioned above.

However, for something like OpenSSL on a distro like Debian, I really don't get why one would want that: it is most definitely distributed by Debian in the core repo. But yeah, I do link OpenSSL statically for Android and iOS (where the system does not provide it anyway). That's fairly straightforward; I just need to build OpenSSL myself.

> My requirement is that I do only one build

You want to make only one build that works with both OpenSSL 1 and OpenSSL 3? I am not sure I understand... the whole point of a major version bump is that the versions are not compatible. I think there is fundamentally no way (and that's by definition) to support two explicitly incompatible versions in the same build...


> Well you build OpenSSL as a static library, and you use that...

I mean, yes, that's what I do, but see my comment: I was asking specifically about the dynamic linking mentioned by the parent (OpenSSL is definitely a "core library").

> I think there is fundamentally no way (and that's by definition) to support two explicitly incompatible versions in the same build.

Yes, that's my point - in the end, static linking is the only thing that will work reliably when you have to ship across an array of distros, even for core libraries... The only exceptions in my mind are libGL & other drivers.
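
The usual way to handle the driver exception is to not link those libraries at all and pull them from the host at runtime - a minimal sketch (libGL.so.1 is the stable SONAME; build with -ldl):

  #include <dlfcn.h>
  #include <stdio.h>

  int main(void) {
      /* the userspace GL driver must match the host's kernel and
         hardware, so load the system copy, never a bundled one */
      void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_GLOBAL);
      if (!gl) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      /* resolve entry points with dlsym() as needed... */
      return 0;
  }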


I strongly believe that developers should not ship across an array of distros. First, because you probably don't test on all of them.

Really, that's the job of the distro/package maintainers. As a developer, you provide the sources of your project. If people want to use it on their respective distro, they write and maintain a package for it, or ask their distro maintainers to do it. That is the whole point of the distro!


Well, I completely disagree. I have a fair number of users on a wide array of distros who are non-technical - just users; they wouldn't know how to compile something, let alone write a distro package. They still deserve to be able to use the software they want without having to change OS.

> or ask their distro maintainers to do it.

This only works if you're using a rolling-release distro. You can't get new packages into the repos of Ubuntu 20.04, SUSE Leap, Fedora 30 or Debian Bullseye.


Statically linking does not imply copying the code into the project


Of course not. My point was about people who say "static linking is better" merely because the only thing they know how to do (copying the code into their project) results in something that looks like static linking - they are in the wrong.


> Right, but this goes against the dogma on both sides, and against the fact that much of Linux userspace is the wild west. Ideally, there should be a set of core system libraries (e.g. glibc, OpenSSL, Xlib, etc.) that have extremely stable API/ABI semantics and are rarely updated.

This is largely true and how most proprietary software is deployed on Linux.

glibc is pretty good about backwards compatibility. It gets shit for not being forwards compatible (i.e. you can't take a binary linked against glibc 2.34 and run it on a glibc 2.17 system). It's not fully bug-for-bug compatible; sometimes they'll patch it, sometimes not. On Windows, a lot of applications still link and ship their own libc, for example.
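
You can read the minimum glibc a binary demands straight off its versioned symbols (a sketch with GNU binutils; myapp is a placeholder):

  # highest GLIBC_x.y referenced = oldest glibc that can run the binary
  objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1

Hence the usual trick of building on the oldest distro you intend to support.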

Xlib et al. don't break in practice. Programs bring their own GUI framework linking against them, and it'll work. Some are adventurous and link against the system gtk2 or gtk3. Even that generally works.

OpenSSL does have a few popular SONAMEs around, but it has had particularly nasty API breaks in the past. Many distros offer two or more versions of OpenSSL for this reason. However, most applications ship their own.

If you only need to talk to some servers, though, you can link against the system libcurl (ABI-compatible for something like twenty years). This would IMHO be much better than what most applications do today (shipping their own crypto + protocol stack, which invariably ends up with holes). While Microsoft ships curl.exe nowadays, they don't include libcurl with their OS. Otherwise it would be pretty close to a universally compatible protocol client API and ABI, and you really wouldn't have any good reason anymore to patch the same tired X.509 and HTTP parser vulnerabilities in each and every app.
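
For scale, the entire "talk to a server" part of an app, written against that stable API, is about this much (a minimal sketch, most error handling elided; build with -lcurl):

  #include <curl/curl.h>

  int main(void) {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *h = curl_easy_init();
      /* the system libcurl supplies the TLS + protocol stack, so X.509
         and HTTP parser fixes arrive via the distro, not via you */
      curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
      CURLcode rc = curl_easy_perform(h);
      curl_easy_cleanup(h);
      curl_global_cleanup();
      return rc == CURLE_OK ? 0 : 1;
  }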



