
> If Debian is upgrading a dependency instead of a developer, then Debian should be ready to fix any bugs they introduce.

That's what the Debian Bug Tracking System is for. However, if the package itself is at fault, e.g. because it uses the dependency improperly and the update broke a bad assumption, then it would ideally be reported upstream.

> This is already how it works. All vulnerable programs make an update and try to hold off on releasing it until near an embargo date. You don't have to literally update them all at the same time. It's okay if some are updated at different times than others.

That's not how it works in the vast majority of Linux distributions, for many reasons, such as the common rule of having only one version, or the fact that Debian probably does not want to update Blender to a new major version just because libpng bumped. That would effectively turn all supported branches of Debian into rolling release distros.

> Duplicate libraries are not an issue.

In your opinion, anyway. There's more than one way of thinking about this, but duplicate libraries certainly are an issue, whether you choose to address them or not.

> This is a ridiculous policy to me as you are forcing programs to use dependencies they were not designed for. This is something that should be avoided as much as possible.

Honestly, this whole tangent is pointless. Distributions like Debian have been operating like this for 20+ years. It's dramatically too late to argue about it now, but if you're going to, this is not exactly the strongest argument.

By this logic, programs are apparently designed for exactly one specific snapshot in time of each of their dependencies.

So let's say I want to depend on two libraries, and both of them eventually depend on two different but compatible versions of a library, and only one of them can be loaded into the process space. Is this a made-up problem? No, this exact thing happens constantly, for example with libwayland.

Of course you can just pick any newer version of libwayland and it works absolutely perfectly fine, because that's why we have shared libraries and semver to begin with. We solved this problem absolutely eons ago. The solution isn't perfect, but it's not a shocking new thing, it's been the status quo for as long as I've been using Linux!

> That doesn't mean there isn't damage done. There are many people who consider kdenlive an unstable program that constantly crashes because of distros shipping it with the incorrect dependencies. This creates reputational damage.

If you want your software to work better on Linux distributions, you could always decide to take supporting them more seriously. If your program is segfaulting because of slightly different library versions, this is a serious problem. Note that Chromium is a vastly larger piece of software than Kdenlive, packaged downstream by many Linux distributions using this very same policy, and yet it is quite stable.

For particularly complex and large programs, at some point it becomes a matter of, OK, it's literally just going to crash sometimes, even if distributions don't package unintended versions of packages, how do we make it better? There are tons of avenues for this, like improving crash recovery, introducing fault isolation, and simply, being more defensive when calling into third party libraries in the first place (e.g. against unexpected output.)

Maintainers, of course, are free to complain about this situation, mark bugs as WONTFIX or INVALID, whatever they want really, but it won't fix their problem. If you don't want downstreams, then fine: don't release open source code. If you don't want people to build your software outside of your exact specification because it might damage its reputation, then simply do not release code whose license exists primarily to make what Linux distributions do possible. You of course give up access to copyleft code, and that's intended. That's the system working as intended.

I believe that releasing open source code does not, in fact, obligate you as a maintainer to do anything at all. You can do all manner of things, foul or otherwise, as you please. However, note that this relationship is mutual. When you release open source code, you disclaim liability and warranty, but you grant everyone else the right to modify, use, and share that code under the terms of the license. Nowhere in the license does it say you can't modify it in specific ways that might damage your program's reputation, or even yours.




>That's what the Debian Bug Tracking System is for.

Software should be extensively tested and code review should be done before it gets shipped to users. Most users don't know about the Debian Bug Tracking system, but they do know about upstream.

>Honestly, this whole tangent is pointless. Distributions like Debian have been operating like this for like 20+ years. It's dramatically too late to argue about it now, but if you're going to, this is not exactly the strongest argument.

It's not too late, as evidenced by the growth of solutions like AppImage and Flatpak, which allow developers to avoid this.

>So let's say I want to depend on two libraries, and both of them eventually depend on two different but compatible versions of a library, and only one of them can be loaded into the process space. Is this a made-up problem? No, this exact thing happens constantly, for example with libwayland.

Multiple versions of a library can be loaded into the same address space. Developers can choose to have their libraries support a range of versions.

>that's why we have shared libraries and semver to begin with

Hyrum's Law. Semver doesn't prevent breakages on minor bumps.


> Software should be extensively tested and code review should be done before it gets shipped to users.

That's why distributions have multiple branches. Debian Unstable packages get promoted to Debian Testing, which get promoted to a stable Debian release. Distributions do bug tracking and testing.

> Most users don't know about the Debian Bug Tracking system, but they do know about upstream.

There are over 80,000 bugs in the Debian bug tracker. There are over 144,000 bugs in the Ubuntu bug tracker. Suffice it to say that a lot of users do indeed know about distribution bug trackers.

I am not blaming anyone who did not know this. It's fully understandable. (And if you ask your users to please go report bugs to their distribution, I think most distributions will absolutely not blame you or get mad at you. I've seen it happen plenty of times.) But just FYI, this is literally one of the main reasons distributions exist in the first place. Most people do not want to be in charge of release engineering for an entire system's worth of packages. All distributions (Debian, Ubuntu, Arch, NixOS, etc.) wind up needing THOUSANDS of downstream patches, at least temporarily, to make a system usable, because the programs and libraries in isolation are not designed for any specific distribution. Many of them don't even have an exact build environment or runtime environment.

Flatpak solves this, right? Well yes, but actually no. When you target Flatpak, you pick a runtime. You don't get to decide the version of every library in the runtime unless you actually build your own runtime from scratch, which is ill-advised in most cases, since it's essentially just making a Linux distribution. And yeah, that's the thing about those Flatpak runtimes: they're effectively Linux distributions!
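For reference, here is a minimal flatpak-builder manifest sketch showing where the runtime gets pinned; the app id and module contents are made up, but the keys are standard manifest fields:

```json
{
  "id": "org.example.MyApp",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "23.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "myapp",
  "modules": [
    {
      "name": "myapp",
      "buildsystem": "meson",
      "sources": [ { "type": "dir", "path": "." } ]
    }
  ]
}
```

Everything in `org.freedesktop.Platform` 23.08 is decided by the runtime's maintainers, exactly like a distro's stable branch; only the `modules` list is under the app developer's control.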

So it's nice that Flatpak provides reproducibility, but it's absolutely the same concept as just testing your program on a distro's stable branch. Stable branches pretty much only apply security updates, so while it's not bit-for-bit reproducible, it's not very different in practice; Ubuntu Pro will flat out just default to automatically applying security updates for you, because the risk is essentially nil.

> It's not too late as evidence by the growth of solutions like appimage and flatpak which allows developers to avoid this.

That's not what AppImage is for: AppImage is just meant to bring portable binaries to Linux. It's about developers being able to package their application into a single file, and users being able to run that on whatever distribution they want. Flatpak is the same.

AppImage and Flatpak don't replace Linux distribution packaging, mainly because they literally can not. For one thing, apps still have interdependencies even if you containerize them. For another, neither AppImage nor Flatpak solves the problem of providing the base operating system they run on; both are pretty squarely aimed at providing a distribution platform specifically for applications the user would install. The distribution inevitably still has to do a lot of packaging of C and C++ projects no matter what happens.

I do not find AppImage or Flatpak to be bad developments, but they are not in the business of replacing distribution packaging. What they do instead is introduce multiple tiers of packaging. For now, though, both distribution methods are limited and not going to be desirable in all cases. A good example is something like OBS plugins: I'm sure Flatpak either has or will provide solutions for plugins, but today, plugins are very awkward for containerized applications.

> Multiple versions of a library can be loaded into the same address space. Developers can choose to have their libraries support a range of versions.

Sorry, but this is not necessarily correct. Some libraries can be loaded into the same address space multiple times; however, this does not work for libraries that are not reentrant. For example, if your library maintains internal state such as a global connection pool, passing handles from one instance of the library to the other will not work. I use libwayland as an example because this is exactly what you do when you initialize a graphics context on a Wayland surface!

With static linking, this is complicated too. Your program only has one symbol table. If you try to statically link e.g. multiple versions of SDL, you will quickly find that the two versions will in fact conflict.

Dynamic linking makes it better, right? Well, not easily. We're talking about Linux, so we're talking about ELF platforms. The funny thing about ELF platforms is that, by default, the linker resolves symbols globally, in library load order. This behavior is good in some cases: it's how libpthread replaced libc functionality with thread-safe versions, in addition to implementing the pthreads APIs. But it's bad if you want multiple versions, because instead you will mostly get one version of a library.

In some catastrophic cases, like having both GTK+2 and GTK3 in the same address space, it will just crash: a GTK+2 symbol tries to access other symbols and winds up hitting a GTK3 symbol instead of what it expected.

You CAN resolve this, but that's the most hilarious part. The only obvious way to fix it, to my knowledge, is to compile your dependencies with different flags, namely -Bsymbolic (iirc), and they may or may not even build with those settings; they're likely to be unsupported by your dependencies, ironically. (Though maybe they would accept bug reports about it.) The only other way I am aware of is to load the shared libraries with dlopen using RTLD_LOCAL. Neither option is ideal, because both require invasive changes: the former in your dependencies, the latter in your own program. I could be missing something obvious, but this is my understanding!

> Hyrum's Law. Semver doesn't prevent breakages on minor bumps.

Hyrum's law describes buggy code that either accidentally or intentionally violates contracts to depend on implementation details. Thankfully, people will, for free, report these bugs to you. It's legitimately a service, because chances are you will have to deal with these problems eventually, and "as soon as possible" is a great time.

Just leaving your dependencies out of date and not testing against newer versions ever will lead to ossification, especially if you continue to build more code on top of other flawed code.

Hyrum's law does not state that it is good that people depend on implementation details; it just states that people will. And it's not entirely true in practice, in the sense that not all implementation details actually wind up being depended on. It's true in spherical cow land, but taking it to its "theoretical" extreme implies infinite time and infinite users. In the real world, libraries like SDL2 make potentially breaking changes all the time that never break anything.

But even when people do experience breakages as a result of a change, sometimes that's good. Sometimes these breakages reveal actual bugs that were causing silent problems before they turned into loud problems. This is especially true for memory issues, but it's true for other issues too. For example, a recent change to the Go programming language fixed a lot of accidental bugs and broke, as far as anyone can tell, absolutely nobody. But it did lead to "breakages" anyway, in the form of code that used to be accidentally not doing the work it was intended to do; now it is, and it turns out that code was broken the whole time. (The specific change is the semantics change to for loop variable binding, and I believe the example was a test suite that accidentally wasn't running most of its tests.)

Hyrum's law also describes a phenomenon that happens to libraries being depended on. For you as a user, you should want to avoid invoking Hyrum's law because it makes your life harder. And of course, statistically, even if you treat it as fact, the odds that your software will break due to an unintended API breakage are relatively low; it's just more likely that across an entire distribution's worth of software something will go wrong. But your libraries actually know that this problem exists and do their best to make it hard to rely on things outside the contract. Good C libraries use opaque pointers and carefully constrain the input domain on each of their APIs to expose as little unintended API surface area as humanly possible. This is a good thing, because again, Hyrum's law is an undesirable consequence!


Update for posterity: Actually, Flatpak does have a solution for plugins, and they even explicitly use OBS as an example! Unfortunately, a lot of information around the web suggests that there are only a couple of plugins available as Flatpak extensions, but actually nowadays it appears there are in fact tons[1]. Very cool! Another one off the list.

[1]: https://flathub.org/apps/com.obsproject.Studio (go to Add-ons, click "More")



