
What if another Heartbleed happens? Wouldn't it be better to update a single shared library?



In theory. In practice it's much faster to push a fix to your users with a brand new fat binary than to figure out the mess that is distributing your software on every possible Linux distribution (obligatory XKCD: https://xkcd.com/927/).

Also, shared libraries assume your software will work with a different version of a library, which is quite a bold assumption that may or may not be true depending on the phase of the moon.
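
To make that concrete, here's a hypothetical sketch (the libpoint library and file names are made up): a shared library adds a struct field without bumping its soname, and an application compiled against the old header silently reads garbage:

  /* point.h, v1 -- the header the application was compiled against */
  struct point { int x; int y; };          /* sizeof == 8 */
  struct point *point_new(int x, int y);
  int point_sum(const struct point *p);

  /* libpoint v2 later redefines the struct without a soname bump:
   *   struct point { int x; int y; int z; };   sizeof == 12
   * and point_sum() now also reads p->z. */

  /* app.c -- compiled once against the v1 header above */
  #include <stdio.h>
  #include "point.h"

  int main(void) {
      struct point p = { 1, 2 };  /* 8 bytes reserved on the stack */
      /* At run time the loader happily resolves libpoint.so.1 to the
       * v2 build (same soname), so point_sum() reads 12 bytes from
       * &p: the trailing 4 bytes are whatever neighbours p on the
       * stack. No linker error, no warning -- just undefined
       * behaviour. */
      printf("sum = %d\n", point_sum(&p));
      return 0;
  }

  /* Hypothetical build steps:
   *   gcc -shared -fPIC -Wl,-soname,libpoint.so.1 -o libpoint.so.1 point_v2.c
   *   gcc -o app app.c ./libpoint.so.1    (compiled against the v1 header)
   */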


I agree with this -- I also think it's much easier to check for vulnerabilities this way.

It's a bit of a stretch, but I think we're moving towards a more micro-kernel-like approach across the board -- moving more and more code/libraries into the software artifacts we run, making them bigger in the process (containers/AppImage/Snap/Flatpak).

I'm no security expert, but I think it's much easier to maintain the security of a barebones system + fat binaries than of a big system with smaller binaries. Programs that are self-sufficient (i.e. never need to dynamically link) are easier to reason about and to secure.

Also, there's the current renaissance in virtual machine and sandboxing tech (NEMU, Firecracker, gVisor, etc.), which is currently used for containers and cloud workloads but could usher in a huge level of security for the typical user as well.


> In theory. In practice it's much faster to push a fix to your users with a brand new fat binary than to figure out the mess that is distributing your software on every possible Linux distribution (obligatory XKCD: https://xkcd.com/927/).

Electron seems to have disproved this. There are many Electron-based applications that are broken with glibc >= 2.28, even though a fixed version of Electron has been out for nearly a year.

Fat binaries (or fat binpacks) are a failure.


Would you mind explaining more about this? I'm not sure I completely understand what you mean -- glibc is basically impossible to statically link and is effectively always dynamically linked. It's part of the reason why truly "static" builds don't really exist on Debian and many other distributions. Correct me if I'm wrong, but glibc just isn't portable -- this is why I mentioned having to go into Alpine and build with musl libc. It seems the Electron project has chosen not to support that[0].
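
To illustrate the glibc point, the canonical example is that anything touching NSS -- getaddrinfo(), getpwnam(), etc. -- causes glibc to dlopen() its NSS modules at run time, so even a binary built with -static isn't truly self-contained. A minimal sketch:

  /* resolve.c -- why a "-static" glibc binary still isn't static:
   * name resolution goes through NSS, which glibc loads with
   * dlopen() at run time. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  int main(void) {
      struct addrinfo hints, *res;
      memset(&hints, 0, sizeof hints);
      hints.ai_family   = AF_UNSPEC;
      hints.ai_socktype = SOCK_STREAM;

      int rc = getaddrinfo("example.com", "80", &hints, &res);
      if (rc != 0) {
          fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
          return 1;
      }
      freeaddrinfo(res);
      puts("resolved");
      return 0;
  }

  /* gcc -static resolve.c
   *   -> links, but ld warns that getaddrinfo still requires the
   *      shared glibc libraries of the matching version at run time.
   * musl-gcc -static resolve.c
   *   -> a genuinely self-contained binary; this is why Alpine/musl
   *      is the usual route for "real" static builds. */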

Another aspect worth considering is the software logistics/delivery problem -- it absolutely would be great to have dynamically linked software updates if:

1) Your software could always be guaranteed to get the exact library version it expects

2) It weren't hard to distribute the software (i.e. having more than ~5 distribution targets makes packaging painful)

Assuming I'm not completely misunderstanding your point: if the Electron-based applications you're discussing were truly fat binaries, nothing could break them short of a CPU architecture mismatch.

BTW, there are systems like Nix & Guix that have solved #1 -- they make it extremely easy to ensure that your program gets the exact version of every dependency it was built against.

[0]: https://github.com/electron/electron/issues/9662





