Stali: A new static Linux distribution (sta.li)
185 points by antoinealb on Sept 28, 2016 | 226 comments



( Also discussed previously at https://news.ycombinator.com/item?id=8819085 and https://news.ycombinator.com/item?id=7261559 )

Interesting points:

* http://sta.li/filesystem -- Everything not obviously covered by a fairly simplified hierarchy is put into /sucks , without fixing the problem of /etc still being a grab-bag of miscellany.

* http://sta.li/filesystem -- /dev being for devices is a "Linux assumption".

* http://sta.li/filesystem -- There is no /usr/local .

* http://sta.li/filesystem -- /.git goes unmentioned. (-:

* http://sta.li/filesystem -- Half a decade ago, it was claimed that it has no /lib . This is contradicted here.

* http://dwm.suckless.org/ -- "Because dwm is customized through editing its source code, it’s pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions."

* http://sta.li/installation -- Installation uses HTTP to fetch a boot volume image.

* http://sta.li/installation -- The actual installation steps of the doco are "TODO".

* http://sta.li/technologies -- The Korn shell is the default.

* http://sta.li/technologies -- Uses Gerrit Pape's runit, tinydyndns, and socklog.

* Commentators a few years ago assumed no kernel modules or initrd. There isn't "official" word on this, however.

* Nor is there official word on PAM, which logically falls foul of a no-dynamic-linking philosophy too.


> http://dwm.suckless.org/ -- "Because dwm is customized through editing its source code, it’s pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions."

The worst thing about that attitude is that the suckless programs I tried are full of bugs and usability problems. I actually liked the idea of building simplified core programs, without the bloat and backwards-compatibility cruft of the environment around them, and so I opted to use st with a multiplexer. But if the result is a terminal where you can't copy text properly because it doesn't handle line breaks, or where you can't use ssh properly because all the text gets distorted the moment you scroll, that's not excellence, it's just bad. And if they had more users, maybe they would find those bugs and get the help they seem to need.

And the whole "we are cool, we configure in source code" is just lazy. It's not like they gain anything from that, the spared lines of code pale in comparison to the effort needed to organize the configuration at distro level.


> And the whole "we are cool, we configure in source code" is just lazy.

Right. A little config-file parsing library does no harm (and can simply be copied into the project), but goes a long way for the user experience.

Look at icewm, one of my favourite software projects. Yes, it's written in 60K lines of C++, but it's rock solid, fast and flexible. The configuration mechanism is powerful and an example of "no bullshit" design.


I just wish the icewm project were more active. I reported a bug and never got any feedback; it also was never fixed. But yes, apart from that it is a great WM, and the configuration especially is what makes it great. My choice as well.


Also, st with a multiplexer (any multiplexer) has much more code complexity than any decent modern terminal. And fewer features.

Way to go, suckless.


That's precisely the reason that I use spectrwm instead of dwm. Making your program unnecessarily difficult to configure to "keep the newbies away" is just plain assholery.


> Installation uses HTTP to fetch a boot volume image.

This is a problem with all of the suckless software and it, well, sucks.


What's wrong with HTTP? I think the OpenBSD team made it clear that HTTPS does nothing for the integrity of what you download (which is why signify is a thing). By explicitly using a non-secure channel for transport, you make it clear that integrity must come from some other channel.
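(To be concrete, the out-of-band check I mean looks like OpenBSD's signify; the key and file names below are purely illustrative, not something stali actually publishes.)

    # verify a detached signature over a downloaded image against a known public key
    $ signify -V -p distro-key.pub -x stali-image.img.sig -m stali-image.img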


Except… There is no other channel. Suckless doesn't sign their releases, nor the image download. Nor is anything in git signed.

Integrity, schmintegrity ¯\_(ツ)_/¯


The other channel: read the source code yourself before compiling.

This is actually possible with most suckless projects, since they are just a couple of hundred lines of C...


Are you familiar with the Underhanded C Contest? It's possible to maliciously tamper with C code such that you definitely wouldn't spot it with a cursory glance over, and possibly even with careful study.


Counterpoint: most underhanded C successes look like incompetence instead of malice, but still look "wrong"


…to someone who spends a lot of time writing C code.


Ignoring the fact that this implies you have to be a programmer who knows how to read C to verify the integrity of the downloads, and that the line count of C code often corresponds directly to the complexity of the solution being expressed, this also assumes that an attack vector would be obvious upon reading the code, and not hidden as a subtle bug that is hard to spot.


> Ignoring the fact that this implies you have to be a programmer that knows how to read C to verify the integrity of the downloads

Don't ignore it, that's practically the point. This is the perfect example of "don't trust me, trust the code".

From http://dwm.suckless.org/

> Because dwm is customized through editing its source code, it’s pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions.


> This is the perfect example of "don't trust me, trust the code".

But it's not that. It's "don't trust me, trust the code as long as you're part of the select group of people that can read and understand the code given the language we use, which by the way happens to be well known to be hard to vet for subtle logic errors, which because of the nature of the language often result in segfaults (at best), or remote code execution (at worst)." Why not write it in brainfuck? They'll only change their audience by a minuscule amount compared to the original audience they could have appealed to, and they will be even more "small and elitist".

I originally translated "This keeps its userbase small and elitist. No novices asking stupid questions." as "We're a tight knit group of assholes, and if you didn't take the same path in life we did there's no point in talking to you." It may be unfair to assume that's their intention, but I think it is fair to say that's what they are doing in practice.


The stali source includes, among other things:

* a full checkout of libressl: http://git.sta.li/src/tree/lib/libressl

* a full checkout of expat: http://git.sta.li/src/tree/lib/expat

* two full /bins: http://git.sta.li/rootfs-x86_64/tree/bin http://git.sta.li/rootfs-pi/tree/bin


Emphasis on "most suckless projects".

The point was that you can verify the integrity by reading the source code. The actual limits to this depend on many things.


> > Installation uses HTTP to fetch a boot volume image.

> This is a problem with all of the suckless software and it, well, sucks.

could you please elaborate on why you think fetching boot images over http is a bad idea? fwiw, a lot of networking gear boots its line cards with images fetched over http, e.g. csco's asr-5000 etc.


> networking gear

As someone unfamiliar with that domain, this just raises more questions for me. Is there some secondary layer of code-signing/validation that happens? Is the image host always within the local network and the connection assumed trustworthy?


> Is there some secondary layer of code-signing/validation that happens? Is the image host always within the local network and the connection assumed trustworthy?

some more information: basically most of these routers etc. are 'chassis' based, e.g. the asr-5k is a 14u (iirc) device. depending on the architecture, these generally have a bunch of cards which end up doing most of the work, e.g. session processing, forwarding traffic, running your routing protocols etc. most of the session-processing cards don't really have any non-volatile storage media.

ok, now, two of these (for redundancy purposes) are management cards. these cards are not involved in any session processing, and provide 'shell' access to the device. these are generally used to configure the device, run diagnostics etc. etc. management cards have non-volatile storage media, which end up storing various boot images.

now, when a chassis is powered on, management cards are booted up first, using one of the stored images (depending on configuration). other cards (non-management) get their boot images over an internal network via http (e.g. in case of asr-5k).

tldr-mode: yes, the 'network' is trustworthy, and images are present locally (in most cases). though it is possible to boot the management cards (if you read the above longish explanation) via images over tftp/scp etc., which can be as secure / insecure as you want :)

edit-001: lemme know if you need some more information, and i can see how i can make things clearer.


> > Installation uses HTTP to fetch a boot volume image.

> This is a problem with all of the suckless software and it, well, sucks.

If you serve things over HTTP it's easier to daisy-chain it in a PXE -> iPXE kind of setup. HTTPS is not supported by most boot ROMs.

Keep your boot-image server on site and local and HTTP isn't a real world issue.
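The whole chainload step is just a few lines of iPXE script (hostnames here are made up, and whether stali even uses an initrd is unclear):

    #!ipxe
    kernel http://boot.internal.example/images/vmlinuz
    initrd http://boot.internal.example/images/initrd.img
    boot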


1) dl.sta.li is not on my site, and probably not on yours.

2) That's still suboptimal. You're giving each host on your LAN the ability to interfere with any newly-provisioned server. This is a perfect and hard-to-detect way for an attacker to pivot from "RCE on some random webapp" to "advanced persistent threat".


>* http://sta.li/filesystem -- /.git goes unmentioned. (-:

This was an important point to me.

I've never used suckless software but one of the first concerns I had with stali was upgrading. Apparently initiating an upgrade is simple using git but what happens if an upgrade fails due to its size?

I assume we're upgrading the entire system over git, and in this case that means many very large binary files.

I'm of the philosophy that no security measure will get rid of the need to continually patch and upgrade your software.


> The actual installation steps of the doco are "TODO".

Building a kernel and such can be done like this:

http://sta.li/build

I assume that's how that part of installation works. They could have documented that more nicely, though...

(Also, I use dwm. It works out of the box, using it on a stock Arch system is as simple as compiling, installing, and adding it to .xinitrc. And configuration has been made extremely simple also. It's not obvious at all, but it works for the particular niche I fit into.)


I really like the idea of static linking.

Then I think about how I'd patch the next inevitable openssl bug.

Then I don't like it as much.


It's a complete fallacy that every program that needs crypto needs to link to crypto libraries. Look at how Plan 9 does it, where everything is statically linked, but it's other processes which do crypto. Replace only one binary, and the crypto is fixed for all binaries.


Shared object libraries are exactly that: a special case of an ELF executable. Special in the sense that they are not directly executable by users on the command line; other than that, there is no difference (regular ELF executables can be compiled as re-entrant, position-independent code just like shared object libraries).
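A quick way to see this (a rough sketch; exact file(1) output varies by toolchain and version):

    # shared library: position-independent code packaged as an ELF shared object
    cc -fPIC -shared foo.c -o libfoo.so
    # "regular" executable built as a position-independent executable (PIE)
    cc -fPIE -pie main.c -o main
    # on many systems, file(1) reports both as "ELF ... shared object"
    file libfoo.so main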


Not exactly the same—shared libs don't have process isolation like an external crypto process would.


Yeah, definitely a po-ta-to distinction.

The real difference between static and dynamic linking is self-contained vs. external dependencies. This dependency can be actual "linking" or it can be inter-process; I don't think that changes the equation.


Interesting fact: while not all shared object files are executable (or rather, do anything interesting other than dump core), some most definitely are. Try executing libc someday:

    $ /usr/lib/libc.so.6

See http://stackoverflow.com/questions/1449987/building-a-so-tha... for more information.


Most Linux distributions are not going to have enough hardware to rebuild large fractions of their packages in a reasonable time when a vulnerability is found in a popular library. Also, many users will be unhappy to download gigabytes of updates when this happens instead of a few megs. (Not every organization runs an internal mirror and not every user sits on a 1GBit pipe.)

This would lead to very slow security updates for the end users.


Chrome makes use of binary patching. Just the deltas are sent. Originally they used bsdiff and ended up implementing their own (superior) version. https://blog.chromium.org/2009/07/smaller-is-faster-and-safe...


I guess the "suckless" answer would be to not use OpenSSL.


Who knows, we might soon get SuckleSSL. :)

Jokes aside, they have a bizarre philosophy and attitude, especially when you consider their software is most of the time buggy, and… Well, sucky.


It's hard to take them seriously when they include statements like this on their FAQ: "Of course Ulrich Drepper thinks that dynamic linking is great, but clearly that’s because of his lack of experience and his delusions of grandeur."


Per http://suckless.org/rocks you appear to have guessed correctly.


OK, then same question about whatever SSL library it does use.


You'd run "pkg-manager upgrade", just like you do with dynamic linking.


Yep but now instead of fetching 1 updated library, you depend on everybody and their cat to rebuild their binaries and publish updated versions.


Not really, I depend on my distro to push updated packages that I will update. And I also hope that my distro pushes me binary diffs so that it's going to be very fast.

The point is: in the context of a Linux distro, it's not true that you need dynamic linking to be able to do security patches effectively. What users do is to run the package manager to update the system; the package manager can provide updates to static binaries as well (and do it efficiently). It's just a matter of tooling; current package managers are designed around the concept of dynamic libraries, but they could be updated.


Is it practical to make diffs of recompiled binaries? Don't you need to compile to position independent code? Or otherwise make sure that most of the code's position does not change when some statically linked library changes?


Slightly different comparison, but I remember some google project to do this for shipping updates a while ago. Must have been for android, but I can't remember.


Chrome, actually. Called Courgette [1]. This would actually be really awesome to apply to statically-linked distro updates.

[1]: https://www.chromium.org/developers/design-documents/softwar...


There is no reason binaries have to be downloaded completely. They can be patched. And we can use rabin fingerprinting for deduplicating to not send duplicate blocks for each binary. Also, don't forget Chrome's approach of patching the disassembly of a binary. https://www.chromium.org/developers/design-documents/softwar...


You would think a distro like this would be more like Gentoo... you recompile stuff as needed (which for openssl means almost everything).


Gentoo is dynamically linked, so you only recompile if there's an ABI break - a major version - not a patch/minor release. And, you only recompile the stuff that directly links to it.

With static linking, you literally need to recompile everything that uses the library in any form, for every single change. So if there's a security fix in openssl and LibreOffice uses openssl, you need to recompile LibreOffice. If QEMU uses libssh2 which uses openssl, you need to recompile QEMU, even though it doesn't use openssl directly. With Gentoo you just recompile openssl and that's it.

And if there's a fix to glibc, you need to recompile EVERYTHING, because everything would be statically linked to it.


You don't have to recompile everything. If your system keeps a cache of object files, you only have to relink everything, which is quicker.
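A minimal sketch of what that looks like (file names and library paths are hypothetical): the compile steps are cache hits (e.g. via ccache or kept object files), and only the final link runs again against the fixed static library.

    cc -c app.c  -o app.o     # already compiled, served from the object cache
    cc -c util.c -o util.o    # likewise
    cc app.o util.o /usr/lib/libssl.a /usr/lib/libcrypto.a -o app   # only this step re-runs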


This is why binary patching exists: http://www.daemonology.net/bsdiff/
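The bsdiff/bspatch pair is about as simple as it gets (file names are illustrative):

    bsdiff coreutils-old coreutils-new coreutils.patch    # on the build server
    bspatch coreutils-old coreutils-new coreutils.patch   # on the client: rebuilds the new binary from the old one plus the patch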


You mean "git update":

  * Upgrade/install using git, no package manager needed
How exactly this is going to work, I don't know.


You probably mean `git fetch` or `git pull`. There is no `git update` in the default feature set.


I like static linking, for some things. Web applications are something I would really like to be just a statically linked binary, simply because it would let you chroot the application easily.
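With no shared libraries to copy in, the jail really can be almost empty; a rough sketch (paths and binary name are made up):

    mkdir -p /srv/jail/bin
    cp ./mywebapp /srv/jail/bin/       # statically linked: no /lib or ld.so config needed inside
    chroot /srv/jail /bin/mywebapp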


Indeed, static linking is very handy for deploying web services. Moreover, the trend with docker and similar technologies is that you deploy a whole new VM image.

That could be considered "static linking", too, because even if it uses shared libraries within the VM image, the image is always replaced as a whole - in those systems you do not replace just a single library within the running image.

If you go even further, you finally reach concepts like MirageOS, where not only the libraries are statically linked into the application, but the whole kernel as well. That way, you have exactly the code you need within your VM, nothing more.


We could use binary patches to reduce download size. SuSE did (does?) it.


This is a win if and only if there is a fast, competent, reliable security team for the distro.

Suckless doesn't currently have that capability, so... it's not a win, yet.


"FHS sucks" - and then they make an even worse hierarchy. at least modern distros moved all distro-contents to a single /usr (possibly read-only, snapshoted, etc) mount point.

#suckmore


I don't know what a good hierarchy is, but with symlinks it should not be so important where the mountpoints are. So why not keep opportunities to split distro contents into convenient directories living under "/"?

Also separating writable from read-only content, or having volumes with different performance characteristics, makes sense in certain situations. Why not have these mountpoints directly under "/"?


When distro-packaged content is in /bin and /sbin (and /lib, /usr, etc.), you can't easily have those 4 directories in a single image that you can handle atomically (upgrade, snapshot, mount, share, etc.), and you must have them together on /.

That's why Android has /system, and Linux has /usr now. (https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...).

Ideally, and we are getting there, / should be tmpfs/rootfs, it's populated with the needed structure by the initramfs, the distro is mounted on /usr (readonly squashfs perhaps?), user data is /home, system data is /var, configuration is /etc. And the system should boot with empty /home /var and /etc ideally. Soon...
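Spelled out as an fstab, that layout might look something like this (device names and filesystem choices are purely illustrative; / itself would come from the initramfs):

    # /usr is the read-only distro image; everything mutable lives elsewhere
    /dev/disk/by-label/distro   /usr    squashfs   ro         0 0
    /dev/disk/by-label/var      /var    ext4       defaults   0 2
    /dev/disk/by-label/home     /home   ext4       defaults   0 2
    tmpfs                       /tmp    tmpfs      defaults   0 0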


If you want all in one file system, make symlinks. Problem solved.
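This is more or less the /usr-merge recipe; a sketch, run from the new root before anything depends on the old paths:

    ln -s usr/bin   /bin
    ln -s usr/sbin  /sbin
    ln -s usr/lib   /lib
    ln -s usr/lib64 /lib64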


If I have to set up a bunch of symlinks to get basic packages to work my distro has failed me.


I'm confused about some claims / justifications. For example:

> A user can easily run dynamic library code using LD_PRELOAD in conjunction with some trivial program like ping.

No... Not sure if it ever worked, but it's definitely not possible for many years now.

Also, the link to the ASLR effectiveness paper was valid in 2004. That means they're still testing 32-bit systems, and they effectively say that scanning 64-bit systems the same way is infeasible.


Yeah, that's outdated, but there have been a few related bugs in the dynamic linker not that long ago, like LD_AUDIT: http://seclists.org/fulldisclosure/2010/Oct/344


IIRC that worked in the summer of 1992, and was blocked sometime that year or the next.


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=59159 is an older reference, and that's 16 years ago..


I love the "suckless" philosophy [1], and I'm glad that Stali sticks explicitly to it.

[1] http://suckless.org/philosophy


Too bad it produces buggier, suckier code.


> "Follow the suckless philosophy"

I used to love the idea of the suckless philosophy, until I forked `st` to make it work on OS X by removing globals and by separating the terminal emulator logic from the X11 logic, and asked[1] on the mailing list if they'd be interested in merging my changes. The responses I got made me never want to use any suckless software ever again.

[1] http://lists.suckless.org/dev/1408/23366.html


Wow that was really disheartening to read. Definitely didn't expect that level of childishness.


I thought we were happy to leave statically linked OSes behind back in the '90s, other than for embedded deployments....


One of the first things I read was support for arm and RPi so I assume it's targeting the IoT.


RPi is used for a lot more than IoT.


Shocked that it takes a fringe project to be promoting static linking and cleaning up the filesystem.

Would love to hear the linux grandfathers chime in...


You should rather be shocked that the Linux world still believes that it is anywhere near this century when it comes to modernizing the filesystem hierarchy.

* NeXTSTEP had ~/Apps, /LocalLibrary, /LocalApps, and so forth back in the early 1990s.

* The "/usr merge" first happened in AT&T Unix System 5 Release 4. SunOS 5 (a.k.a. Solaris 2) introduced it a few years after NeXTSTEP introduced its directory hierarchy. AIX gained it with version 3.2 in 1992.

* Daniel J. Bernstein's /package hierarchy (for package management without the need for conflict resolution) has been around since the turn of the century. Their major problem was the idea that one had to register things with what was to most of the world just some bloke on another continent. But they had concepts like a hierarchical package naming scheme ("admin/", "mail/", &c.), self-contained build trees with versioned names (allowing for side-by-side installation of multiple versions of a package amongst other things), and an "index" directory full of symbolic links.

* * http://cr.yp.to/slashpackage.html

* * http://cr.yp.to/slashcommand.html

Previous Hacker News discussions with more: https://news.ycombinator.com/item?id=10356933 https://news.ycombinator.com/item?id=11647304


Gobolinux is another Linux distro that ditches the FHS: http://gobolinux.org/


Static linking is a virtue now? I don't understand the advantages.


Google Go and Rust promote it. However, deploying web services is different from building a Linux distro.


I don't know about Go, but Rust doesn't promote it. Rust compiles a whole binary at once (for full program optimisation), but that binary can be a dynamic library or executable. How you then build the operating system on top of that is up to you.


Recently, ripgrep was on the frontpage. It says "Linux binaries are static executables" [0]. If I run ldd on cargo, it spits out the usual glibc dependencies, but Rust libraries are statically linked. It seems to be the default behavior of cargo? I would describe that as "promoting static linking".

Personally, I don't judge this as good or bad. There is no simple answer. Static linking has clear disadvantage wrt security patches. On the other hand, it makes little sense to dynamically link tiny libraries, e.g. a queue data structure.

[0] https://github.com/BurntSushi/ripgrep


> It says "Linux binaries are static executables" [0].

They are. Running `ldd` on cargo doesn't confirm or deny this. You need to run `ldd` on the binary distributed:

    $ curl -sLO 'https://github.com/BurntSushi/ripgrep/releases/download/0.2.1/ripgrep-0.2.1-x86_64-unknown-linux-musl.tar.gz'
    $ tar xf ripgrep-0.2.1-x86_64-unknown-linux-musl.tar.gz 
    $ ldd ./ripgrep-0.2.1-x86_64-unknown-linux-musl/rg
            not a dynamic executable
This is not the default output on Linux. You need to compile with musl instead of glibc: https://news.ycombinator.com/item?id=12565268


To elaborate on what burntsushi said:

By default, Rust statically links all Rust code. However, glibc isn't usually statically linked, so Rust doesn't either. You can use musl instead of glibc to get 100% statically linked binaries.
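A sketch of how that's done with rustup/cargo (the binary name is whatever your crate builds, and you may also need a musl toolchain installed):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    ldd target/x86_64-unknown-linux-musl/release/yourprog   # prints "not a dynamic executable"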


The problem here is that the choice is one extreme or the other, never the happy medium.

The extremes are statically linked everything and dynamically linked everything.

Bryan Cantrill deplores this part of the design of Go. He decries the fact that rather than sitting on top of a HLL-function-call binary interface, with the portability layer being HLL function calls like read(), write(), close(), socket(), execve(), Go produces executables that hardwire one specific instruction set's kernel system call traps.

See https://news.ycombinator.com/item?id=11392119 for his own words.

GCC allows one to selectively statically link various libraries in an otherwise dynamically linked executable. (clang had yet to gain this functionality, last I checked a year or so ago.)
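With GNU ld that selective linking looks roughly like this (the library names are just examples):

    # pull libssl/libcrypto in statically; libc and everything after -Bdynamic stays shared
    cc main.o -o main -Wl,-Bstatic -lssl -lcrypto -Wl,-Bdynamic -lpthread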


Bryan would have to elaborate an awful lot on the evilness of using system calls to make that a more general argument. (we're not supposed to use the kernel now for some reason?) It looks like it was just kind of a pain in the ass for the Joyent guys because of what they were doing specifically, which was pretty specialized.

(there's a mean and almost clinically insane blog post about Go by one of the other Joyent guys, who retired a week or two after writing it, which also doesn't shed much light but is funnier. You've got to wonder what their deal is with the Go guys, exactly...)


Yeah, on Linux, the system call interface is the supported interface for interacting with the kernel. Of course we are going to use it since 1) it's supported, 2) it's stable, more stable than e.g. glibc has ever been, not to mention we're not locked to a particular libc vendor, it just works on embedded systems which don't use glibc, 3) it allows us to make system calls without going through all the cgo machinery and without having to switch stacks.

On Solaris, the system call interface is not a supported public interface, so I made the Solaris port use libc through cgo.

Solaris made the decision that the supported public interface is through libc. That's a fine decision to make if you are the sole vendor of libc. But, that doesn't mean there's anything wrong with different operating systems making different decisions.

On Linux we use the Linux rules, and on Solaris we use the Solaris rules.

I will tell you this though, having done numerous Go ports and having brought up several Go compiler targets, it's much, much simpler to port Go if your target is something that allows static executables and making system calls.

Having done linux/arm64 and solaris/sparc64, I know exactly how much harder one is than the other.


> On Solaris, the system call interface is not a supported public interface

To be honest, I did not know that, I guess it's never been an issue for me. That sheds a little light. Thanks.



One advantage is that it improves the compiler's ability to optimize. Obviously that matters for some applications more than others.


A lot -- maybe not a majority, but a sizeable number of people -- think dynamic linking is potentially exploitable due to its complexity, has real-life deployment problems (e.g. due to applications depending on a specific version of a library, or simply due to complacency in building and distribution) and that the advantages it offered twenty years ago are offset by larger hard drives, better and faster network connectivity, and better updating systems.

In other words, that its cost is no longer as easy to justify as it was back in the 1990s.

I have not studied the problem enough to be able to comment on it in its entirety, but there is at least some merit to a few of these claims:

* Static linking was a huge nuisance back when Unices introduced dynamic linking because keeping a system up to date was a very different affair. There was no apt-get update, apt-get upgrade. The OS vendor (often the hardware manufacturer) would keep his system up to date. For third-party software, you often depended on building programs from source (oh, yes: no autotools/cmake/whatever to deal with junk Makefiles, either, although this was partly offset by the fact that a lot of developers still knew how to write a makefile). An update in a single library could mean a bunch of manual rebuilds, sometimes followed by manual deployment. This is no longer the case, really. It is a little schizophrenic that we do daily builds for continuous integration, but insist that there's no frickin' way we're going to be able to rebuild packages that depend on a library and distribute them on time. The problem is certainly tractable, albeit at higher resource expense (imagine what an update in glibc would entail).

* On the other hand, it's not like dynamic libraries have fulfilled the promise of never using an out-of-date version ever again. It's not at all uncommon for programs to bundle their own version of shared libraries. A while back, when I first came across material discussing this problem, it turned out that a lot of programs of my system did it -- OpenOffice is the one I most distinctly remember, but there were others, too. As for other operating systems where package managers are less common (cough Windows cough), this situation is pretty much the norm when it comes to any library that Windows Update doesn't take care of.

* The linking process is extremely complex, and it has been found to be vulnerable. The vulnerabilities were patched, however. There is always a degree of uncertainty in affirming that vulnerability is inherent to complexity. Plus, if it is, then we have a lot of really bigger things to worry about, like that huge pile of code in the kernel which is orders of magnitude more complex than a dynamic loader.

Edit: I guess the best way to sum up my (current) understanding of the matter is that the case for static linking isn't as weak as it was a long time ago, but I don't think the case against dynamic linking is spectacular enough to be worth a full migration of everything. That its usefulness is diminishing, at least in some fields, is sufficiently proven by e.g. its adoption in Go. But I doubt that going back to static linking is the universal solution that it is sometimes advertised to be.


>offset by larger hard drives, better and faster network connectivity, and better updating systems.

Static linking puts more pressure on RAM and on each level of cache (because if 2 running programs share the same library, there are now 2 copies of the library contending for those resources) than dynamic linking does.

That strikes me as more important than increased use of the resources you list.


That's not always true, you don't need to have two full copies of the library at all times. See Geoff Collyer's explanation here: http://harmful.cat-v.org/software/dynamic-linking/ (but note that the memory use that he cites may not be that relevant).

I am also not convinced that this is a significant impediment for all workloads. For a lot of applications, the time wasted due to inefficient cache use is a fraction of the time spent waiting for stuff to be delivered over the network.


> cleaning up the filesystem.

4 years ago: https://fedoraproject.org/wiki/Features/UsrMove


Arch already pretty much does the same thing this does. Of course, Arch defaults to using systemd, although it's easy to change...


I still don't understand why systemd sucks


That's a loaded question, and I'm only qualified to answer from my perspective and experience. My biggest gripe with it has always been that it is alpha-quality software, even today, that has a central role in an otherwise mature OS ecosystem. It has been widely adopted (some would say forced or tricked into adoption by a few distros) and therefore all the major Linux distributions are now running at an alpha level while its creators try to figure out exactly what they want it to be. That was the state of Linux in the late 90s, a state that it overcame during the 2000s, but now it's regressing again.

First it was "just an init to replace SysV", something I could get behind, and back in 2012 or so I was actually excited about it. Then it started growing, replacing individual components of GNU/Linux with a monolithic mega-app that has more in common with Windows NT based OSes than with anything UNIX-like. Gone is the philosophy of "do one thing and do it well", replaced with "do everything no matter the quality of the results".

I've always been a Slackware user since I started messing with Linux in the late 90s, and these days I find it getting faster and better while mainstream Linux distros slow down and grow more and more bugs. One of my benchmark systems for observing the growing bloat of modern OSes is an Atom based netbook from around 2010. It shipped with Windows 7 Starter, which it ran acceptably but not great.

Recently I tested Windows 10, Slackware 14.2, Ubuntu 14.04, Ubuntu 16.04, Debian unstable, OpenBSD, and Elementary OS Loki on it. Slackware was the fastest OS on it by a wide margin, followed by OpenBSD, then Debian, Ubuntu 14.04, Elementary, Windows, and Ubuntu 16.04 dead last. Guess which of those (not counting Windows) do not have systemd? Yep, Slackware and OpenBSD. Maybe it's a coincidence, but given how Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update, whereas Slackware on the same machine continues to chug along with no issues, that's telling.

All of that said, systemd was and maybe still is a good idea, if only they can stop trying to reinvent the wheel and instead fix the spokes they broke along the way. I can't say I'm happy about eroding the UNIX philosophy from Linux, but if systemd is the future of Linux then it damn well needs to be a stable future.


The irony of the eroding the UNIX philosophy from Linux is that most real UNIX systems, meaning AIX, HP-UX, Solaris, NeXTSTep (cough macOS), Tru64,... do have something similar to systemd.

Sometimes shouting "UNIX philosophy" in GNU/Linux forums reminds me of emigrants that keep traditions of their home countries alive that are long out of fashion back home.


The sarcastic irony is that Solaris engineers implemented a fully functional systemd(8) long before systemd(8), by designing and implementing SMF, which went on to break world records for startup and shutdown speed on what is now an ancient AMD Opteron system (I think it was either a v20z or a v40z). I wanted to include a reference to the Slashdot article from the time, but try as I might, I can't find it any more.


XML and CDDL, yuck and yuck.

Go drool some more over Cantrill, will you?


No, you're actually wrong, IIRC.

The other systems still fundamentally do init's job, as well as managing services. They DON'T do everything, including:

-replace cron

-handle dynamic device files (udev's job)

-set the hostname

-replace syslog

-replace inetd

-automount

-anything I forgot


AIX, HP-UX, Solaris and NeXTStep were not written by the original authors of Unix and its philosophy. Linux has always been closer to the philosophy than many of these, actually. So much so that it has imported concepts from the successor of Unix, Plan 9. Linux's procfs, which exposes system information as files within the filesystem, is a concept taken from Plan 9, which was the OS people like Ken Thompson and Rob Pike envisioned as the future of OSes and the replacement for Unix.

Those "traditions" you're speaking of not only are not outdated, but they never were fully realized to their ideal outside of Plan9, which attempted to make everything accessible through file APIs.

The suckless crowds are not about reproducing the original Unix. They are about carrying the torch of that philosophy, and the original unix was just the beginning, not an end in itself. Here's an example of software from suckless that follows Plan9 : http://tools.suckless.org/ii/


Actually the only thing I find positive about Plan 9 is that it gave birth to Inferno and Limbo, neither of which has much to do with the UNIX philosophy.

Those who worship Plan 9 as the embodiment of UNIX culture should actually be aware of what its authors think about UNIX.

"I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!

I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy. "

https://interviews.slashdot.org/story/04/10/18/1153211/rob-p...


For me at least, unix philosophy is Thompson, Ritchie, Raymond, Stevens, etc., not some obscure commercial things.


Which validates my point of long gone traditions.

The world isn't a PDP-11 anymore.


commercial unixes haven't been relevant for decades. Unix is linux these days.


How about the second largest desktop OS in the world, OS X (or now macOS)? That's BSD.


OSX is actually a certified UNIX http://www.opengroup.org/openbrand/register/


and systemd was modelled partially after its launchd


Mac OS X is not BSD: https://wiki.freebsd.org/Myths


Actually not from the point of view of our enterprise customers, the same that still happily pay for mainframes.


I don't believe Mac OS X/OS X/macOS's launchd is as complicated as systemd.


Ubuntu 16.04 is not meant for lightweight machines - for example, the Unity desktop assumes you have 3D acceleration (which sucks for using in a VM). It's not systemd that makes your atom netbook slow (well, assuming you're using Unity...)

Re: systemd itself, I couldn't care less about the bells and whistles, but every time I go back to fiddle with a sysv init script, I yearn for either upstart or systemd...


Ubuntu actually has a low graphics mode so that it can be used in VMs

https://insights.ubuntu.com/2016/09/19/low-graphics-mode-in-...


I stand corrected. Maybe I'm thinking of 14.04?


Nope, Xfce on all the Linux distros on that machine, except for Elementary. I was surprised to find that Elementary's Pantheon was faster than Xfce on Ubuntu 16.04.

Besides, it wasn't a test of DE performance alone, it was a combination of factors including boot time, script run time, video encode/decode, build from source time, and so on. Yes, DE performance was also a metric, and for fun I did load Unity on both 14.04 and 16.04 just to see what would happen. If I were basing it on DE performance alone and used the default DE for each distro, both Ubuntu versions would be the slowest by far.

Also, 3D acceleration was not an issue, the Intel video hardware in that machine is fully accelerated in Linux and OpenBSD.


It didn't just start growing. There has been talk of it being the base of the OS (excluding the kernel) since 2012 (as mentioned here http://0pointer.net/blog/projects/systemd-update-3.html).

> We have been working hard to turn systemd into the most viable set of components to build operating systems, appliances and devices from, and make it the best choice for servers, for desktops and for embedded environments alike. I think we have a really convincing set of features now, but we are actively working on making it even better.

I'm pretty sure I saw others posts, but my googlefu is a bit weak.

So, the expansion was in the plan from nearly the beginning (for good or ill)


Thanks for that. When I had first heard of it back in 2012, it was right after getting my first Raspberry Pi, and a friend had suggested trying to port systemd to it to improve boot speed. At that time, all I was able to find out about systemd was that it was a faster init. There was nothing I saw back then about the authors wanting to replace all of GNU with it. It was several months later, after the update to systemd broke my Arch installation, that I started reading about how it's growing too fast and rather than focus on code quality and stability, the authors were rushing to make it this huge replacement for GNU.

Since then I've followed its progress, and while my overall impression remains slightly negative, I'm hoping it improves to the point that it is stable and mature enough for daily use. Until then, I happily run Slackware for serious work and Windows 10 for games.


> There was talk of it being the base of the OS

To be fair, that's pretty much the definition of an init system, innit?


When Linux came along in the mid-90s, most commercial Unixes had left behind the Unix philosophy, with their own integrated, object oriented desktop environments and sophisticated administration tools. Only Xenix, the engine that powered many an auto shop's rinky-dink five-user database setup, stuck with the model of text terminals and CLI administration with simple tools.

Of course Linux took off, and it sort of reset everything back to stone knives and bearskins. But systemd itself is modelled on Solaris SMF, which is world-class industrial grade service management for large server deployments.

Appeals to the "Unix Philosophy" are the province of reactionary greybeards. Unix philosophy means nothing in the modern era.


Out of curiosity, what were you using to measure the speed of the various operating systems you mentioned?


For CLI stuff (compiling, file operations etc) it's the time command, for video decode/encode it's built into ffmpeg, and for graphical stuff it's mostly subjective. There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable. As for the DE, let's just say that Xfce under Slackware and OpenBSD is quick and peppy while Xfce under Debian-based distros is anything but. Ubuntu seemed to be the slowest for that test, and Elementary's Pantheon desktop is a mixed bag. I have considered running the Phoronix test suite for a more accurate result.

Also note that I did have to tweak OpenBSD a little to get it on par with Slackware on the desktop, though the stock install is still faster than the more "modern" Linuxen for most tasks.

And for those who wonder why I do all of this: It's a hobby. It's more fun than watching TV on my off days, and it keeps me up to date on the latest goings-on in the OS world.


>There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable.

This is what I was after. I can't imagine ffmpeg running slower just because of systemd or unity. But yeah, if you're running on a 2010 netbook I wouldn't be surprised if it ran better under Xfce.


I'm an Ubuntu LTS user. Compared to Upstart in 14.04, Systemd is an improvement. ;)


I am also an Ubuntu LTS user, but more a developer than a system administrator.

I migrated from 14.04 LTS to 16.04 recently. I am using a NAS drive. After my do-release-upgrade -d, the internet was not working anymore because of a systemd circular-dependency problem. I had to learn how to create systemd configuration files to describe remote filesystem mounts. It was not easy to find documentation on systemd.
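For reference, a minimal unit for that kind of mount looks roughly like this (the share and paths are placeholders; the unit file name has to match the mount point):

    # /etc/systemd/system/mnt-nas.mount
    [Unit]
    Description=NAS share
    Wants=network-online.target
    After=network-online.target

    [Mount]
    What=nas.local:/export/data
    Where=/mnt/nas
    Type=nfs

    [Install]
    WantedBy=multi-user.target

(Alternatively, a plain fstab entry marked _netdev should get turned into an equivalent unit by systemd's fstab generator.)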

When my computer enters sleep mode, I can wake it by pressing Enter. The next time it enters sleep mode, I cannot wake it up anymore.

My system used to boot in high resolution. Now it uses huge fonts that make the boot messages impossible to read (25 lines on a 23" screen!). I still do not know how to fix it.

It may not be only the fault of systemd, but migration from 14.04 LTS to 16.04 LTS was a very bad experience for me.


Ubuntu upgrades almost always suck, but the upgrade from 14.04 to 16.04 was the worst I ever saw. Nothing worked, my system was broken beyond rescue. Pulseaudio all over again.


Upstart solves the same sysv-init problems that were the motivation for systemd.

And systemd should be an improvement - it started after upstart, and hit production well after upstart :)


> gets progressively slower with each systemd update

Windows 10 will not be left behind! I mean, ahead!

Microsoft recently pushed out the Anniversary Update, which made at least my Win10 laptop noticeably - as in extra 10 or 15 seconds - slower waking up, and generally more sluggish here and there.

(How convenient, 400 million PCs need an upgrade now. Mwahahaha.)


I am puzzled that you had an Atom based netbook from around 2010 that shipped with any kind of Windows 10.


That was a typo, it shipped with Windows 7 Starter. Fixed now, thanks! :-)


> Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update

I haven't noticed any slowdown on my Ubuntu 16.04. I wouldn't even know about the controversy except that people keep mentioning it.


[flagged]


Please don't post like this here. We ask that you comment civilly and substantively or not at all.


> WE MUST KILL IT WITH FIRE.

You know, it's funny, I never said that, in fact I said once it matures systemd is actually a good idea.

Perhaps you're projecting a bit?


static linux isn't really a reaction to systemd. What it is a reaction to is exemplified both by what the blurb on its WWW site spends most of its time on and indeed by its very name: dynamic linking.

"Executing statically linked executables is much faster" ... "Statically linked executables are portable" ... "Statically linked executables use less disk space" ... "Statically linked executables consume less memory" -- http://wayback.archive.org/web/20090525150626/http://blog.ga...


I refuse to believe that disk space usage is less. A dynamically linked binary can load other libraries from its dependency list at run time, and other binaries can use them too.

For static executables, the same dependent library has to be linked into every binary. Maintenance is a pain in the neck.


> I refuse to believe that disk space is less as it can leverage other libraries in the deps list to load at run time and other can use it too.

If I remember correctly, the argument goes something like this: modern compilers, i.e. something as recent as the Plan 9 toolchain or a GCC version from this millennium, usually compile in only the necessary code with static linking, and not whole libraries. With dynamic linking, you always have to load the whole library into memory, which supposedly pays off only with heavily used libraries such as libc (e.g. think about how many libraries used by Firefox/Chromium are used by other programs).

So the hope is (combined with a general striving for small programs) that, since text pages are shared between processes and statically linked programs only include the absolutely necessary code, you end up with a smaller memory footprint. (I'm not sure whether you save disk space, but I don't think that would be a problem nowadays. Heck, look at Go binaries.)

And I guess the linker could do more whole-program optimization on a statically linked program, since all the code is available.
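A sketch of the usual knobs for that (GCC/binutils-style flags; file names are made up):

    # let the linker drop unreferenced functions/data from the static libraries
    cc -Os -ffunction-sections -fdata-sections -c app.c
    cc -static -Wl,--gc-sections app.o -o app
    # or hand the linker full intermediate representation and optimize across the whole program
    cc -flto -static app.c -o app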

> For static executable the same dependent library will have to linked to all binaries. Maintenance is a pain in the neck.

Generally you would want to have a proper build system. In case of StaLi, they have one global git repository (/.git). An update is simply "git pull && make install".

I don't know if this process is slower or faster than binary updates, but if they strive for small programs/binaries, then I guess it doesn't matter as much.

Source-based distributions, such as Gentoo, have the advantage that you don't have to wait for someone to publish an upgraded binary, you can compile it yourself, instead. This might give you a slight edge for security vulnerabilities.


> you always have to load the whole library into memory

Not really. You do have to mmap it, but it can be demand-paged (executables are handled this way on most modern systems, which is why compressed executables are usually a bad idea). IIRC, what saves time is mostly not having to do the actual linking part where the references are resolved. This can be precomputed and stashed in the binary (an optimization well-known to Gentoo+KDE users), but that confuses some package managers, breaks some uses of dlopen()/dlsym(), and has issues with ASLR.


Modern compilers?!

Static linking was already like that in MS-DOS compilers.


> Modern compilers?

I was being partially ironic. It seems that the common assumption is still that you (statically) link in the whole library. Then, of course, binaries get really huge. But when you link in only what's necessary, the overhead is probably relatively small (when was the last time you used all of libc?).

The other thing is (which you can see in this thread, as well) that people seem to think that you can only do things the way we are doing them now, without ever questioning whether these things are still appropriate and how they originally came into existence. ("There has to be dynamic linking", "we have to use virtual memory", "there have to be at least 5 levels of caches", etc.)

To my knowledge, all the reasons regarding saving space, security, and maintenance were all made up after the fact (and aren't necessarily true, even (or especially) with modern implementations). Originally, dynamic linking was intended for swapping in code at runtime (was it Multics or OS/360?), which you can't do anymore today.

Furthermore, dynamic linking (as it is done today) is really complex. In contrast, static linking is much simpler (=> fewer bugs/security holes). I think we should reconsider if the overhead is worth it or not (do you really care whether your binaries make up 100MB or 200MB on your 1TB HDD?).

For embedded devices: yes, space does matter, but you probably don't run a full-fledged Ubuntu desktop on your IoT device, anyway. You use different approaches (e.g. busybox, buildroot, etc.).


Because people like to complain more than they like to actually build a usable alternative.

Edit: here's a great example from one of the links in the other comment:

suckless complaining about "sysv removed" in systemd. Link takes you to this changelog entry:

"The support for SysV and LSB init scripts has been removed from the systemd daemon itself. Instead, it is now implemented as a generator that creates native systemd units from these scripts when needed. This enables us to remove a substantial amount of legacy code from PID 1, following the fact that many distributions only ship a very small number of LSB/SysV init scripts nowadays."

So, code was removed from the init daemon itself and moved into a standalone utility that does one specific job.

Systemd is now both being blamed for bloating init, and for splitting functionality out into a separate tool that does one thing.


runit on void linux runs great. it's tiny, easy to understand and very fast. and it doesn't try to take everything over.


> Because people like to complain more than they like to actually build a usable alternative.

More like people have had perfectly usable alternatives but now the hivemind is more or less forcing something else onto them. I don't need to build a new init system, I have one that works, thank you. Please don't give me systemd.


The one I had did not work so I am happy to have systemd now.


They built many. runit, s6, nosh, bsdinit, openrc.

But yeah, it is being blamed for bloating init, because for one thing, cron isn't init's job. And that's just the start.


> cron isn't init's job.

Which is why other platforms started moving from cron to init years ago?

OS X:

> Note: Although it is still supported, cron is not a recommended solution. It has been deprecated in favor of launchd. [1]

Solaris:

> cron has had a long reign as the arbiter of scheduled system tasks on Unix systems. However, it has some critical flaws that make its use somewhat fraught. [...] cron also lacks validation, error handling, dependency management, and a host of other features. [...] The Periodic Restarter is a delegated restarter, at svc:/system/svc/periodic-restarter:default, that allows the creation of SMF services that represent scheduled or periodic tasks. [2]

[1]: https://developer.apple.com/library/content/documentation/Ma...

[2]: https://blogs.oracle.com/SolarisSMF/entry/cron_begone_predic...
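For comparison, the systemd version of a cron entry is a timer/service pair, roughly like this (the names and the command path are illustrative):

    # /etc/systemd/system/backup.timer
    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

    # /etc/systemd/system/backup.service  (activated by the timer of the same name)
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/run-backup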


Okay, so I got that one wrong. I still think it's a bad idea, but I did get that point wrong, true enough.

However, the other points I've made are still correct and accurate.


Here are a pair of links that you may have missed under the "Don’t use systemd (read more about why it sucks)" line item.

http://suckless.org/sucks/systemd

http://uselessd.darknedgy.net/ProSystemdAntiSystemd/

The first link reads like a sort of 95 Theses, while the second is a dissertation on why people will never get along on the subject of systemd.


I take the attribution in the first link (references to "Führerbunker" and "Führer") to mean that the author is comparing Lennart Poettering to Hitler. That's not funny, it's just very, very inappropriate.


I didn't write the linked content, nor did I choose the links.

If you feel strongly about this, consider contacting the authors of the story.


I didn't mean to imply that you did; sorry if that came out wrong.


Hating systemd is like hating Hillary Clinton at this point. It's well past time to suck it up and make peace with your next init system/President because the only viable alternative(s) are far worse.


Fortunately, picking an OS and a distro is not like voting in US presidential elections. In particular, there are more than two viable options.


And...you can actually choose.

Not only does your vote count, when it comes down to it, yours is the only vote that counts.


Eh, arguably from ecosystem effects, other people's votes count a lot too. I don't think I'd want to be the sole user of best init system in the world!


True, but you could if you wanted to.


American democracy: suck it up, you don't really have a choice.


As someone who has found runit to meet my needs I don't think I have any reason to make peace with systemd.

I feel like part of what people object to about systemd is the 'one true way to linux' thing.


Right now what I'm objecting to with systemd is that this system replaces syslog, has been created and driven by the enterprise Linux distros with full-time, experienced Linux devs, and has been released and used in production for years...

... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.


That is "replaces" syslog (you can still have it forward to syslog if you insist) is one of the best parts. After getting used to journald I have no desire to ever go back to dealing with syslog.

> ... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.

What are those "dirty hacks"? You can trivially use logstash or similar or you can forward log entries to a remote syslog-compatible endpoint. Incidentally the same that people usually do with syslog.


Is switching to a BSD the move to Canada option? I know several people who have migrated to various BSDs because of the ugliness of systemd.


Come on in, the water's fine here! Honestly, I use FreeBSD/OpenBSD for everything I need and anytime I have to deal with some linux monstrosity it's like taking a day trip from Toronto to Detroit.


Maybe they're far worse to you..

You also have the option of creating something yourself if everything sucks so much.


I cannot even begin to describe how silly your comment is. Since when were politicians even comparable to programs? Do we "elect" a init system, as one nation united under Torvalds?

I'm hoping I was just trolled by an HN-flavored Markov Chain.


> Since when were politicians even comparable to programs?

Since people learned the power of the metaphor.

> Do we "elect" a init system

For some distros? Sure. By its nature, Linux, GNU and the open source software that goes into the ecosystem allow people to create new distributions, or choose one of the many that exist. This choice is, in some small way, like a vote. If systemd were really that bad, enough people would work around it to make its adoption much more problematic.

If you want more than that, some distributions literally vote on features like this, and have voted specifically on systemd[1].

> I cannot even begin to describe how silly your comment is. ... I'm hoping I was just trolled by an HN-flavored Markov Chain.

That doesn't seem very constructive.

1: https://lists.debian.org/debian-ctte/2014/02/msg00294.html


I do retract my complaint about comparing politicians to programs. In its place, I complain about the process of electing a President being different from voting on an init system.

The most important point here is that distributions vote on which init system they elect. We are not all electing one init system to rule them all, across Linux. Distributions are nation-states of varying size that follow similar but sometimes incompatible rules, all derived from the same core tenets and program. So we're electing governors from the same political parties, more or less.

I think telling people to suck it up and just accept systemd as their one true init system is just silly. Regardless about how you feel about Clinton, there are always reasons to use something else.

If you need a barebones system, or something for experimentation, or something that is hardened at the price of flexibility, that is an applicable choice, and one you can make from the comfort of your own home. You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.

And I could go on and on. But you're right; I suppose I could, in the end, begin to describe how silly that comment was. Even if the explanation ended up being really unwieldy and not my best writing. It might not have been wholly constructive either, but we're generally all here to have a good time.

My point is, it's a silly, leaky metaphor. And telling people to suck it up and use an actually useful tool in the comments for a distribution that's written as an elitist hobby project is similarly silly. These people aren't picketing your Debian or Arch systemd parties. They're just doing their own dang thing.


All metaphors and similes are leaky. The point is to focus on the ways it works and doesn't work, because each has the possibility to expand your thinking on a topic. The original comparison could have only worked in a singular facet, yet that would still make it a valid, correct and possibly useful simile. Here you've expanded on some ways the two things are different, which is also generally the point of using an analogy, in that it promotes that thinking as well.

> You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.

Well, you can (in that you can fork the rules and structures); it's just that finding the resources (people and location) to make use of this new government is hard, because we are currently resource constrained. In the past, when land was plentiful, this happened. It happened to some extent with the Pilgrims (although that was mostly a separation from the prior church rather than the government, though I don't doubt it was also viewed as a partial separation from the government, given the distances involved). If we start colonizing Mars at some point, I'm pretty sure there will be some more separatist movements and forking of governments.

Another way to look at this is that you can fork the government right now, you just can't supersede the rights of the current government you are part of. To follow the resource and forking metaphor, you can virtualize governments to your heart's content, but in cases where your rules conflict with the host government, you can emulate the result but you can't enforce it. That is, Ring 0 doesn't care what you think you can do, the rules are the rules.


Actually, it was a simile, not a metaphor.


Yeah, I'm aware, and actually thought of that while writing the comment, and specifically chose metaphor. I think it still worked better to use metaphor because I think that's the more common way to relate the items in question, and being the more abstract of the two, metaphors obviously allow for similes.


In the Linux ecosystem, generally you use whatever the majority supports, or if you use an alternative you assume responsibility for supporting it yourself. Since the majority of distros, and soon the majority of upstream, are supporting systemd, what do you think is going to be used by most commercial Linux deployments?


> Do we "elect" a init system, as one nation united under Torvalds?

Debian selected it via an election.


See https://news.ycombinator.com/item?id=11834348 for an interesting contrast.


don't bring politics in here.

the god damn stuff is everywhere and it doesn't need to be.


That's not true. Unlike presidential elections, we don't all have to make the same choice. openrc, runit, s6, nosh, bsdinit... there are plenty of choices that are better.


[flagged]


You can't comment like this here. We ban accounts that continue to violate the guidelines this way.

https://news.ycombinator.com/newsguidelines.html


Oh, I didn't drink the liberal koolaid. Feel free to ban my account.


Because it's a Windows monolithic approach to startup, shutdown and dependency management, as well as being a poor copy of Solaris' service management facility, smf(5).


Hasn't stali been around for at least a little while now? I don't think "brand new" is quite right :)

Looks like a neat distro, though. It's been on my bucket list to try.


Sucks less until it starts to suck more. It's religion not science. Classic NIH.


I don't doubt that this won't fly in the traditional sense, but at the very least NIH is also a driver of diversity. It is clearly interesting to see what can be done there.


Nonsense. They have a clear philosophy on what makes software suck less, it doesn't matter where it comes from.

http://suckless.org/philosophy


Exactly my point. Philosophy has its place in software engineering but often it's not on a pedestal.


I was responding to the "NIH" comment. It's not NIH syndrome if it doesn't matter where it comes from.

Philosophy has its place in software engineering but often it's not on a pedestal.

That's a philosophy in itself, as is pragmatism or adherence to science. You just happen to disagree with theirs.


Meh. I don't buy the "moderate valuing of philosophy is a philosophy" argument. If everything is a philosophy then nothing is a philosophy.

My point is, suckless products can actually suck despite following their "philosophy." It's not only the UI, although strictly adhering to the "UNIX" philosophy has its flaws. No, it's the suckiness of the source code that I'm talking about, e.g. dwm is composed of ad-hoc internal abstractions and special cases based on arcane X11 WM domain knowledge, not to mention the suckless comment "philosophy": that comments are a sign of bad code, so don't use them.

Their "philosophy" is narrow-minded and overly simplistic, like a religion. It's also not based in truth, just biased anecdotes, e.g.

"Most hackers actually don’t care much about code quality."

...What? Is there a source for this information?

Their "philosophy" only makes sense for the smallest toy programs. Notice how there is no suckless kernel, It would likely be a heaping pile. Dogma loses, science wins.


> Meh. I don't buy the "moderate valuing of philosophy is a philosophy" argument. If everything is a philosophy then nothing is a philosophy.

Nope, all ideas are philosophy. Just as all matter is made up of atoms. You can't escape philosophy no matter how hard you try.


Except "all matter is made of atoms" has been empirically shown and is scientifically consistent, whereas "all ideas are philosophy" is just meaningless abstract gibberish. It's not the same.


Luckily pragmatism really is a philosophy, with an actual documented history. Is that scientific enough for you?


It doesn't matter. Engineering disciplines don't need a philosophy. Imagine if bridge builders had bridge building philosophies. It's nonsense.


20160825 stali for RPi stapi.img.gz available for download

Now available for the Raspberry Pi too, it appears, if I'm decoding that line correctly.


This project has been around for a while.



> Achieve better memory footprint than heavyweight distros using dynamic linking and all its problems

If this is really effective in reducing executable size, it's supremely ironic, since the original point of dynamic linking was to reduce the overall size of groups of executables by sharing common function libraries.


More likely, they're saying mainstream distros are so bloated even dynamic linking can't save them: dynamic linking does reduce memory footprint, but that only mitigates bloat.


I think that "dynamic linking and all its problems" imply that some of the bloat comes from the dynamic linking itself. At least that's how I read it.

I believe the argument here goes beyond the mere fact of introducing a dynamic linker: dynamic linking may encourage duplication of functionality in applications, ending up in a bloated system (on the other hand, static linking doesn't seem to do anything to discourage bloat either).


I've read around here that static linking allows the toolchain to remove dead code (unused symbols) from the relevant libraries. The same cannot be done for shared libraries, because you never know which symbols are unused, if any.


Why is static linking supposed to REDUCE memory usage and binary size? Am I missing something really obvious?

Eg, if five different binaries are statically linked against the same version of OpenSSL, won't it be in memory and on disk four times more than it would be on a dynamic system?


Many applications only use parts of libraries, so the linker can throw the unnecessary parts away.


How do I get it to do that?

  $ cat test.c 
  #include <stdio.h>
  int main() { printf("Hello, World!\n"); }
  $ gcc -o test1 test.c && ls -lgG test1
  -rwxr-xr-x 1 6712 Sep 28 10:24 test1*
  $ gcc -static -o test1 test.c && ls -lgG test1
  -rwxr-xr-x 1 800904 Sep 28 10:24 test1*


I always use dynamic linking but I thought it'd be fun to try making a small, static hello-world. The following assumes x86-64 Linux (and is a total hack that "works for me" NO WARRANTY!).

    $ cat test.c
    static int sys_write(int fd, const void *buf, unsigned long n)
    {
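        /* fd, buf and n are already sitting in %rdi, %rsi and %rdx per the
           SysV ABI (at least when built without optimization), so only the
           syscall number needs to be loaded before the syscall. */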
        asm("mov $1,%rax; syscall");
    }
    
    static void sys_exit(int status)
    {
        asm("mov $60,%rax; syscall");
    }
    
    void _start(void)
    {
        char s[] = "Hello, World\n";
        int r = sys_write(1, s, sizeof(s) - 1);
        sys_exit(r == -1 ? 1 : 0);
    }
    $ gcc -nostdlib -static -o test test.c && ls -lgG test
    -rwxr-xr-x 1 1480 Sep 28 11:47 test
More to the point of your question, if you want to see the effect you're looking for, I think you need to link to a libfoo.a library that includes some module.o that your program doesn't use.
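
Something like this, say (a rough sketch; the file names are made up). The linker only pulls an object file out of a .a archive when it resolves an otherwise-undefined symbol, so unused.o below never ends up in the binary:

    $ cat used.c
    int used(void) { return 42; }
    $ cat unused.c
    int unused(void) { return 7; }
    $ cat main.c
    int used(void);
    int main(void) { return used(); }
    $ gcc -c used.c unused.c
    $ ar rcs libfoo.a used.o unused.o
    $ gcc -o main main.c -L. -lfoo
    $ nm main | grep unused    # prints nothing: unused.o was never pulled in

Note that this works at whole-object-file granularity, not per function.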


I like this one even better:

    $ cat test.c
    void _start(void)
    {
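        /* Builds "Hello, World\n" on the stack at -32(%rsp) from immediate
           values, then issues write(1, buf, 13) and exit(0) as raw syscalls. */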
        __asm__(
            "movabsq $6278066737626506568,%rax\n\t"
            "movq %rax,-32(%rsp)\n\t"
            "movl $1684828783,-24(%rsp)\n\t"
            "movw $10,-20(%rsp)\n\t"
            "movq $1,%rax\n\t"
            "movq $1,%rdi\n\t"
            "leaq -32(%rsp),%rsi\n\t"
            "movq $13,%rdx\n\t"
            "syscall\n\t"
            "movq $60,%rax\n\t"
            "movq $0,%rdi\n\t"
            "syscall");
    }
    $ gcc -nostdlib -static -Os -o test test.c && strip test && ls -lgG test
    -rwxr-xr-x 1 864 Sep 28 14:22 test
864 bytes! :)


No, the parent said "the linker can throw the unnecessary parts away". There is plenty of stuff in libc that my simple "Hello World" program doesn't use, yet gcc does not throw those parts away.

Theoretically I can see that it's possible; I just wonder how to actually do this in practice.
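
The closest standard mechanism I know of (a sketch, and it only helps when the library itself was built this way; mylib.c and app.c are made-up names) is to give every function its own section and let the linker garbage-collect whatever is never referenced:

    $ gcc -ffunction-sections -fdata-sections -c mylib.c
    $ ar rcs libmylib.a mylib.o
    $ gcc -static -Wl,--gc-sections -o app app.c -L. -lmylib

It does little for a glibc hello-world, though, because printf genuinely drags in stdio, malloc and locale code; a smaller libc shrinks the baseline far more.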


probably by using musl instead of glibc


That doesn't address scott_karana's question, as it doesn't speak to the cost of the necessary parts being duplicated in each executable.


It does address the question indirectly. The implicit argument is that the size of the duplicated common parts may be lower than the size of a single instance of everything that is bundled in OpenSSL. There is no proof, however, that this is the case.


If you took the "one tool one purpose" thing to an extreme, you'd only have one binary with OpenSSL compiled in, right?

Given some familiarity with the suckless ideology, it seems that it is the simplicity of static linking, not conservative use of disk space, which is its virtue. For, if anyone groks the tool chain perfectly but lacks disk space, I'd like to hear the secret.


> you'd only have one binary with OpenSSL compiled in

So then any process that wanted to use crypto etc. would have to call this binary?

That just seems like a very inefficient way of implementing dynamic linking...


You have the wrong metric. The metric should be the efficiency of the mechanism in implementing SSL.

Now look at things like Gerrit Pape's sslio, http://smarden.org/ipsvd/sslio.8.html . Note that the subject at hand purportedly uses several of Gerrit Pape's other tools.


Has anyone actually tested the claim of static linking improving performance?

The dynamic linking might not be free in terms of cycles but a huge part of the code is already in memory rather than on disk at the time the binary starts.
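
A crude way to test at least the start-up part (a sketch; numbers vary wildly with machine, libc and cache state) is to time many invocations of the hello-world from upthread, built both ways:

    $ gcc -o hello-dyn test.c
    $ gcc -static -o hello-static test.c
    $ time sh -c 'for i in $(seq 1000); do ./hello-dyn > /dev/null; done'
    $ time sh -c 'for i in $(seq 1000); do ./hello-static > /dev/null; done'

That mostly measures fork/exec plus dynamic-loader overhead rather than steady-state throughput, so it only speaks to one part of the claim.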


Is there any work on ASLR for quasi-static binaries? So you have no external linking (good for distribution and unsurprising dependencies), but still relocate at runtime.


In stali, or in general? For stali there's a slightly confusing FAQ entry: http://sta.li/faq / "Aren’t statically linked executables less secure?"

> it is simple to use position-independent code in static executables and (assuming a modern kernel that supports address randomization for executables) fully position-independent executables are easily created on all modern operating systems. [...] Thus we consider this as an issue with low impact and this is not a real focus for us.

So I don't know if they do it or not. If I read it correctly, they just ignore something they say is easy to do.

But in general, as mentioned, there's normally nothing that prevents you from compiling PIE C code even if you use -static.
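
If you want to check what a given toolchain actually produced (a sketch; readelf comes with GNU binutils, and whether -static and -pie combine cleanly depends on the gcc/libc version, with newer gcc adding -static-pie for exactly this case), the ELF type gives it away: EXEC means a fixed load address, DYN means the kernel can relocate it.

    $ gcc -fPIE -pie -o test-pie test.c
    $ readelf -h test-pie | grep 'Type:'
      Type:                              DYN (Shared object file)
    $ gcc -static -o test-static test.c
    $ readelf -h test-static | grep 'Type:'
      Type:                              EXEC (Executable file)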


I'm assuming that even with PIE, the relative positioning of the statically linked libraries is constant. That means you only need a single address leak to be able to ROP to anywhere.


That's how I understand it as well (it's a single rel/ro segment). But then again, in most cases a single leaked address is all you need anyway. ROP gadgets are everywhere...


-fPIE? Works on OpenBSD.


Why not Stalin? Catchier and easier to remember. :-)



Word.


Any plans to include i386?


It is quite hard to take seriously any project that uses the word "suck" so much to describe other projects, i.e., other people's work or their approaches.


His site is called suckless, so I guess it is his motto.


Why? Most software out there sucks. Computers suck. Most people are unbelievably bad at designing and writing software. Very few of us are RJ Micals or Adam Leventhals, or Jeff Bonwicks. Most people have major issues with reasoning and implementation when it comes to writing software. It's the reality.


Ironically, suckless.org is a bunch of plan9 elitists. The plan9 elitists are similar to solaris elitists, but have radically different beliefs, and probably believe that Bonwick and Leventhal and the rest should be jailed for working on ZFS, DTrace, Zones, SMF, etc.


Um... Sounds too much like Stalin?


[flagged]


Please make sure to comment civilly and substantively on Hacker News.

https://news.ycombinator.com/newsguidelines.html


> Ignore FHS of Linux, it simply sucks.

Wow. Just wow. So these guys don't understand UNIX. Like they're going to do it better than the fathers of UNIX at AT&T (who came up with the specification on which the FHS is based). Yeah, OK.

> Achieve better performance than any other x86_64 or arm distribution, as only statically linked binaries are used

Wow. So not only do they not understand UNIX, but now every process will have memory allocated for every symbol in the ELF header, any application linking against the same set of libraries will have its own copy of the same machine code, and startup will take longer, because every application will be larger. Any patching that needs to be done will require all affected applications to be recompiled. Why does this remind me of Windows?

> Achieve better memory footprint than heavyweight distros using dynamic linking and all its problems

Yeah, of course! Why try to rack your brains designing versioned interfaces and making sure libraries are backward compatible, when you can just statically link everything and create tremendous overhead in terms of maintenance and security? No point in writing linker map files and having the runtime linker present the correct version of the API to the application, riiiggghhhttt?
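
(For anyone who hasn't met linker map files, a minimal sketch of a GNU ld version script, with made-up library and symbol names:)

    $ cat libfoo.map
    LIBFOO_1.0 {
        global:
            foo_open; foo_read;
        local:
            *;
    };
    $ gcc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so.1 foo.c

The runtime linker can then bind each application to the symbol versions it was linked against, which is what makes in-place library upgrades safe.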

Is this a case of more Windows people getting onto the Linux bandwagon, and just completely not getting it, or what?


They're plan9 elitists. They're frankly a lot like you, except their system of choice is plan9, not solaris.

And on Linux, the FHS does kind of suck: there's no difference between /lib and /usr/lib, or /share and /usr/share, because we don't have a separation between OS userland and external userland, /sbin is rarely used for its intended purpose, and /opt is just weird. Honestly, you should just kill most of the cruft, symlink /<whatever> to /usr/<whatever> or vice versa, so we only have one of all of them, and leave /usr/local as it is, because we need it for software installed outside the package manager.


Windows? Have you ever heard of DLLs?


My Windows system currently has 21 copies of zlib1.dll on it, each shipped by a different program. While I have no doubt that Windows programmers have heard of DLLs, they clearly haven't grasped the idea of shared libraries.


The problem in that case is developers not putting their DLLs in a system directory! Windows will use WinSxS to avoid problems with different versions (DLL hell).

Once loaded into memory, DLLs save loading the same code twice. See http://www.ksyash.com/2011/01/dll-internals/


But no one uses them like that any more. Because we remember back when people dropped all kinds of shit into system directories, and the result was awful.


This hasn't seen an airing in a while, because it became accepted wisdom years ago. You're encouraging a bad design that the world has learned better than to employ.

* http://jdebp.eu./FGA/dont-put-your-dll-in-the-system32-direc...


> This way, the DLL will be somewhere (usually a "Bin" subdirectory) within the application's own subtree rooted at "\Program Files\company\product\".

This plainly shows just how poorly thought-out Windows is, to paraphrase:

we didn't design a directory tree like UNIX, and we don't have a proper runtime linker like UNIX, so we are going to swipe using the bin/ directory like on UNIX because we don't know where else to put stuff, and while we're at it, we will rename bin/ to Bin/ because we don't know any better, and also while we're at it, we'll dump all the dynamically linked libraries which the application needs into that mis-named Bin/ directory.

Because Windows doesn't have a proper operating system structure, nor does it have a clean structure (case in point C:\Program Files (x86)\, even if your Windows is 64-bit), nor does it have a runtime linker with functionality equivalent to that of ld.so.1 (the only way to get anything close to -Wl,-R/opt/local/lib/64 out of LINK.EXE is to use an XML manifest(!!!), and the $ORIGIN keyword functionality of ld(1) is science fiction for LINK.EXE).
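
For anyone unfamiliar with those two, a sketch of what they look like on the ELF side (app.c and libfoo are made-up names):

    $ gcc -o app app.c -L/opt/local/lib/64 -lfoo -Wl,-R/opt/local/lib/64
    $ gcc -o app app.c -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
    $ readelf -d app | grep -i path    # shows the RPATH/RUNPATH entry baked into the binary

The first bakes an absolute run path into the binary; the second uses $ORIGIN, so the run path is resolved relative to wherever the executable itself ends up living.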

How amateurishly bad can one get? Wintendo is a gift that just keeps on giving...



