Has modern Linux lost its way? Some thoughts on Jessie (complete.org)
282 points by fpgeek on Feb 10, 2015 | 260 comments



If you're running an up-to-date GNOME/KDE/etc. desktop, and not running systemd, for better or worse you're using a less-supported configuration. There's going to be other stuff that systemd will break, but in the long run you're better off using the upstream-recommended configuration.

That aside... I think it's pretty clear that modern Linux is not losing its way. If you go back even five years, and complain that your latest pre-release version of Debian won't suspend properly because it needs a password, you'll get laughed at by all the people who try to suspend their laptop and have it crash in some way or another.

We're now in the uncanny valley of software progress. If suspend weren't working, you'd feel hardcore, learn it, and deal with it. If it worked perfectly, then you wouldn't think about it. (When was the last time you thought about how OS X or Windows does suspend, or handles permissions on mounted drives, or whatever? When was the last time you read a manpage about launchd or one of its helpers? Hint: the manpages are useless.) If it has almost, but not quite, all the complexity needed to work perfectly, you get neither of these benefits.

But it would help things immensely if someone were to document all this, from a busy sysadmin's perspective, not from an upstream developer's. There are occasionally manpages, but they're written like git's. Lennart's "systemd for administrators" series is a start, but that's more of a pitch for why you should want to build your new systems on it, and it only covers systemd. Docs on how to figure things out when they go wrong would be most useful.


USB drives and suspend were working well for me for a few years before systemd took over the distro I used (Arch Linux). Of course, I set it up to work perfectly for me. It wasn't too hard.

It's the reason I started preferring Linux in the first place - I run into bugs on Windows and Mac OS X, and I can't make a system fully the way I like it, except with something like Linux. But I digress.

Things also work fine now, but during the transitions from "udev and group permissions" to HAL to udisks / consolekit to systemd / logind, stuff broke. It seems like the same group of well-meaning people worked on HAL then dropped it, worked on consolekit then dropped it, and ended up in the systemd ecosystem.

The various desktop / filemanager projects that started supporting one of these systems (to be more compatible with the others and theoretically make things easier for users) sort of got pulled from one to the next, whether it was better or not, because the previous one was unsupported and somehow getting flakier.

Anyway... it's not the end of the world, there are still other distros and more minimal projects, and it's all still open source. There are even people trying to maintain independent udev and consolekit (the end of the usb-drive-access line, if you don't get on systemd).

But it is annoying that there are these people and projects, with a lot of influence on other prominent, "classic" projects, who are trying to make a Linux system like Windows or OS X and adding huge amounts of coupling and complexity, all because they want all sorts of things to "just work" and to attract the "common user". Both of these goals are futile, and the costs paid are for naught.


Please note: These problems don't crop up in non-rolling distros nearly as much. The transitions are held off, managed and only rolled out for new major versions of a distro.

There are benefits to running Arch, but 'just works without breaking' is not something it's good at.


Indeed. I love Arch and generally it has been very reliable for me (more so than Kubuntu or Suse ever was previously). However, the transition to systemd was one of the few times where I had issues. Systemd is pretty mature these days, but back then it was much buggier and less well-supported.

In the long run though, I'm glad to have made the switch. I find it much easier to maintain. I also use FreeBSD at home and while I love it too, some aspects of maintaining it don't feel as nice (I suck at writing init scripts apparently).


That's kinda true, but I did use non-rolling-release distros, particularly Ubuntu for a while, and they're generally more buggy (and I expect Fedora is even buggier than that).


> When was the last time you thought about how OS X or Windows does suspend

Every. Freaking. Time. That is, since an employer equipped me with a cheap Dell. It looks like it suspends, but one time out of ten it'll come straight out of sleep and cook in my bag.

I've only used Thinkpads, with Linux, for my own work before, and they suspend multiple times a day without ever a problem. So this was a big a-ha moment for me regarding Linux usability complaints: seeing what it is that people actually try to do.


Some laptops have a mechanical lid switch (instead of a magnetic one) that easily triggers when some pressure is applied to the lid. I have the same problem with... a Thinkpad. Luckily, under Linux, you can disable wakeup through the lid switch:

    # cat /proc/acpi/wakeup 
    Device	S-state	  Status   Sysfs node
    LID	  S4	*enabled   platform:PNP0C0D:00
    ...

    # echo "LID" > /proc/acpi/wakeup

    # cat /proc/acpi/wakeup 
    Device	S-state	  Status   Sysfs node
    LID	  S4	*disabled  platform:PNP0C0D:00
    ...
It's not a one-shot setting though, as the desktop environments reset it before suspending. As far as I can remember, I put a script in /usr/lib64/pm-utils/sleep.d to make it permanent.
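For reference, such a hook is tiny. A minimal sketch, assuming a pm-utils setup (the file name and location are examples):

    #!/bin/sh
    # e.g. /etc/pm/sleep.d/00-lid-wakeup: pm-utils runs sleep.d hooks
    # with "suspend"/"hibernate" as the argument on the way down, so
    # re-disable lid wakeup right before every suspend.
    case "$1" in
        suspend|hibernate)
            # writing the name toggles the setting, so only toggle
            # if the lid switch is currently enabled
            if grep -q '^LID.*\*enabled' /proc/acpi/wakeup; then
                echo LID > /proc/acpi/wakeup
            fi
            ;;
    esac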


If memory serves, this is also possible in Windows under the advanced Power Management settings.


I don't understand the point you're trying to make: are you saying that no matter the OS, some hardware platforms will be flaky and others well supported?


Yes. If my first impression of a Linux desktop had been an install on this crap hardware, I might blame the software as well.


You are not alone. I also have a work-provided Dell that I have to shut down unless I want to use my work bag to keep me warm on the walk home. We have another Toshiba at home running Windows that will at random times of the night just spin up its hard drives and perhaps throw out the odd notification sound before spinning down again. It seems that for Windows, suspend doesn't really mean going to sleep until a human wakes you up again.


True, for some, the ability of background services or the system clock to wake the computer is a feature, not a bug.


As long as they are in control of it.

And I think that may be the crux of the issue.

The recent changes to the Linux middleware ecosystem have added a mass of new automations.

These supposedly make people's lives easier, but for many they produce issues that remind them of why they moved from Windows to Linux in the first place.


I have a Dell M4600 running Windows 8.1 (started off with Windows 7). Suspend and resume have always worked. I never think about suspending - I just close the lid or hit the sleep button and throw it in the bag. When I get home I press it into the home dock and press the power switch on the dock to wake it up. Windows 7 also never had a problem. Yes, this is anecdotal evidence to counter anecdotal evidence.


My work MacBook Pro has constant problems with suspend. Sometimes it doesn't get the external display up again, and sometimes it just takes a long time to wake. To be fair, the slowness is likely due to an old-fashioned hard drive that always gives you time to ponder the finer things in life.


I've got a MBPr and it's worse than your situation. I've reinstalled 3 times to see if it helps. When it suspends and I open it up very soon afterwards, I just get a blank screen for 30-40 seconds. I've learned to wait, but sometimes you have to press keys to get it to wake up. Sometimes you press the power button. Then as you type your password it suspends again.

All because they don't want a sodding LED on the machine.


From what I've heard it's because MacBooks have really aggressive hibernation (I've experienced this on my MB Air personally). Basically they will copy everything from the RAM to the SSD and shut down fully. Then when you wake it up, it copies everything from the SSD back to RAM - but if you have 16GB of RAM, then even with an SSD it will take nearly 30 seconds to copy everything back, and until then the machine is unresponsive. I think it would be better if they just told you they were doing it ("please wait, waking up") instead of just giving you a blank screen.


Yea, makes sense. But at least they could provide some progress indication!


My MacBook Pro 15" likes to do some of the above, and it also likes to pretend to suspend and then randomly wake up after a minute.

(I'm running Ubuntu 14.04 with Gnome 3.12)


I have noticed this issue as well and never really thought about it; I assumed it was user error. So annoying.


I'm very careful not to plug in an external screen before the MBP wakes up, as it may crash. I've made that mistake myself a few times and am nowadays very careful to first wake up the MBP before plugging in an external screen.


The opposite also applies: don't unplug a sleeping MBP from external displays.


Interesting, I will be careful with that as well then, being in a hurry might make you do exactly that and regret it.


As does mine; sometimes when I just close it rather than suspending first, it doesn't wake up and needs a hard reboot. Other times it needs quite some time before letting me enter my password, and acts frozen until then. Granted, the problem is likely related to age and failing memory (one of the slots is dead), but I do certainly think about suspend behaviour.


If the resume lag is the same as what I experience, it's the intentional result of suspending to disk to improve battery life:

http://osxdaily.com/2013/01/21/mac-slow-wake-from-sleep-fix/

Though I think the user experience could at least be significantly improved.
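For reference, the knob that article adjusts is pmset's hibernatemode. Roughly (check `man pmset` before copying; these are the common values):

    # show the current setting
    pmset -g | grep hibernatemode
    # 0  = classic sleep: RAM stays powered, instant wake
    # 3  = laptop default "safe sleep": RAM powered + copied to disk
    # 25 = full hibernate: RAM powered off, slow wake, best battery
    sudo pmset -a hibernatemode 0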


If I didn't know better I'd say it's almost as if maybe the user knows better whether to suspend or hibernate, and conflating the two is a classic "Apple knows best" pitfall they return to over and over.


More like "programmer/developer knows best", an attitude that is rapidly spreading beyond Apple.


I think the problem is that we don't know how to troubleshoot problems with our desktops (especially things related to permissions as in consolekit/policykit/dbus etc):

* before the switch to systemd everything "just worked" on Debian and you didn't have to deal with this

* during the switch to systemd there were situations when switching was worse than staying with sysvinit (e.g. system wouldn't boot anymore due to entries in fstab).

* then the situation started to improve in systemd, to the detriment of those who stayed with sysvinit (no suspend button in KDE when not on systemd; policykit not working; virt-viewer shows a permission error on MATE, but not on KDE, etc.)

Whether you switched to systemd or not, you noticed that something broke. That was always a possibility with unstable of course, but usually fixing it was quite simple. In this situation it was worse, because you first had to find where the problem lies, which means understanding all the complex interactions of a desktop's components. Searching for documentation isn't very easy either, because sometimes there is no documentation except the source code and the configuration files themselves. Fixing things isn't as easy as downgrading or fixing a configuration file either: usually you have to patch the applications, or wait for others to do it.

All this is made worse by the fact that systemd tries to do too many things at once: PID1/service supervision, initscripts, dbus stuff for logind, policykit/pam, syslog, etc. Some people have strong opinions on PID1 and initscripts (myself included) and don't like all of these being forcibly changed at once.

I wouldn't be opposed to service supervision (I like runit), or a better initscript format, but I'd like to have a choice of changing that without breaking unrelated things (like policykit on the desktop). OTOH I don't really care how consolekit/policykit/dbus is implemented as long as it works, and as long as I'm not forced to change something unrelated, like how I boot my system.

Also, if these were all independent applications that didn't require the rest of systemd, troubleshooting would be easier: you could switch out components one at a time. Things like systemd-shim/cgmanager attempt to do that, but those are more of a band-aid than a properly designed, modular application.


I think you are hitting on something with this. And part of the problem is perhaps that so much happens across dbus.

I recall trying to set up dbus on a minimal install, and finding that often when something borked, it borked in odd ways: a signal could seemingly get stuck inside dbus, where one party thought it had sent the relevant message, the other party never responded in kind, and the only real way to get everything unstuck was to kill dbus and all relevant parties (pretty much a reboot of everything above the kernel these days).

With something as seemingly simple as mounting a thumbdrive, the process goes something like filemanager > udisks > pol(icy)kit > consolekit/logind. All that to "verify" that the person sitting at the keyboard has the rights to do "mount /dev/sdb1". And every > in that chain involves dbus. So if dbus borks, everything borks. And dbus is pretty much a black box.
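You can poke at that chain by hand, for what it's worth. A sketch with gdbus (the device path is an example; polkit and logind decide behind the scenes whether to allow it):

    # ask udisks over the system bus to mount a block device; udisks
    # asks polkit, which asks consolekit/logind whether this session
    # is local and active, before anything touches mount(8)
    gdbus call --system \
        --dest org.freedesktop.UDisks2 \
        --object-path /org/freedesktop/UDisks2/block_devices/sdb1 \
        --method org.freedesktop.UDisks2.Filesystem.Mount '{}'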


> When was the last time you read a manpage about launchd or one of its helpers?

Every time I had to write a launch daemon to implement the equivalent of what would be done in cron or inetd.

But I do a lot of Mac sysadmin work.


My iMac's automated backup has been broken since Yosemite because Apple removed the ability to set environment variables that all processes started from launchd could use.

It appears that "run a python program as root on a schedule regardless of whether anyone is logged in" is no longer possible on a Mac. Or rather, me and my CS PhD aren't capable of figuring out how to do it. I just keep a terminal window open at all times as a reminder to occasionally run the script manually.
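For what it's worth, a plain LaunchDaemon in /Library/LaunchDaemons is still supposed to cover the "as root, on a schedule, nobody logged in" part. A minimal sketch (the label, paths and schedule are made up; per-job EnvironmentVariables is the replacement for the removed global mechanism, and may or may not fix a given script):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>org.example.backup</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/bin/python</string>
            <string>/usr/local/bin/backup.py</string>
        </array>
        <!-- per-job environment, now that the global one is gone -->
        <key>EnvironmentVariables</key>
        <dict>
            <key>BACKUP_TARGET</key>
            <string>/Volumes/Backup</string>
        </dict>
        <key>StartCalendarInterval</key>
        <dict>
            <key>Hour</key>
            <integer>3</integer>
        </dict>
    </dict>
    </plist>

Load it once with "sudo launchctl load /Library/LaunchDaemons/org.example.backup.plist" and it should run daily at 03:00 whether or not anyone is logged in.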


As a homebrew user I absolutely abhor having to keep around a list of instructions that I've only barely just caught during an install .. things like:

    launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.rethinkdb.plist
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.rethinkdb.plist

    cassandra

    To have launchd start cassandra at login:
    ln -sfv /usr/local/opt/cassandra/*.plist ~/Library/LaunchAgents
    Then to load cassandra now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.cassandra.plist


    To have launchd start redis at login:
    ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents
    Then to load redis now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.redis.plist
    Or, if you don't want/need launchctl, you can just run:
    redis-server /usr/local/etc/redis.conf

.. and so on and on it goes .. seems like the more things 'improve', the more they stay the same.


Pro-tip: you can use "brew info [package-name]" to look up the post-install info that flashes by after a homebrew install.

    $ brew info cassandra

    cassandra: stable 2.1.2
    http://cassandra.apache.org
    Not installed
    From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cassandra.rb
    ==> Caveats
    If you plan to use the CQL shell (cqlsh), you will need the Python CQL library
    installed. Since Homebrew prefers using pip for Python packages, you can
    install that using:

      pip install cql

    To have launchd start cassandra at login:
        ln -sfv /usr/local/opt/cassandra/*.plist ~/Library/LaunchAgents
    Then to load cassandra now:
        launchctl load ~/Library/LaunchAgents/homebrew.mxcl.cassandra.plist


I prefer to use lunchy to do the launchctl stuff: https://github.com/eddiezane/lunchy

'gem install lunchy' to install it. Your commands would have been e.g. 'lunchy stop rethinkdb', 'lunchy start rethinkdb' - it matches on substrings. The ln step is handled slightly differently - it doesn't work with lists of files, and installs to two different places, but this is roughly equivalent:

    for f in /usr/local/opt/cassandra/*.plist; do lunchy install -s $f; done
(here the -s flag is symlink rather than copy)


I have much disdain for this solution - even though it is a solution - because it's a non-builtin tool and has dependencies outside the OS X sphere in order to run.

I wish there were some sort of Guild of OS Developers that could be relied on to enforce the inclusion of standardized tools in all OSs released by its members. It seems that the lack of an organizing body to enforce these standards - or, in Apple's case, any way for the public to influence the standards they dictate - is a real problem with OS development today.

Nevertheless, good to know about lunchy. I will try to remember to check it out some time.


If I am having a problem with OpenBSD, I read the man pages. If I am having a problem with OS X, I search the web for a result. I just don't quite trust the OS X man pages anymore.


I never had any problems with either suspend or USB mounting. And I'm running Jessie with systemd.


Same here, and I had the issue described in the article. It's just a result of the installation process and is documented on a bug page... somewhere. What happens is that if you install Jessie from USB, a line is added to /etc/fstab for the USB drive. And this changes the usual auto-mounting of USB drives later on: instead of being mounted as the logged-in user, they're mounted as root, read-only. Hence the error when trying to access the USB key.

Fix: just remove the USB drive entry in /etc/fstab, then it works like a charm again.
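If you're not sure which entry it is, it's the one naming the installer stick. Hypothetically something like this (device, mount point and options will vary):

    # leftover in /etc/fstab from installing off a USB stick (example);
    # delete or comment it out to get normal per-user automounting back
    /dev/sdb1   /media/usb0   vfat   defaults   0   0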

Someone not installing from the USB key will not see the issue. As the issue is known it may have already been fixed in the latest Jessie installer (didn't check).


Ah, yes, I remember this issue in the past actually when installing from a USB drive. I guess I'm used to removing that line from fstab in such cases, so I don't keep track of that :)


Same [subjective] experience here. Jessie has been mostly solid for me.


> If suspend weren't working

The irony I'm finding these days is that suspend does work, but now the options to turn it off don't.


I had the same issue.

Then I switched to OpenBSD.



I've been having similar issues and thoughts. I'm not a greybeard by any reasonable measure: I started with Slackware in '99-'00, quickly switched to Debian, and never looked back.

I think it all stems from D-Bus, which has become the primary mechanism of command and control on desktop Linux. To address the OP's initial issue of user mounting disks, it's managed via Udisks, which is mostly accessible via D-Bus.

I have trouble reconciling Unix's notion of "everything is a file" with how a modern Linux desktop operates via D-Bus. Don't get me wrong, I think D-Bus is very powerful and flexible. I just wish it were easier to explore and test via file-based tools. Maybe a FUSE layer?

Anyhow, if the OP thinks that Linux was once "clean, logical, well put-together, and organized", I can only laugh at his rose-colored glasses. Linux is and always was a cobbled-together mess of orthogonal paradigms (no better illustrated than by the design-philosophy clash of X Windows vs Unix). Maybe earlier Linux was simpler because it was still playing catch-up with the other Unix clones out there. Nowadays it's leading the pack in innovation, and so things are changing, and fast. I don't know if documentation really is worse now than it used to be; I'm afraid of falling into the same nostalgia fallacy.

Ultimately, I see this rant as nothing other than "things are different now and I'm scared and confused". Not that I don't sympathize, being in roughly the same place. However, I decided I can't stop technology, and sucked it up and sharpened my google-fu.


Yep. D-Bus feels, honestly, like someone looked at every RPC mechanism ever invented and decided to pile all their bad parts into one thing. It's not discoverable, it has a frustrating permission system, its interface definition is incredibly verbose with a very low information density, etc.

Of course, the fact that you need a full DE to easily mount a usb stick as a non-privileged user means the whole thing is already failing hard. This is not how this should work.


> Of course, the fact that you need a full DE to easily mount a usb stick as a non-privileged user means the whole thing is already failing hard. This is not how this should work.

This is not true. I use udisksctl (and thus the udisks2-polkit-dbus based machinery) to mount pendrives in a vty on a regular basis.

As a bonus, it asks for my password (the user one, not the root one) if I'm in an ssh terminal instead of a vty, and by configuring polkit I can allow myself (only this particular user) to put a certain HDD to sleep, and no other actions, unlike what giving rights on the block device would allow, without the root password.
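A sketch of both halves, for reference (the user name is an example; `pkaction | grep udisks2` lists the real action ids):

    # from a vty, as an unprivileged user:
    udisksctl mount -b /dev/sdb1
    udisksctl unmount -b /dev/sdb1

and the polkit side, in its JavaScript rules format (polkit >= 0.106):

    // /etc/polkit-1/rules.d/50-example.rules (hypothetical file)
    polkit.addRule(function(action, subject) {
        // let one specific user mount filesystems without a password
        if (action.id == "org.freedesktop.udisks2.filesystem-mount" &&
            subject.user == "alice") {
            return polkit.Result.YES;
        }
    });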


> Yep. D-Bus feels, honestly, like someone looked at every RPC mechanism ever invented and decided to pile all their bad parts into one thing.

You must have mixed up D-Bus with CORBA.

> It's not discoverable

Try installing "d-feet", it's a nice GUI that will show you everything that's listening on the bus and what objects and interfaces it supports.


You can just use udiskie (https://pypi.python.org/pypi/udiskie) to automatically mount external drives, even if you're not running a DE.


As best I understand, D-Bus came about as an attempt to turn the KDE-only DCOP into something that could be used across DEs, to improve interoperability between them.


DCOP worked beautifully; the only problem was that the GNOME team had religious objections to depending on C++. It was never intended to be a system-level thing though, and that's where a lot of the problems come from; within a desktop session dbus actually works pretty well.


File-based tooling is an interesting question. D-Bus is hierarchical, but it's also strongly typed, so you'd have to encode stuff as text, and when you get the serialization wrong, write(2) can't give you a useful error message. But maybe it'd be useful for reading and monitoring... imagine if you could `sudo tail -f /dbus/org/freedesktop/UDisks2/Manager` or something, and see all calls to it.

I've been using D-Feet and dbus-monitor, which seem to be solid enough. D-Bus does have extensive introspection.

But yeah, in general the state of D-Bus tooling is miserable. Part of it is how verbose it is — which is fine for compiled software, but for interactive use, it really makes you miss the "creat" and "umount" school of thought. Part of it is that there aren't great introspection tools at the command line.
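For the monitoring half, dbus-monitor gets reasonably close to that tail -f, for what it's worth. A sketch (older setups may additionally need eavesdrop=true in the match rule):

    # watch every method call addressed to UDisks2 on the system bus
    sudo dbus-monitor --system \
        "type='method_call',destination='org.freedesktop.UDisks2'"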


The only D-Bus tool that is nice on the command line is qdbus, and that comes from Qt. I wonder why the original D-Bus authors could not write a decent command line tool for it.


Try mdbus2 - https://github.com/freesmartphone/mdbus - it's also in Debian/Ubuntu repos.

It's a little tool that works really well with dbus services that properly implement introspection. We used it extensively on Openmoko phones to play with our middleware (or when the GUI was broken ;) )


It has command-line completion and history! This is nice. Thanks, will give this a try. :-)


Thanks. It's quite good. Handy.


Debian lost its way. Bugfixes get buried in political bullshit far too frequently. I have issues with Ubuntu, but I don't have the time or the patience to put up with the crap reasons why they, for instance, keep an outdated Intel driver in their distro ("the serial number of the new one does not look like a stable release" - never mind that it solves many bugs) or decide that mounting a USB drive is a privileged operation.

I may try Arch; I've heard a lot of good about it, but from people who have more time on their hands than me. Right now it is Ubuntu for me and it is working fairly well.


I started using Arch after a newly built computer couldn't install from an Ubuntu disk. After the initial setup work (~1 hour, as I wasn't familiar with it, though the documentation is great[0]) everything Just Worked™ and I've installed it on every system since.

I've had the occasional small technical hitch, but the forums[1] are some of the most helpful I've found, and that, in combination with the standard google-fu/Stack Exchange approach, has made quick work of any bugs I've had.

My favourite thing about Arch, however, is the AUR[2], which makes installing any small command line tool that you just heard of (on Hacker News) as effortless as installing a supported package. Even though they're (officially) unsupported, they most often work, and I rarely find something that doesn't already have an AUR package to build it.

N.B. if anyone wonders, the only other distros I've played around with are Ubuntu, Debian, and openSUSE (and I've actually liked them all). So I don't have too much to compare against.

[0] https://wiki.archlinux.org/index.php/beginners%27_guide

[1] https://bbs.archlinux.org/

[2] https://aur.archlinux.org/


> Debian lost its way. Bugfixes get buried in political bullshit far too frequently.

You know, that's funny. Debian has never been as pragmatic and apolitical as it is now. Bugfixes never had so little red tape. And yes, they often get lost in political discussion; it's just your memories that are too rose-colored.

(By the way, I'm a happy Debian user. Wouldn't switch to any other main distro, although a few niche ones look promising.)


I found Arch to be surprisingly untime-consuming. I was expecting more of a Gentoo-like experience, but it tends to just stay out of the way for me.


Haha! Man, you've never been happy with any Linux distro, or even OS for that matter! (and no, your usual answer is wrong; if you have a problem with everything else, maybe the problem isn't with everything else).

I'm surprised about your issue with the Intel driver, which I've found to be quite stable on Linux, but then I don't do any 3D in Linux. Is this on your System76 laptop? As for mounting a USB drive, that's patently false; see my comment above. Nowadays you can use udisks, udisksctl (depending on which distro version you're using), or even good old pmount.

I also contest your comment about how political bullshit overruns Debian and Ubuntu. If you think it's different or even worse than before, you're falling for nostalgia :)

I mean, I sympathise with your issues, and I understand the frustration of having to fight something that you'd expect to just work ("it's 2015 for fuck's sake! Why am I still dealing with this bullshit!?"). Of course, you're doing much more cutting-edge stuff with 3D and computer vision, so you're probably tickling the bleeding edge of driver support in Linux. Nevertheless, you've had this class of complaint for 15 years. Haven't you bitten the bullet yet? :)


It is the ThinkPenguin, yes. I bought it at extra cost and with an underwhelming GPU precisely because it is designed to work nicely on Linux and only has devices that work with open-source drivers. Yet the GPU fails under Debian. After hours of tinkering and googling, I concluded I have the same issue as this guy: http://blogs.fsfe.org/the_unconventional/2014/11/12/debian-x...

Add to that that the basic install does not allow USB drives to be automounted by a user. I did manage to get this one working after a wasted hour or two, but I don't consider it normal that this is not the default behavior, or at least very easy to configure. In 2015.

Both these things worked flawlessly out of the box on Ubuntu with KDE.

As for nostalgia, I actually do not remember stumbling on a case like the Intel GPU before: "We are holding back bugfixes because we don't like the name of Intel's package". Actually, every time I try to go back to Debian, I arrive with a ton of motivation, thinking "this time I am going to look deep into the issues and solve them!". This particular issue had no good solution: it would have required me to install the Intel driver and accept that my system would probably break at the next big update.

I am used to the old proprietary-vs-open debate and the horror of binary blobs. Here that is not the case.


D-Bus (and modules that plug into it, e.g. BlueZ) actually needs much better docs. It would ease a lot of things.


Which in turn requires someone to actually write them. Which is a problem ALL open source software faces. Freedesktop are better than most at it.


pretty much. When things are exposed as files, programs and users are playing in the same sandbox.

The user can then at any moment emulate the actions of a program towards those files, and look at where it breaks. And this while only using the barest of CLI tools, rather than having to aim something like GDB or strace at the process and hope to catch the tentacle flailing.


One of the sources of this problem is the relentless experimenting that has been ongoing since about 2007-2008. First, there were pmount and HAL, introduced to take care of this automounting, and as soon as this was in a working state, it got replaced by DeviceKit. Soon after, DeviceKit was abandoned in favor of a combination of udev, consolekit and policykit. Then policykit got replaced by polkit. And then systemd was introduced and consolekit was abandoned. And each time the documentation gets worse (except for systemd, since the authors try to justify their choices).

Overall, it has been a sequence of constant rewriting, and I don't see an end to it. People who have been using rolling distributions like Gentoo and administering their own machines are the most affected. If one is not using the whole systemd-to-major-DE (KDE, Gnome, XFCE, etc.) stack, it is arduous to set up a mounting system which always works. Usually, vfat external disks tend to be immediately writable on mount. Move over to exfat, mtp, ext[234], or any other native filesystem, and it is an exercise in exasperation to get them to mount read-write as an unprivileged user, or even mount at all as an unprivileged user (see https://bugzilla.kernel.org/show_bug.cgi?id=15875).

The only decent program that I have found that works quite reliably is udevil.
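Its interface is about as simple as it gets, for reference (the device path is an example):

    # mount and unmount as an unprivileged user, no DE or dbus session
    udevil mount /dev/sdb1
    udevil umount /dev/sdb1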


Basically because udevil doesn't buy into the whole "fine-grained security" wankery that seems to be coming out of the corporate/government world. It may well have started with SELinux, and as best I can tell Fedora has always been at the forefront of pushing it.


I recently redid my Arch install and I'm mounting USB drives on a user account that's just in the wheel group, no problem, with pretty much any filesystem (ufs, ntfs, fat32, btrfs, ext4, f2fs).

I agree that the constant rewrites were annoying, but I have a hard time seeing the systemd developers rewriting the same part of their project over and over now. For better or worse, systemd should at least slow the churn.


Well, since before that, really. I can remember the early attempts at automounting back around '98 or '99, IIRC. Previously: http://www.jwz.org/doc/cadt.html


Heh. From what I have seen elsewhere, jwz and at least one person high up within GNOME have some antagonism going between them.


The guys who are capable of improving the modern Linux desktop are using Macs. So we are losing momentum in Linux desktop-targeted software and facilities.

They are developing on Macs but targeting server software and devops tools. So pretty modern, hot, bleeding-edge things are happening there, but not in the desktop area.

GNOME - Always doing some experiment. No applications. Just changing the shell. No reliable usability. Call me when they are done. One good thing about GNOME is that they care about beauty and elegance.

KDE - They just don't care about the beauty of... anything. But their applications are freaking featureful, reliable, and developed by people who actually use them. I can feel they are dogfooding. I saw KF5 screenshots. They still don't care about beauty.

ElementaryOS (Pantheon) - Better than GNOME. They have and develop actual applications like Geary and Midori. You can feel they are actually dogfooding, in contrast to their GNOME counterparts.

Unity - I like Unity itself. Very long time user. But Canonical lied to us that it was stable when it was actually alpha-stage. One more bad thing is that it smells of vendor lock-in pretty badly. Anyway, pretty usable and has a big ecosystem (community + vendors).

XMonad - At first, it just looks like another tiling window manager. The more I use it, the more it feels like a 'custom desktop construction kit', if you don't mind learning some Haskell. As you all know, a 'kit' is about fun and learning more than actual results. So I'm still using this dumb little window manager ...


ElementaryOS (Pantheon) - Better than GNOME. They have and develop actual applications like Geary and Midori. You can feel they are actually dogfooding, in contrast to their GNOME counterparts.

Are you trolling?

Geary is a GTK3 application. It's great that people from Elementary are helping out, but basically it's made for GNOME, using GNOME's toolkit, object system and programming language. Midori is part of XFCE, and also uses GTK and Vala. The GNOME project of course has their own browser, called Web (previously Epiphany).

http://www.gnome.org/applications/


> The GNOME project of course has their own browser, called Web (previously Epiphany).

This move to the most generic names possible for the software makes searching for information nearly impossible for people who don't know the old names.

Problem with "Files"? Good luck searching for a solution and getting relevant results unless you know to include the word "nautilus".

"Package kit" has four different names in the version on Fedora 20.


Thankfully no one else names apps "Calendar" or "Mail"


I think the "vendor + generic noun" naming scheme is better than the "fantasy word" or "homonym" scheme. So, I consider "Gnome Web" better than "Epiphany" for end users. It is also obvious that "Google Mail" and "Apple Mail" do similar things.


But I've already given an example where it fails hard - "Fedora files" returns very many false hits.

Package Kit is called "Packages", "Software", "Software Install" and "Package Kit". Searching for "Fedora packages" or "Fedora software install" is obviously sub-optimal.

Here's a screenshot for the Package Kit name thing: "Packages" in the Gnome menu bar, "Software" in the program's title bar, "Software Install" and "PackageKit" in the About dialog.

http://imgur.com/lylItwU

I would prefer the about dialog to contain a magic-word to be used when searching for that software. So, you can call the file manager "Gnome Files", but in the about dialog let people know that "Gnome Files" is "nautilus".


Go is also a pretty good example: you have to search "Golang" to find results about Go on any engine other than Google's, and I assume I only get results on Google for plain "Go" because I am profiled as a developer. I don't think a lot of thought is put into search brand-ability for some of these projects until after the fact.


Programming language names seem particularly bad in that respect: python, ruby, Java, C...


Python is very much googleable, and C isn't that bad either. Just differentiating between C, C++ and C# wasn't possible on Google until recently.


As long as they consistently include the "vendor +" part of the name, then fine... But do they?

[This was a huge annoyance with NextStep-derived projects like GNUstep... Because they started from a proprietary system developed in relative isolation, all the apps had super-generic names, and culturally they were loath to make them any less generic. This resulted in a big mess on systems where multiple desktop environments are supposed to live in relative harmony...]


According to the Midori website (http://midori-browser.org/about/) it aligns with the XFCE philosophy, which is a far cry from 'being part of XFCE'. Instead, it states it is the default browser for 'ElementaryOS', so the assumption of GP is understandable.


> KDE - They just don't care about the beauty of... anything.

Perhaps it is just different aesthetics. Too often 'beauty' means: cut the features in half, remove all icons and make huge empty dialogs.


KDE mainly has a lack of decision-making by the devs, and a lack of taste: the default theming, icons and panels are all fugly, and it takes a world of configuration work to make it look reasonable.

The appearance options are so complicated and multi-layered that it's hard to replicate simple changes, and even installing a pre-built theme isn't wholesale: Separate types of themes need to be applied in a few different places.

Then you finally have the problem of GTK apps: They look bad in 99% of KDE themes, and most users are using at least some GTK apps (Firefox, Chrome).


Still better than other desktops, which do not even allow changing background color.

> GTK apps: They look bad in 99% of KDE themes

??? There is a GTK engine which uses Qt to render widgets. I really do not see any difference.

I use Chrome, and the only problem I have is new-window placement, which is not really a KDE problem. Chrome uses KDE dialogs and even the KDE password manager.


Chrome stopped being a GTK+ app a long time ago. They switched to their own internal toolkit (http://www.omgubuntu.co.uk/2014/05/google-chrome-35-linux-ar...).

Also, I am not sure why you say Firefox looks ugly in KDE - http://album.gnufied.org/firefox.png . I have made 0 modifications to KDE/GTK theming and Firefox looks just fine under KDE.


If you want to know the real problem, take ElementaryOS as an example: those people are doing some of the best UI work in GNU/Linux, yet their project is not economically sustainable and they have no option but to work on it in their spare time.


Does no one but me look at Elementary and see it as an obvious aping of OS X, circa Snow Leopard?


> Does no one but me look at Elementary and see it as an obvious aping of OS X, circa Snow Leopard?

I'd pay for that. Snow Leopard was the best iteration of OS X ever.


I actually found it much better. Then again, I don't like using OS X, although I happily recommend people try it.

What Elementary does better for me:

* works with standard free software (Gimp etc.) without looking ugly

* installs easily on standard hardware, no need to use laptops with crippled keyboards (fn/ctrl instead of ctrl/fn)

* you can use home/end everywhere, instead of a mix of ctrl+a, ctrl+e, fn+something, cmd+something depending on the application (yes, Excel on Windows annoys me as well because ctrl+a doesn't work like it does in about every other app there, but on Windows this is unusual)


I haven't used it, but if it's good enough at aping Snow Leopard, I'll happily give you Yosemite for it.


I recommend downloading their ISO and trying it as a Live CD.


I've been running Arch on my work box for the past several years, but I'm thinking I might move to something a bit less demanding of my time the next time I upgrade hardware. I've never been a fan of the way Ubuntu does their admin stuff (which I'm guessing Elementary inherits), but I might give it a look.


XMonad is nice, but the Haskell stuff makes it a chore. There is a great tiling WM called SpectrWM (https://opensource.conformal.com/wiki/spectrwm) that is a re-implementation of XMonad in C, with a sane text config file.

Works great for me.

The devs are all current/former OpenBSD guys, including Marco Peereboom (who forked OpenBSD recently).


I use awesomewm - it heavily relies on the Lua scripting language. It is great and I use it on all my computers - I feel so unproductive if I don't have awesomewm.


What would the advantages of spectrwm be if you could configure Xmonad with a text file?


And thus the circle is complete! DWM (C) -> Xmonad (Haskell) -> SpectrWM (C)


I finally got fed up enough with OS X to ditch it ... my rMBP now runs Mint and I've never loved it more. (Note that this is still the most amazing hardware I've ever been issued). I should also say that the Thunderbolt support is pretty new in the Linux kernel and took a bit of finagling to get working.


How have you found it with retina? When I tried this, the retina support, particularly with Chrome, was poor enough to make me switch back to OS X.


I'm running XFCE and haven't found any issues with retina support ... though it's a bit tiny. I'd be happy to run a few experiments if you're interested. Do you mean you were having problems with Google Chrome (I generally use Firefox)? Care to describe the problems?

As an aside, I think this is the Linux community in a nutshell - this offer of help is similar to many I've received in the past.

I don't generally comment on the whole systemd controversy but I'll make an exception. I have two points:

1) I think we probably do need something better than init.d scripts. Consistency would be a good thing and better sandboxing couldn't hurt.

2) I've had more issues with pulseaudio than with any other piece of software on Linux ... asking for a new piece of software from the guy that wrote pulseaudio is like asking Ross Ulbricht to be in charge of OpSec for BofA.

(Yup ... I'm probably going to get spanked for this comment since he's a far better marketer and politician than he is software engineer)


I was using Arch with Gnome, and the main issue was text size for certain applications; in particular, at the time no Chrome build, not even the dev tip, would render text in menus etc. at a non-tiny size (though I could zoom websites and get nice retina graphics).

I've now switched to having a dedicated Linux desktop and just letting my Mac continue to be a Mac for now. After many hours of effort there's a point where you wonder if it's worth it. I want to contribute to the Linux kernel, so I feel I should live in Linux too; the desktop is my solution to this (if a bit extreme!)

Which offer of help? :) I am really undecided on the systemd thing. I read terrible diatribes against it, but a friend recently pointed me towards an article which contradicts many of these points - http://0pointer.net/blog/projects/the-biggest-myths.html. The reality is that it's a lot faster, for sure, all other concerns aside. I'm glad Arch defaults to using it, as a user :)


Your friend did you a disservice, as that "myth busting" is attacking an army of straw men.


I'm not so sure, I've seen these precise accusations made against systemd, so he is certainly responding to genuinely perceived flaws.


Just to take on the first one, about it being monolithic or not: it misrepresents the issue.

Being 1 binary or 10001 binaries doesn't change the fact that they are useless unless systemd is running as init.

In essence, the issue of systemd being monolithic is about run time, not compile time.


> Better than GNOME. They have and develop actual applications like Geary and Midori.

As stated above, Geary is built on top of GNOME technology (Vala, EDS) and uses GNOME infrastructure (i.e. the bug tracker and git.gnome.org) -- and I'd argue it's built for GNOME.

Midori is also developed independently, again building on top of GNOME technology (i.e. WebKitGTK). Based on this thread[1], "Midori does not securely handle unverified TLS certificates, so it's not safe to use for HTTPS."

[1] https://lists.fedoraproject.org/pipermail/devel/2014-Novembe...


Absolutely. I think of the window managers of my youth: Motif, 4Dwm, DECwindows. Then I look at "modern" systems, and for every step forward we have taken 2 or 3 steps back in usability, ease of programming, and even aesthetics. Hell, I'd rather use - and program - GEM than GNOME. I wonder if this is one of the reasons developers are flocking to the web even for local apps: because programming a modern Linux desktop app is such a disaster.


Forget about application menus if you use ElementaryOS.

I have it on my smallest laptop, because it is much faster than the alternatives, but it has several shortcomings as well.

Open a video and the video player detects an incorrect aspect ratio? Good luck changing it without menus.


You will be surprised if you give KF5 a try. I've been playing with it for two days and it feels really light, customizable and comes with sane defaults. You can take all the clutter out of it and just stay with what you want.


While I partially agree, and I am actually running Windows on most of my computers nowadays, all the options you listed are way better than the "UNIX way" often discussed here.

As that would mean Motif: http://www.opengroup.org/standards/unix

My little travel netbook is quite happy with Unity.


'The Unix way' and Unix standards (as determined by the Open Group etc.) are very different things.


I fail to see how you can certify behaviors while stating they aren't what they are supposed to be.


"The guys who capable of improving modern Linux desktop are using Mac."

Except the Mac UI is crap. It's uncomfortable and unconfigurable.


I don't understand technical Linux users who use GNOME/KDE. They are not designed to be hackable nor modular. They are intended for enterprise users who want something Windows-ish. Their core contributors aren't volunteers/users; they're people who are paid by Red Hat and SuSE.

If you're a technical user/programmer and you want something more minimal, simple, and hackable, then don't use GNOME/KDE. Use xmonad or dwm; they are written by programmers for programmers. They are literally only window managers, nothing else. Only what you put in your .Xsession gets started when you run startx.

Sure, you don't get a file manager or auto-mounting of USB drives out of the box; that's because no sane programmer would want that by default. If you are one of the few that do, then install udisks and configure it the way you like (e.g. mount specific USB drives to specific locations with specific permissions). As a programmer you'll ultimately be happier. I promise.
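In that spirit, a minimal ~/.xsession sketch (udiskie, mentioned elsewhere in this thread, is one optional way to add automounting back if you decide you want it):

    #!/bin/sh
    # ~/.xsession: only what is listed here gets started by startx
    xsetroot -solid grey &
    udiskie &       # optional user-level automounter
    exec dwm        # the window manager is the session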

Debian doesn't get in the way. It fully supports customization at this level. Take advantage of it!


I don't have time.

In the nineties I had endless time to customise and fix the system, but these days I have so much work to do beyond the OS that I really can't spend much time at all on making the OS work.

I actually like a simple-to-use end-user DE like KDE as a container for terminal and browser windows. And I like that USB sticks can be mounted with a simple click, so I can copy something and tell the person that's bugging me to disappear with the stick so I can work on.

I remember an open-source and security conference around 2000 where I was wondering why all the hackers had default RedHat (or SuSE) installations with default Gnome or KDE and default backgrounds instead of nicely customised machines like mine. It took me some years to realise they were on stage because they got things done and didn't spend half of their time playing with settings, themes, backgrounds, fonts and convenience scripts.


Same with me.

Discovering GNU/Linux in the 90's meant playing with configuration files and themes for fvwm (the original, not the rewrite), twm, AfterStep, WindowMaker, GNOME, KDE, Sawmill, Enlightenment, Metacity and a few others.

I also don't have the time or the patience to do it any longer, and just take the default install.

Which currently means Unity with Ubuntu on my travel netbook.

All my other computers run something else as OS.


I respect your opinion, but I gave this up when the defaults kept fucking changing. I didn't have time for the endless breaking bullshit introduced by distros wanting to be cool with their desktops. Now I am using roughly the same config as I did in the 90's: Openbox (was Blackbox back then) + ROX-Filer and a custom context menu with a few items. On any machine I use now, I have a tarfile with my settings that I unpack, then I install Openbox and ROX. I will probably use this setup for as long as I live, or until they quit keeping them running on new distros.

To preempt the usual disagreement when I say this: I have never consistently been able to get projectors/second monitors working in Linux on a laptop, so I have learned to live without them. This is also why you see Linux devs doing presentations on Windows/OS X.


I used to use Gnome for the same reason; everything just about works correctly out of the box. It's really quite a good system.

Then I got really fed up with always messing up my terminals and switched to awesome as my window manager. It's more pleasant to use, especially with multiple monitors. The downside was that it's taken quite a lot of effort to customize it enough to make it work even half as smoothly as Gnome does (which basically means running half of gnome's services in the background).

If someone took Gnome and replaced gnome-shell with a modern compositing tiling window manager with some (optional) 3d eye candy thrown in, I would switch to it in a heartbeat. Hopefully there will be a tiling compositor for Wayland eventually.


This is essentially my reason set. I run KDE + (whatever) whenever I am doing desktop linux. It works, out of the box, in a UI paradigm that I can use effectively (tip to designers: this is essentially windows 98 paradigm, with minor tweaks).

I don't want to fart with random scripts I have to cook up and futz with to get a working system.

The things I want to hack are the things that are coming out of my own work, not making someone else's work useable.


Same with me. I prefer Gnome to KDE because I have a feeling that it saves me half the mouse clicks, but Gnome, KDE, Unity or whatever else all save so much configuration time. Furthermore, it's easier to google solutions when something goes wrong, which in turn is an extra time saving.


Yep; all I need is a terminal, web browser, and a music player.


Why does "being a programmer" mean that I don't want my usb key to automount? Why does "being a programmer" mean that I should want to hack my desktop environment? Maybe I want to work on my own stuff, and have a desktop that works?

Maybe I don't want to have to read Learn You a Haskell to configure my window manager. Maybe I hate the whole concept of tiling window managers because I like to have overlapping windows?

You are making a lot of assumptions about what other people should want to spend their time on. I don't want to waste time or brainpower coding xmonad or dwm to behave in a way that doesn't annoy me, and manually setting up a bunch of basic infra like automounting USB keys, when I can just use KDE which doesn't annoy me by default.


>Why does "being a programmer" mean that I don't want my usb key to automount? Why does "being a programmer" mean that I should want to hack my desktop environment? Maybe I want to work on my own stuff, and have a desktop that works?

I agree, it really doesn't. Just look at Linus Torvalds for example. Some people asked him at a recent DebConf [0] what he thought of Debian and Linux distributions and his reply was basically "Oh I don't really care about distributions or systemd, I just wanna install it and get on with my life (i.e. the kernel)".

[0] https://www.youtube.com/watch?v=1Mg5_gxNXTo


And when he has to care, he has enough clout that yelling about it online gets it fixed asap. As was seen when he ranted about his daughter needing the root password to change wifi settings.


Linux has always been the equivalent of a discussion forum with opinions expressed in code.

Everyone has an opinion about Technology X/Y/Z, often strongly held.

But you can't make a usable desktop OS out of opinions. You need a big-picture long-term strategy. There doesn't seem to be a lot of that in most of distro world.

Server Linux has done better because the problem space is (kind of...) smaller and better defined, so strategies and innovations have appeared, and there are clear goals to work towards.

Consumer Linux is like a military campaign advancing in all directions. Everyone is working on something, but - beyond development for development's sake - it's not at all obvious why.


I think there was a window of time where this was not true and the majority (user-wise) of the Linux desktop was really strong, united, and standardized. This lasted approximately from the mid-2000s until GNOME 3 and Unity.

- Ubuntu had a polished GNOME 2 desktop.

- Red Hat Enterprise Linux had a polished GNOME 2 desktop.

- SUSE Enterprise Linux had a polished GNOME 2 desktop.

They were also using pretty much the same components. Since then, we had the MATE/GNOME 3/Unity/Cinnamon split and the upcoming X.org/Wayland/Mir split.


I find myself wondering if things started going to hell when https://en.wikipedia.org/wiki/Oracle_Linux happened. Since then RH seems to have been on the warpath.


I'm skeptical Mir is going to be a serious split, and even then it'll be Mir/Wayland - both of those will still run X.Org apps via shim-servers, same way Mac OS does.


I seem to recall SUSE being a RPM based distribution but with KDE instead of Gnome.


Their enterprise distributions were very much centered around GNOME. Remember that this is when they still had Xamarin.


I agree that many people want their machine to just work so they can work on actual work, not on fixing the OS.

I don't know how the solution to that is, for example, Gnome on Fedora 20, which is a buggy mess.


I didn't know I'm supposed to be paid by Red Hat or SuSE? :-P I've been helping out on a volunteer basis for at least the last 10 years.

Seriously: Red Hat is a very big company and they provide loads of resources to GNOME for no apparent reason. Red Hat is also expanding their commitment to this. Still, there are loads of volunteers. I suggest talking to people at the GNOME stand at various conferences. Or maybe you meant developer instead of contributor, in which case I suggest reading the membership applications to the GNOME foundation. There's more to contributing than just being a developer. See https://mail.gnome.org/archives/membership-committee/ for the archives.


Kudos to you sir for contributing!

By "core contributor" I did mean developer, but really my point was that there is an explicit gap in the roles of the people involved in these platforms. You have a small group of people explicitly focused on development, and a much larger group of people explicitly focused on usage. Windows and Mac OS X are in the same situation. The best strategy for the people developing these platforms is to focus on a generally applicable UI to accommodate the large heterogeneous group. Ultimately this will result in the platform being more omakase (if you will) and less likely to be everything for everybody, though a decent default.

I don't mean to say that this is a fault with GNOME/KDE themselves but more of a likely unavoidable consequence when you build a product for a large general audience.

If you're a programmer this is really annoying. When something is broken or annoying to you, you have the ability to fix it but because there is so much organization/process/design around these systems, the activation energy is too high.

But if you're a programmer you don't have to deal with this. You can just use a simpler system meant for hackability, a system where the users are the developers.

I see nothing wrong with big vertically integrated Linux systems like GNOME/KDE, and in fact I'm glad they exist. If they did not, I would not be able to genuinely recommend Linux to my non-technical friends. Apple has shown that vertical integration is an efficient and successful way to design products for large groups of people; it's not surprising that those systems are mimicking that.


KDE is very modular and configurable - often more so than these alternatives. I'm a technical user but that doesn't mean I don't believe in using libraries to combine related functionality (would I write my own web server? Then why write my own file dialogue?), or having shared services that run in the background until needed (would I embed a database in every application? No, I run postgres as a dæmon, and applications that want a database can talk to it. So why have every program write its own tray notifications?).


"I don't understand technical Linux users who use GNOME/KDE. They are not designed to be hackable nor modular."

I use Kubuntu for work and I don't see your point. All my work is done on CLI/vim/emacs and GUI is mainly just to run a browser. Why would I spent my time "hacking" it if it suits my need?


When I plug in a USB stick every few months, I do not want to go on an hour-long research trip to find out about "udisks" and how to configure it.

There are tasks I only do very rarely (burn a CD, use Gimp, tag some MP3s, use a VPN, give a projector presentation, etc.). When I do them, I want to be done quickly. GNOME does this quite well. The last time I used a tiling window manager, I tweaked the configuration for an hour to make Gimp's weird window behavior work well.


There is nothing weird about Gimp's windows in single-window mode. It has been like this for the past few years...


At that time Gimp did not have a single-window mode.


"When I plug in an USB stick every few months"

That often? Sometimes it's hard to remember that Dropbox is only about 7 years old and that I've only had "always on" internet for about 15 years. It's surprising how fast we forget how things used to be. The future is very unevenly distributed, and it feels so weird to read stuff in 2015 about antique USB sticks that sounds like griping about mtools and 360K floppy support.

Something that annoys me about USB drives is that I haven't used one for anything but making bootable installer sticks for many years, and the "auto mount" paradigm makes life more difficult for me, not easier. All I wanna do is dd the FreeBSD installer to the USB stick; now stop trying to automount.
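
For what it's worth, udisks can be told to leave a device alone with a udev rule; a minimal sketch, assuming udisks2, with a made-up file name and device match:

    # /etc/udev/rules.d/99-no-automount.rules (hypothetical)
    # tell udisks2 to ignore this device entirely
    ACTION=="add|change", KERNEL=="sdb", ENV{UDISKS_IGNORE}="1"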


I just copied some music to a USB stick, to listen to in the car later.

And the NSA is not reading my USB file names.


In the enterprise world, online file storage like Dropbox is commonly blocked.


Yet, isn't that the kind of workplace most likely to fill their USB sockets with silicone? If you've ever wondered why PS/2 connected UI devices will never die in an era of USB, well, now you know.


Real programmers use Emacs for their desktop environment. If only it had a good text editor.


Yakuake + neovim + fish


No, they are written for programmers that are living in the past.

My first UNIX was Xenix and I used quite a few variants of commercial UNIX and open source clones.

Unless I have to do remote X sessions, I see no value in xmonad or dwm.


" auto-mounting of usb drives out of the box, that's because no sane programmer would want that by default."

I plug in USB sticks all the time and of course I expect them to automount on Linux just like on the other OSs I use on a daily basis. (And I was very annoyed last week to discover that my Linux machine did not auto-mount an SD card, and I had better things to do than start troubleshooting it.)


To all the programmers saying they don't want to spend their time building their own desktops:

My point isn't that people in general should be hacking up their own Linux desktops, even if they can. Programmers that like to do that probably should though.

My point is that if you are going to commit to using user-oriented systems like GNOME/KDE, don't do non-standard things and don't complain because system internals seem too complicated. They aren't meant to be hackable/simple!


don't do non-standard things and don't complain because system internals seem too complicated

This is a false contradiction. You can have a system with a user-friendly default configuration (for developers) and that is still understandable and hackable.

In fact, this is the entire premise of UNIX: it consists of small orthogonal programs that you can combine in various ways to solve problems. E.g. remember the old script-driven hotplug[1]. It was user-oriented: users did not have to add manual modprobe stanzas to some file anymore or manually mount USB sticks. On the other hand, it was a set of shell scripts that could be modified easily by anyone with some basic grasp of bourne shell scripting.
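
To give a flavor of that style (a hypothetical sketch, not the actual hotplug source): the kernel would invoke a shell script with details in environment variables, and anyone could edit the dispatch logic.

    #!/bin/sh
    # hypothetical /sbin/hotplug-style handler; ACTION, DEVPATH and
    # MODALIAS arrive via the environment
    case "$ACTION" in
        add)
            # load whatever driver matches the new device
            [ -n "$MODALIAS" ] && modprobe "$MODALIAS"
            ;;
        remove)
            : # nothing to clean up in this sketch
            ;;
    esac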

[1] Which had the problem of being slow due to requiring a lot of fork/execs.


It's not a false contradiction (dichotomy?). In general, non-standard modifications can break the default configuration, even in your hotplug example. If they can break the default configuration, I'm simply saying don't complain when they do.

The OP is running GNOME and complaining because he's not using systemd and it broke something (non-standard configuration) and because he can't understand the d-bus-controlled cgroups system (complicated system internals).


The traditional user/group system may be simple, but it is limited. For example, it fails when you have a multi-user desktop system where some people log in locally, and some log in remotely. What should the permissions on the camera be? And what about hotplugged USB devices? It's no longer possible to have a fixed device node with fixed permissions and fixed user group membership.

The same applies to suspend. The local user perhaps should or should not be able to suspend the machine. Presumably remote users should not be able to do so at all.

What about wireless network connectivity as the machine moves? Who should have permission to manipulate that?

If we did not have a more complex dynamic permission system[1], some would be saying that modern Linux is outdated and unable to cope with the modern reality of much more dynamic needs like this.

Instead, the article seems to be saying that it's become too complicated.

We can't have it both ways.

[1] http://www.freedesktop.org/wiki/Software/systemd/multiseat/


I wonder how often it happens that people connect remotely to a Linux PC with a camera. When I do, it is because I ssh into another laptop of mine, maybe to shut it down after I lost the desktop (maybe the graphics card crashed). OK, we should support as many use cases as possible, but it could be acceptable to tell people that if they want to set up a multiseat machine shared with strangers (students at school? They can get very creative), they should disable mics and cameras and not plug in DVDs and USB drives. Servers usually don't have any of them. Finally, if you give somebody sudo access, you accept that s/he can shut down the system remotely, a normal case for a server.

I also wonder how other OSes handle that, I'm looking at Windows and OSX. With VNC/RDP/Teamviewer/etc you get full access to the Windows desktop and all devices. I guess OSX has the same, plus sshd.

So, maybe supporting a fringe use case is making more common use cases more inconvenient?


On Windows there is the Group Policy. If your machine is domain joined, the group policy is controlled by the domain. A non-domain joined machine also has a policy, which can be edited using the "Local Group Policy Editor".

The policy contains items such as "Devices: Allow undock without having to log on" or "Deny access to this computer from the network" (user/group list).

A policy consists of a number of such settings. For instance you can set who can shut down the system, and who can do it from remote.

With Windows 8 came dynamic access control (http://www.windowsecurity.com/blogs/shinder/microsoft-securi...), where access control lists (ACLs) now can include tests for the type of device being used, network location, etc. This can be used to disallow access to certain documents or applications from phones/tablets while allowing access for the same user as long as he/she uses a stationary device within the corporate network. Dynamic access control also takes most of the pain out of complex access control, as it can decide access not just upon your security group membership, but also on other claims such as limits, department, organizational unit, local certificates, etc.


> Finally, if you give somebody sudo access, you accept that s/he can shut down the system remotely, a normal case for a server.

Sudo policies are configurable (as are default command aliases), so this can at least be a deliberate decision and not an accidental occurrence. You could also, in rare cases, just "whitelist" certain commands (see the sketch below), although this is generally not that practical.
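
A sketch of such a whitelist in sudoers(5), with a made-up user and command:

    # /etc/sudoers.d/shutdown-only (hypothetical; edit via visudo -f)
    # let one user run exactly one command as root, nothing else
    alice ALL=(root) NOPASSWD: /sbin/shutdown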

Sudo is, however, by definition a dangerous tool. I try to make sure that everyone who has the right is aware of the responsibilities.

Windows and Mac both have their own privilege escalation, and shutdown commands, so there's nothing particularly different about their situations.


I wouldn't worry much about people whom I granted access to my computer (though it could be worrisome if someone else got their hands on their credentials); I would worry about people who got access fraudulently. But that requires tighter security policies.

Regarding Windows, it's all explained on the MS website (just a Google search away), and it seems quite potent, but a default consumer OS doesn't push stringent requirements for each and every application being installed. It's up to the user to set up the ad-hoc policy, and hook VNC up to it, I guess. The good thing with application markets and distro-supported package repositories is that in theory all this could be included in the package and verified for conformance by the repository maintainers.


It is the age old security wankery issue.

http://article.gmane.org/gmane.linux.kernel/706950

In essence they don't see the problem inherent in having the server sitting encased in concrete at the bottom of the Mariana Trench.


The traditional user/group system may be simple, but it is limited. For example, it fails when you have a multi-user desktop system where some people log in locally, and some log in remotely. What should the permissions on the camera be?

The way this kind of thing was solved 20 years ago was that you could be given membership of some additional groups when logging in locally.

(The wart is that if you, say, opened the sound device when you were logged in to one of the lab workstations locally, you could pass that file descriptor to another process and keep it open after you'd logged out, and then use it to play Rob Zombie at full volume when some unsuspecting victim was by themselves in the lab at 2am).


A fragile system that is too complex to troubleshoot is an unreasonable tradeoff to support these niche use cases. Find better solutions or live with the old level of service. I suspect the PR cost of having imperfect multiseat support is insignificant.


I don't understand why multiseat support keeps getting brought up in systemd-related discussions. It seems to me like an entirely artificial problem; all I have ever seen is computers that are used by at most one user at a time and multi-user servers with no desktop whatsoever.

I literally cannot think of a situation where the multiseat use case is relevant; small SoC computers cost too little compared to buying extra graphics cards and USB hubs.


Best I can tell, it is because some military guys want to be able to airlift one computer and a bunch of screens, keyboards and rodents into the field, yet make sure that only the spooks get access to the top secret stuff that shows how much the war is going to the dogs.

Then again, I can't see how to fix that with software, other than to flatly refuse to assign a device that goes away and comes back to an existing seat. Never mind trying to assign a whole new device. It seems to sprout edge cases all over, like trying to zoom in on a Mandelbrot.


> What should the permissions on the camera be?

    $ groupadd camera
    $ usermod -a -G camera $USER_WHO_NEEDS_CAMERA_PERM
Of course, this would require setting the group ownership of the necessary /dev device node or camera program:

    chmod 660 /dev/$CAMERA_DEVICE
    # and/or
    chown :camera /usr/bin/$CAMERA_UTIL
    chmod 770 /usr/bin/$CAMERA_UTIL
In practice, the "camera" group should be added by the driver (or whatever) install script, or maybe by the OS installer. Adding the user to the "camera" group would usually be the job of a wrapper around /usr/sbin/useradd or other admin tools. Usually, I would expect the distro to set up permissions that are appropriate for their intended audience (i.e. desktop vs multiuser-server vs "other").

On my gentoo desktop, my user account is in many groups for this very reason:

    $ grep pdkl95 /etc/group | cut -d: -f1 | sort | column
    audio           deskmsg         plugdev         sshpermit       video
    cdrom           floppy          portage         usb             wheel
    cron            games           postgres        users
    davfs2          pdkl95          realtime        vboxusers
Often, I find that when someone claims that the user/group system is too restrictive, they haven't considered simply adding more groups.

> log in locally
> log in remotely

You would use PAM(8) for this. One method would be to use pam_group(8), by putting something like this in the appropriate /etc/pam.d/ config file, such as /etc/pam.d/login

    auth        optional       pam_group.so
...and configure /etc/security/group.conf (see group.conf(5)) with something like:

    gdm; *; *; Al0000-2400; camera
This way, the people that log in with gdm are added to the "camera" group. Again, this is something I would expect desktop-focused distros to set up, at least for the common stuff.

> wireless network connectivity as the machine moves?

That would generally be a local permission, which would be covered by a setup similar to what I describe above. Even if the computer moves, it is still the logged-in user (possibly across a suspend) that needs permission to configure a network interface.

> some would be saying that modern Linux is outdated

...and I would reply that those people probably need to spend some more time researching how to fully utilize the user/group system and PAM. While there are a few cases where the UNIX style of permission is insufficient, they are rarely encountered on a typical desktop or simple server. In the case of the common single-user laptop where the one user is also the "admin", the only granularity you need is a description of when they should be prompted to become root, which is trivial using basic user/group permissions.


I used to be able to say Linux was clean, logical, well put-together, and organized.

I don't think this was ever really the case, but I do think software is evolving to meet the list of constantly updating use cases.

Of course managing one network interface with all your peripherals connected at startup required less complexity. But if your target audience also has use cases of wanting their USB drive to "just work" when plugged in, being able to connect to a new wifi access point under their normal user, etc., you will want some extra layers of abstraction that aren't available if you limit yourself to FDs and Unix-style permissions.

In addition, some software is cross-platform and doesn't always care about maintaining consistency with the conventions on a single platform.


you will want some extra layers of abstraction that aren't available if you limit yourself to FDs and Unix-style permissions.

I have yet to see a single case where FDs and unix-permissions wouldn't have been perfectly sufficient.

I think the real problem is that we are facing a generation of developers who never learned how to properly use them.


There are cases where you want to be notified when an event occurs, or of the success of your application's action (in more detail than the error code from a write(2) to an FD).

You can solve the problem by using socket FDs, but you have to roll your own format for registration, communication, error checking, making sure the appropriate applications can read, etc.

I think many developers would prefer using DBus as an abstraction for these types of problems.
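
For instance, querying logind over the bus is a one-liner; a sketch assuming systemd-logind is present:

    # ask logind whether the machine can suspend
    dbus-send --system --print-reply \
        --dest=org.freedesktop.login1 \
        /org/freedesktop/login1 \
        org.freedesktop.login1.Manager.CanSuspend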


Why not a library that speaks a simple[1] line-based protocol over pipes and/or sockets? Or even just plain old files?

That's how unix used to work before the desktop people took over.

[1] Yes, DBus is "sort of" line based. But from all I've heard it's the opposite of simple and... well, let's just say people don't seem to have much good to say about it.


It's true that DBus is somewhat of a complex solution if you have a simple enough use case, and I think most of the criticism is of the documentation rather than the design (or, rather, criticism of the design is mostly that it isn't Unix-like).

But having a daemon that relays your messages to other applications based on a standardized protocol has some merit of its own, even if it isn't strictly Unix-like.

I might have some bias, since I think the Unix-way is oversimplified for some problems, even though there are often ways to make it work.


But having a daemon that relays your messages

I agree, and I'm not opposed to the idea of a daemon.

DBus just seems to be (from my perception) rather poorly designed on every level. And the more pervasive it becomes the more of its bad design bleeds into more or less unrelated other software packages.

the Unix-way is oversimplified for some problems

The unix-way of course has limits, but it's usually worth exploring it as far as possible and only then diverting to more baroque designs.

In the majority of cases you'll find that pure, file-based approaches are a lot more elegant, efficient, discoverable and debuggable than the alternatives (see e.g. qmail vs postfix, or runit).

There's a lot to be said for being able to use the standard unix cutlery to inspect and interact with the guts of your application.

Imagine DBus was just a directory with one or two files per pid. Imagine you could read past messages with 'cat', follow them with 'tail' and inject new ones with 'echo'.
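
Something in this spirit, with purely hypothetical paths and names:

    # one FIFO pair per daemon under a well-known directory
    mkfifo /run/bus/mydaemon.in /run/bus/mydaemon.out
    tail -f /run/bus/mydaemon.out &       # follow its replies
    echo "reload" > /run/bus/mydaemon.in  # inject a message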


The kernel has had inotify for 10 years or so.
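
So a daemon (or a shell one-liner) could watch such a directory without polling; e.g. with inotify-tools, reusing the hypothetical path from above:

    # report every create/modify/delete under the bus directory
    inotifywait -m -e create,modify,delete /run/bus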


I think the only thing really missing from the "file descriptor" model is revoke() - and that's been the case from very early on (hang-up for tty devices works like a device-specific revoke()).


revoke() is potentially harmful, since an application not written to check the error code after each interaction with each file descriptor will break when one of its file descriptors is revoke()'ed out from under it.

Why not instead use fuser(1) to find programs using the file descriptor you want closed, and then take program-specific actions (kill(1), restart, signal) to ensure that they sanely release the descriptor?
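
e.g. (the device path is illustrative):

    # list processes holding the sound device, then ask them to stop
    fuser -v /dev/snd/pcmC0D0p
    fuser -k -TERM /dev/snd/pcmC0D0p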


Any interaction with a file descriptor can already fail (eg. pretty much anything can potentially give you EIO).

Using fuser(1) in that way is racy, and how is kill(1) any better than returning a possibly-ignored error anyway? You can't do program-specific actions because the point is to enforce "You did have permission to open that, but now you don't and you aren't allowed to keep using it".


I agree. And actually in my book revoke() is also unnecessary, for the same reason.

Why not simply close the fd and let the regular transport error handling (retry/backoff) do its job?

If you find yourself needing any more ceremony than that then that almost certainly hints at a higher level mistake in your protocol design.


You can't externally asynchronously close a file descriptor within a process, because of the way file descriptors are re-used.

revoke() basically works like closing it on the "other side" - the process still has an open file descriptor, but any IO on it will error.


We may be talking about different things but why would you need to close an fd "externally" when (at least for pipes and sockets) there is someone on the other side who can just close it regularly?


revoke() would be useful for things like device nodes (and even regular files) - consider "you can open the sound device while you are logged in locally, but you can't keep using it after you've logged off".


I guess you've never used networking on a Unix system then.


Yes I have.


Maybe I was lucky in my choice of Linux distro, but I have never had problems with WiFi or plugging in USB drives since I started with Ubuntu 8.04. All I can say is that I was surprised at how much faster a drive becomes available than on Windows. That was the case with XP and is still the case with Win 8.

Could you give some examples of USB drives not working or new networks not found? Thanks.


I meant that when a user plugs in a USB drive they probably want an immediate popup on the desktop and the ability to manage the files on the drive, or they want to be able to see the wireless networks and configure them without switching to the root user, or even shut down the computer with a button on the desktop.

The way this is done is with a message bus and other subsystems (there were consolekit, networkmanager, ...), and these don't really match the "Unix-way" of doing things. Though I don't consider this to be a big issue, some people do.

If you have a minimal distro and install a very basic environment, you'll often have to set up these to work if you want some of the above features. Ubuntu will already have all this set up though.


Ok, now I understand why I was puzzled. Ubuntu opens a file manager window whenever I plug in a USB drive and it tells me about available WiFi networks. XFCE on my netbook does the same. Probably some other distro is more silent about those events.


Indeed, ever since I've been using Linux (mid-90s) I've heard that Linux was a big mess of heterogeneous stuff held together with duct tape while FreeBSD was clean, logical and organized.

I've tried FreeBSD several times but at the end of the day features and compatibility beat clean and logical.


Compatibility? Yes, FreeBSD doesn't support the latest laptop stuff yet (like 802.11ac), and doesn't include the driver for the Wi-Fi adapter in MacBooks (because that driver is a piece of crap). But a lot of hardware is supported.

Features? Well, FreeBSD has a lot of them (big ones: ZFS + DTrace + Jails + pf) and they work great out of the box!


It doesn't have a driver for the integrated graphics of the last two generations of Intel CPUs. Both Linux and OpenBSD have fully open source drivers for this.

It's unfathomable that this ultra-common hardware is unsupported, until you consider that most FreeBSD devs don't even run it on their laptops; they run Mac OS with FreeBSD in a VM / via ssh, because it's not up to the job.

It's at least 5 years behind Linux, probably more.


I didn't grow up on Linux, but rather was accustomed to OSs that do everything for me: first Windows, then Mac. And when those failed me, by feeling too complex and not giving me enough control as a power user, I discovered Arch Linux.

As opposed to other distros, Arch is the only user-centric distribution out there. When something goes wrong, I don't have to deal with "where does this come from", because the distro itself doesn't interfere with anything. It's either a bad config on my part, or an upstream bug.

Upstream bugs are fixed immediately due to the rolling release nature of Arch. And configuring my system is extremely easy, since 95% of the problems are solved by looking at the wiki.

I won't bash on other distros - to each her own. But I urge any power user that wants full control over their system to try Arch. It's truly another class of Linux.


All, or the vast majority of, Linux distros give you the same amount of control. The only difference is that most of the others customize a bit (Debian supposedly just sets sane defaults).

Never in my life have I had to work ON my computer as opposed to doing work using my computer as much as I have using Arch.


>Never in my life have I had to work ON my computer as opposed to doing work using my computer as much as I have using Arch.

I'd say FOR instead of ON, but otherwise, that was my impression of Arch.


I was a similar user, but lately mainstream Linux is forcing these monolithic, fragile systems on every user. I switched to FreeBSD a few years ago and haven't looked back; it has the same advantages you describe, only more so.


The most productive environment I've ever used is Gnome 2 using Compiz. It was incredibly easy to flip between windows. Compiz has since been abandoned and only the most die-hard fans try to fudge it into working on modern systems. I gave up after Debian Squeeze and just started using tmux instead. I felt that Linux was losing its way after that.

Eventually I stumbled across Cinnamon (and/or Mate) with hotcorners and tmux. It doesn't have the previous configurability but it's still pretty good.

The reason people held onto XP for so long was that it just worked well and was consistent. Many of the Linux GUI people should take that to heart. Instead they always seem to be playing catch-up with, or trying to surpass, Windows. Gnome 3 and KDE 3+ have been so very unwieldy, resource hungry, or lacking in features. They've been the Windows 8 equivalent of the Linux desktop.

Even to this day, most people don't like the Metro style interface (unless they are playing with it on a phone). I'm blown away by how difficult it is becoming to use Windows. I will go into the Metro style settings and see that a configuration option is not present and then have to go into the good ol' control panel. Or vice versa. I'm getting increasingly drawn towards using Powershell by default so I don't have to backtrack.

Windows has been trying to force new "UI paradigms" to sell more copies of Windows. Linux should realize it doesn't need to adopt the new and flashy... unless it really is an improvement.


If by "abandoned" you mean "adopted to most DEs and inserted at their core code", then yes, it was. The less used visual effects were striped from it in the process, but you can still install them.


Peculiar. I'm running Compiz on a 14.04 XUbuntu setup. It was a bit weird to set up, but after changing the window manager, I was able to get Compiz up and running with most of the bells and whistles.

I did notice the 3D effects make some horrid tearing artifacts on rotate. Turning that off gives me the nice 8 desktops I'm used to.

(I have a Thinkpad T61, with the rotate-cube keys bound above the arrow keys.)


I think KDE is as XP-y as it gets. No big frills, but you can configure nearly everything you wish.

Ever tried KDE?


The only problem with KDE is the default configuration. Every time I install it on a machine, I must hurry to change the mouse to double-click. Everything else is good enough to use without changing/tweaking it.


I think it is an old-timer issue. I've been using Linux on my desktop for ca. 15 years. I also had a mysterious permission problem after my last Ubuntu upgrade. In contrast to 10 years ago, I no longer enjoy hours of debugging to get USB working. It was fascinating then.

For the record, my permission problem was solved via "sudo pam-auth-update --force". Thanks Arch Wiki for giving me the crucial hint.


I agree wholeheartedly on the old-timer issue (ca 17 years). Too much work, children and other responsibilities to justify hacking hours on the OS itself. I think the current state of Linux – especially on the desktop – is in a state of flux, not to call it disarray, and it makes me grumpy that after all these years of hacking, debugging, patching and bug reporting, things end up pretty usable, and then "they" manage to break some of the most basic functionality again. Only this time, it's gotten a lot harder to make sense of the whole mess...


So, why not use a distribution that doesn't change every six months or two years? Such as Ubuntu LTS or CentOS.


I am having a similar problem (password sometimes required to shut down or mount disks) with my latest Ubuntu, so I would appreciate a little more detail.

Do you happen to have a link to the arch wiki entry you are mentioning? What was exactly your problem?


I had problems with shutdown, hibernation, USB sticks, wifi notifications. Here is a very similar launchpad bug report:

https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1317518

As in the bug report, the problem was not reproducible. While my Thinkpad had these issues, my desktop was fine.

Unfortunately, my notes say nothing about the Arch wiki. Maybe I confused that with another issue.


Ok, thank you for the information. I will try to work from here :-)


I've been using Linux for about as long as you. I still enjoy spending time trying to figure out why things broke about as much as I did back then (which is to say, not much).

My impression is that figuring things out is getting a lot harder; the big mess of D-Bus, GNOME, NetworkManager, anything ending in Kit, and systemd is not only in a constant state of flux, but their internals are also poorly documented (or not documented at all).


I’ve googled this issue, and found all sorts of answers pointing to polkit, or dbus, or systemd-shim, or cgmanager, or lightdm, or XFCE, or…

Googling to troubleshoot Linux sucks. Part of the suck is that Google's PageRank favors old pages. Part of it is that its keyword matching favors forum threads where the terms are in someone's signature.

And part of the suck isn't Google's fault, because GNU/Linux documentation is dense to the point of opacity. It is great for professionals and not so great for amateurs.

Worst of all is that FOSS documentation has a lot of "not my problem" links. If a piece of software is doing something bad to Firefox, its documentation will mention Firefox and link to the Mozilla homepage... so to speak.


Here’s the crux of the issue: I don’t even know where to start looking. I’ve googled this issue, and found all sorts of answers pointing to polkit, or dbus, or systemd-shim, or cgmanager, or lightdm, or XFCE, or… I found a bug report of this exact problem — Debian #760281, but it’s marked fixed, and nobody replied to my comment that I’m still seeing it.

If this is the crux of the issue for the OP, I don't really see how it is specific to modern Linux. Maybe it's just me (though I doubt it), but what is described here is basically how working with Linux (and software in general, to a lesser extent) has been for me as long as I've used it. Small annoying problems here and there which take quite some time to find a fix for, if one exists at all. And after a while, if there are too many of them, it becomes irritating and you write a blog post to nag about it :]


He's nagging that modern linux isn't like old linux, because the old debugging techniques don't work.

I feel the same way. Maybe modern linux isn't a bad thing viewed neutrally, but it's not the same thing as the one I love.


Reading the article, this is not about systemd. Debian wants to offer a choice between systemd and something else. So they have things like systemd-shim, cgmanager, etc. He says he's using systemd, but while trying to figure out where his problem is, he's seeing references to systemd-shim, cgmanager and so on. Those things are not systemd, nor do you need them if you use systemd.

Offering the alternative makes finding solutions to his problems more complex and confusing.


I think the main point of this article is that man pages are incomplete on Linux. When I have to explain some unix arcana to a colleague, I often start by saying that man is the most important command in unix. But I am always obliged to say that on Linux, it is often incomplete.

When I learned unix 20 years ago on sunOS, man pages were complete. It has never been the case on linux. I think man pages should give all the assumptions about the system that may someday break or give pointers to documentations.

I often have to use strace to analyse issues. When dbus starts to be involved, I am often lost.
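
A typical incantation, where the traced command is just a placeholder:

    # follow forks and watch which files and sockets get touched
    strace -f -e trace=open,connect -o /tmp/trace.log some-failing-tool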


I don't like me-too complaining, but I'm in pretty much the exact same situation: I've been running Linux since the nineties, and the last months I've been having problems with Debian that make my systems unusable and that I just don't know how to solve. Random hang-ups on right-click (KDE and DBus), screen blanking every few seconds (power saving bugs), screen corruption and windows disappearing (graphics bugs), daemons not starting because of partition layout (systemd related).

I'm actually somewhat worried about the current state (especially on the desktop). It never was a big deal to me that some flashy functionality or hardware support took longer to show up in Linux and *BSD, because once it arrived, it usually worked well and was stable. Now, however, there are a lot of annoying complex bugs that are hard to trace and don't seem to be actively analysed and fixed.

It's not a good sign that as a hacker I don't know where to begin to properly track some of these bugs myself, and the people who receive the bug reports don't seem to have a clue either.


Linux distributions (because that is actually what you are talking about) have not lost their way but their founders.

Face it, there are only so many years you can work on the same issues for every new model of laptop. People get tired, people retire from projects.

But it is still open source. You are right that many behaviors (of Debian, apparently the focus of your article) suck. Well, jump in!


For simplicity and clarity of design, may I suggest FreeBSD? Though I am not sure I would want to run GNOME or any of those crazy things; I run Joe's Window Manager, jwm, because all I need from a window manager is to manage windows, not handle file types or show me a directory tree or emulate the MS task bar or whatever.


FreeBSD will not help him since "linux" is not his problem at all. If he didn't run all the fancy things, and did what you do, then his problems would be somewhere between non-existent and clearly-debuggable.


This had me laughing. Not the FreeBSD bit. That bit is serious. The 'didn't run all the fancy things' bit. That was funny.

The problem is: in Linux, pretty soon, you cannot do without the fancy bits anymore.

I run a bunch of servers. Servers need a firewall. Nothing fancy, just iptables. Right?

I'm running Ubuntu servers. They have 'firewalld' installed. Firewalld uses dark magic to manipulate iptables and ip routes. Oh, it's all great when it works, for as long as it works. You can simply open up a port by issuing a firewall-cmd command. Until you can't. Until firewall-cmd says it can't connect to firewalld even though it thinks it is running. And firewalld can't restart because it thinks the configuration files are wrong.
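
When it works, it really is as simple as advertised; e.g. (the port number is just an example):

    # open a port persistently, then reload to apply
    firewall-cmd --permanent --add-port=443/tcp
    firewall-cmd --reload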

I'm not a Linux newbie by any standard, but I could not solve this problem. After restarting firewalld, all the routes on the machine were gone. And thank god for iLO.

This is on a production server providing secondary services to 200,000 users. Soon after, I found similar issues on our loadbalancers, webservers, database servers, all running firewalld.

There was no solution other than to reboot the server. I tried debugging firewalld and ran into (in no particular order) apparmor, dbus, python, firewall-cmd, firewalld, network-manager and libvirtd. Nowhere was a hint to be found of why firewalld didn't work.

I don't consider this to be anything fancy. This is supposed to be a simple wrapper around iptables. It is supposed to 'just work'. It is supposed to leave clear logs about what it is trying to do in /var/log/syslog.

Sure, I'm getting old and cranky. But I'm really tired of getting called out of bed at 03:00 because of this 'innovation' that tries to solve edge cases. And solves them badly.
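
For the record, the 'nothing fancy, just iptables' setup I actually want is a handful of lines (a sketch; rules and ports to taste):

    # allow loopback, established traffic and ssh; drop everything else
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -P INPUT DROP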


Huh? I never saw something like firewalld on Ubuntu? There is ufw but it's not enabled by default..

Looks like you mean Fedora/RHEL7? Ubuntu on the server (14.04) is quite okay in my experience.. you can even uninstall dbus and still use upstart..

RHEL7 made me furious (NetworkManager by default...) but I'm not used to it, so it may just be lack of experience.


firewalld is available for Ubuntu in the 'universe' repo. Don't use it. It's horrible, unstable, slows down the network and is impossible to debug.


linux doesn't eat its own dogfood anymore. This would never be tolerated if the devs actually used their product. It's all about serving the imaginary, market-researched noob(-only) user of gnome mobile phones that don't exist, and everyone else has to go away because they don't matter anymore. To freebsd! Things actually work there. It's the universal OS of the '10s, just like Debian was the universal OS of the '90s/'00s but is now just a gnome bootloader for ubuntu.


I don't think this is anything new. I've certainly never felt that Linux was "clean, logical, well put-together, and organized." Linux distros are a mish-mash of stuff of various levels of quality built by people with different goals, put together by volunteers into something that mostly works in the more-commonly-tested areas.

Back when I played around with my OS as a hobby I was able to keep up with most of the issues and changes, but without investing that kind of time, I don't have much choice but to leave things at their defaults, install updates, and hope for the best. Things weren't better before, I was just more involved, and I wonder if the same is true for the author.


I have the feeling that it's that way because the use cases today have changed while the fundamental parts of the system haven't. Unix was designed in the 60s and 70s, and everything that came later was bolted onto what already existed. This includes stuff that's standard unix behaviour, for instance abusing the user system to gain better isolation of daemons.

This all has to be done to stay compatible with previous systems and raises complexity.


I don't think speaking about "modern Linux" and then referencing one user's experience with one distribution says much about the entire movement.


Are there distros where his complaints about lack of transparency, documentation and conceptual clarity don't apply?


I tend to agree with this article in some ways .. I've been using Linux since the days of the minix-list, and have over the years been bothered by exactly the sorts of things that are described here .. it seems every year/new release of my preferred distro (Ubuntu Studio) I have to re-learn things that are not well explained or documented.

So I've just kind of gotten used to using the source. Seriously! If I find something I don't get, I build the package from source, and debug it. This is really the only way I've been able to survive as long without tearing my hair out in frustration over the years.

It's a glib response to the problem, but it really works.


Another thing that bothered me in Debian was subpixel hinting and font smoothing: it wasn't enabled by default (or it wasn't complete, and adding a fontconfig snippet only half-fixed it). Is this still an issue in Jessie? That was really bad UX.


I think that had to do with some patents on the truetype font hinting interpreter in Freetype.


IIRC those patents are related to MS ClearType and have expired (but my memory may be failing me).


Some of them have expired, but there are still patents related to subpixel font hinting and layout which won't expire before 2018.


I've never been able to reliably use Debian without a /lot/ of frustrating configuration. I'm not a bored high school student anymore.

Back when I used sarge, it was okay - there was some configuration I had to do to get X working, and at one point I had a pretty sweet custom kernel build on a laptop that extended its battery life 35% over windows.

Decided to spin up a VM to use debian as a dev environment again recently, and -- well -- after three hours of futzing with it I deleted the VM and booted Ubuntu instead. Had issues getting virtualbox extensions working; some others that I can't remember.


This classifies as another useless systemd rant. Everybody switched to that ages ago. Can every distro be wrong?

Anyway, I've been using Debian Sid since 2000 and, for what it's worth, it "just works" on my hardware now and it's much easier to install/manage than, say, five years ago. Then again, if you mess with the default configuration you're looking for trouble and should accept that problems are harder to fix for corner cases.


The distros were all wrong when they adopted hacky sysvinit configurations with things like Makefile-style concurrency, initscript headers parsed by a preprocessor just so they could get some half-assed dependency system, not modularizing their common initscript functions into library files and instead rewriting everything from scratch in every script, and so on and so forth. And then ignoring all the alternative init systems besides systemd and Upstart that were around well before.

I'm not saying they're wrong this time, but there's no reason why not. The haphazard state of the Linux desktop that the original author laments might be an indication that they're wrong in some places.


Well it may be a bad time to start with Debian unstable, when they are switching to systemd (or are in the process of doing it).

I remember updating to Debian testing (never had the nerve for unstable) when they switched to Gnome 3, and I experienced bugs (really nasty ones). When they make big changes, things are going to break.

If you don't want to have things broken, either use a stable distribution, don't upgrade for some time or live with it.

Right now I am mostly using Arch Linux, but I've heard the change to systemd had some rough edges for Arch too (I wasn't using it at the time).

I can say that I am completely fine with systemd on Arch Linux and I don't understand the big problem some people have with it (on a philosophical level I do understand them, but not on a pragmatic level).


The ultimate problem is money and leadership. Open source lacks too much of both. In the end, proprietary and vertical systems tend to dominate. Interestingly, it looks like we are on the verge of a new explosion in such systems thanks to mobile. Besides the dominant Android, iOS and Windows, new systems are on the horizon: Tizen, FirefoxOS, webOS, Sailfish and others.


This will continue as long as so many people want linux to be a free version of Windows or Mac OS X, instead of a free PC unix.


It's funny to hear the name Debian associated with the label "modern." I don't mean to diss Debian, it's a solid distro and has a lot going for it, but it is seriously behind all of the cutting edge distros that I'm used to.

Personally, I enjoy new technologies and systems in Linux. I used to run Gentoo and other distros that constantly brought out cutting edge software. And I'm used to somewhat alpha/beta quality software that mostly works. And because it worked well compared to my experience with Windows software, I wasn't too bothered.

Maybe it's because my background is in hardware, but new software has always taken a while for me to grasp and appreciate. But in the Linux community, I've always been able to find info, solve the problems I've come across, and learn about the system. Windows problems usually can be solved because large masses of people have tried so many things, but I wouldn't get any enlightenment about computers or software when fixing Windows problems. Maybe my world view has been skewed by comparing Linux to Windows, but I've always felt that I'm able to learn and understand more with Linux vs Windows.


> it is seriously behind all of the cutting edge distros that I'm used to.

Are you familiar with branches beyond Stable? I switched from Arch to Debian Unstable (which uses a rolling release) and find their edges to be about equally sharp.


I haven't touched Debian in a long time, and I didn't know that unstable was a rolling release. Thanks for the info, I might look into it someday.


Unstable has always been a rolling release; it just gets frozen for some time before each stable release, because the process of getting a package into testing requires it to be in unstable first.


How exactly is it behind? I haven't noticed it. If you mean it doesn't offer half-cooked buggy software for everyday use, I don't call that behind. I'm using Debian testing for regular desktop needs and it's a decent balance of stability and up-to-date features.

At times I do feel like packaging in Debian lags because of a lack of maintainers. For example, KDE could get packaged faster (there is no Plasma 5 in Debian yet). But usually it catches up if it falls behind.


I guess what I mean is, it doesn't offer half-cooked buggy software. :-) Well, at least it works about as well as Windows stuff. This is from many years past, so that's the experience that I've had. But as other commenters stated, and I didn't know, the unstable branch is much more current than I would've expected.


Any blog post title which ends in a question mark can be answered by the word no.

- Adapted from Betteridge's law of headlines [1]

[1] http://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines


I've never had a problem with Arch Linux. Why not run a stable rolling release instead?


Debian unstable pretty much is a stable rolling release. Don't let the "unstable" tag fool you; stability is a relative term. The author was describing a community problem, not a methodological one. It could also be a technical one: Linux seems to have gotten an order of magnitude more complex in recent years, while user patience has fallen similarly. It's easy to imagine people with the kinds of problems described just dealing with them rather than troubleshooting and sharing what they learned.


> Don't let the "unstable" tag fool you, stability is a relative term.

I don't see why people throw this around so much; it really isn't. The most recent example I can remember is when Debian decided to remove the NVIDIA drivers from Testing and you ended up with a nonfunctional desktop [0].

[0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=755020


In my experience, "testing" is a lot more unstable than "unstable" itself, at least when upgrading. I haven't had any major problems running Sid in the last four or five years.


> Debian unstable pretty much is a stable rolling release.

I'd say testing is. Unstable is not supposed to be normally used except for debugging stuff.


This is not only Debian; I have had this error on an Ubuntu machine multiple times, not at boot but after a few days of use, I guess.

Also, please call it GNU/Linux, not only Linux. Linux is a very small part of your operating system.


I completely agree, unfortunately. Still fighting not to get a Mac for my job (I'm a programmer). But I don't know for how long.


I think I am going to switch to plan 9.


Jessie has not been released yet... And if you like good documentation, you should go with Red Hat anyway.


For everyone wanting to use a modern-day Debian, I recommend Linux Mint Debian Edition 2 "Betsy" [1]

[1] http://segfault.linuxmint.com/2015/02/about-betsy/


I was a Debian/Ubuntu user for many years. I switched to Fedora/GNOME when Unity started to become standard and have never regretted it.


Don’t blame Debian for Unity, please. Sometimes, saying “Debian/Ubuntu” is like saying “C/C++”. They are separate things and must be treated separately.


Is Vorlon poetry a typo for Vogon poetry?


Vorlons are aliens from Babylon 5.


I was also confused by Vorlon as Vogon Poetry is "a thing"

http://en.wikipedia.org/wiki/Vogon#Poetry

Maybe he should have used Klingon, as "alien poetry starting with a V" instantly makes me think of HHGTG.

..anywho :)


Right, Vorlons aren't known for their poetry, Vogons are.


Is there some way we could just prefix these with [systemd-rant]?


Linux has not lost its way. It has truly followed its way, and that's what you got. It was always clear (at least to me and many other people) that if you don't design a system but put it together step by step, you will end up with a mess.

I have always been on the NetBSD side, and it's bright over here. Even FreeBSD suffers from problems similar to Linux's, trying to get new features in fast. That's just how software development works (or doesn't): you have to follow proper design to build something for the future.

Linux will always be stuck with past design decisions, and in order to stay compatible, things will need to become more and more complex, until they break.


The whole point of systemd was/is to design a coherent system that integrates all the core parts. It's also what caused all the problems for OP, unlike the "step by step" approach that was working fine for him.



What new features is FreeBSD trying to get fast?


I think this is a direct consequence of the commercialisation of the Linux desktop by Canonical, Red Hat, et al, but especially Canonical. The amateurs were pushed out and the professionals swarmed in, and the requirement became to bring in more "converts" from Windows.

A number of bad decisions started to get made as these sponsors became frustrated with the low rate of uptake. One of them, I'm sure, was the perceived need to keep up with the competition, essentially getting into an arms race with the strategy (if there was one) of outdoing Microsoft.

A good example of this is the absurd introduction of PulseAudio, which introduced features that nobody asked for while simultaneously breaking audio for a large number of users including myself. All because a similar (but working) feature was introduced in Vista.


What I can't figure out is how this actually helps that goal anyhow. My fight with policykit (or polkit or whichever the hell one it was) is that with XMonad, I couldn't get network manager to work properly, even when I ran it and its configurator as root. For my own desktop, I'd be happy with either of "tell policykit that 'jerf' can do anything" or "tell policykit that 'root' can do anything", literally the simplest possible configuration.

There appeared to be (at least at the time) dick-all documentation on policykit, excepting magic invocations on the Ubuntu forums to do this or that. Reading the configuration files suggests the primary use case for policykit is to work in large installs like a university lab where permissions are being portioned out in a highly granular manner via third-party authentication services. If this isn't true, don't blame me for coming to the wrong conclusion. I do not give a shit about any of this. I'm on a single-user machine and the one user can do anything it damned well pleases (to a first approximation). But there is absolutely no clue I could find about how to accomplish this.

By just fucking around and turning off permission checking in every manner I could work out, I eventually got myself into a position where "root" was capable of adding new networks, but my normal user was only permitted to switch between existing networks. (Incidentally, read that sentence again, it's actually quite surprising. The result of what I did should have been to let everybody do everything, right? No. Why not? Hell if I know.) This was enough for me to declare victory and move on, but it really isn't a victory.

And the point of me posting all this isn't so much to bitch; that was just a bonus extra. The point is, if this is a "professional" solution to the problem of system permissions, I have no idea how it meets that goal. There seems to be no way for the aforementioned university administrators to learn how to properly configure it for their use cases, no logging to help them get it right. Putting on my sysadmin hat, I'd never trust this system any further than I could throw it, it's so opaque. I would get a bug report that Bob was able to use the video camera when he shouldn't be able to, and I'd push a fix, but I'd have virtually no confidence that I'd actually solved the problem, to say nothing of continuously wondering exactly what my permission scheme was permitting to people. To me, it looks worse than a closed source solution... at least the closed source has a support line you could call.
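
(For reference, I'm told that on the polkit 0.105 "local authority" branch that Debian and Ubuntu shipped, the "jerf can do anything" configuration is a small .pkla file; an untested sketch, with a made-up file name:)

    # /etc/polkit-1/localauthority/50-local.d/jerf.pkla (hypothetical)
    [Let jerf do anything]
    Identity=unix-user:jerf
    Action=*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes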


I wonder if NetworkManager or polkit is doing some sort of opaque meddling with POSIX capabilities? The CAP_NET_* options, in particular. I'm grasping at straws here, certainly, but your description of the events leads me to suspect something in that general direction.


Relevant quotes for TLDR:

"It worked, but I don’t know why"

"I don’t even know where to start looking."

"That’s about as comprehensible as Vorlon poetry to me"

A potentially politically incorrect answer to the article title: maybe it's not Linux that lost its way, but some old-time users? (who forget the beginner's spirit of looking for answers on forums and in manuals, of trying to understand before casting judgment)

Linux has always innovated around the paradigms, taking inspiration from everywhere. It was never "clean, logical, well put-together, and organized". It has always been messy, but in a good way.

When I first discovered Linux, I had to adapt to all these new things. That was in 1992. When I decided to use a modern Linux distribution on my laptop again in 2014, I had to adapt again. I have no doubt I will have to keep adapting for the rest of my life.

The error would be to consider that the "right ways" are fixed, and that innovation should be stopped.

There will be many new things, some will be kept, some will be discarded. Change is the only thing one can constantly bet on.


Some other relevant quotes:

“For years, I used to run Debian sid (unstable) on all my personal machines.”

“Sometimes things broke. But it wasn’t a big deal, because I could always get in there and fix it fairly quickly, whatever it was.”

“I’ve googled this issue, and found all sorts of answers pointing to polkit, or dbus, or systemd-shim, or cgmanager, or lightdm, or XFCE, or… I found a bug report of this exact problem — Debian #760281, but it’s marked fixed, and nobody replied to my comment that I’m still seeing it.”

… doesn't sound like somebody “who forget the beginner's spirit of looking for answers on forums [etc]”


Really? He says he doesn't want to look anymore, then that it's too complicated. He mustn't have searched really hard, or he would have found that it's udisks' job.

So why couldn't he find it?

It's not that a current Linux system is N times more complex than the sid he used to run; it's that things changed. As someone else properly noted in this thread, dbus was one of those important changes. Now it's systemd. Next it'll be Wayland or something else.

You're correct that he still tries to do some searching, but he expects things to be like they were in the past, where he "could always get in there and fix it fairly quickly, whatever it was", i.e. to be just as efficient, without learning the new tricks!

The beginner's spirit is to want to learn new tricks, looking for answers and being really insistent on understanding better, instead of stopping after a google search and a comment on a bug report.

As said in another post I loved for its straight-to-the-facts comment: "In contrast to 10 years ago, I no longer enjoy hours of debugging to get USB working. It was fascinating then".

It's not more complex. A Linux system is a time investment, where knowledge has a half-life. If you can no longer, or no longer want to, commit to learning new things at the same pace, but still want to fix things as before, maybe it's not the best idea to stick with Linux.

IMHO, the author would be best served by trying, say, FreeBSD, where things seem to change at a slower pace. But he would still have to learn this new thing, and I strongly believe he doesn't want that; he just wants a system as before, without any changes. Ain't gonna happen.


Why should he have to put in the effort to do something that used to be, and should currently be, trivial?

It's an aesthetic decision.

A car analogy: anyone who doesn't like tail fins on cars is too old to be driving and should go back to a horse. An architecture analogy: something incredibly cluttered-looking, like gothic or victorian architecture, is the only way to design a house, and anyone who wants clean modernist design is aesthetically wrong and should go away. Cars and houses are supposed to be ugly and pointlessly expensive and cluttered-looking, and anyone who disagrees is inherently wrong because, um, well, just because they're noobs to cars and houses.

The advantage of FreeBSD is that its design aesthetic is dramatically superior to the Linux design aesthetic. Its devs have better taste in OS style. As a side effect that makes it easier to use, more productive and more noob-friendly, but that's merely a side effect.


Sure, the mount may be udisks' job. But udisks will not do it unless pol(icy)kit gives the thumbs up. And that will only happen if consolekit/logind verifies that the user has an active seat. And that in turn depends on the login/session manager doing its job.

All that to automate "mount /dev/sdb1".
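
The manual end of that chain, assuming udisks2's CLI, is:

    # same code path driven by hand; polkit still gets consulted
    udisksctl mount -b /dev/sdb1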


*Vogon poetry



