Ubuntu continues to rule the cloud (zdnet.com)
188 points by reddotX on Aug 28, 2015 | 144 comments



A big reason why it dominates the cloud is tons of web developers use Ubuntu as their desktop Linux of choice. It's really very convenient to develop and deploy on the same OS. I don't think it's accurate to say "The desktop is a nice add-on, but it's not Canonical's focus nor should it be."


A big reason why it dominates the cloud is tons of web developers use Ubuntu as their desktop Linux of choice.

This is exactly why people were screaming at Red Hat back when they decided to kill their (then) consumer offering, and later when they started actively eschewing talk about desktop Linux. Everybody knew that developers drive adoption of server OS's and that once developers started moving away from Red Hat derived distros on the desktop that they would eventually start moving away server-side as well. But I guess the higher-ups at RH either didn't believe, or didn't care.

Personally I still use Fedora as my personal desktop OS and mostly run CentOS, Fedora or Amazon Linux on servers. I never saw the appeal in Ubuntu. From what I recall, their main claim to fame had always been "we pre-ship all the proprietary audio/video codecs that RH won't ship" and the supposed superiority of apt over yum.

I use Ubuntu on the desktop on my work provided laptop and it works well enough, but I don't see any particular advantage over Fedora. And I prefer yum (now dnf) to apt, so I doubt I'll switch anytime soon. Still, if RH cares about this sort of thing, they really should start embracing "Linux on the desktop" again.


Seeing as you can recall as far back as before the "Enterprise" appellation to RedHat, you're clearly a long-term user of Linux! If you've been using Linux for that long, I'm pretty sure that all the distributions are basically the same, no? Technically they all draw from pretty much the same pool of code, with various different choices due to timing. And, if something isn't quite right, you can always remember how to install things from source as we did in the "old days".

Sure, there are plenty of differences, but these are mostly the equivalent of vim vs emacs - if you're good enough you can make either work. They're mostly just matters of personal preference.

Though many users don't have the benefit of your level of experience. And there should always be room for new users benefiting from Linux. So Ubuntu has tended to focus on being a Linux that works "by default" on the desktop and on making sensible choices for developing in the cloud.


Yeah, I started with Linux about 1997 or so, and by about 2000 or 2001 was running Linux exclusively as my desktop OS.

The distros certainly have a lot in common, but each has its quirks and idioms, and once you learn them, it's annoying to have to bother learning another. Like, once I've memorized how to do a lot of things with RPM and Yum, it becomes "friction" to suggest that I learn to do the same things with apt.

I don't have any problem with Ubuntu, and I think it's fine for new users. I stick with Fedora really for two reasons:

1. The aforementioned familiarity.

and

2. They are more ideologically committed to software freedom, in terms of not shipping things like mp3 support, where you have patent encumbrance issues. I have been trying to gradually shift my life to a place where I don't need that stuff at all, or where I do, I use OSS / free software to interact with it. I mean, I'm pragmatic enough to use VLC to watch DVD's, but I don't go out of my way to use any content that isn't in a free format, etc.


> The desktop is a nice add-on, but it's not Canonical's focus nor should it be.

Given that one of Canonical's main projects is its own desktop system that works the same on desktop and mobile, that statement simply isn't true.


Even more so given that Ubuntu started as a desktop Linux distro.


I run Ubuntu locally because it tends to work better as a VMware guest. You can get other distros to work with the quality of Ubuntu, but not without a lot more hassle.

If I get to choose I'll run arch linux, but inside vmware it's ubuntu hands down.


I use Mint 17.2 inside VMware and haven't had a single problem. Not sure what VMware version you're using, but 'hands down' suggests other distros have a lot of problems, and I don't see them.


That would be because Mint is pretty much Ubuntu (it uses the Ubuntu archives and packages).

There is added UI customisation on top, but that's irrelevant to it working better or not as a VMware guest.


I'd still be interested what massive probs other distros have on e.g. VMWare?


Your wording there 'massive probs' is disingenuous. What I said was simply "works better"; I never said anything about major problems. In fact, I specifically said the other distros can be made to work just as well, but it takes more effort.

But to answer your question, I've found desktop performance to be better in Ubuntu for the same specs, many distros don't play well with pause/unpause (it can cause time skew that messes with things like https certs), and things of that nature. Then of course there's the "easy install", although a few other distros feature it as well.

Nothing that doesn't have fixes, but as someone who is more interested in getting the work done than tinkering with the linux install, I just go with Ubuntu.


It's a burden to install VMware drivers on systems other than Ubuntu: either you use open-vm-tools and lose some functionality, or you try to compile VMware Tools yourself, which is really hard on anything other than Ubuntu.

However, open-vm-tools is getting better every day. In the past I had lots of trouble with it, and up to Fedora 18 you mostly needed to rely on the VMware native tooling; Arch had the same problem. Currently the only thing open-vm-tools is missing is the 'unity' mode. On Arch you still need to rely on the AUR to get the filesystem driver and the network driver, which is really awful, and on Fedora you can't install these in an easy way either. Only Ubuntu ships them out of the box, mainly because open-vm-tools-dkms is not 'free'.
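
These days getting the base tools is at least a one-liner on most distros (package names are from memory, so treat them as approximate):

    sudo apt-get install open-vm-tools    # Ubuntu/Debian
    sudo dnf install open-vm-tools        # Fedora 22+ (yum on older releases)
    sudo pacman -S open-vm-tools          # Arch (community repo)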


It's been a couple of years, but the last time I tried Fedora in VMware the video only had 3 resolutions, and the audio didn't work without a lot of googling and tweaking. Without installing guest tools, an Ubuntu guest would have 6 or more resolutions to use with xrandr, and audio worked out of the box.


>but not without a lot more hassle.

Have you tried the latest versions of Fedora? The only thing I needed to do on Fedora was install video codecs and flash, which weren't automatically installed as they are in Ubuntu.


Friends don't let friends use RPM.


I have not; perhaps I was being a bit hyperbolic, but I think in general the sentiment is still mostly true.


If we're talking about Desktops running on VMWare, Fedora should give you what you need. Of course it's personal preference, so Ubuntu will work as well.

However, if you are using it as a server of sorts, you really shouldn't use either of those distros. Use CentOS or RHEL, or something purpose-built for a long-term stable server. Yes, I'm aware of Ubuntu and Fedora server editions -- but both distros are focused on being jacks-of-all-trades instead of purpose-built as a server only.

Clearly, saying Ubuntu works best on VMWare is a fallacy. Any Triple-A distro will work fine with minimal fuss.


Thank you for not being a jackass who starts preaching to me about what I should or should not be doing with absolutely no knowledge of my work or my use case.

I'm so glad you avoided doing that.


I'm not sure why you feel you need to be so disingenuous here - you're all over this thread arguing with everyone about Ubuntu being the best.

But I'll bite.

From your several posts, it's become clear you are running an Ubuntu guest as a desktop on top of VMWare. It seems rather silly to pay for a full VMware license to just run some desktops, not to mention hypervisors like VMware and Xen really aren't great at doing just desktop guests.

You make claims that Ubuntu works better "hands down" on VMware than any other distro - which is absurd. It's far more likely you either haven't tried, or did something wrong. The things you point to that are "better" on Ubuntu are virtually (or literally) the same things on other distros, or have zero bearing on whatever distro the guest is (pause and resume are features of the hypervisor, not your distro).

In any event, everything I previously said still holds true. If you need a server, use a server-focused distro.


You misunderstood me, but it looks like you're not the only one who did so I probably could have worded it better.

> If I get to choose I'll run arch linux, but inside vmware it's ubuntu hands down.

What I mean is the distro I reach for first is Arch Linux unless it has to run in vmware, then the choice for me is Ubuntu, hands down.

I was making no comment about it being the best vmware distro "hands down". The entire post was just a quick note that I choose to run Ubuntu for reasons other than the environment reasons cited by the poster I was responding to.

Having said that, this will be the last time I respond to you. You've obviously got some strange personal stake in what I consider to be a good linux distro and I have better things to do than argue with you about it.


> However, if you are using it as a server of sorts, you really shouldn't use either of those distros.

You are saying around 60% of sysadmins using Amazon instances are wrong. The odds are not in your favour :)


> You are saying around 60% of sysadmins using Amazon instances are wrong.

Yet, 100% of EC2 is built with RHEL/CentOS + Xen.[1][2]

Ubuntu was the default VM when they launched EC2, so most users naturally just used the default. But even in Amazon's eyes, when doing "serious" server work, reach for a purpose-built server distro -- not a jack-of-all-trades.

[1] http://www.zdnet.com/article/amazon-ec2-cloud-is-made-up-of-...

[2] http://bleikertz.com/blog/amazon_ec2_underlying_architecture...


My experience with debian inside virtualbox on my local machine has been perfect. I've also had no issues with running it on vsphere, but I haven't tried with vmware workstation.


I tried VirtualBox a while back, and the reason I eventually moved back to VMware is that vbox requires you to hit some APIs in order to run a VM detached, which I do for a few of my VMs. With VMware you can do it by setting a preference and closing the window.

It's not a big thing, but I was already established with vmware so I decided it wasn't worth the effort to work around it.

I've also read that VMware is more performant than VirtualBox, both in general and in terms of hardware-accelerated GPU. All three of those just pushed me toward staying with VMware.

However, I will say, I vastly preferred the UI for virtualbox. I've seen internet fights about it, but for me, virtualbox most definitely has the better UI.


> vbox requires you to hit some APIs in order to run a VM detached

Not anymore. You can just launch the box without any GUI with one command line:

https://www.virtualbox.org/manual/ch07.html#vboxheadless
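
For example (the VM name here is made up):

    VBoxHeadless --startvm "devbox"
    # or, equivalently, via the management front-end:
    VBoxManage startvm "devbox" --type headless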


That's good to know, it always seemed strange that they didn't have a simple way of doing it.


Assuming detached means without the window: you can also hold down the Shift key when clicking the start button, and it will run without opening the separate window.


Setting up VirtualBox without a GUI is a royal pain!

All of the command line parameters are documented, but there are no complete examples of a typical setup. Not too many people bother with it either, so examples online in blog posts are only semi-useful.

I wound up setting one up on a different machine, then working through the various commands required to get my headless config to match that example.
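
For anyone attempting the same, the rough shape of it looks like this (names and sizes are made up, and the flags vary a bit between VirtualBox versions):

    VBoxManage createvm --name demo --ostype Ubuntu_64 --register
    VBoxManage modifyvm demo --memory 1024 --nic1 nat
    VBoxManage createhd --filename demo.vdi --size 10240
    VBoxManage storagectl demo --name SATA --add sata
    VBoxManage storageattach demo --storagectl SATA --port 0 --device 0 \
        --type hdd --medium demo.vdi
    VBoxManage storageattach demo --storagectl SATA --port 1 --device 0 \
        --type dvddrive --medium ubuntu.iso
    VBoxManage startvm demo --type headless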


I agree with this, but sometimes this is due to outside factors as well. Right now on my main laptop I am running Ubuntu, but the main reason is it's a Macbook Pro 2014 and Ubuntu was the only distro that just worked. Fedora 22 beta/OpenSuse came a close second, and Arch had some weird wifi issues I didn't want to spend two weeks digging into to fix.


I just installed Fedora 22 on a fairly new (< 1 year) Toshiba laptop and everything Just Worked here. The only issue I've had to date was that a subsequent kernel upgrade seems to interfere with at least some Java apps. Eclipse runs great on the 4.0.4 kernel, but is almost totally unusable on the 4.1.4 and newer kernel. shrug

Not a huge deal since I can always keep booting 4.0.4, but I do hope they figure out what is going on, and fix it.


Been debating between Ubuntu and Fedora on the same machine. What made Fedora fall short?


Video card drivers, but it was because the RPM Fusion repos for the 22 beta weren't out at the time, and so you were basically pulling in 21 drivers. I've been considering trying 22 again now that it's out, or even 23 alpha. I would like to know how it works for you if you try.

Protip: Since I am gaming/gamedeving, I need video drivers to work, and on the MBP you HAVE to boot/install from BIOS mode and not UEFI mode if you want things to work well. Also, that whole experience of trying every distro known to man on the MBP made me hate Apple's weird (u)EFI setup, and UEFI in general.

/gimme bios back!


> Protip: Since I am gaming/gamedeving, I need video drivers to work, and on the MBP you HAVE to boot/install from BIOS mode and not UEFI mode if you want things to work well. Also, that whole experience of trying every distro known to man on the MBP made me hate Apple's weird (u)EFI setup, and UEFI in general.

Yes, please do hate Apple's garbage EFI implementation, maybe someday they'll actually get the hint and release UEFI-compliant firmware. I use UEFI daily on four different systems in my house (Lenovo W540, Surface Pro, XPS 13 (2014), custom-built desktop) with no issues, but god help you if you try to boot anything but OS X on a Mac without using BIOS emulation.


Not to mention all of those developers are writing their own "Getting started with AWS!!!" blog posts that recommend Ubuntu as well.

If I were Amazon (and if I really cared which OS my customers used), I would rebrand Amazon Linux. Having the company name in the title just screams "vendor lock-in" to management.


What else are they supposed to call it? It's Amazon's tweaks and modifications to the Fedora/Enterprise Linux family, tuned for their platform.


Exactly. At my first job doing web development, basically every developer worked on Ubuntu or a derivative. And that was years ago.


At my current company, those using Linux use Ubuntu. One guy is on Xfce, I'm using e17, and I don't know what the others use, as they're on a different team.

I've become known for advocating its use.


> It's really very convenient to develop and deploy on the same OS.

That's also seen in the Microsoft ecosystem. Server 2012 R2 is very similar to Windows 8.1, right down to the UI. Every major release since 2003/XP has been a joined pair of Server and Desktop OSes, with only minor differences between them, such as a few DLLs and background/foreground prioritization.

You can literally build an MVP by adding the IIS feature to your desktop OS and grabbing the demo of SQL Server, and run it from there too. Of course, you'll want to scale them onto separate servers once you get any kind of load, but you can prove that your code works from a single desktop.

Windows 10/Server 2016 is almost a break in the above pattern. But they are close enough time-wise that I'm sure they use mostly the same binaries again.


I used to run Arch but as you say, it is so much more convenient to run the same OS local as you have on the server. Quickly testing scripts or even running a full Nginx/PHP/MariaDB stack locally to test things before uploading to a server environment that is (almost) identical is a nice thing.


That's partly true, I'm sure. But the other half IMHO is that Ubuntu's LTS releases are available for free. Fedora is arguably a better distro, but the 18 month upgrade cycle is just too rapid for typical server deployments. RHEL costs a ton, and CentOS and Debian lack corporate backing.

So if I need to throw up a server right now that I'm reasonably sure will be supported for the ~5 year lifetime of the image, and I don't have funding and a Red Hat sales rep already set up for licensing new installs, then an Ubuntu LTS is really the best option available.


> CentOS and Debian lack corporate backing.

Not anymore, CentOS is fully backed by Red Hat now.


Honestly, the wave of containerization with things like docker make this entirely irrelevant. Me, I'm a fedora desktop user :)


I'd say your choice of OS is very relevant, on the contrary: A big part of that wave (namely LXC and LXD) is a Canonical project.

That's what docker used for a long time (until, in my humble opinion, they decided they needed to differentiate and un-tie themselves)

Source: https://linuxcontainers.org/ (page footer)


They used LXC until they realized it was pretty flaky, yes. Then they realized that all it was was a thin veneer around Linux kernel namespaces and control groups, which they could easily write in golang. So they wrote libcontainer :)


> They used LXC until they realized it was pretty flaky, yes

The only arguments I've ever heard regarding this all boiled down to NIH syndrome.

Until recently Docker was far more interested in being an open implementation of a closed standard rather than an open (or closed) implementation of an open standard.

Docker was/is trying to be the only linux container, and creating their own non-standard container library was just one component of that strategy.


I'm going to ask you for your source here.

Not because I don't take your argument at face value, but because I'm interested to know what exactly they consider flaky.


Source: the libcontainer release announcement: https://blog.docker.com/2014/03/docker-0-9-introducing-execu...

""" Thanks to libcontainer, Docker out of the box can now manipulate namespaces, control groups, capabilities, apparmor profiles, network interfaces and firewalling rules – all in a consistent and predictable way, and without depending on LXC or any other userland package. This drastically reduces the number of moving parts, and insulates Docker from the side-effects introduced across versions and distributions of LXC. In fact, libcontainer delivered such a boost to stability that we decided to make it the default. """


Thanks, but that's still not saying what was flaky.

In fact it doesn't point out any problems with LXC at all except a general hand-wavy statement about stability (which sounds a lot like NIH-justification).

I was looking for actual problems - since I've never encountered any with LXC/Docker, I'd be interested.


Just relaying what I heard from upstream; sorry, but you'd have to ask them for details.


So, before I knew what Linux was (and was teased on various forums) I ordered some free CDs from Ubuntu (I didn't have the internet at home). Eventually I got them; Ubuntu 5.04 I think (Horny Hedgehog, from memory).

When I received them I was pleased, everything worked.. well, not everything, but it sorta worked! I had a desktop environment and a command line, and I felt a small sense of accomplishment because I'd navigated the strange menus safely before Anaconda or full-framebuffer installers... Because of the peer pressure I learned how to do my bits, and I carried on.

Later in the year I found Fedora, and blue is a nicer colour than brown (I was young and fickle), but it was less user friendly, so I committed to learning it and getting off the "noob friendly" Ubuntu OS.

Many years later I got a small laptop for my mother, at this stage in my life I was "awoken" and I knew the power a machine could hold if it ran linux, so I put ubuntu on it- She's not the most technically apt lady in the world but was able to do most things with ease, and I put that down to having a "Good UX outside microsoft" (since most people who learn the microsoft way are generally committed to a mindset and anything outside of that is pushed away).

A few issues with Flash, some performance hiccups on some websites that seemed to try and avoid supporting linux in strange ways (that I take for granted I know how to bypass) and eventually the machine gave up the ghost.

I bought a new machine and put Ubuntu on it (13.10 I think) and she was somewhat less than pleased: the UX had changed, she didn't know what was available anymore, nothing was organised in a way she understood. And so I installed Mint; she's now happy.

So I'll say this for Ubuntu: they put Linux in the hands of the people we should really be targeting. It gave me access to Linux, acting as a base plate, and later acted as a full-blown system for someone who was not interested at all in computers. And they pushed a trend for that, so we should all be thankful.


A follow on from this story and many moons after my "fickle" switch to Fedora/RHEL.

At this point in my life I'd been involved in a half dozen large companies and used linux on enormous scale.

I moved to a company that was using Ubuntu LTS (10.04, old at the time) in production. It was heavily invested, and I expected that wouldn't change, as developers were very hesitant to change to Debian (which is too old / doesn't make things easy enough) or CentOS/RHEL, which suffers the same issues and has the added benefit of SELinux (which I'm an advocate of understanding rather than disabling).

I go through my daily security advisories, and a local privilege escalation means all our virtual machines and virtual machine hosts are affected. Luckily it's patched, as 10.04 is still supported, so I apt-get update; apt-get upgrade and send out an email saying the server will be down for 30 minutes while it receives patches.

I was wrong, it was down for 6 hours.

Unfortunately someone upstream caused that particular kernel update to rebuild all the initramfs images on the machine, and had also renamed lvm2 to lvm, so now my drives wouldn't mount.

On any kernel or initramfs version..

Normally you can drop to a shell, load the module, mount the drives and continue startup, but unfortunately that stopped a lot of things from loading, such as the bonding we had in place on the NICs.

Obviously I didn't know why it broke at the time, and was attempting to get help from #ubuntu on Freenode. The response was:

"Sometimes it's better not to know why it broke"

That server was smoothly running CentOS that same day, and I managed to get all the Ubuntu VMs back up and working... and replaced apt-get with a shell script which simply echoes "don't do that".
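
For the curious, the wrapper was essentially this (the exact path is a guess, picked so it shadows the real binary in $PATH):

    $ cat /usr/local/bin/apt-get
    #!/bin/sh
    echo "don't do that"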

So in my opinion support and enterprise is where it falls down.


It's a lovely story, thanks for sharing it.

On the "support and enterprise" not being ready. The only comment I'd make is that what you tested was the community of users support e.g. on IRC.

In Open source there's fundamentally a trade-off between "money for time, or time for money". In a production enterprise environment where it's urgent to always be available getting professional support makes sense. Then, when you have an issue, rather than the uncertainty of a community channel you can get hold of professional support from the experts in the software.

That's true of any of the major distributions and a lot of other important OSS software used in production.

Note: I have a bias on this point since I set-up Canonical's professional support and consulting organisation.


It was "Hoary" rather than "Horny" :)


Haha, sorry, my bad.

But with FOSS release names like "Beefy Miracle" you can understand why. :P


> She's not the most technically apt lady in the world

Perhaps you've realized the unintended "pun" in the "apt" word :) (hint: apt-get)


Debian still dominates the web server market, but Ubuntu is catching up there too: http://w3techs.com/technologies/details/os-linux/all/all


I highly doubt those stats are accurate.

    > Unix is used by 67.1% of all the websites.
    > Linux is used by 35.9% of all the websites.
This adds up to 103%, so I guess Linux is included under Unix? That means 31.2% of servers are running a non-Linux Unix? Seems a bit high...


The wording on the site is bad. That 35.9% is supposed to represent the share of hosted sites that are using Linux. Linux is considered a sub-category of Unix on w3techs.

See also: http://w3techs.com/technologies/details/os-unix/all/all


45.2% of UNIX web servers are of an "unknown" variant? How can you detect "UNIX" but not be able to have any idea of what UNIX it is? I'm going to hazard a guess that you'd be able to get numbers of similar accuracy by inspecting /dev/random.

Edit: poking around the site a bit more, I notice that they count Darwin under UNIX but OS X is a separate top-level category alongside Windows and UNIX (albeit one with less than 0.1% share). WTF are they doing? This makes no sense.


Technically you could run a non-OS X Darwin. Maybe that's the explanation?


Could be, but it still doesn't make sense. It's the same underlying stuff, so either they're both UNIX (what I'd vote for) or neither is.


I never said it made sense ;-)


I didn't mean to implicate you in this. I just checked out the site and had to share how ridiculous it is.


We use Ubuntu because our cloud provider supports it and doesn't support Debian (though we could probably hassle them into doing so if we really wanted to push it).


A cloud provider who supports Ubuntu and not Debian almost certainly doesn't know what he's doing; I'd consider it a warning sign, although not an outright disqualifier.


Your cloudiness may vary. This was the place we used to have a lot of hosted boxes with, they treat VMs much like they used to treat the hosted boxes.

We are in fact leaving them by the end of 2014, er the end of 2015, er some time in 2016. (It turns out disentangling years of encrusted dependencies is complicated.)

But in practice, Ubuntu is a perfectly good version of Debian for our purposes, so we'll probably stick with it. The devs also like it on their desktops so feel happier targeting the same thing on the server.


Could you explain why you think so?


If you support Ubuntu, it's trivial to extend that to Debian. Ubuntu is still very tightly pinned to Debian.


That's not true for the kernel, which is probably one of the most important parts for a virtual server provider, if they want the instances to run as efficiently as possible.


I'm not an expert on Linux server-side distros. I'm using Ubuntu Server and I haven't seen any cons so far. Any hints?


Debian is more conservative with changes - if you want your systems to be rock-solid, go with Debian. The downside is that it's so stable that packages are usually outdated, but in many cases there are more recent backports which you can cherry-pick.

In my experience, Debian's security team responds to new CVEs faster than Ubuntu's, but this is purely subjective.
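
Cherry-picking from backports is straightforward; a quick sketch for jessie (the package name is just an example):

    # /etc/apt/sources.list.d/backports.list
    deb http://http.debian.net/debian jessie-backports main

    apt-get update
    apt-get -t jessie-backports install nginx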


Also, the users are a bit more, um, accurate in their answers; so if you run into problems, you may be in better hands if you reach out (so long as people in help places aren't being hostile twats).


Absolutely, I've found the debian community to be one of the best out there as far as support goes.


I've found them to be a highly mixed bag in IRC.


One issue I've found is support for old releases. Ubuntu only has a 5 year support life cycle, whereas CentOS / RHEL have a 10 year support life cycle. For most people this isn't an issue, but in the enterprise it definitely is.

I recently had to move a bunch of Ruby 1.8 applications (where it didn't make financial sense to upgrade them) to new servers. They wouldn't even run on Ubuntu 10.04, whereas CentOS 5.5 is still receiving security updates.


But Ruby 1.8 is EOL. I've also got some customers running Rails 3.0 with 1.8.7. I told them they have to rely on good luck not to get hacked; they too made the financial decision not to upgrade, so they are running an unsupported language version on maybe an unsupported OS (can't remember which OS they're using).


Assuming they are using RHEL/CentOS 6.

You can get supported ruby 1.9.3 on RHEL6 or CentOS6 https://www.softwarecollections.org/en/scls/rhscl/ruby193/ or https://wiki.centos.org/AdditionalResources/Repositories/SCL

Unless they are unwilling to upgrade their rails and using the ruby version as an excuse :) Best of luck to you!
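
Rough sketch of what that looks like on CentOS 6 (the SCL repo package name varies a bit by release, so check first):

    yum install centos-release-SCL   # enable the Software Collections repo
    yum install ruby193
    scl enable ruby193 bash          # shell with 1.9.3 first in $PATH
    ruby -v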


Software Collections aren't "supported" in the same fashion that core RedHat packages are (i.e. timely security fixes, backported if need be, for the lifetime of the OS release).

From the ruby193 SCL page you linked to:

"Community Project: Maintained by upstream communities of developers. The software is cared for, but the developers make no commitments to update the repositories in a timely manner."


Yeah, if they are in CentOS land it will be a bit touch and go (as usual).

But if you have a redhat subscription they are fully supported. I should have pointed that out in my first comment though, thanks for bringing it up :)

"All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux subscription terms of service. Components are functionally complete and intended for production use. " [0]

[0] https://access.redhat.com/products/Red_Hat_Enterprise_Linux/...


If you really, really must have the latest and greatest of some package, you can always use the IUS Community's RPM repos for RHEL/CentOS (it's run by Rackspace for their servers).[1]

But we're talking about enterprise here, not young hip startups. Enterprise wants stability over everything else; young hip startups want the new shiny.

There is a huge amount of Java 1.3, 1.4, and 1.5 applications still running in enterprises all around the world with zero issues. Most of the time it doesn't make financial sense to re-build or spend time debugging an upgrade just to have the latest runtime.

"If it ain't broke, don't fix it".

[1] https://iuscommunity.org/pages/About.html


I can understand running Java 1.5, or better 1.6, but 1.3 and 1.4? That's awful. But I know it happens, since I've used Apache FOP and they try to keep compatibility all the way back to Java 1.3. I write all my software with the latest Java, and I don't care which OS (even RHEL 5/6 would work, if they could get Java 8 to run). Supporting everything that far back needs so many more lines of code and is much harder to maintain / code. Especially option types, and what Java 9 + Java 10 bring, take your Java code to a further level. I also don't get why somebody would code a new project via J2EE when there are so many great servers like WildFly and Netty.


Well, a Java 1.3 application isn't going to be a new application - usually a legacy application which has a lot of custom libraries built specifically for that version of Java, and would require significant effort to bring the codebase up to date in terms of running on a modern platform. At my company, one of our most used internal applications runs on 1.3 - it's an application which allows user-made plugins, however we don't have the source for the main application, which means we're stuck maintaining a 1.3 system.

As an aside, J2EE is quite good and very prevalent in enterprise, JBoss, GlassFish, Tomcat, etc...


Yes, it's not the best situation, but it is a lot better than it was before. Originally there were about 10 Rails 1/2 applications on a pair of machines running Ubuntu 6 and Ruby 1.8.4, with most services open to the world. These machines were being retired, so the apps needed to be moved off - a few apps were shut down as they were no longer used, and the remaining ones each got their own VM.

The apps were upgraded where possible, but most of them had dependencies that would only run on Ruby 1.8 and have long since been abandoned. We considered rewriting them, but they are only used internally and are most likely going to be shut down in the next year or so. At least the OS doesn't have any known security issues and is now properly behind a firewall, so that's something.

One of the issues with Ruby 1.8 is that it only compiles against OpenSSL 0.9.x. Compiling it from scratch means you need to downgrade that (and a few other deps), which is about as painful as you can imagine. CentOS 5.5 still comes with that and is supported until 2017, whereas you would need Ubuntu 8 or lower. I was thinking of creating an LTS version of Ruby 1.8 (a la Rails LTS), but I don't think the need is really there. Businesses who are running Ruby 1.8 have either weighed the risks or simply don't care :/
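
If anyone is forced down that road, the build was roughly this shape (paths and versions are illustrative, and expect to fight extra patches on top):

    # build a private OpenSSL 0.9.8 first
    ./config --prefix=/opt/openssl-0.9.8 shared && make && make install
    # then point Ruby 1.8's build at it
    ./configure --prefix=/opt/ruby-1.8 --with-openssl-dir=/opt/openssl-0.9.8
    make && make install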


Ruby 1.9 is EOL.


Yup, but Red Hat commits to backporting security fixes for the duration of the release (so 30/11/2020 for RHEL6)... Not sure that this is the case for software available via any of the collections.

edit: SCL's aren't supported by RedHat


They are. As I pointed out in my other comment:

"All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux subscription terms of service. Components are functionally complete and intended for production use. " [0]

[0] https://access.redhat.com/products/Red_Hat_Enterprise_Linux/...


I can understand why enterprises need 10 year support for their server OS, but I would assume many of the third-party software running on that OS will not have a 10 year support period. This is not ideal from a security perspective.


Yep, I've still got plenty of RHEL 5 and 6 boxes running production public-facing services. I'll upgrade them probably within the next six months or so but it's simply not a priority at this point (even for the 5.x machines).


Five-year support might be too short for some people. If so, there's Red Hat Enterprise Linux (and its community rebuilds like CentOS), and SUSE Enterprise Linux.

A paid RHEL subscription will even give you security updates for point releases, in case your needs preclude you from upgrading even to backwards-compatible versions (e.g. you can use RHEL 6.4 even now, instead of 6.6 or 7.0).
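
With the right (EUS) entitlement that pinning is basically one command, if memory serves:

    subscription-manager release --set=6.4
    yum clean all && yum update   # now tracks only the 6.4 stream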


I seriously believe that Ubuntu is not the best decision. SUSE and Redhat have much better support and services.

More people know Ubuntu, and that is why they use it.


Seconded, the support lifetime of Ubuntu makes it woefully inadequate for use in an enterprise environment. I get that the hip startup scene wants the latest and greatest, but I work in the medical industry and rock-solid stability and availability are more important than anything else - we still have a couple Windows Server 2003 R2 and Windows XP systems that we haven't finished decommissioning yet, along with a couple SQL Server 2005 installations.

When we made our first major Linux deployment this year there's no way I would have picked anything but RHEL/CentOS; we have critical services running on these systems that will be in use for a long time, and playing the upgrade dance in even 5 years (shorter than it sounds) is not an appealing thought.


Honestly, Ubuntu doesn't even try to focus on the type of usage you've described.

Given your company's pattern of doing its first deployment of Linux this year, and needing a very long support cycle - I think it's fair to say that you're looking for the equivalent of a traditional UNIX. Slow-moving, with lots of stability and strong guarantees on backward compatibility. Red Hat and SUSE focus on that type of "enterprise computing" - they've grown by doing 'UNIX replacement'.

Ubuntu is aimed more at the (as you put it) "hip start-up scene" or at least the area in the technical spectrum that is about new technologies, concepts such as continuous deployment and cloud computing.

The funny thing is most enterprises have a bit of both those types of computing. Some slow-moving "eggs in one basket" services, but also the need for fast-moving innovative areas. So there's room for more than just one sort of distribution.


This is not to be neglected: having a patched and backported LTS really matters in the enterprise.


If you have an unattended server, there's a bug, open since 12.04 and still not fixed, that might make it get stuck in GRUB, forcing you to connect a keyboard to make it start. (Yes, it bit me, with embedded hardware stored in a closet on a pole 20 feet high in the air.)

http://serverfault.com/questions/243343/headless-ubuntu-serv...

The fix is in 12.04-proposed, but was never released, and was never applied to 14.04 or 15.04 last time I checked. Which is quite ridiculous, IMHO. (Can't find the Launchpad entry right now.)
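
For anyone else hit by this, the usual workaround (assuming it's the "recordfail" behaviour) is to cap the timeout yourself:

    # /etc/default/grub
    GRUB_RECORDFAIL_TIMEOUT=5

    sudo update-grub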


I second that this is pretty annoying. The problem is that Ubuntu's grub.cfg is configured to wait forever, until the user manually chooses a boot option, if the last boot failed. However, sometimes a temporary error can happen during the boot process, and this will make the server hang after the next reboot. This is probably not what server users want.

In addition, I think this behavior is not ideal for desktop users either, because (1) spurious errors can also happen in desktop and in that case waiting a few seconds is better than waiting indefinitely, and (2) it is not uncommon to use Wake-on-LAN nowadays.

But I'm not sure this problem is unique to Ubuntu, as Ubuntu shares much of its codebase with Debian. Does anybody know if this is also a problem in other distributions?


It looks like this will be fixed in 15.10: http://www.phoronix.com/scan.php?page=news_item&px=ubuntu-15...


When I checked (in 2013), Ubuntu had this problem, and Debian did not.

Ubuntu shares a lot with Debian, but GRUB configuration is one thing that Ubuntu appears to be doing on its own.


That bug was fixed in all releases last July. This was the bug:

https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1443735


Thanks. I guess it didn't modify my /etc/default/grub (which I would have noticed) because I already added it myself.

The problem was found (and a fix committed to proposed) in June 2012, but only applied (in -updates) in July 2015. I find that weird, to say the least.


This was such a thorn in my side back in the day. So many unnecessary trips down to the server room to kick over someone's box that was hung.


Ubuntu 14.04 was one of the last distros to use upstart, which is pretty close to dead.

As .service files become normal in the RHEL & Fedora/Debian world, and 14.04 LTS's long life cycle runs on, upstart is going to look increasingly out of date.


Atm (assuming LTS) you'll have to use upstart, which sucks. Terrible selinux support also.


> Terrible selinux support also.

Ubuntu focuses on AppArmor instead.


Sure. I just prefer SELinux (along with most of the MAC-using world), so this is a negative for me.


> so this is a negative for me.

Exactly. For you. This isn't a con of the actual system.


Is there an init system which doesn't suck?



Well, it certainly sucks less

Seriously though, I've found that most things from the suckless guys are pretty great. st is a perfect terminal; surf would be perfect if it had an ad blocker.


Why would a web browser need an ad blocker? Do that in your HTTP proxy or DNS server. I recommend Polipo or Privoxy as I've had success integrating them with EasyList.


> Why would a web browser need an ad blocker? Do that in your HTTP proxy or DNS server.

In order to be able to perform URL-level blocking without a proxy; and in order to be able to perform context-sensitive blocking and CSS blocking at all.


It almost seems as if they are recreating Acme (the Plan 9 editor).


I'm sticking with CentOS / RHEL on my cloud servers. Have tried Ubuntu Server, but liking CentOS more. Also, 5 years LTS is just not enough for production environments IMO.


I will never forget how they handled the Oracle Java license change debacle. No matter what they are saying now, that was a terrible show of ignorance.

They might be a solid choice in the future but as of yet I haven't seen ANY reason to use anything else but Debian or CentOS.

And cloud deployments will abandon Ubuntu for something smaller soon enough.


Why is this surprising? It was, and for the most part still is, the only Linux distro that provides actual LTS releases with 5 years of support guaranteed for free.

Sure, you can buy RHEL, SUSE and some other commercial releases, but that costs money; alternatives like Debian Stable only provide support between stable releases (6-18 months), and its LTS effort is a very new initiative.

Ubuntu guarantees security updates for the OS and common components like Apache; others don't.

If you are an organization that's very important especially if you need to comply with various regulations e.g. PCI-DSS.

Ubuntu is also one of only a few distros that are supported across all cloud providers; it was the first distro to be supported on Azure, and many of the smaller cloud providers start with Ubuntu or use it as the core of their in-house Linux guest.

Amazon might have been able to topple Ubuntu with their Amazon Linux AMI, but as it is not available for download and you cannot run it outside of AWS, it will never reach any true leadership position. If you can't have it in-house for development you will choose something else, and the first rule of devops is: deploy what you develop on.

In a few years CoreOS might get a big enough market share, but currently CoreOS is too complicated, and it locks you into using containers, which is overkill for most cloud deployments these days unless you are a huge enterprise. If you are running a small web portal on 1-5 servers, Docker and other containers will just get in the way.


Other comments have touched on most of the reasons why Ubuntu is a good choice, or why it would be a bad choice in some situations.

For me, it was the OS I was used to. And, as I've had to deploy a few CentOS and OES servers as well, I much prefer how Ubuntu/Debian configures things.

Apache, PHP, networking, cron, etc. All much easier to configure and harden on Ubuntu than on CentOS. Only thing I've found CentOS does better is starting and stopping iptables, and that's solved with a quick apt-get install iptables-persistent.

Most of this opinion comes from writing Ansible roles that work on both Debian and RedHat systems. Ubuntu was always easy to get right. CentOS always had some weird thing that required an annoying amount of work to work around. (Like it doesn't run Postfix smtpd in a chroot while Ubuntu does. Meaning I had to have different Postfix settings in master.cf on CentOS than I do on Debian.)
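
To make that Postfix example concrete: the difference is a single column in master.cf - the fifth field is the chroot flag (the defaults shown here are from memory, so double-check on your own boxes):

    # service type  private unpriv  chroot  wakeup  maxproc command
    smtp      inet  n       -       y       -       -       smtpd   # Debian/Ubuntu
    smtp      inet  n       -       n       -       -       smtpd   # CentOS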


I tend to agree with all that.

My first Linux was Redhat 5.1 (not RHEL), but I ended up switching to Debian Slink and OpenBSD for whatever reason. Since then I just personally find Redhat-based distros a bit weird or clunky (based on lack of familiarity), and prefer to stick with the Debian/Ubuntu side of the fence.

Reason to prefer Ubuntu LTS over Debian Stable: fixed 5yr support period instead of a variable period.

Reason to prefer Debian Stable over Ubuntu LTS: all packages in the repo are in the same security patching regime. With Ubuntu you have to be a bit careful using packages from Universe or Multiverse.
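
One quick way to see which component a package lives in (output illustrative):

    $ apt-cache show nginx | grep '^Section'
    Section: universe/httpd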


I did not know Debian didn't have fixed lifetimes for Stable.

Though they do have some form of LTS. [0] I do find it odd that the Debian Security team doesn't manage it.

Part of my RedHat dislike is unfamiliarity. But when I sit down and think about why I prefer how Ubuntu does something vs. how CentOS does something, I usually find Ubuntu methods make more sense.

For example, how the default Apache mods are enabled. Ubuntu has the mods-enabled directory. CentOS has the LoadModule lines and mod config in httpd.conf. So to disable an unneeded module, you have to find the LoadModule lines, remove them, and then find all the various default config that is now broken. In Ubuntu, you just remove the symlinks in mods-enabled, or let the helper commands do it (quick sketch below).

[0] https://wiki.debian.org/LTS
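
The helper workflow, for reference (module names are just examples):

    sudo a2dismod status     # removes the symlink from mods-enabled/
    sudo a2enmod rewrite     # creates it
    sudo service apache2 reload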


A new Debian Stable comes out when it is ready - after a long freeze.

Originally (from memory) Debian stable was only supported until the next stable release which could be anywhere from 18-36 months later (release timeframes seem more consistent these days).

Then they started supporting an oldstable (the previous stable release) for an extra 1yr timeframe.

Debian LTS is a newer initiative, but it doesn't quite have the same security service level as say Debian Stable. Not all patches make it to LTS, and those that do arrive noticeably later. Debian is trying to find ways to improve this though.


There's also Android development driving use of Ubuntu as a desktop OS for developers. I do enough embedded work that I need to build Android-based embedded systems for some projects. Increasingly, mobile software projects need to be "full stack," too, with purpose-built servers for app projects.


When interviewing candidates, if you correlate the "only Ubuntu" folks with their skill sets, it becomes obvious that anyone can use Ubuntu. Even in the cloud. The lowest threshold of entry to the cloud obviously should be the most common.


With containerization growing in popularity, I suspect this will change. Ubuntu-based images are huge and include more features than needed to run basic web servers.


Last I checked there was a very pared-down version of Ubuntu. I've used it for running VMs that didn't need a GUI or anything like that.


Could you share a link to docker hub? I'd love to check that out.


I don't get why Canonical doesn't invest in the development experience. Forget the Ubuntu mobile bullshit; it won't grab the market anyway.

But there are tons of developers out there that use Ubuntu. They have a great opportunity to create a complete IDE-to-cloud platform, much like Visual Studio and Azure, but with open source tools and a great Linux system.


It's funny how choice of distro can be so revealing. For example, when I see RedHat, it's pretty obvious that the software is from or the person works for a Big Enterprise, with Big Serious Enterprisey Stuff, and doesn't really care about open source, let alone free software (as an aside, that's kinda sad: RedHat was the first distro I used, and I loved it for years longer than I should have).

Or when I see Debian, I know that the system or person actually is serious, is likely to have a good operations and sysadmin mindset and will probably Just Work™.

Or when I see Gentoo, I see a kindred, albeit younger, soul.

When I see 'FROM ubuntu' in a Dockerfile, my heart sinks: it's likely written in JavaScript or Ruby; it likely Greenspuns heavily. For just about any server load, vanilla Debian is going to be a superior choice: better-engineered, smaller and lighter-weight. As far as I can see, there is almost never a good reason to use Ubuntu; it's just that in the eyes of so many folks it is Linux.

Oh well, at least it's not OS X.


I have a ton of public-facing servers on the Internet, running RHEL, Debian, Ubuntu, FreeBSD, and, yes, there's even some Windows machines in our datacenters (fortunately, I don't have to touch those and haven't logged into a Windows box in years). My personal machines (at home and in the datacenter) run RHEL, FreeBSD, OpenBSD, and Debian and my work laptop/workstation runs Ubuntu. My e-mail ends in @gnu.org and I don't work for a "Big Enterprise, with Big Serious Enterprisey Stuff". I work for a "Small-ish ISP, with Small Not-Serious un-Enterprisey Stuff" that ranges from web and mail servers to monitoring systems to file servers to RADIUS & TACACS servers. I started with Slackware in 1996, jumped to Debian soon thereafter and used it exclusively for about 10 years before "branching out". Now, I just use the Best Tool For The Job(TM). I also avoid making blanket statements about groups of people based on their software choices.


I'll bite the Ubuntu part.

Ubuntu was the second Linux distribution I put my hands on back in 2008, after giving Fedora a try and being frustrated - it was a system administration internship at my university, every server on the campus ran Fedora and our "CTO" (and also my boss) was a heavy Fedora user. Ubuntu was so much more welcoming for a person coming from Windows.

When I began doing some serious work in the cloud (~2010) it was on AWS, and apart from Amazon's own AMIs, Ubuntu was the de facto standard - I even remember reading that it was the most used distribution on AWS back then (don't know the numbers nowadays).

I've never felt that I was using something under-engineered, bloated or big. It simply worked very well, had fairly up-to-date packages, and 5 years of support has always made sense for the tasks and jobs I've dealt with.

I like to think that despite some questionable choices, Canonical has always had an honest marketing approach with the Ubuntu brand, partnerships, embracing the cloud (juju), etc. When Vagrant came out, Hashicorp had the Ubuntu-based box, and now Canonical maintains its own trusty box. It just always seemed to be present on every corner of my career so far, and it never let me down.

Please note that I'm not saying that Debian is not a superior choice, I just think it is unfair to look at Ubuntu with disdain and bring JavaScript and Ruby developers into it, as if the languages or their developers shared the characteristics you've described. I simply never had a reason to move away from Ubuntu or try Debian.


> Ubuntu was the second Linux distribution I put my hands on back in 2008, after giving Fedora a try and being frustrated

For me it was RedHat, then Fedora, then Ubuntu, then Mint, then Debian.

> I just think it is unfair to look at Ubuntu with disdain and bring JavaScript and Ruby developers

The thing is, Ubuntu is kind of the Blub of distros: folks fond of it can see where it's better, but not where it's worse.

As for server-side JavaScript and Ruby: I maintain that they are smells in any system. JavaScript is a mistake whose popularity relies on an historical accident. Ruby's a neat little language, but every single system I've used which utilises Ruby has been broken.


Actually, the reason I use Ubuntu instead of Debian is that most language runtimes I use (python, for starters) were, for quite a few years, more up-to-date. Of course I could use unstable, but I'd much rather use Ubuntu's LTS and security updates.

Yes, Debian will pretty much just work, but back before 7.x it would also be a pain to support a number of applications (we mostly rolled our own builds of "modern" runtimes and libraries we had to use, with predictable lag times and dead ends). Ubuntu takes all of that pain away and has relatively new stuff, even on LTS (sometimes through backports, but hey... nobody's perfect).

I also left RedHat because of the way the distro evolved, but never jumped to CentOS because at that time yum was slower than molasses, and I'd much rather use a Debian derivative anyway.


> Actually, the reason I use Ubuntu instead of Debian is that most language runtimes I use (python, for starters) were, for quite a few years, more up-to-date. Of course I could use unstable, but I'd much rather use Ubuntu's LTS and security updates.

It's almost definitely better to install stuff in /usr/local or in install-specific locations under /opt than to use either unstable or Ubuntu in production.

> Ubuntu takes all of that pain away

Yeah, but it replaces it with its own pain. The days I bid farewell to Unity and GNOME were so happy…


I don't think any of those apply to production servers (at least not how I picture it).


Ubuntu is just easy. You do apt-get install and you get the package you expect. Debian and CentOS just don't have a lot of packages; that is the biggest reason for me.


Strange, this is exactly what I'm saying to my friends, but about Debian. apt-get has almost everything; which packages aren't there?


For not caring a lot about open source, they have many MANY more open source developers than Canonical does, or likely ever will have. Also, I suspect quite a few Redhat employees will take strong offense to your comments on them not caring about Free Software. Funny, Redhat never had issues like the one Canonical did about licensing terms not being GPL friendly, which they have since fixed: https://www.fsf.org/news/canonical-updated-licensing-terms

I'll totally agree with you on Debian vs Ubuntu, but on Redhat vs Debian... We can agree to disagree. Here is a paste of an email I wrote up to some coworkers on the subject:

======================================================

RPM starts off with the concept of pristine sources. It is vehemently rejected for a maintainer of an rpm package to use a non-upstream tarball or change _anything_ without a patch that is in version control. This is not the case with dpkgs.

RPM

===

• Stores ownership, permissions, and checksums in the rpmdb. This allows for tools like [1], which are entirely impossible to re-create as a dpkg equivalent. There is no equivalent of the insanely handy "rpm -V" in dpkg.

• The checksum is part of building an rpm package. The "debsums" functionality is not actually required for all packages. In fact, until Ubuntu got their act together and started fixing a LOT of stuff, many Debian packages didn't have their checksums in the db.

• Changes are atomic. They use bdb transactions and rollback. Either something was installed (via CPIO) or it was not. The Debian package manager uses flat text files! Those flat text files live under /var/lib/dpkg [Figure 1 below]. There are about 3-4 of these files per package, and they oftentimes corrupt, resulting in impossible-to-uninstall packages. This simply doesn't exist with rpms.

• The Debian package format lacked multiarch support until Jan 31, 2012[2]. Up until this version, installing a 32-bit deb on a 64-bit operating system involved creating a full-system 32-bit chroot (I shit you not!), with hundreds of megabytes of silliness. RPM has had multiarch basically since the majority of Fedora was compiled with 64-bit compilers (around early 2005).

• Package creation:

o RPM packages have 1 command, rpmbuild, for creating a binary or source rpm. You have a single file, ${package_name}.spec, and the source tarball. That is all you need for an rpm. If you want to build the package in a "clean room" chroot, you can use mock, which runs as non-root

o For creating a deb package, you have dpkg-source, dpkg-buildpackage, dch, etc. For debs: Do you use debhelper or do you use cdbs? What version of debhelper, what version of cdbs? Which is deprecated and which is the “preferred” way? You have to edit the control file, the rules file, the package list file, the changelog has to have the perfect format or all hell breaks loose, etc. If you don’t put the exact same info in the control file and the package.dsc, woe be unto you! Once you’ve got that all done, you have to create a “debian source package”. Don’t get me started on that stupidity, seriously, it is worse than this entire thread.

• Multiple utilities. There is rpm. Then there is dpkg, dselect, dpkg-query, dpkg-reconfigure, dpkg-deb. dpkg-<TAB><TAB> gives 35 results on my test Ubuntu box, almost 30 for deb<TAB><TAB> (mostly debconf stuff), and almost 70 for dh_* (debhelper grossness). There are a couple of helpers like rpmquery (a shortcut for rpm -q) or rpmverify (a shortcut for rpm -V), but they are symlinks back to rpm for convenience. One utility, one man page, less ambiguity.

• Templating. An rpm spec file is simply a shell script with some substitutions. Debian packages are all built using some extremely customized autotools and autotools-like macros, each with conflicting versions and competing implementations (try to figure out whether you're supposed to use cdbs[3] or debhelper[4]). Both debhelper and cdbs exist because it is so impossibly hard to build debian packages by hand without some serious pain. There are macros for rpm spec files, but not even remotely the same complexity or necessity.

• With dpkg, it is possible to get into states which are impossible to resolve with the cli utilities (even dpkg --configure -a). This always results in having to manually edit the pre/post hacky script under /var/lib/dpkg and is serious voodoo black magic that only experts should ever do. The problem is that it isn't uncommon. If the pre/post scripts do ever fail bad enough to where you can't fully remove a package, you can do rpm -e --noscripts. Debian packages have this "rc" state where they are partially installed / uninstalled, but not fully either. Then you have to purge them using dpkg --purge, and that is assuming that you've successfully hacked up the scripts that read the plain text files under /var/lib/dpkg. The entire design is unbelievably fragile. They make it a bit less fragile by writing an ENORMOUS Debian packaging policy[5] to try to get people to work around silly limitations in the software via policy. This is against one of the fundamental design choices of rpm, which is that package installs should be atomic. It is either installed, or not installed. There is no "I'm half installed" status for rpm packages.

Hopefully this is a reasonable technical defense of rpm’s superiority over dpkg. What blows my mind is that Ian Murdock created Debian after Red Hat Linux existed but before YellowDog wrote yum. He had time to study the internals of RPM and design something superior. Instead, he went NIH and invented something that on most levels is, to this day, still technically inferior. The yum vs. apt debate isn’t nearly as lopsided: old apt > old yum, but new yum is unbelievably > new apt. That is for another day, and only if you’re interested.

[1] http://www.digitalprognosis.com/opensource/scripts/restorepe...

[2] https://lwn.net/Articles/485349/

[3] http://build-common.alioth.debian.org/cdbs-doc.html#id250485...

[4] https://joeyh.name/code/debhelper/

[5] https://www.debian.org/doc/debian-policy/


"Redhat never had issues such as this that Canonical did about licensing terms not being GPL friendly, which they have since fixed"

Sorry, I don't think that characterisation is entirely fair. First, the issues that were resolved amounted to making the wording clearer because the FSF was concerned that it was confusing. There were no fundamental changes to the principles of the policy, so there should be no implication that Canonical was somehow falling foul of GPL licensing. For clarity, as the post you linked to shows, the FSF would like more done.

Second, the comparison to Red Hat is completely unfair because the two companies follow fundamentally different strategies. Ubuntu has a single code-base; Red Hat has a commercial and a free code-base. This means Ubuntu has a single IP policy to cover both commercial and free usage, whereas with Red Hat you sign a standard commercial subscription agreement, or a different agreement if you use RHEL in other ways. For clarity, I'm not besmirching either approach; I'm explaining the context of the difference [0].

Thanks for adding that they were fixed.

I think you were generally trying to support this point:

"I suspect quite a few Redhat employees will take strong offense to your comments on them not caring about Free Software"

Which I fundamentally agree with you on. I've met lots of current and ex-RH people and they all care about Free Software.

This is more broadly true. Generally, people don't build their careers in areas they don't care about. I've almost never met anyone at any of the commercial open-source companies who doesn't care about Free Software. There might be lots of opinions and arguments, but everyone is trying to do the right thing.

[0] As a disclosure I should point out that I was responsible for Canonical's IP policy until March of this year


I don't have the knowledge to support or contest anything except your last sentence. yum's UI is the one thing I like more than apt-get's. But yum doesn't have the concept of depends vs. recommends, so installing nginx, for example, REQUIRES pulling down GeoIP libraries that are entirely unnecessary.
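
For comparison, on the apt side you can even opt out of recommends per install (flag from memory):

    apt-get install --no-install-recommends nginx   # skip Recommends like the GeoIP libs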


RPM 4.12.0 brought in support for weak dependencies, implementing the tags Recommends, Suggests, Supplements and Enhances, which provide functionality analogous to apt's: http://rpm.org/wiki/Releases/4.12.0

This is in Fedora 21, so it should be in RHEL 8.
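
For the curious, the spec-file side of this looks roughly like the following on RPM >= 4.12 (package names invented):

    # in mypkg.spec
    Requires:   libfoo         # hard dependency, as before
    Recommends: GeoIP          # installed by default, but removable
    Suggests:   mypkg-docs     # a pure hint

    # and the new query aliases:
    rpm -q --recommends mypkg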


The newest version of RPM, 4.13, will implement boolean dependencies, e.g. Requires: foo if bar, or Requires: foo or bar or baz, which are far more usable than soft dependencies.
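
If I remember the proposal right, the syntax parenthesises the expression (package names invented):

    Requires: (pkgA if pkgB)            # pkgA is required only when pkgB is present
    Requires: (pkgA or pkgB or pkgC)    # any one of the three satisfies it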


Entirely agree with you. That is the one area where dpkg used to beat rpm as a format. It isn't a feature of apt-get or yum, but of the underlying package format.

However, it is supported in a very new release of rpm:

http://www.rpm.org/wiki/Releases/4.12.0#Generalbugfixesanden...

""" New --recommends, --suggests, --supplements and --enhances query aliases for querying weak dependencies """

So it is coming along :)


There are a number of errors and omissions here that I would like to try to clarify. These complaints seem to stem from a lack of familiarity with dpkg and its workings, and some are rather... out of date. Just about the only thing I agree with is that I wish package checksums would be required by Policy!

> RPM starts off with the concept of pristine sources. It is vehemently rejected that a maintainer of a rpm package use a non-upstream tarball or change _anything_ without a patch that is in version control.

dpkg is exactly the same. A "source package" consists of a .dsc file (metadata), an .orig.tar.gz file (the pristine upstream source), and a .debian.tar.gz, which is extracted to a debian/ directory after the pristine archive is unpacked. Within the debian/ directory is the build system, including local patches, which live under debian/patches.
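
Concretely, for a hypothetical package foo at version 1.0-1, a source package is exactly these three files, and one command unpacks the lot:

    foo_1.0-1.dsc              # metadata and checksums
    foo_1.0.orig.tar.gz        # pristine upstream tarball
    foo_1.0-1.debian.tar.gz    # the debian/ dir, including debian/patches/

    dpkg-source -x foo_1.0-1.dsc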

> Changes are atomic. They use Berkeley DB (bdb) transactions and rollback: either something was installed (via cpio) or it was not. The Debian package manager uses flat text files! Those flat text files live under /var/lib/dpkg [Figure 1 below]. There are about 3-4 of these files per package, and they often become corrupted, resulting in packages that are impossible to uninstall. This simply doesn’t exist with rpm.

I have seen the RPM database become corrupted several times over the years, including when I tried to install an RPM while low on disk space! I have never seen the same with the dpkg database. And dpkg is no less "atomic": dpkg keeps track of a package's state, and if interrupted, a new instance will clean up or resume whatever the old instance was doing.
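
In fact dpkg gives you a first-class way to find and fix packages left mid-flight:

    dpkg --audit           # list packages in a half-installed/half-configured state
    dpkg --configure -a    # resume configuration of anything unfinished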

> • The Debian package format lacked multiarch support until Jan 31, 2012[2]. Before that, installing a 32 bit deb on a 64 bit operating system involved creating a full 32 bit system chroot (I shit you not!), with hundreds of megabytes of silliness. RPM has had multiarch basically since the majority of Fedora was compiled with 64 bit compilers (around early 2005).

This is not a valid comparison. Multiarch in RPM means something completely different from what it means in dpkg. dpkg's multiarch system is much more powerful and complete.

> With dpkg, it is possible to get into states which are impossible to resolve with the CLI utilities (even dpkg --configure -a). This always results in having to manually edit the hacky pre/post scripts under /var/lib/dpkg, which is serious voodoo black magic that only experts should ever attempt. The problem is that it isn’t uncommon. If the pre/post scripts ever fail badly enough that you can’t fully remove a package, with rpm you can do rpm -e --noscripts.

This is six of one, half a dozen of the other. In the RPM world, --noscripts means the cleanup in the script never takes place, so the package ends up leaving traces of itself throughout your system.

I've never thought of editing the maintainer scripts under /var/lib/dpkg/info as "serious voodoo". They are simple shell scripts, written to dispatch off of argv in the mechanism specified by Policy. Errors in good-quality packages are rare and trivial to fix.
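
For reference, a typical postinst is just this shape (body elided; argument handling per Policy):

    #!/bin/sh
    set -e

    case "$1" in
        configure)
            # post-installation setup goes here
            ;;
        abort-upgrade|abort-remove|abort-deconfigure)
            ;;
        *)
            echo "postinst called with unknown argument \`$1'" >&2
            exit 1
            ;;
    esac

    exit 0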

> Debian packages have this “rc” state where they are partially installed / uninstalled, but not fully either. Then you have to purge them using dpkg --purge, and that assumes you’ve successfully hacked up the scripts that read the plain text files under /var/lib/dpkg.

This is a good feature of dpkg. It allows a package to be removed without its conffiles or other important data (e.g., your PostgreSQL databases) being removed along with it, unless the admin takes the explicit action of purging the package.
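
You can see and clear that state directly (package name invented):

    dpkg -l | awk '$1 == "rc"'    # removed, but conffiles kept
    dpkg --purge oldpkg           # drop the conffiles too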

> The entire design is unbelievably fragile. They make it a bit less fragile by writing an ENORMOUS Debian packaging policy[5] to get people to work around silly limitations in the software via policy. This is against one of the fundamental design choices of rpm: package installs should be atomic. A package is either installed or not installed; there is no “I’m half installed” status for rpm packages.

So then, what state is a package in if RPM is interrupted while unpacking files? dpkg puts the package into 'half-installed' state, from which you can run a new instance to continue the installation or remove the package.

As for CDBS or debhelper, the vast majority of packages in the archive use debhelper, and debhelper use is increasing over time. Developer documentation tells you to use debhelper. The rest of the complaints about the build process I really don't understand, but perhaps I have just been building packages for too long. I'll take 'dpkg -i foo.deb' over 'rpm -ivh', oh wait, I meant '-Uvh' any day. :p


> Just about the only thing I agree with is that I wish package checksums would be required by Policy!

Furthermore, fewer than 350 binary packages out of ~50,000 lack an md5sums file, according to https://lintian.debian.org/tags/no-md5sums-control-file.html.


There is: Ubuntu is like Debian with 5 years of support.


Or: when I see Debian, I know the system or person is actually serious, likely has a good operations and sysadmin mindset, and will probably Just Work™.

No, that's Slackware.


I went through everything before I reached Ubuntu. I've been a Linux user since the early-mid 90s. This kind of stereotyping is inaccurate and harmful.



