CentOS Project joins forces with Red Hat (centos.org)
370 points by socialized on Jan 7, 2014 | 124 comments



In the beginning, there was Red Hat Linux[1]. It was sold in boxes at stores such as CompUSA (remember?) but was also available for free download from Red Hat.

Then, Red Hat decided they could make more money by spinning off Red Hat Linux into a separate enterprise-only product called Red Hat Enterprise Linux (RHEL), which they declined to make available for free in a ready-to-install binary form. Fedora[2] was also spun off at this time, as the free successor to Red Hat Linux that was supposed to be only suitable for home users. Fedora development was/is sponsored by Red Hat but they did not offer end-user support, in contrast to RHEL.

Meanwhile, there was demand for a free version of RHEL. Since it was built with GPL software, Red Hat was obligated to make it available in source form, but their trademark policy prohibited anyone else from using the Red Hat name. Therefore, a group of volunteers took RHEL, removed Red Hat trademarks, and called it CentOS. To avoid confusion, CentOS explained the origins of the distribution on their web site. For their efforts, they were threatened by Red Hat's legal department and forced to remove all mentions of Red Hat and even links to Red Hat's web site from the CentOS web site. CentOS complied and began referring to its Red Hat derivations using the euphemism PNAELV[3].

Now, Red Hat has again decided they would benefit from being more directly involved in providing an open-source, freely-available enterprise Linux distribution. We've come full circle.

(Flippant depiction aside, I intend no antagonism, but merely find the history of these projects interesting.)

[1]: https://en.wikipedia.org/wiki/Red_Hat_Linux [2]: https://fedoraproject.org/en/about-fedora [3]: http://www.pnaelv.net


Red Hat is one of the most prolific single-entity contributors to open source in the history of open source. I find it really odd that some FOSS people regard Red Hat as some sort of evil corporation that should be the target of said FOSS people's flung shit.

By the way, what did Firefox do to live down its IceWeasel[1] infamy?

[1] http://en.wikipedia.org/wiki/GNU_IceCat


The Firefox trademark dispute is not the same thing. Red Hat was attempting to keep their product from the open-source homebrew market -- they wanted to charge money for their software, and did this as far as the license would allow. Red Hat was as hostile as legally permissible to anyone trying to circumvent this, like CentOS.

Mozilla simply claimed that the Firefox trademark cannot be applied to any codebase that Mozilla, the trademark owner, hadn't officially sanctioned. They began to actively prosecute those cases because some people were modifying the Firefox source to contain malicious code and calling it "Firefox", misappropriating Mozilla's trademark. Because Debian issues a version of Firefox that contains unofficial patches, they cannot legally call their distribution "Firefox", since Mozilla hasn't officially blessed that exact codebase.

tl;dr Red Hat was trying to make money from users, and Mozilla wasn't

Disclaimer: I personally fully support making money from users and reject freedom 2 as a true fundamental of "free-as-in-freedom software". I'm just explaining why some people in FOSS dislike Red Hat, as it pertains to the CentOS backstory, and why nobody cares about Mozilla's brief trademark dispute with Debian.


"Red Hat was attempting to keep their product from the open-source homebrew market "

Nothing would make Red Hat happier than having every hacker under the sun using Red Hat. What they were attempting to do was keep the enterprise customers, who were paying $1000+/CPU (or so), from going with a free alternative and killing their company.

Removing just three things let them do that: (1) no RHN/Up2Date available for CentOS, (2) no support, (3) most importantly, absolutely no mention of or reference to Red Hat trademarks.

CentOS had everything else.


(3) Most importantly, absolutely no mention of or reference to Red Hat trademarks.

This is evil, because the law is supposed to allow referential use of trademarks as a fair use.

Otherwise, RedHat's existence is highly beneficial to Linux.


That's a self-imposed policy on CentOS' side, not something they were forced to do.


Yes, and the other big advantage of RHEL in enterprisey environments is their compatibility certifications with other vendors (e.g. Oracle Database).


Note that having patches from upstream doesn't stop Mozilla from being willing to license the "Firefox" trademark — they will still license it provided they are happy with all the patches. The bigger issue in the Debian case is that while they could distribute a modified version as "Firefox" (under license), some downstream couldn't then take that, modify it, and still call it "Firefox".


Ubuntu, meanwhile, was willing to accept this tradeoff and distribute blessed Firefox, as Ubuntu also has trademarks that downstream modifiers (like Mint) need to remove.

It would be nice if it were easy to remove said trademarks by something as simple as uninstalling a package; unfortunately, most marks are spread throughout the archive.


I thought there used to be such a package called firefox-branding that would turn Firefox into IceWeasel if removed?


At some point, yes, but the Ubuntu marks are still spread throughout dozens of different packages.


    Because Debian issues a version of Firefox that contains
    unofficial patches, they cannot legally call their
    distribution "Firefox", since Mozilla hasn't officially
    blessed that exact codebase.
There's a lot of sense to this. Consider the hacked up configurations of vim that ship with redhat and debian. The maintainers code intrusive personal-favourite settings into /etc/vimrc (e.g. settings that reformat your code). People get annoyed by the behaviour and think "vim sucks" whereas the default vim distro is conservative about intrusive behaviour.

Mozilla are happy for distros to put out their code - just not hacked up versions of it with the same name. Good for them.


There are some parallels, but I think customizing a default config is subtly different than code changes. Redhat and Debian likely have both, of course...


IIRC there was not any animosity between the Firefox and Debian teams though (there was plenty reported by people who saw the matter and misreported it, and Stallman waded in, as is his style, which didn't help the misreporting: "RMS ponders whether Firefox is truly free" was reported as "entire open source community vs Firefox, fight at 11" by some).

They started using the trademark thing to stop some distributors who were adding patches they did not want to be associated with (either because the patches were just plain malicious or because they didn't want their bug tracker filled with reports about code they had nothing to do with). The Debian people scanned the relevant legal details and decided that they either needed to stop using the name or put together an agreement that covered them. The latter would have been easy enough but was against their preferred WayOfThings(tm), as it would mean downstream of them would (legally speaking) need to make changes or separately arrange an agreement, so they chose a new branding instead.

Neither the Firefox team protecting their name nor the Debian team sticking to their mission statement is wrong IMO (though of course some may disagree, depending on definitions of "free" and so forth, so they could be said to be wrong), but without the branding change the two are incompatible on a legal point that was only enforced to stop the malicious.

With the branding change the "conflict" is resolved, and no one is really unhappy or otherwise reasonably put out.

The RedHat/CentOS case is a bit different: the way CentOS were using the name in no way implied that RedHat was responsible for CentOS but did accurately represent how CentOS was built, so CentOS were probably on good legal ground but capitulated because they didn't want that particular fight. This, IMO, made RedHat somewhat bully-like in this case - though to be honest it takes more than one iffy commercial/legal wrangle to undo the pile of good that RedHat has (directly and otherwise) done for Linux and related projects over the years (and continues to do).


'...were modifying the Firefox source to contain malicious code and calling it "Firefox"' - You loaded that a little. Not every Firefox patch is obscure, malicious, or both.


He didn't say that - you conveniently left out the "some people" preceding your quote.


I find it really odd that some FOSS people regard Red Hat as some sort of evil corporation

Indeed, the success of Linux in the enterprise owes a lot to Red Hat, as they gave enterprises the sort of consistent, corporate-buzzword-compliant support agreements that removed a lot of the scariness that would otherwise impede use of Linux for "important" services.


It's not just the support agreements (though support agreements didn't hurt). It's that RHEL provided the sort of stable path for patches and upgrades that Linux traditionally did not, moving as quickly as it did.

It allowed ISVs to certify their software packages against a consistent OS build, hardware vendors to utilize a long-term consistent driver interface, and end users to not have to worry about upgrade cycles, sudden performance changes, and so on.

Basically it gave enterprises that had been dependent on Solaris and the like a comparable Linux alternative.


This was true in 1998. Cowardly companies that insisted on being able to pay for support for anything they deployed were able to hand over $$$ to RedHat and then tick the box of 'support'. However, as we all know, you get a Linux user/expert to fix the server; you don't call RedHat.

Due to the success of Ubuntu you have user/experts in small to medium sized companies that have 'given Linux a go' and got some good experience of Ubuntu. They might prefer the Ubuntu ways of doing things, e.g. the 'no root' security model, the modern, up to date packages (e.g. latest version of PHP), the ultra easy firewall and plenty else.

However, due to the perception that Red Hat is 'enterprise' and that small to medium companies are cheapskates, the CentOS rip-off gets specified by micro-managers because they have heard it is more 'enterprisey'. 'They know best' and go with the turgid CentOS regardless of whether any developers on the team would prefer something else.

You then have a lot of hosting companies pushing CentOS because they think it is more 'enterprisey' and what their customers want. Non-technical managers listen to them and then blame their team for any server problems.

Sure, if you know your way around Red Hat it is the greatest thing since Windows 3.0, you can get it to do what you want just fine. But, actually, if you are not an expert yet then very little about Red Hat is obvious. Far too many answers to common problems are guesswork in forum answers that you come across. Furthermore any serious claim to better security goes out the window as soon as you add random repositories that you might need just to get your work done.

Red Hat has had its day. CentOS has been a mere rip off of Red Hat and it has not added to the state of the art. I know it has its fans but I wish it would just go away.


From someone whose day job is to manage thousands of Linux servers and who has professionally worked with SLES, RHEL, Fedora, Debian, Ubuntu, and a custom Linux From Scratch internal Linux distribution: you couldn't be farther from reality if you tried.

"""Due to the success of Ubuntu you have users/experts in small to medium sized companies that have 'given Linux a go' and got some good experience of Ubuntu""". I'm sorry, but there are very few Linux professionals I've ever met I'd consider themselves "experts" who would recommend Ubuntu for their environment. Pretty much 0 except the one guy I work with on the board of Software in the Public Interest (nonprofit that runs Debian). Ubuntu did the smart thing and got onto the "cloud" bandwagon very early. As a result, Ubuntu is likely one of the more pervasive operating systems within that community. The cloud environment is a very small part of the entire Linux ecosystem and doesn't equate at _all_ with the high end "enterprise/hpc" industry. Don't believe me? Take a look yourself at the top 500 supercomputer breakdown by operating system. Exactly 0 Ubuntu clusters. Ubuntu with high end sans such as EMC/Hitachi/etc? Nope, it plays massive second fiddle to RHEL where those companies first certify their hardware for.

Ubuntu is better than Debian regarding security (almost exclusively from the excellent work of Kees Cook, who now works on security for the ChromeOS project at Google and hardens the Linux kernel). However, it still can't hold a candle to the proactive security features of RHEL (and hence the awful cheapskate CentOS, as you call it). Don't believe me? Look up the gcc stack-smashing protector and FORTIFY_SOURCE patches. Look at the glibc canary code that also helps (in tandem with the gcc patches) to prevent buffer overflows, at execshield (from Ingo Molnar, a Redhat employee) before NX bits on CPUs were super common, and at being the first mainstream Linux distribution to include a mandatory access control framework (SELinux) enabled by default. Are some of these features in Ubuntu now? Sure. Why? Because Redhat employees wrote them and got them into upstream software, which downstream distributions like Ubuntu, which do precious little engineering, have adopted.

CentOS is more enterprisey than Ubuntu. Why? Because it is based on the enterprise standard when it comes to Linux: Redhat Enterprise Linux. Ubuntu still sucks with big enterprise SAN gear, it also sucks with some of the more high-end networking kit (InfiniBand on Ubuntu is possible, but a royal PITA, and the vendors laugh at you), and it is terrible for realtime stuff, but it is fantastic if you want the same interface on your desktop, tablet, and phone. If you don't know your way around Linux (your comment about not knowing your way around Redhat), perhaps you shouldn't be managing Linux servers, and you're helping contribute to the list of botnet nodes by not having a clue what you're doing? Again, I work on Linux full time and have for a while. The major serious differences between Redhat and Ubuntu/Debian:

    - /etc/network/interfaces vs /etc/sysconfig/network-scripts/ifcfg-* (see the sketch after this list)

    - /etc/default vs /etc/sysconfig

    - metapackages for everything vs yum groups

    - dpkg/apt vs rpm/yum

    - Building debs vs building rpms (I could rant for a day on how much ridiculously easier it is to build redhat packages)

    - Preseed vs Kickstart (wth was Ian Murdock thinking here? Preseed is still years behind kickstart in being awesome)
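
To make the first item above concrete, here is roughly the same static address expressed both ways (the interface name and addresses are purely illustrative):

    # Debian/Ubuntu: /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1

    # RHEL/CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1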


If you know Linux, you can learn those differences well in less than a week. Linux isn't obvious, it requires a lot of reading and experience. My whole point is basically that you are completely wrong and quite clearly don't realize you are wrong because you don't seem to have an idea of what you're even talking about. I do personally think Mark Shuttleworth and the Canonical crew are doing wonderful things for desktop Linux, and general Linux marketing, but they've done tons less when it comes to Linux engineering compared to what Redhat has done.

Sorry for the rant. It isn't normally my style, but this is just ridiculous. Feel free to downvote this, but please do some reading and learn Linux. You'll realize I'm likely right.


Well then, how would you compare Red Hat Enterprise Linux with the Gentoo distribution and other more custom-type distros? What exactly is the 'enterprise standard'? It seems like RHEL is used in corporations because it has become a so-called 'standard' rather than being superior to other Linux choices. It took a long time to even get Linux into the corporate world because other Unixes were the 'standards'.


Please forget enterprise. It is used and misconstrued until it means nothing. Let's talk about manageability. How do you (easily) manage 1000 Gentoo (or Arch Linux) servers? You could have a distcc farm to build your base distro from stage1 (if you needed to) or just copy down the binary stage3 builds and then bootstrap using binary ebuilds, but it is still a whole lot more difficult than with a full binary distribution such as Redhat or Debian. Dealing with large clusters of servers, the tools that they include, or write and then open source, are what really blow me away.

Just a few in no particular order:

- the RHEL kernel. Redhat has consistently topped the list of Linux kernel contributors for years. The first google hits for it were http://lwn.net/Articles/451243/ and http://lwn.net/Articles/507986/, but that hasn't changed for a looooong time. They basically have as much of a monopoly on core Linux kernel development as is possible in such a large, complex project. Quite literally, there isn't a company in the world with more Linux development chops than Redhat. If you run critical applications on Redhat servers (think banking, Wall Street exchanges like NASDAQ, or hospital systems where downtime could result in real problems), Redhat will be able to fix it if anyone can. I'm not pretending working with Redhat support is fun, but they are better equipped from an engineering standpoint than virtually anyone. The numbers back that up. Due to this, the Redhat kernel is an interesting hybrid of slightly older, battle-tested stable code with newer features backported. This is achieved because, super often, the people who write the features upstream tend to be Redhat employees, so they do both. If I was asked to pick one thing that set RHEL / Redhat / CentOS apart, it would be the work that goes into their kernel for QA and testing / backporting. Look at a company like Canonical: they have a bit more than a dozen (https://wiki.ubuntu.com/KernelTeam) kernel developers. They simply can't compete on engineering resources due to their limited number of engineers. As a result (and a smart business move) they are more consumers of patches from upstream than producers. Also, look at the lwn "who wrote Linux X.YY" articles. You'll rarely, if ever, find Canonical on that list, except for when they got the AppArmor patches merged (props to them!).

- sssd[1] - a solid project that essentially unifies pam_ldap/Kerberos and pam_ccreds/nslcd/nscd/pam_access in one very nice implementation. This makes (for instance) joining your Linux nodes into an Active Directory domain (without using commercial software from Likewise or some other cruddy vendor) just work out of the box. It also makes single sign-on and migration from standard LDAP to kerberized LDAP (a very hard problem) super duper simple.

- cobbler[2] (and now the foreman[3]) - These tools, along with Redhat's kickstart, make it very trivial to turn a PXE-booted cluster of 500 new servers into 500 ready-for-production servers (a near-minimal kickstart sketch follows this list). Gentoo has nothing I'm aware of that allows installing completely automated like kickstart, but someone please enlighten me via a reply if this is incorrect. Michael DeHaan (a frequent HN commenter, who also wrote the ansible config management tool) wrote cobbler.

- abrtd[4] / faf[5] - abrtd will collect crash reports (segfaults, coredumps, python tracebacks, kernel oopses, etc.) and parse the info / store the relevant bits locally or forward them on to a faf server. It will allow you to do things like (for example) easily figuring out every single system that is reporting a specific kernel oops, which is then tracked down to a specific combination of hardware and kernel. Sure there are tools like crash and netdump, but abrtd is simply a very modular management tool on top of all of those things. The public Fedora project faf is located at: https://retrace.fedoraproject.org/faf/problems/hot/. faf is good stuff.

- freeipa[6] - Honestly, up until this project, Linux never had anything that competed with Microsoft's Active Directory as a turn-key, easy-to-set-up-and-manage kerberized LDAP user, group, and policy management product. IPA changes that and integrates very well with Microsoft AD through a Kerberos-level trust. sssd (above) is the IPA client. It allows true single sign-on between Linux and Windows clients, something that is still elusive for most companies.

- standards. Linux's biggest strength is also its Achilles heel. Not having package standards or kernel standards (or stability) prevented a lot of companies from using Linux or certifying their software for Linux early on. Being very conservative in what they will support, and supporting it for very long periods of time, allowed companies like Oracle (as a horrible example) to port their database to Linux and certify that things are good. Try getting big, complex commercial pieces of software working on a build-your-own distro. It is possible, but it's buyer beware. Redhat made this their business model and has done a great job of it. At this point, Debian has also done a wonderful job at standardizing things and being consistent, albeit different from Redhat.
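
As a taste of the kickstart piece mentioned above, a near-minimal kickstart file looks something like this (the mirror URL and password are placeholders; real files usually add partitioning, networking, and a %post section):

    # minimal, illustrative kickstart for an unattended install
    install
    url --url=http://mirror.example.com/centos/6/os/x86_64/
    lang en_US.UTF-8
    keyboard us
    timezone --utc Etc/UTC
    rootpw changeme
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    reboot

    %packages
    @core
    %end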

TL;DR: Redhat is building tools that make Linux easier to deploy and easier to manage in large "enterprise" environments. These tools make it equally easy to manage in smaller environments. No single entity has pushed Linux further in the "enterprise" than Redhat. I could list plenty more, but this hopefully answers your question fully. If not, click through to my profile, find my resume, and from it shoot me an email.

[1] https://fedorahosted.org/sssd/

[2] http://www.cobblerd.org

[3] http://theforeman.org

[4] https://github.com/abrt/abrt/wiki/ABRT-Project

[5] https://github.com/abrt/faf

[6] http://www.freeipa.org/page/Main_Page


Speaking as someone who has "given Ubuntu a go", but has no expertise whatsoever... can you explain what your list of RHEL/Ubuntu pros & cons means?

I have no idea why one arrangement of /etc/ is preferable to another, for example. Is it just security, isolation, and better package management?


Please read again. Those are not "pros & cons", but a mere list of differences between the two distro families.


Exactly. They are just the differences. If you want a technical pro/con of Ubuntu/Debian vs Redhat/Fedora, that is an entirely different post, equally as large (perhaps more so). In summary, from an ease-of-sysadmin standpoint for large numbers of servers, Redhat and the Redhat ecosystem (cobbler, pulp, freeipa, sssd, abrtd, kickstart) just beat the living pants off of anything Ubuntu/Debian have. It is much easier to manage thousands of Redhat machines (without building everything custom like Google) than it is thousands of Ubuntu/Debian machines. I know this because I've done both as part of my day job.


> I find it really odd that some FOSS people regard Red Hat as some sort of evil

Why do you find it odd when it is succinctly explained in the comment to which you are replying?


GP doesn't "explain" why RH is evil. Discomfiting legal maneuvers do not cancel their thousands of commits to the Linux kernel, init.d, GWT, Cygwin (!), etc.


> GP doesn't "explain" why RH is evil.

It explains why some people in the FOSS community think of them that way.

There seems to be a problem here of people not being able to understand that it is possible for people to have a different point of view, and no amount of "explaining" is going to fix that.



An attempt to "explain" why is in GP's 3rd link:

http://www.pnaelv.net


True, but Redhat ditching support for home users was pretty lame. I had bought a boxed copy with support two weeks before they made the announcement. I had already downloaded it; I just wanted to support the company. Luckily I knew the manager at Best Buy, and she let me exchange it for Suse. That being said, I think much of the resentment towards Redhat comes from memories of dependency hell before yum was reliable.

As far as IceWeasel, I asked a Mozilla employee about it a few years ago. He said they generally approved of it, and were just glad that people were using the code.


Debian modify Firefox and so can't package it as an official build.


Actually, under the GPL, Red Hat is only obligated to make sources available to its customers. What they have done is make the sources available to everyone on the Internet for free [1]. So CentOS would have to pay for RHEL were it not for Red Hat's openness. Probably not a big deal. However, Red Hat is under no obligation to make its non-GPL packages (e.g., python, ruby, apache, postgresql, ssh, etc.) available to anyone in source form, including their customers. These, too, are available free of charge to the general public. Red Hat is also under no obligation to make any of the source of their own internally developed projects (e.g., package management, the OS installer, and all of the other projects that differentiate the distribution from a software perspective) available under an open source license, but they do (admittedly, this was not always the case). Finally, Red Hat employs many developers who work full time on critical projects (kernel, gcc, gnome, etc.). They are pretty model open source citizens whose business model is not to use open source as a gateway to their own proprietary products, unlike IBM and Oracle.

If they wanted to shut down CentOS, it would be very easy to stop distributing the source of their own projects and of permissive license packages. Hopefully sponsoring CentOS is not just a play to exert influence on the project and retard its progress, but I am willing to give Red Hat the benefit of the doubt here.

[1] http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/os/...


> Actually, under the GPL, Red Hat is only obligated to make sources available to its customers.

That is true, but the GPL allows their customers to freely distribute the GPL source they receive. Not saying Red Hat doesn't help or isn't doing good here, but it's not quite as altruistic as you make it out to be. I thank the GPL for that.

Note that I'm a huge fan of Red Hat and preferred their distribution since the RH 4 days, I don't want to denigrate Red Hat in any way. Also a huge fan of CentOS as well.


> they were threatened by Red Hat's legal department and forced to remove all mentions of Red Hat and even links to Red Hat's web site from the CentOS web site.

Doesn't trademark law require you to enforce your trademark (or you lose it)?

Also, Oracle did/does the same thing as CentOS, and sells support for it. There is also ScientificLinux too.


I believe RedHat would get a lot more followers, and a lot more support money if they did two things:

- have support contracts that make sense, and trust their customers. Let their customers choose which server they want to put under contract etc...

- have more software in their default repo (like, I don't know... Ubuntu)

They managed to corner the market for pay-for software (to a certain extent, Suse has managed to capture a piece of that market), but their support terms and lack of standard software are so bad that people go to extreme lengths to run CentOS and Scientific Linux, and have a single server running Ubuntu.


Every time a market collapse is hanging over the finance world, they know their customers will do what a prior employer of mine did: put support on 1 of 1000 servers.

It's sad, but this wasn't at a .com rev3 company; this was an old-school hedge fund with billions under management. IT support is just something that gets neglected if there isn't a contract that is enforceable. Clearly the company could afford a thousand systems' worth of support, and could make it worth the money (autofs bugs galore!). There is something missing in making the social contract of open source pay for the people needed to maintain open source.


People do that today.

I know of a few companies which tried to pay for 24-hour support for prod servers, email support for QA servers, and no support for dev servers, and RedHat insisted on making them pay for anything running RedHat, all or nothing... The companies switched to CentOS and SL, and bought a contract for one RedHat server.


I wonder if they are also doing this so they can get an idea of who the user base of CentOS is and also try to convert them to paying customers.


My guess is they are feeling the hit from Oracle Linux's business model and wanted the flexibility to compete directly.

Oracle Linux's distribution model is that you can take CentOS, change a couple repositories and make it Oracle Linux. Then if you want support, you can pay for it. With Red Hat, RHEL and CentOS before today were separate products with separate release schedules and separate userbases. This move gives Red Hat the opportunity to act more like Oracle Linux if they choose to.


Surely no one uses Oracle Linux unless they are running Oracle?


It's almost certainly designed to take advantage of enterprise customers that are already paying Oracle support dollars in some fashion or other, with the slightly advantageous notion of being able to collapse another support provider.


Oh, wow, I didn't even know that. This sounds much more likely. Thanks!


I think RH recognized that

* CentOS has significant market share

* people sometimes want to switch from CentOS to RH, but not vice versa (natural evolution in growing companies)

Thus they will cooperate with CentOS, with the end result that switching from CentOS to RH will become easier.


Congrats to the CentOS team. You filled a gap that badly needed filling when the Fedora / RHEL split happened.

As someone who used Red Hat Linux in the 90's / early 2000's, you had to be there to know how large of a gap the CentOS team filled.

Historical background:

Red Hat Linux (RHL) was the most widely used Linux distro in the late 90's / early 2000's. Overnight, RedHat destabilized RHL by turning it into Fedora, with its rapid release cycles, lack of backports, bleeding-edge packages, etc. RHEL became a closed distro with only source distributed, but none of the tools to easily replicate the build.

RHL users (who were the majority of Linux users) were faced with a choice: pay for RHEL or switch distros. This really sucked b/c RHL deployments were largely servers that were designed for long-term use. The community was faced with a large-scale migration of servers, involving a large population of web and edge-of-network deployments.

This is when CentOS stepped in, created a binary compatible build of RHEL, and allowed long time RHL users to continue with a RedHat-like distro.

RedHat has been a major contributor to OSS. However, projects like CentOS have filled very important roles in the Linux and OSS communities. Again, congrats to the team.


I really wish that, when RHEL/CentOS branches from Fedora to make a new release, they would also keep and provide a snapshot of Fedora's repository at that time, just like what Ubuntu does when they sync with Debian Sid (i.e., packages in the 'main' and 'restricted' repos are supported, and all other packages in the Debian archive are imported and made available in the 'universe' and 'multiverse' repositories for your convenience).

That would go a long way toward making CentOS a viable Linux distribution for everyday use. In my experience, EPEL isn't enough, and rebuilding packages seems like a wasted effort since they were there when they branched off to prep a new release.

Add a somewhat predictable release schedule on top of that (again, in my opinion Ubuntu hit the sweet spot with 24 months here) and that would be the icing on the cake. Heck, RHEL 6 was first released in 2010 and there's still Python 2.6 on that!

I know that I could shut up and use Ubuntu (I do), it's just that I like RedHat way more than Canonical but they don't make it easy for me to use and love their products (speaking as a former Fedora user and contributor).


> I really wish that, when RHEL/CentOS branches from Fedora to make a new release, they would also keep a snapshot of Fedora's repository at that time

It isn't always so easy. I don't know what RHEL7 is like, but for 6 and 5 there wasn't a complete correlation between a Fedora release and a RHEL release. For example, 6 is mostly based on Fedora 12, except for a bunch of backports from 13 and a few from 14.


7 is definitely a mix of bits from 19 and 20.


Then who provides security and bug fixes for that snapshot for the ten years during which a RHEL version is supported? EPEL is basically what you're asking for, but limited to packages where someone is willing to make that commitment.

"RHEL 6 was first released in 2010 and there's still Python 2.6 on that"

Or Python 2.7 or 3.3 can be installed via Software Collections. These have a predictable release cycle: a new version is released every 18 months, and each version is supported for three years.

https://access.redhat.com/site/support/policy/updates/errata...

https://access.redhat.com/site/support/policy/updates/rhscl/
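
In practice that looks roughly like this (a minimal sketch; the collection name follows the RHSCL convention, so check what your repo/channel actually provides):

    # install the Python 2.7 collection alongside the system python
    yum install python27
    # open a shell (or run any command) with the collection enabled
    scl enable python27 bash
    python --version    # now reports 2.7.x; /usr/bin/python is untouched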


OT: Software Collections lets you get Python 2.7 and even 3.3 on RHEL and CentOS 6 :)



A bit more information in the official announcement: http://lists.centos.org/pipermail/centos-announce/2014-Janua...

"With great excitement I'd like to announce that we are joining the Red Hat family. The CentOS Project ( http://www.centos.org ) is joining forces with Red Hat. Working as part of the Open Source and Standards team ( http://community.redhat.com/ ) to foster rapid innovation beyond the platform into the next generation of emerging technologies. Working alongside the Fedora and RHEL ecosystems, we hope to further expand on the community offerings by providing a platform that is easily consumed, by other projects to promote their code while we maintain the established base."

(continues)


Additional useful highlights:

> - Some of us now work for Red Hat, but not RHEL.

> - Red Hat is offering to sponsor some of the buildsystem and initial content delivery resources

> - Because we are now able to work with the Red Hat legal teams, some of the constraints that resulted in efforts like CentOS-QA being behind closed doors now go away, and we hope to have the entire build, test, and delivery chain open to anyone who wishes to come and join the effort.

> - The Red Hat Enterprise Linux to CentOS firewall will also remain. Members and contributors to the CentOS efforts are still isolated from the RHEL Groups inside Red Hat, with the only interface being srpm / source path tracking, no sooner than is considered released. In summary: we retain an upstream.


Why would the build, test, and delivery chain be subject to Red Hat's legal team? Did the process to remove Red Hat's marks from their GPL'd source trigger trademark issues?


I think the problem is that if they missed a trademark, then CentOS was distributing Red Hat's trademarks, putting them in a sticky legal position.


This is one of the reasons for being a fully independent distro with no ties to a "corporation". Debian comes to mind, as does Slackware, Arch, Gentoo, a couple of others. Being able to go about your business as a distro without corporate oversight is desirable these days.

CentOS now has a "master" where before, the GPL allowed them to simply take the source, remove trademarks, and re-compile as CentOS, getting the benefits of a corporately-funded distro without the legal constraints of evil IP and what not.

RH also may choose to play ball with certain organizations that I don't agree with. This may affect CentOS in some way. An indy distro can give them the finger and tell them to get bent. My goal is not money, it's freedom from oversight, freedom to do as I please, freedom to have an unencumbered distro not tainted by the likes of the false notion of IP, legal nonsense, you name it. Debian is growing for a reason. One of those reasons is because it's an indy distro.


I understand what you mean by independence from a corporation, but for CentOS it's the other way round.

CentOS has always been a "slave" of Red Hat by design and, before this move, the master could even sue it for misappropriating trademarks. Now, QA of packages can be done in the open, because it would no longer be as problematic to ship test-quality packages that still happen to include a Red Hat trademark.


It appears the link has changed (used to be http://www.centos.org/about/governance/ ), so all is well but I wonder how.



Does this mean we can finally stop referring to RedHat as an unnamed "Prominent North American Enterprise Linux Vendor"?


I used North American Vendor of Enterprise Linux - NAVEL - but it never caught on in time... :/


Yes (according to one of the core devs).


I think that's kind of sad, actually. Always made me laugh.



I hope this will make it easier for me to use CentOS at NASA. They only want us to use Linux distributions that are actively supported with security fixes and for some reason they don't think CentOS qualifies, since it's not a "real" company. They prefer us to use RHEL, Ubuntu, or Suse. But if this new arrangement increases the perception of timeliness for updates, then maybe we can start using it and save some money.

Edited for clarity.


Although I cannot personally vouch for it, I read that Scientific Linux gets updates more often compared to CentOS.


Why does it matter? If they have the budget for it what's the problem with spending money on RHEL? I don't see the benefit in pushing CentOS over RHEL when licensing isn't a factor.


I am not sure if I feel happy or not about this. I just hope they don't go the Fedora way of mangling, moving, and changing everything around with every iteration.

I don't see what interest RH has in CentOS except trying to disrupt the base and thereby push more businesses and customers to RH.

Maybe I'm just biased, but CentOS is doing pretty well in my opinion, except maybe for the late code changes and updates.


> I just hope they don't go the Fedora way on mangling, moving and changing everything around with every iteration.

Why would they? CentOS is RHEL, they'll ship whatever RHEL ships.


RH may try to cripple CentOS, gather user statistics or advertise to promote more consumption of RHEL instead.

"Keep your friends close, ..." because it's just business.


More likely they'll provide a more official way to convert CentOS to RHEL and buy a support contract. It's always been possible, and many people use CentOS with the understanding that if anything gets really crazy, they can do just that and buy support.


Any attempt to do so would be amazingly obvious to everyone and would be bad enough to force a fork. Redhat isn't stupid.

This smells like FUD.


They won't change much around, I think. CentOS is a clone of RHEL and there's very little that changes around in RHEL.


I really hope you are right and it will go this way.


So, in general nothing much will change for users? Red Hat is introducing stability by employing core developers to work solely on CentOS and possibly streamlining changes between RHEL <-> CentOS. Or, if you prefer to view it like that, Red Hat is exercising more direct control over CentOS.


The comment below https://news.ycombinator.com/item?id=7020134 says build and test in centos will open up, which is great news.


> So, in general nothing much will change for users?

Sounds like the CentOS QA process will be more transparent.


I've largely made my living off CentOS software. I hope this new friendship doesn't cause CentOS to become diluted or eventually shuttered. Great project run by a very small number of very dedicated supporters. Best of luck to them.


My concern is more along the lines of a sudden injection of tons of energy causing the CentOS ecosystem some extreme growing pain. That could filter down to individual service providers' servers becoming less reliable or even just needing more frequent attention, which would be painful enough.


Just re-read the announcement. The third sentence isn't a sentence at all.


This is my fear as well. I like CentOS for what it is and deploy it daily on most of the servers I run.


I think Red Hat had to do this, otherwise CentOS would be drawn closer and closer to Oracle.


lol!


I wonder if this will lead to CentOS getting security updates at the same time as RHEL (or very shortly thereafter) or if CentOS will continue to have to "play catch-up".

It is for this reason that I moved to Oracle Linux about a year ago when deploying a bunch of new machines. I am certainly no fan of Oracle the company but they were getting security updates out much quicker than CentOS.


I see this as RH not wanting competition and/or wanting to somehow control CentOS. It's no wonder, honestly, that Debian is gaining in popularity as they are the last of the main Linux distros who control their own destiny. I feel very awkward about this news.


I know what you mean about 'feeling awkward'.

But think: how could Red Hat 'control' CentOS when CentOS is simply a clone of RHEL produced from the srpms that Red Hat have to distribute under the terms of the GPL?

As Karanbir says, 'we retain an upstream'.

There are other clones: Scientific Linux (CERN/Fermilab and a lot of universities) and Springdale Linux (Princeton/Institute for Advanced Study) are two other RHEL clones. And of course there is Oracle Linux, a third clone, and one that provides commercial support.


The absolute worst that can happen is a return to the state of today. Also, CentOS didn't do anything new and creative and visionary at all. Their goal from the beginning was to replicate RHEL with all the non-free bits (mostly artwork and trademarks) replaced.


The concern isn't to have anything new and creative. That's not always the goal of OSS. The goal is to have free and open equal alternatives to corporate-controlled software. Debian, for example, is likely the last of the truly unencumbered distros.


What about Slackware, Gentoo, Arch...?


Tell me what CentOS will no longer be able to do that they could previously.


The idea is to have a distro with no "corporate" oversight -- an independent distro. This is the reason why I lean heavily toward Debian and OpenBSD: because they are independent.


Since CentOS blindly reproduced a product generated by a corporation, I'm having a hard time understanding your argument.


I'm trying to get you to describe your reasoning about what actually differs if a corporation is involved. "Independent" is not in itself a word that confers any benefit.


Most Redhat customers use Centos (eg for build, test machines etc), and many Centos users are potential RH customers. Fedora does not fill the same niche, so having all three makes sense.


Don't worry, be happy. CentOS was never real competition to RH, instead it was providing an invaluable ecosystem. In a way RH always controlled CentOS since the latter is just a clone.

If CentOS gets shitty you can always move to SL, Puias etc or make your own.


And if you need multimedia codecs, a version of VLC that works properly, and if you live somewhere where it is legal to play a DVD on a machine running linux, there is the Nux desktop repository.


What would the consequences/benefits be for the Fedora project?


RHEL and CentOS are aimed towards enterprise customers; Fedora is for individual users. I don't think there's any kind of real competition or even overlap between CentOS and Fedora. In addition, Fedora is basically a community project (although led by RH). It seems unlikely that RH would be able to shut down or materially alter the Fedora project even if they wanted to. In the worst-case scenario the community could just branch the code and form a new organisation to lead its development.


The Fedora Project is also going through some pretty significant changes. See https://lwn.net/Articles/569795/ (although slightly OT).


Now if they can just get rid of RPM I might come back!


Do come back, it's better than the competition nowadays. :)


It's been years, but RPM was always frustrating compared to the ease of apt.


How so?


I'm not sure what the OP 'mergy' intended, but I'd guess that the intended reason had to do with better ecosystem curation. Under the hood, my understanding is that .rpm is marginally inferior to .deb, but not remarkably so - e.g. using cpio instead of gz as the format basis. So by elimination, the only reason to prefer .rpm would be the RPMs themselves, if you follow.

As a longtime Fedora user and a recent emigre (post-Snowden) to OpenSuSE, I agree that there is much to love about the selection of RPMs available, especially under Fedora. Although I'm sticking with SuSE for the lightning-fast zypper (still noticeably faster than yum or even its slated next-gen replacement dnf!) and the well-thought-out snapper utilities. But that is OT.


Mostly it's a matter of picking your poison. dpkg allows some things that RPM doesn't, and vice versa. dpkg does have some nice features (well, features I can't remember, except that I went "Ooo, I want rpm to have that" when I saw them).

One thing that dpkg has that really annoys the hell out of me is allowing for user input during the transaction. It makes unattended upgrades impossible.


In general, Debian/Ubuntu packages do not prompt the user with debconf questions these days, although there are occasional leftovers in old packages if you upgrade via a terminal with apt-get upgrade.

You can define an environment variable, DEBIAN_FRONTEND=noninteractive, to force even the worst-behaved packages to never ask a question.
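
For example (a minimal sketch; postfix is just an example of a package that traditionally asks debconf questions):

    # force non-interactive behaviour for a scripted install or upgrade
    DEBIAN_FRONTEND=noninteractive apt-get -y install postfix
    DEBIAN_FRONTEND=noninteractive apt-get -y upgrade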


You can set those configurations, certainly, but it's up to the package maintainer to actually respect them. At my previous company, I occasionally ran into packages that insisted on trying to read from the terminal, even when installed noninteractively, with all configuration flags set to noninteractive. I can understand the potential appeal for a lone individual working with a small number of hand-maintained systems, but when working with a large cluster, interaction during package operations is just a horrible misfeature.


Just to provide a similarly brief and pointless argument: I strongly prefer yum/RPM to apt-get/deb, both as a package maintainer (for tens of thousands of users) and as a system maintainer with dozens of servers to maintain.


Not my experience. I hope it has improved, but the missing dependencies were always a total pain.


Both yum and apt-get have excellent dependency resolution. I don't really know what else to say about it. I can remember the days of "RPM hell" (though I rarely found it all that problematic) as I've been using Linux as my primary server and desktop OS since 1995, but yum was in widespread use by 2005. It's been a long time since missing dependencies was a thing you needed to think about on any modern Linux distribution.

My preference stems from the following:

1. yum repos are so simple to create and maintain! One command, one directory, no files except the packages themselves (see the sketch after this list). Contrast this with apt-get...it literally took me weeks to figure out how to create an apt-get repo. It requires several configuration files, which are generally human-maintained (as far as I can tell, the docs for maintaining a repo separate from the Debian repo are awful and leave more than half of the process out completely, to be guessed and googled...they assume you only want to add a package to the Debian repo, not create a new one). It is also inefficient as hell. Generating our Debian/Ubuntu repo metadata takes an order of magnitude longer than the yum repos.

2. Packaging RPM is much nicer, IMHO. If you don't need patches, it's just one file, the package-name.spec, plus the tarball. If it's a standard "./configure; make; make install" process, your spec file can be almost empty. The spec file is well-documented (Epoch is tricky, and a couple of the macros aren't immediately obvious, but in general, I can answer my questions by reading the docs). Debian requires several directories full of files for every package, and the documentation is obtuse. Again, it took me a long time to figure out how to package for Debian, and there's no good source for what tools you use to create and maintain packages. Wanna sign those packages or repos? Good luck! There are three or four different ways documented, and it was not at all clear which was the right way. I had to dig into the Debian repos to see how they did it, and try to replicate it. I think I'm doing it right.

3. As a user, I dislike apt-get's tendency to want to remove packages once nothing depends on them any more...so, if Apache were installed to satisfy a dependency in another package, and you remove that other package, apt-get will (with some configurations, seemingly the default) ask if you want to remove Apache. Even if you have a hundred VirtualHosts configured and rely on Apache for everything! I find some of apt-get's other defaults alarming.

4. yum has mock. mock is amazing. Building packages for any RPM/yum-based distro with one command and one configuration file (and maybe a custom repo, if there isn't already one) is so awesome. As a package maintainer, this would be enough to win my heart forever. apt-get has some kind of fake root thing, or something, but I can't figure it out, so I have to maintain a VM for every Debian/Ubuntu version we support. This sucks and is tedious as hell.
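
A sketch of points 1 and 4 above, assuming the stock createrepo and mock tools (the paths, mock config, and package name are made up):

    # point 1: turn a directory of RPMs into a yum repo with one command
    createrepo /srv/yum/el6/x86_64/

    # point 4: rebuild a source RPM in a clean, disposable chroot for a target distro/arch
    mock -r epel-6-x86_64 --rebuild mypackage-1.0-1.el6.src.rpm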

But, if I had to live with apt-get, I certainly could. They are, honestly, both amazingly great technology, and I can't imagine life without a good package manager. I hate Windows and Mac for this very reason. I can't believe people think Mac OS X is acceptable, given the awfulness of package management on the platform. Even Windows has better package management than Mac OS X.

Some amazing things about both:

yum has groupinstall and apt-get has tasks. Super cool! Install a huge swath of packages, for achieving a specific goal (like "Development Tools" or "Gnome Desktop") with one command. Brilliant.

Easily install all the dependencies you need to build a source package, based on its specified build dependencies. How amazing is that? Not only can you install the source code and the config files, patches, and data files you need to replicate the package from source, you can also easily replicate the needed build environment. (Add mock on yum-based systems, and you have packager heaven.)
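
Roughly, the commands being described (group and package names are illustrative):

    # install a whole package group / task in one go
    yum groupinstall "Development Tools"
    # pull in everything needed to rebuild a given package from source
    yum-builddep mypackage        # from yum-utils
    apt-get build-dep mypackage   # the Debian/Ubuntu equivalent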


You don't need those virtual machines if you use pbuilder. All builds will run under the same kernel but if it's affected by that you're doing weird things. Also, your package shouldn't depend on any meta package as it's bad style (what if that package drops a dependency you relied on?). A good package lists only the bare minimum dependencies needed to get it to work (but under Debian may recommend and suggest other packages to go along with it).
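
For anyone following along, the basic pbuilder flow is roughly this (the release name and package are examples):

    # one-time: build a base chroot tarball for the target release
    sudo pbuilder create --distribution wheezy
    # then build any Debian source package inside that clean chroot
    sudo pbuilder build mypackage_1.0-1.dsc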


I read about pbuilder, but couldn't figure out how to make it work. Again, the documentation for the apt-get/deb ecosystem is awful. If I thought I knew how any of it worked, I'd write new docs...but, our Debian/Ubuntu repos are awful (they work fine for end users, but they are awful to maintain...I must be doing it wrong because it's so slow to regenerate and requires so much human involvement, but I don't know how to do it right). And, not only are the docs difficult or non-existent, the tools aren't very discoverable. Once I knew mock existed, I was able to get it up and running mostly by playing with it and looking at examples. I wasn't able to figure that out with pbuilder.

A quick googling just now revealed the Ubuntu wiki has what looks like good documentation for pbuilder, so I may be able to make it work now. It doesn't actually look that difficult (though it seems to require pretty advanced shell scripting for some things that seem like they ought to be in there with a command line switch or a config file option, but my reading of it was cursory and it may just be a weird bit of showing off in the docs, and shell scripting isn't actually a necessary part of the process). But, with truly awful docs, as they existed when I was trying to use it several years ago, I couldn't make it go.

I really don't hate apt-get as much as all of this makes it sound like I do. As a user, apt-get is fine. As a maintainer, I find it extremely frustrating. If the docs were better, I'd probably like it more (though not as much as RPM/yum...the simplicity of maintaining packages for a yum repo is really hard to beat).


What's wrong with RPM?


My impression is that they got a bad rep long ago before there were decent package managers. As I recall, you had to install dependencies manually and updating a system could get nightmarish. Nothing really wrong with rpm as a package technology per-se, but before yum it was a lot less friendly.


Just a couple of days ago I was doing an upgrade from 6.4 to 6.5 and yum died in the middle; fixing that was quite hellish, as rpm somehow decided that multiple conflicting versions of several packages were installed at once (e.g. glibc v131 and v137), and it wouldn't do anything until that was fixed (it wouldn't even attempt to fix the problem until it was fixed... "yum-complete-transaction" just complained about things being inconsistent and wouldn't roll back or forwards - "transaction" my arse). Ended up fiddling with "rpm --nodeps" to fix individual packages after drawing out the dependency graph by hand :(

I've had problems with individual .deb packages before, but I've never broken the whole package management system quite like that :P


> "transaction" my arse

That's not something that can be made reliable with either rpm or apt. The only options for reliable upgrades are 1) the Nix model or 2) filesystem transactions/snapshots. Notably, yum has an official plugin for making an LVM/Btrfs/ZFS(?) snapshot before committing each transaction. I imagine there's something similar for Debian/Ubuntu.
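
On the yum side that plugin is a one-line install (package name as shipped with RHEL 6 / Fedora; verify it exists in your repos before relying on it):

    # snapshot LVM/btrfs volumes before each yum transaction
    yum install yum-plugin-fs-snapshot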


Right. RPM can get hosed pretty easily.


package-cleanup --cleandupes


I did that, it bailed out because it wanted dependency problems to be fixed first (and the dependency fixer bailed out because it wanted the dupe problems to be fixed first)


Indeed, but even with yum the process is unsavory and the number of sources is surprisingly limited. I ended up just compiling from source because it was faster than trying to deal with RPM dependency problems.


What do you mean, the number of sources is limited? The distros come with their own repos with a large selection of software.

When you go outside of that, you need to have a specific reason...like going to RPM Fusion to grab things like non-free codecs.

I'm just guessing, but it sounds like you're going to rpmforge and downloading a random RPM (maybe it's not even for your distro), installing it, and then trying to find all the other dependencies, and in the end you've got Fedora, CentOS, and SuSE packages all force-installed on the machine and maybe whatever you were trying to do worked. That tends to be what people who bitch about yum/rpm are doing: they're doing stupid things.

RPM/Yum works just fine for 99+% of its millions of users.


For me in past years on Red Hat, Fedora and SuSE the RPM repos have a way of corrupting themselves almost irreparably. On Ubuntu (Debian based) if things go wrong I've always been able to run a few commands and get things nice and stable again.


Indeed.


Always nice to get -4 on a comment harshing RPM. HN folks are nutty.



