Red Hat is pretty good at being Red Hat (redmonk.com)
152 points by ridruejo on Sept 22, 2017 | 79 comments



I've been hearing the "boring" comment a lot lately, and I totally agree. I don't want the exciting filesystem (btrfs which just took me for a ride), or the exciting programming language/library that will strand me in 6 months with multiple man-years of work.

No, what I want is boring. That means everything works as I expect it to, and it doesn't create drama every few months when the cool kids decide that the old way of doing things isn't cool and rewrite a library I've based my application on or change the syntax of the compiler so my code doesn't work.

A few years ago, I started challenging people at work when they were trying to pull in the latest language/library/toolkit, with a simple question: "How does that help our customers?" Sure, I got plenty of the 2nd/3rd order arguments about how making the dev team happy/more productive would result in better feature development/etc. But I've seen these kinds of changes enough to know that usually these 2nd order effects are crushed by the unseen problems that _ALWAYS_ seem to crop up after you're already committed to that new technology. It's truly rare to have a new technology that is so much better than the last that the cost of switching pays for itself. That doesn't mean it doesn't happen, it just means that the immature crap everyone is yammering about is likely not that technology. Give it a couple years and if it turns out to still be around (and people are still actively switching to it) then it's probably good. This mindset has produced a long list of "successful" products that pay the bills and make people happy without making a lot of noise and crashing/burning regularly.

Bottom line, I will take the old "cruddy" technology with the long list of known problems over the new cool one with the long list of unknown problems. At least part of that seems to be what RH provides for the open source community. The old guys that actually fix the bugs, rather than switching to some cool new product.


The thing that makes distributions like Red Hat and SUSE "boring" is not that we only do maintenance work, it's that we've effectively solved the problem of maintenance and release engineering -- though of course there are still improvements to be made (and on the SUSE side, openSUSE Leap and the plans for SLE15 are quite interesting developments).

We do a lot of work on exciting and interesting projects, some of which crash and burn as well. But part of the benefit of working for an enterprise distribution is that generally if some project is not ready to ship yet, we don't ship it. Having an out-dated version of some package is fairly common, and as long as we can do the maintenance work on it, it doesn't really matter. So we have a lot of time to improve upstreams to be stable enough to ship.

[I work for SUSE.]


> The thing that makes distributions like Red Hat and SUSE "boring" is ...that we've effectively solved the problem of maintenance and release engineering

Hmm.

There are a lot of developers that could probably really do with some exposure to this kind of environment.

I could definitely see myself hugely appreciating, say, a 6 month stint doing... something relating to the nitty-gritty of keeping a distribution going, learning what the pitfalls are, what the unintuitive stuff is, etc. Basically being some key point-person's assistant (the one who deals with major disasters), or something.

It would be even better if some of this work could involve direct enterprise support: learning what customer requirements are, getting a sense of scale, etc. I can see this kind of information (enterprise support) being the most valuable - actual bespoke software would likely be a major focus of such training, and this would teach would-be developers about the disasters that can happen 5 years or 10 years on from "I have an awesome idea", "wow it's so elegant", etc. (Continuing from the previous point, perhaps there could also be an opportunity to be the assistant to the on-call person who sometimes wakes up at 3AM.)

You could position this as something for people looking for DevOps jobs, and maybe make some kind of cert for it as well (this would likely help with/create traction for/be needed for enterprise-level interaction).

I personally would have zero issues with signing a small book of NDAs to get this type of training myself. The NDAs would just cover how I learned, not what I learned, and in this context the how would be largely irrelevant (but at the same time incredibly invaluable). FWIW, I'd personally greatly benefit from a good debrief at the end of the course to help me find all the little ways I might accidentally leak something (eg a few too many words fall out before I stop myself etc) and learn to catch myself quickly.


I am super interested to see how the SUSE stemcell and rootfs work turns out, now that SUSE is contributing to Cloud Foundry.


Agreed. I would add a second question to your first: What are the risks of implementing this into production? Get them thinking as a matter of habit about both of those in tandem - what's the customer benefit, and what are the risks of doing it, and does the former justify the latter?

Also, all programmers have a burning desire to build stuff from scratch, which is part of what's driving yours to incorporate cool new stuff like this at a risk to your revenue. No can do. But it may be worth exploring side or 20% projects specifically dedicated to building stuff from scratch. Get their itch scratched on non-revenue projects that serve as potentially useful on-the-job training and tech evaluation, while possibly getting something of value out of it.


The burning desire to build from scratch is simply because it is easier and because it tends to be more verbally valued by management than working on older products, which tend to be snarked at as mere maintenance.

The moment management also praises people working on less-new things and gives them appropriate salaries and autonomy, that burning desire changes - except in the few people who are truly, naturally like that. It is a win-win, because the remaining people tend to be truly better at cutting-edge tech problems.


I'm starting to have doubts about this line of reasoning. It just doesn't explain enough of the risky behavior I see.

I'm starting to think it might be the same thing that gets young developers to put up with 10+ hour days. That feeling you get when you lose yourself in a problem for hours and hours.

Being in a Flow state is seductive. Nobody gets that feeling when looking through the bug database or knowledge base, or hunting through the source of an OSS module trying to figure out how to ask it the right questions. But they do get that feeling when trying to rewrite it themselves. All that open space to run as fast as you can... in the wrong direction.

It's thrill seeking. BASE jumping for programmers.


Programming is based on hero worship. Anyone who makes even half an effort to better themselves as a software developer will encounter famous names from the past - those that "made a difference" by creating languages, operating systems and algorithms. Keyword here is "create". This is not an industry that celebrates maintenance programmers. There's no glory for the guy that spent 20 years keeping the system that your banking transactions depend upon running.

So naturally young programmers will seek to be the next Ritchie, Torvalds or Knuth, and to do that you gotta create something new. Gotta change the world.


As a fairly young programmer, I think it's not necessarily hero worship (though that does play a part). I think it's because a lot of young programmers define their identities in terms of what they do. This is especially prevalent in the free software community. If you're going to define yourself by what you work on, it doesn't seem very fulfilling to just keep an existing project running without having the freedom to make drastic changes that would improve the project.

It should be noted, though, that the above points about Red Hat and "boring" (as much as I hate calling people's work boring) don't sit right with me. I know quite a few people who work at Red Hat on very interesting technologies. At SUSE, I work on fairly interesting technologies as well. The key aspect that makes SUSE (or Red Hat) "boring" is that we have effectively solved the problem of release engineering and maintenance (though in recent years SUSE has been improving on it further through openSUSE Leap and the recent plans for SLE15). The funny thing is that if upstreams could do their own release engineering correctly, SUSE and Red Hat would have a weaker value proposition.


The "boring" comment isn't meant to denigrate people's work. Rather, it should be taken as a complement for a job being done well. In fact I've heard the "boring" comment from a couple RH employees myself. So, I think they understand what is being said, rather than taking it in a derogatory manner.


You think that Torvalds or K&R are famous because of their creation and not for the long-term maintenance of their projects? That's pretty shallow thinking in my opinion. Yes they created things that we use and love, but we use and love them because those creators took years of their lives and dedicated them to the maintenance (both technical and political) of their projects.

I for one admire them for their dedication and vision far more than for their creations and wish more programmers would try to be like them.


There are some wannabe-heroes, but they aren't so widespread.

The vast majority are just being pragmatic. Trendy toys and 'designed and implemented new X' on a resume look so much better for a career than 'maintained something old so customers were happy'. Of course, there are some technologies that help you make pretty good money even while being ancient, like COBOL, APL, AS/400 and IMS. But for how long? So it seems rational to move forward instead.


That said, if you've got COBOL, Fortran, etc on your resume you're pretty much guaranteed to get asked about it in an interview.


Maybe a discussion of mainframe transaction processing and the COBOL features that helped implement what we know these days as 'Cloud Functions' or 'AWS Lambda' might be a little bit more fruitful than reversing linked list pointers on a blackboard.


I love the feeling of being lost in solving a problem and how it can extend for hours in any particular day. It can happen with both old tech and new tech, old project or new project - the only requirement is a large task. (Impossible in an agile environment with tasks capped at 4 hours.)

I haven't seen people in that state for weeks on end. It explains peaks of productivity followed by drops of productivity. It does not explain long-term crunch.

And I mean, programming is not nearly as risky as BASE jumping. You don't die coding in the wrong direction and you still get paid for it. It is annoying to be wrong, but imo, we are overusing the word risk.


> Also, all programmers have a burning desire to build stuff from scratch,

I don't think this is true at all. 80%+ of software work is maintenance related. People who have a burning desire for green field development won't last long so they are few and far between. In fact, this is one of the biggest differences between real projects in industry and fake projects in CS undergrad education.


Amen.

Sadly it seems in a sense RH is funding both sides at the same time, as many of their employees are the very source of the "exciting" developments happening in the Linux ecosystem these days.

Thing is though that in a sense the Linux community has painted itself into a corner. Thanks to the likes of RHEL, Ubuntu LTS and Debian Stable, there is little reason for the upstream to be anything but "exciting".


Out of curiosity, what are the exciting developments happening in the Linux ecosystem these days?


Checkpoint-restore of processes (CRIU) is a fairly cool technology that is quite unique to Linux, and has gotten a lot of work in recent years. eBPF (both for seccomp and tracing) is also quite exciting in terms of its applications. User namespaces are also allowing for developments such as rootless containers. There's also some interesting stuff for trusted computing happening with IMA and TPM developments in general.

In user-space there's even more interesting developments, you just have to look a little closer. ;)


> Checkpoint-restore of processes (CRIU) is a fairly cool technology that is quite unique to Linux

DragonFlyBSD has had process checkpointing for more than 10 years.

https://www.dragonflybsd.org/cgi/web-man?command=checkpoint&...

While the functionality is admittedly & unfortunately quite limited, there are no technological reasons this couldn't have been furthered in the meantime given sufficient funding/dev time, etc.


CRIU has so many more features and is so much more powerful that I don't think it's a fair comparison at all to make. You might argue that you could have implemented all of the features on another system like DragonFlyBSD (and in fact I would argue it would've been easier than doing it on Linux) but I don't see why that statement matters. What makes CRIU "unique" is that it does have those features and it is used on production systems for live migrations. I remember there used to be very old Linux tools that did the same thing in the same timeframe as checkpt, but they weren't widely used either because they didn't support any of the things CRIU does.

For example, not being able to restore sockets of any form (such as TCP sockets) is a massive limitation that CRIU doesn't have. userfaultfd allows for CRIU to have a slave process that is used as a source of lazy page loading from another machine (allowing for viable cross-host checkpoint/restore with on paper no downtime). And you can use CRIU to checkpoint/restore entire containers (DragonFlyBSD doesn't have Jails, but it looks like you can't even checkpoint process trees).
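
To make that workflow concrete, here's a minimal sketch of driving CRIU from Python through its CLI (assuming criu is installed and you're running as root; the PID and image directory below are just placeholders):

    import subprocess

    def checkpoint(pid: int, image_dir: str) -> None:
        # Dump the process tree rooted at `pid` into CRIU image files.
        # --tcp-established saves established TCP connections,
        # --shell-job handles processes attached to a terminal,
        # --leave-running keeps the original process alive after the dump.
        subprocess.run(
            ["criu", "dump", "-t", str(pid), "-D", image_dir,
             "--shell-job", "--tcp-established", "--leave-running"],
            check=True,
        )

    def restore(image_dir: str) -> None:
        # Recreate the process tree (memory, open files, sockets) from the
        # images, possibly on another machine that received the directory.
        subprocess.run(
            ["criu", "restore", "-D", image_dir,
             "--shell-job", "--tcp-established"],
            check=True,
        )

    if __name__ == "__main__":
        # Hypothetical usage: checkpoint PID 1234, then restore it later.
        checkpoint(1234, "/tmp/criu-images")
        restore("/tmp/criu-images")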


CRIU looks amazing, live-migration of LXC containers :)


There are rough edges to be worked out before we arrive at solidly productionised systems, but I believe Red Hat and Docker are both heading in that direction.

Mind you, ask yourself: why do you need it?

If it's for apps, isn't the point of these platforms to not need to care about individual instances?

If it's for data, don't you have an existing way to manage HA or parallelism?


You’re making assumptions that I’m thinking of Docker, which couldn’t be further from the case. I’m looking at using LXC/LXD to replace full-fat VMs for highly stateful, single-instance services - game servers. Being able to live-migrate an LXC container to drain a host for maintenance is indeed a useful feature to have.


I usually assume all sorts of things[0]; I've become accustomed to living in a world where strict division between state and logic is assumed, even for relatively intensive stuff like trading platforms and very large ticketing systems.

Kubernetes has been more open to absorbing sticky, less-factored workloads than more opinionated platforms are. Red Hat have definitely made that point to folk whenever we and they are competing for business.

I had occasion to look at CRIU relatively closely in recent times. It is, as I said, early days, with annoying corner cases that need someone to patiently fix and polish (we ran into problems with networking, environment variables and process IDs).

In your case it will probably become attractive in the next year or two, because both Red Hat and Docker Inc (which is why I mentioned Docker) are touting checkpoint migration as a feature and I expect both will devote engineering effort to productionise it.

As an aside, if you're using the JVM, there was a really clever paper on coordinating the JVM G1GC and CRIU to greatly improve migration time: http://www.gsd.inesc-id.pt/~rbruno/publications/rbruno-middl...

edit: [0] Which is why as usual I am a dimwit for starting with answers instead of questions.


Yep, every time I hear people get excited about live migration of containers I have to give them a sideways look. If you are running only one instance, then availability must not have mattered that much to you.


Flatpak is pretty damn awesome - a layered container-like attempt at tackling application distribution and Dependency Hell.


I've said it before but basically every other post on HN makes me want to learn Ada out of spite.


Also, people don't remember how the old proprietary Unixes were very boring.

Few things changed between versions and they would tell you exactly what would change between versions so you could be prepared.

(however I have much less trust in a distro running SystemD, it is not stable or reliable)


I thought the same thing. RedHat has become the open source Solaris. I guess that's not such a bad thing. Solaris had some truly astounding engineering -- dtrace and ZFS felt like using software products that had been shipped to me from the future.


Preach it!

Current job shows all the scars of this: every 6 months people hate the current system and want to rewrite something using some framework/lang/library that will really make everything better. The one positive aspect is that my resume now has every single buzzword. The downside is that the current team inherited 5+ years of failure to focus on and address the problem at hand, instead trying to run away from their problems.


> A few years ago, I started challenging people at work when they were trying to pull in the latest language/library/toolkit, with a simple question: "How does that help our customers?"

I take a similar approach. Anyone trying to win my vote has to prove the business value of adopting technology X.


My company (20,000+ employees) just started an engagement with Red Hat this week. We went with them for exactly what the article describes: open source software, packaged in an enterprise (i.e. expensive) fashion to make it palatable. We could have implemented these tools ourselves, but the executives appreciate the support contract and training, and I'm already seeing how we can benefit from the consultants' knowledge and clout.

Red Hat is making a very strong play in the microservices/container space. Openshift is a pretty dang good k8s distro, and they're also doing a lot in the API space (especially with 3scale).


Thanks for this. Make sure such feedback gets back to your TSM, SDM, PM, etc. Email, CSat, or otherwise. We at Red Hat REALLY, REALLY appreciate all feedback. It is how we improve our services and products.


I'm skeptical nonetheless - usually when buying into enterprise software, the vendor tries to lock their customers into a walled garden, where they are the sole provider of upgrades, new software and tailored products.

I have yet to see the enterprise software that takes the step back like Steam did and provides a platform for customers to trade software additions.


I think this comment is misleading; in many situations I believe you can upgrade to upstream if you want, you simply lose RH tech support.

In Steam's case they are not the ones taking the blame when a game crashes (they take image damage if they have too many unplayable games, and they are getting criticism for that right now for various reasons), but the developer is expected to produce a game that works on every platform it supports, as-is.


It's funny how things are done in big shops. It's pretty much okay to spend a good six-digit sum on some tool done by a 3rd party, but just impossible to pay 100x less to someone in-house who could add a couple of features to existing in-house software on a weekend.


This is not how paid software development works (beyond maybe stealth mode/no customers start-ups- if you have no money then you do what you have to). If you have a going business, the trade-off is totally in favor of paying someone not at your company to do this sort of thing.

All up, even a new grad (= the cheapest you will find) in $NotSeattleOrSVButStillUS costs your company about one hundred fifty thousand dollars a year (more in SV or Seattle). You could have one of them spend several months coming up to speed on how to fix weird device driver issues necessary to get the latest kernel patches to run on your DB server, which costs you a minimum of 50k and gives you tremendous risk (what happens if a necessary security patch comes out while she is on vacation? If you train two people that's twice the cost.) Or you pay someone else to worry about that. Red Hat will charge you 2k for patches for a year for a standard server and if it doesn't work, will fix it.
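
A rough back-of-the-envelope version of that math (all the figures are the assumptions above, not anyone's actual price list):

    # Back-of-the-envelope comparison using the numbers from this comment.
    annual_cost_new_grad = 150_000     # fully loaded cost per year, non-SV/Seattle US
    months_on_kernel_work = 4          # "several months" coming up to speed
    in_house_cost = annual_cost_new_grad * months_on_kernel_work / 12

    rhel_subscription_per_server = 2_000   # per year, standard server, as quoted above

    break_even_servers = in_house_cost / rhel_subscription_per_server

    print(f"In-house cost: ${in_house_cost:,.0f}")             # ~$50,000
    print(f"Break-even at ~{break_even_servers:.0f} servers")  # ~25 servers per year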

If writing OS network stacks is part of your company's core mission, go ahead and do it. If the one-sentence description of where your company's money comes from doesn't mention anything in kernel land, having programmers spin up on that is a waste of their time and your money. You cut RH a check and get your programmers to write you something that does bring in money, because every minute they spend porting in patches is a minute they aren't earning you money.

As a side note, if your business plans for your developers to be working on a weekend you work for a terrible business and all the employees should leave and go to a place where they aren't exploited.


> but just impossible to pay 100x less to someone in-house who could add a couple of features into existing in-house software on a weekend.

They don't spend more money for fun. They do it because they've had experiences of many on-a-weekend projects become critical-path nightmares.

Owning your own bespoke platform is a giant money hole. The arguments in favour of doing so are plug interchangeable with "let's write our own operating system kernel / database / programming language / HTTP server / CPU architecture / web framework".

I also think it's a bit rude to assume that Red Hat has hundreds of brilliant engineers just phoning it in. I work for Pivotal and we are flat. out. every. day. writing this stuff, because it turns out that this stuff is a lot harder than it looks from the outside.

And the reason it looks easier is because we did the hard work for you.


> They don't spend more money for fun. They do it because they've had experiences of many on-a-weekend projects become critical-path nightmares.

yet strangely these sorts of straw-man organizations adamantly refuse to open source any such projects, even if they have nothing to do with their business, because of the perceived loss of intellectual property value...


Because strangely, it's really expensive to audit your code, it's really expensive to get sued if you flub it, it's expensive to replace all the bits you can't opensource and it buys you approximately nothing in return except a comment on Hacker News accusing you of some ulterior motive.


if you're a business with capital-M-Money, and you put "hello world" up on github, if someone wipes out their machine running it they're going to sue you. Doesn't matter what license you put on it, you're going to need the lawyers and they cost more money than you would have theoretically gained by putting your crappy "hello world" with the accidental "rm -fr" command in it up on github.


Has this ever happened? There are plenty of companies that are not software companies releasing open source software.


Oftentimes, if the work is critical enough, you could argue paying the 3rd party with a track record, insurance or a strict contract for delivery and support is the logical choice. Sure, someone in-house could do it, but if they screw up or create something that bites you down the line, is it going to end up costing more?

The cynical way to interpret this behavior though is the old "No one got fired for buying IBM". Paying a big name outside company is the safe choice because you can always blame them, but if you decided to go with the in house guy, there's no one else to blame.


You got downvoted for the hyperbole, but there's a lot of truth to this, in my experience. I don't think this is limited to big shops, either. There is a psychological phenomenon where people tend to place more value on things that have cost them more; I've experienced this myself when I've purchased a new vehicle—all of a sudden, it's the best vehicle on the market (it must have been, since I made the decision to buy it). The same thing happens with large business contracts.

As an in-house employee, it is incredibly frustrating when a vendor is automatically more credible than me, even though I've spent years building knowledge of my own company's business and systems. But I have experienced this both at my current 20,000+ employee enterprise, and a previous company where there were 5 employees.


OpenShift is a fork of Kubernetes. Case in point: OpenShift has functionality called a Route. That isn't in Kubernetes. Kubernetes of course went and added something similar called Ingress.

This means that anytime Kubernetes does a release, RedHat has to manage merging all those changes in with their local changes that aren't part of the upstream project.

This is exactly the same sort of thing that RedHat does with their Linux kernels. It's exactly why RHEL has been such a terrible platform to run Docker on: because RedHat is only pulling in some of the upstream changes into their kernels rather than upgrading everything.

This sort of merging is slow, error prone and costly. If you use anything that they've added that isn't in the upstream, it creates vendor lock-in. Sure, OpenShift's Route code is open source, but you can't take that to any other vendor without building everything yourself. Want that new feature in the latest Kubernetes release? You'd better be prepared to wait.
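
For concreteness, here's a minimal sketch of the two objects side by side, written as Python dicts rather than YAML. The field names follow the route.openshift.io/v1 Route and the (then-beta) extensions/v1beta1 Ingress schemas; the names and hosts are made up:

    import json

    # OpenShift Route: a first-class object served by the OpenShift router.
    route = {
        "apiVersion": "route.openshift.io/v1",
        "kind": "Route",
        "metadata": {"name": "frontend"},
        "spec": {
            "host": "app.example.com",
            "to": {"kind": "Service", "name": "frontend"},
            "tls": {"termination": "edge"},   # TLS terminated at the router
        },
    }

    # Upstream Kubernetes Ingress (beta at the time of this thread): similar
    # intent, different shape, and it needs a separate ingress controller.
    ingress = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Ingress",
        "metadata": {"name": "frontend"},
        "spec": {
            "rules": [{
                "host": "app.example.com",
                "http": {"paths": [{
                    "path": "/",
                    "backend": {"serviceName": "frontend", "servicePort": 8080},
                }]},
            }],
        },
    }

    print(json.dumps(route, indent=2))
    print(json.dumps(ingress, indent=2))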

As time goes on it will become harder and harder for them to maintain this fork. If OpenShift doesn't become the dominant Kubernetes distribution then RedHat may lose interest in it and then you'll lose your maintenance of that fork. For that matter, if RedHat loses the maintainers as employees they may lose their ability to maintain this fork.

It's my opinion that anyone buying OpenShift is playing with fire.

The fact that RedHat has failed to get their modifications integrated upstream and has to maintain them itself is a massive failure on their part. This sort of thing was understandable in the past, but we really should expect more of them now.


> The fact that RedHat has failed to get their modifications integrated upstream and has to maintain them itself is a massive failure on their part.

Not really correct. We treated ingress as routes v2. Some of the design choices for ingress (which is still beta, and may change again before reaching stable) were improvements, but others created more problems.

RBAC, most of the authentication code, a huge amount of performance work, podsecuritypolicy, egress network policy, and many others all originated in OpenShift, and then we moved or helped move them into Kube. When we did that, we worked in the community to improve those features. And then we do all the work in OpenShift to make them transparently (mostly) available to the early adopters. For instance, OpenShift RBAC APIs in 3.7 now sit on top of Kube RBAC, and you can use either API. We'll continue supporting that for a long time so that users can switch at their leisure.

It's just what we do.

Edit: templates are the only thing that didn't get upstream, and it was because Helm was good enough at that point that we didn't need it in Kube. We continue to support templates, and they are exposed under the new service catalog work as a broker so users can be completely oblivious to their consumption like we're hoping to do for Helm. Everyone wins.

Edit2: deployment configs also are an example of predating Kube deployments - the fundamental design choice is actually different (DC can fail and inform you something is wrong, deployments just try forever). We continue to add capability to deployments to make them better than DCs, and then add the same improvements to DCs. If we picked a new name it would be DeploymentJob - it can run hooks, have custom logic, and fail. It's not upstream, but will be an extension API soon.


This is easily confirmed. Just look at the companies upstreaming into k8s, and you'll see redhat is dominating. They have people in almost all SIGs, and are very active in the community. Thanks for all the contributions.


The extension API feature is cool, it will make the k8s ecosystem grow more rapidly.

BTW, will all the extra features in OpenShift be ported as extension APIs?


Kubernetes owes its success largely to Openshift and Redhat's efforts. Without Openshift, Kubernetes would just be an interesting POC. Google doesn't dog food Kubernetes. Openshift has since the beginning, and contributed significantly to K8s as a result of actual production usage. Just take a look at the top contributors and you can see the kind of contributions of the Redhat guys.

While I sort of agree with you on RHEL, I don't think this is the case with Openshift at all. I wouldn't hesitate to recommend it to people looking for a full solution.


> Kubernetes owes its success largely to Openshift and Redhat's efforts. Without Openshift, Kubernetes would just be an interesting POC.

How did you arrive at this conclusion? Curious to know more about RedHat's role in this.


Red Hat have done a lot of the productionising and packaging and got it running at a lot of companies.

I don't fully agree that Red Hat has more ownership of the success of Kubernetes, though. They may have been necessary, but by no means sufficient. The aura of Google has probably had far more importance in the momentum to date.


CNCF owns the Kubernetes project. Google provides a lot of resources, and so does Redhat. Both are heavily involved in k8s, and both contributed significantly to its success.


Based on my involvement w/ Kubernetes over the last few years, I've always considered k8s co-led by Google and Redhat. It's not hard to find this out for yourself - just take a look at GitHub and the mailing lists, it's all open. K8s changed dramatically with Redhat's involvement.


It is important to note OpenShift predated Kubernetes. RH reimplemented the underlying platform by adopting Kubernetes only a few years ago.


That is a fair point. I'm only talking about the "Kubernetes" implementation within OpenShift. It's modified from the upstream one and includes things that aren't part of upstream Kubernetes.


Routes were created about a year before ingress.

Part of what we do is give users a path of adoption. For instance, routes will continue to work forever. The openshift routers have almost every bell and whistle possible for routes - and we've recently added ingress support. We also took all of the security features of routes and applied them to ingress - for example, if two different namespaces ask for the same hostname with a route, the oldest one always gets served. We also adapted the security rbac around routes to ingress, so you can set a role that controls whether end users can use custom host names or custom certs. We also are in the process of adding all of our extended cert validation to the router for both routes and ingress, so if a user puts in a bad cert other users aren't impacted.

Basically, you can use vanilla Kubernetes if you want. But openshift is a sundae with sprinkles. Come for the ice cream, stay for the toppings?

Edit: also, every add on in openshift is either something that will eventually be in Kube, or an extension to Kube. Red Hatters are the ones who added extensibility to Kube (with help from others) like api extensions, CRD, initializers, web hook extensions, binary CLI plugins. We did that so that OpenShift can extend Kube to solve real user problems, and also so that everyone else in the ecosystem can do the same.
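
As a small illustration of the CRD extension point mentioned above, registering a hypothetical new resource type looks roughly like this (expressed as a Python dict; the group and kind are invented, and the apiextensions API was still beta around this time):

    # Hypothetical CRD registering a "Widget" resource type with the API
    # server; once created, `kubectl get widgets` works like a built-in type.
    crd = {
        "apiVersion": "apiextensions.k8s.io/v1beta1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": "widgets.example.com"},
        "spec": {
            "group": "example.com",
            "version": "v1",
            "scope": "Namespaced",
            "names": {
                "plural": "widgets",
                "singular": "widget",
                "kind": "Widget",
            },
        },
    }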


My understanding is that OpenShift is more of a superset of k8s vs. a fork.

I don't think the situation is going to be as bad as you are implying.


[Full disclosure: I work for Red Hat as an OpenShift Consultant]

The truth is a bit more nuanced, as Kubernetes and OpenShift are actually made up of dozens of projects and integrations. Our company contributes (as upstreams) to the following in support of our OpenShift Enterprise offering: Kubernetes, the Linux kernel, HAProxy, Jenkins, Hawkular, Heapster, Cassandra, Elasticsearch, FluentD, Kibana, JBoss, Tomcat, Apache, Ansible, Go-lang, and probably many more. We do almost all of our work completely in the open (our docs, container images, templates, examples, blogs) via GitHub and Trello. In fact, you can run just about the same OpenShift (officially called OpenShift Container Platform (OCP or OSCP) or OpenShift Enterprise (OSE)) we sell by using our upstream project for it, OpenShift Origin [0][1]. If that looks complicated, you can try minishift [2][3] to start, which also has an upstream Kubernetes counterpart in minikube [4].

In terms of superset vs. fork: it's not quite a superset because almost everything we commit to OpenShift gets committed to Kubernetes and/or vice versa. You can almost always say if it works in OpenShift, it works in Kubernetes; if it works in Kubernetes, it works in OpenShift.

It's not really a fork either (as we often say, "Best idea wins!"), and so our people (including management!) try to make sure we are adding value to Kubernetes so that our customers & the community can then extract that value. OpenShift/Kubernetes metrics is one area that affects me & my customers where we're following the community's lead and implementing new developments in OpenShift as Tech Previews when appropriate. Our code is not diverging from Kubernetes as much as you might think in supporting some of the "enterprise" features we've added.

So, I would say OpenShift is a Distro of Kubernetes in the same ways RHEL, SuSE, et. al., are GNU/Linux Distros. You might say Kubernetes provides the "kernel" for a modern Data Center (compute resource scheduling and management, internal/external data structures and interfaces to use such compute resources). OpenShift is intended to help provide everything else you expect your data center to do for you or to support your Application Development in Java, .Net, node.js, Ruby, Python, PHP, Perl, etc (UI & CLI management interfaces, simplified build and deployment processes (S2I), Jenkins integration, external logging integration, external monitoring integration, sample 12 Factor Applications, etc.). We partner with companies when they want help bringing on their storage systems, frameworks, databases, applications, etc., just like you'd expect when companies provide drivers for their databases, hardware, or storage systems for OS kernels.

[0] https://www.openshift.org/

[1] https://github.com/openshift/origin

[2] https://docs.openshift.org/latest/minishift/getting-started/...

[3] https://github.com/minishift/minishift

[4] https://kubernetes.io/docs/tasks/tools/install-minikube/


A fork can imply many things, but all supersets are forks. I realize that RedHat doesn't like calling OpenShift a fork, but it fundamentally is. The only people that will maintain their additional functionality is RedHat themselves.

If you manage to avoid all of their added functionality then you can avoid the vendor lock in. But you can't avoid the delays in getting upstream features/bug fixes.

If Kubernetes was super stable and you weren't likely to want any of that stuff I would agree it probably wouldn't be a big deal. But it's not. Significant improvements are coming in every single release.

RedHat is managing a decent pace in keeping up with Kubernetes for now. But all that it takes to upset that apple cart is for something to get added in a way that breaks their additional functionality.

If RedHat doesn't have the clout to get their improvements in upstream, why should I presume they have the clout to ensure their additions won't be broken by other changes?


The thing is some people want super stable, improvements are great but the ability to plan is also great. That breaking change that prevents Redhat from integrating Kubernetes into Openshift is just as likely to be a breaking change for Company X using Kubernetes

That's why they pay Redhat, so they can plan.


You're sort of twisting GP's point by saying that because K8s is not super stable, Red Hat will provide stability via OpenShift. I mean yes, Red Hat will certainly provide a more stable version, but it's a double-edged sword, because K8s might leave OpenShift in the dust with some incompatible change, and then a few years down the line you could end up with serious buyer's remorse when K8s is an order of magnitude better and OpenShift is left in some well-maintained purgatory.


I can assure you that openshift will always be Kube++. It's just a Kube distro. The fact that today you need to compile in those extensions is a detail that we and others spend most of the time addressing.

Odds are, most of the things you use in Kubernetes are there because someone working on OpenShift wrote, tested, performance tested, and stress tested them in production.

When LTS OpenShift is a thing, there will still be an OpenShift trucking along right behind the latest Kube. We always try to strike the balance between being on the bleeding edge and making sure end user clusters continue to work. In fact, a lot of the bugs in patch releases are found by the teams working on openshift and opened upstream right away. But an OpenShift user never sees that, because we only ship once it's stable.


You've got a mighty big crystal ball there then.

Sarcasm aside. A major part of the allure of Kubernetes is that it's not a single vendor project. It's unlikely to die if something happens to RedHat. Say someone like Oracle comes along and buys you guys. But that's not the case with OpenShift.

Maybe you're right that things will continue as is and OpenShift will always be better and that RedHat will always maintain it.

But not particularly a risk that I think is worth taking.


Yup, when I talk about stability I'm not talking about stability of function, which I presume is what people buying OpenShift want. I'm talking about stability as in few changes. Anyone doing things in the Container space shouldn't be expecting lack of changes, even if you're using OpenShift.


You could equally end up a few years down the line with Kubernetes being an order of magnitude better but also having required two orders of magnitude of work (over those years) porting your company's infrastructure to it.

or a few years down the line the zeitgeist has moved to Locutus and no one but Red Hat is driving anymore.


I don't understand why people are so fork-a-phobic, and anti-patching these days. Distributions (like Red Hat and SUSE, but also Debian, Ubuntu, Fedora, openSUSE, etc) have been doing this for decades. Forking a project is something that is unique to the free software community, and we're doing ourselves a disservice by not taking advantage of this freedom. Forking a fast-paced project like k8s is fairly ambitious, as you've said, but that doesn't make it a bad idea from the outset.

> If RedHat doesn't have the clout to get their improvements in upstream, why should I presume they have the clout to ensure their additions won't be broken by other changes?

That's not how free software development or maintenance works. Believe it or not, the engineers at Red Hat (or SUSE, Canonical, etc) are actually pretty clever. An upstream not accepting a change can be for any number of reasons unrelated to the technical aspects of the patch itself. It could be a conflict with their roadmap or scope, it could break something else they're working on that is of higher priority, it could require more discussion on whether the use-cases can be solved by existing features, it could require further research into whether the proposed feature is the best way of solving the problem, etc. I've seen all of those reasons (and more) for some of my changes not being merged upstream (and I also maintain some upstream projects, so I've used those reasons before too). Not to mention that usually "no" in an upstream review means "not yet, I'm still thinking about it".

If an upstream rejects a patch, but a customer needs the patch in order for them to be able to effectively use the project, then Red Hat (or SUSE, Canonical, etc) are entirely within their rights to add that patch to the packages they ship. And that's the correct thing to do. Upstreams generally are not good at release engineering, so in order to ship hotfixes a distribution would have to patch the project anyway. What makes a feature patch any different? Not to mention that Red Hat (or SUSE, Canonical, etc) also provides documentation on how to migrate to the upstream feature (if the upstream feature ends up being different).

Kernel development has worked this way for more than 25 years, with distributions carrying patches that eventually get pushed upstream asynchronously (usually with some improvements through discussions that make them more generic for all kernel users). While stable kernels have made the massive patchsets much less of a burden to maintain, this model still is in practice today.

[I work for SUSE.]


I'm against forks like OpenShift because as an upstream maintainer on a major open source project, distribution patches caused us nothing but headaches. Their well meaning patches almost always caused problems. Users routinely ended up at our doorstep with the issues they caused. We then either told the user to go bug their distribution, spent time digging into the issue, or got lucky and the distribution maintainers actually paid attention to our lists. The latter one was actually pretty rare.

You presume I don't have experience with open source projects and don't understand why things might not be accepted, which isn't really true. I used clout as short hand for doing the work to actually get changes into upstream. I'll admit I probably chose a poor word there.

I really can't fathom why Routes wasn't just adopted into Kubernetes directly instead of a complete rewrite of Ingress being added. I don't know all the details, but when I went digging what I found was RedHat folks explaining what they did and Google engineers writing Ingress.

Please also understand, I totally agree that RedHat and other packagers are fully within their rights to patch things. I think they really shouldn't. They cause at least as many problems as they solve in my experience. But it's also my right to say that I don't want to use their patched stuff.

I don't see the situation with the Linux Kernel as a success story. I see it as a failure. It's downright impossible to tell someone if something is going to work with their kernel because the version numbers are utterly meaningless since the distributions patch all sorts of things in and out. I have been running Linus' mainline kernels for the last several years and I've been broken exactly once and even then only very slightly.

I tend to think that if distributions avoided patching unless absolutely necessary and worked with upstream to get things included first we'd all be a lot better off. Those reasons why the patches weren't accepted quicker would get dealt with before things were in the hands of users.

But frankly the distributions' incentive is to create value for themselves, not to help the project along. Helping the project is utterly secondary to any value creation they are doing for themselves. In fact you give an excellent example. You say that upstream is terrible at release engineering. So rather than just applying a patch to a distribution, it's really beyond me why distros don't work with the upstreams to improve their release processes, if that's the problem with staying with a pure upstream.

That's not to say that distros don't create any value for the overall community. It's just my opinion that they don't create as much value as I think they should. These companies are taking in massive amounts of money off the open source projects. Sure, in some cases they have maintainers/contributors to upstream projects on their staff. But those are usually the cases where what I'm talking about isn't what is happening.

Now all of that sounds like I don't think distros should ever patch. Which is probably an exaggeration of my position. I think there are times when it's needed. Security fixes, unresponsive upstreams, etc...

But adding completely distinct functionality. I don't want to touch that with a 10 foot pole.

Edit: Forgot to say, your comment about the migration to upstream feature bit. I flat out asked RedHat how they planned to get people to migrate to Ingress. Their answer was they didn't have a plan.


Hrm, we've had a plan for a while, so whoever told you that may have been misinformed (sorry about that, not everyone always catches up).

https://github.com/openshift/origin/blob/master/pkg/cmd/infr...

Is in 3.6, and other improvements will continue to be added. The one downside is that you have to grant the router proxies access to secrets, which means if someone compromises your edge ingress controller they can root your cluster unless you are very careful about only giving the routers access to exactly the secrets they need. That's partially why Routes contain their own secrets, so that you can't accidentally expose yourself to a cluster root.

This sort of stuff is the details the OpenShift team spends most of its time on. Kube will eventually get most of this. But most people are running single tenant Kube clusters and so in Kube we spend more time focusing on making that work just right. It's pretty difficult to build a fully multitenant Kube setup without making choices that we're just not ready to do in Kube yet.


They do have the clout to get their improvements upstream

http://stackalytics.com/?project_type=kubernetes-group&metri...


You might be interested in CNCF's Kubernetes Software Conformance Working Group, which has been working closely with the Kubernetes architecture and testing SIGs and providers of most of the Kubernetes distributions (including Red Hat) to ensure interoperability.

https://www.cncf.io/certification/software-conformance/


Redhat might be doing well but many open source projects clearly aren't. There are routine SOS posts here about projects in dire straits. So how much of this revenue does upstream see?

Something as critical as Gnupg was recently in trouble and got funding from Stripe and Facebook.

Companies like Redhat acquire projects or hire project leads for influence but few really seem to 'support' open source. Docker is a VC funded company that was entirely built on open source and has made many acquisitions so they had the money but how much support did they give the LXC project, Overlayfs, Aufs and all the other open source projects they use/used?

GitHub uses Redis, but never mind support - they never even bothered to tell the author. Do they even support Git in any meaningful way?

If open source is used to provide free software to startups who then promptly forget about it, we will get more SOS posts and interesting projects will dry up.

We will then be left with companies like Redhat and others building open source software requiring large teams which cannot be easily replicated by the open source model.


Redhat supports the community by providing tons of maintainers/maintenance of projects. So, they may not support $PROJECT with dollars, but for a few thousand projects they base their business on, they provide hundreds of thousands of dollars a year in engineering time fixing bugs, or updating things which are bothering their customers. Particularly on all the unsexy portions of the OS that everyone else has moved on from.


Who do you think maintains overlayfs and user namespaces for containers?

Red Hat :)


'Maintenance' sounds like a takeover without compensation. Surely with $3 billion in revenues Redhat should have an open source fund to sustain the people whose code they use? Shouldn't some money find its way back to all the open source projects?


Red Hat (and SUSE, Canonical, etc) also employ a lot of upstream maintainers to continue doing their job. That's a much more sustainable model than having companies donate to upstream.


Redhat is technically doing the enterprises a big favor by adding support on top of tested open source technologies and making the applications enterprise-ready. Redhat (IBM, and so on) is also doing the open source community a big favor by making open source technologies ready to compete with closed-source enterprise application providers, such as Oracle, Microsoft and so on.

If it is about packaging open source for profit, then Redhat has a lot to learn from Google. There is nothing that comes close to Android OS and Google's strategy to make the use of the term "open source" practically vague.


I work for Pivotal as an engineer.

Red Hat was my first distro, back in '97. I'm grateful.

I wasn't around for the discussion about why Cloud Foundry originally standardised on Ubuntu, but if you're Red Hat, you're hardly going to bet on that. Red Hat's lifeblood is RHEL, anything which threatens it is existential.

And PaaSes threaten RHEL, unless Red Hat controls their own.

Red Hat deserve credit for making a big bet on Kubernetes so early on, when the picture was by no means clear. It's definitely put them well in the hunt against us as k8s picked up momentum. I think Openshift 3 plays to their historical strength, which is in packaging upstream and supporting or feeding back to the upstream.

The article mentions Kubo. We're adding PKS (based on Kubo), jointly developed with Google (Kubernetes, Kubo) and VMWare (Harbor, NSX).

In Labs, from the balanced product team up to the C-suite, we can transform the way you build software. A lot of companies come to us to learn how to thrive in the world of disruption.

So it would've been downright silly not to spot one happening to us and ... uh ... pivot.

(Disclaimer: Nothing in my comments should be seen as official comment by someone who decides anything of any consequence. Not forward-looking. Consult your lawyer, doctor, dentist and gardener before taking this comment as medication.)


As the author of the original post above, I am gratified by the quality of the commentary. Thanks all!



