SUSE to Acquire Rancher Labs (zdnet.com)
392 points by flyingyeti on July 8, 2020 | 152 comments



Well congrats to the team for the exit. But I am really hoping that this will continue the great momentum that Rancher has.

I've come to quite enjoy Rancher products. I think the work they are doing is fantastic, lowering the barrier to entry into Kubernetes, especially for on-prem/bare metal. Just deployed 4 production RKE clusters on bare metal, and we're also using K3S.


One more good experience. I created a cluster of dedicated servers (64 cores, 6TB of SSD storage, 256GB of RAM, 1 GPU) using Rancher, for about 250 Euros/month. This would cost at least 2k in a cloud such as AWS. There is a post about how I dealt with persistent storage here (https://medium.com/@bratao/state-of-persistent-storage-in-k8...)

It really transformed my company's DevOps. I'm VERY happy. If you can, use Rancher. It is just perfect!


We're in the same camp with a cluster ~2x as large for Squawk[1], and it would cost us many multiples in the cloud (excluding our TURN relays, which aren't k8s). However, the one killer feature that the cloud still has over self-hosted is the state layer. There is nothing that comes close to the turnkey, highly available, point-in-time-recoverable database offerings from the cloud providers. We're running Spilo/Patroni helm charts, and we've really tried to break our setup, chaos-monkey style. But I'll admit I'd sleep better leaving it in Amazon's hands (fortunately, with all the money we save, we have multiple synchronous replicas and ship log files every 10 seconds).

[1] Shameless plug Squawk: Walkie Talkie for Teams - https://www.squawk.to

_EDIT_ I've just read your blog post. We went the other direction and have used the local storage provisioner to create PVCs directly on host storage, pushing replication up to the application layer. We run Postgres and Redis (KeyDB) with 3 replicas each, at least one of them in synchronous replication (where supported), and ship Postgres WAL logs to S3 every 10 seconds.
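For anyone curious, here is roughly what that looks like; a minimal sketch with illustrative names, assuming Rancher's local-path provisioner (https://github.com/rancher/local-path-provisioner):

    # PVC carved directly out of a node's local disk
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pg-data
    spec:
      accessModes: ["ReadWriteOnce"]   # node-local volume, single attacher
      storageClassName: local-path     # Rancher local-path provisioner
      resources:
        requests:
          storage: 100Gi

On the Postgres side, the 10-second cadence can be approximated with archive_timeout = 10 plus an archive_command that pushes each WAL segment to S3 (e.g. via wal-g); replication then happens at the application layer rather than the storage layer.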


You can also try databases that are natively distributed with replication and scaling built-in. If you need SQL you have many "newSQL" choices like CockroachDB, Yugabyte, Vitess, TiDB, and others.


Why did you keep your TURN relays out of k8s?


Because we needed geographic distribution so that we don't end up hairpinning our users, and they only run a single service, so the value prop of k8s is much lower. We use Route 53 to do geo-DNS across a number of cheap instances around the world (which is also nice, it lets you pick regions with cheap bandwidth but good latency to major metro areas). We currently have TURN relays in Las Vegas, New York, and Amsterdam, and that gives us pretty good coverage (sorry Asia... you're just so damn expensive!).
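For the curious, the geo-DNS is just a set of geolocation records sharing one name; a rough sketch with made-up zone/names/IPs:

    # One A record per region, all sharing the same name, routed by requester location
    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
      --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
        "Name":"turn.example.com","Type":"A","SetIdentifier":"eu-relay",
        "GeoLocation":{"ContinentCode":"EU"},"TTL":60,
        "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'
    # ...plus a default record (GeoLocation CountryCode "*") as a catch-all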

But all of our APIs sit in one k8s cluster across two datacenters (Hetzner, with whom we couldn't be happier).


Really interested in hosting at Hetzner, as their prices are fantastic by comparison to AWS, Azure & GCP.

I'm particularly interested in what an HA Postgres setup might look like. Assuming you are running some kind of database (whether Postgres or otherwise), what are you doing for persistent storage? Are you using Hetzner's cloud block storage volumes? What is performance like?


Interesting! Is that a single K8s control plane across one cluster? We've gone with fully isolated clusters across 2 data centers to protect against a network isolation incident between them causing a split brain/borking etcd.


Yes the control plane is only in one of the data centers. The other only runs admin services like offsite backups, our development infra (gitlab, etc) and CI/CD.

We could definitely do two clusters and probably should, but the secondary data center has so few services that it wasn’t really worth the extra work.


Oh cool, interesting. Thanks for the overview


Longhorn synchronously replicates the volume across multiple replicas stored on multiple nodes: https://github.com/longhorn/longhorn
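For reference, the replica count is just a StorageClass parameter; a minimal sketch (values illustrative, provisioner name per current Longhorn releases):

    # Ask Longhorn for 3 synchronously-maintained replicas per volume
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-3-replicas
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"   # minutes before a dead replica is cleaned up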

From a first look at the numbers in the colourful table near the end, Piraeus/Linstor/DRBD seems 10x faster than Longhorn 0.8. The article goes into great depth on the (a)synchronous replication options of Piraeus, but doesn't mention that Longhorn always does synchronous replication. I wonder why?

SUSE being all-in on btrfs and Ceph, I wonder if they will allow Yasker https://github.com/longhorn/longhorn/graphs/contributors to continue developing it. At KubeCon EU & US 2019 https://youtu.be/hvVnfZf9V6o?t=1659 Sheng Yang explains how he tried to make Longhorn a first-class citizen of Kubernetes storage.


Longhorn serves a very different use case than btrfs and Ceph, so continued investment makes sense.

Disclaimer: I'm the Rancher Labs CTO


DRBD is really, really hard to use (Ceph as well, though).

Also, performance is extremely dependent on many factors which are not always a given, i.e. drives, network, etc.

For some stuff even a distributed FS is enough, like GlusterFS.


I should have made it clearer that Longhorn is sync by default. Linstor is also synchronous by default, but you can mess with it to make it async in some situations (in reality you allow it to be out of sync).

I'm really rooting for Longhorn. I'm a sucker for GUIs. But in my tests the performance is not there yet.

However, they opened a new epic ticket to focus on performance, and hopefully they will keep improving Longhorn after the acquisition.


You mentioned somewhere that your servers were hosted with Hetzner - are you using their "cloud volume" block storage? Really curious to know what performance is like with this cloud attached SSD storage!


That's a great depiction of the power of one person with proper knowledge!

Get a little bit of money (in comparison to all those shiny great things), build it, wing it, and provide a huge benefit :)


Agreed, Rancher rocks.


Where did you rent dedicated servers?


Hetzner


Yeah I would say similarly. My team is working with Rancher and found their permissions management to be a solid selling point, among other things. And you can terraform 99% of the things you need.


One thing that continuously irks me about K8S is that the bar is so high. Does it really need to be so complex? Does it really need so much mandatory complexity?

Is that complexity needed or do more complex things actually tend to win in certain markets because nerds like knobs?


Distributed cloud computing is complex, k8s provides a solid abstraction based on decoupled reconciliation loops that work together in a common control plane. One of the most compelling facets of k8s is this declarative and extensible architecture.

The collaboration between Service -> Deployment -> ReplicaSet -> Pod -> Container is a great example of how these reconcilers work together.

Yes, it has a lot of knobs and dials, but you don't need to understand them all to get going. Just pick up something like skaffold.dev and you can be productive very quickly.
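For example, the whole reconciler chain kicks in from a manifest as small as this (names/image illustrative):

    # Deployment -> ReplicaSet -> Pods, fronted by a Service
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                                # ReplicaSet reconciler keeps 3 pods running
      selector: { matchLabels: { app: web } }
      template:
        metadata: { labels: { app: web } }
        spec:
          containers:
          - name: web
            image: nginx:1.19
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector: { app: web }                     # endpoints reconciler tracks matching pods
      ports: [ { port: 80 } ]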


Actually K8S itself as a standard is not complex/hard. If you are a developer and user/consumer of K8S, use it! If the cluster is managed by someone else, K8S is great.

It only gets complex when you have to provision & manage your own clusters. That's where Rancher really shines, as it makes it so much simpler to deploy and manage K8s everywhere.


I place provisioning and management of your own clusters in a category I call "installability" or "deployability." It's a fundamental category of UX especially for technical and infrastructure applications.

I once tried to deploy a minimal test instance of OpenStack. Granted this was years ago, but I have been doing Linux since 1993 and I could not get it to run. That's an example of absolutely horrible UX at the deployability level.

K8S is nowhere near that bad but it definitely seems much harder than it needs to be to provision a basic default configuration for a working cluster.


K8S is a lot easier than OpenStack to install, but when comparing something like Rancher to OpenStack, it should be compared to something like OpenStack-Ansible or a vendor version of OpenStack (RIP HPE Helion), which were a lot easier than a bare apt-get install openstack.

K8S has a lot fewer moving parts - a couple of binaries/containers and etcd. The issues start coming up when you go beyond the single control-plane node and want an HA API.
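For illustration, a single control plane really is one command, and HA is "only" a load balancer plus a couple of flags (sketch, endpoint name made up):

    # First control-plane node, API advertised via a load balancer VIP
    kubeadm init --control-plane-endpoint "lb.example.internal:6443" --upload-certs
    # Further control-plane nodes then join with the printed token:
    #   kubeadm join lb.example.internal:6443 --control-plane --certificate-key <key>

...but keeping etcd quorate across those nodes is where the operational fun starts.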


Why not instead compare it to a contemporary competitor - Nomad - which has simplicity as a core value? It has _far_ fewer moving parts than Kubernetes.


I was talking about the GP comparing their OpenStack install experience to something like Rancher, which is not an apples to apples comparison.

- on a side note - OpenStack and Kubernetes are not competitors; they are quite complementary collections of applications that both have their place in a modern open source infrastructure.


My experience with it has always been that it is delightfully simple for the task at hand. There's a wide surface area because it covers a wide problem space (distributed computing), but any individual task has always felt simple and very thoughtfully considered for me.


Eh, I’d say k8s with the help of Helm is about as simple as it can get to deploy and manage large clusters of networked applications. The equivalent done using e.g. Ansible playbooks would be far more complex.
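e.g. a database with replication is a couple of commands (chart and values illustrative, assuming the Bitnami repo):

    # Add a chart repo and install a release with one override
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-db bitnami/postgresql --set replication.enabled=true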

If the complexity seems too much, it’s probably a sign you don’t need k8s.


You can use Docker Swarm. Mirantis has since backpedaled on its plans to deprecate it. It's a great piece of SW if you don't need thousands of containers but rather low hundreds (and if you don't need additional stuff like Istio, operators, etc.).


I'm very glad that SUSE bought it and not Red Hat (or Microsoft).

This might give SUSE more inroads to the North American market, considering it's largely a European player at this point.


I was actually a bit behind on the current SUSE ownership: last I remember was it getting acquired by Novell and entering into an agreement with Microsoft. I was thus confused by your comment.

For those like me, SUSE was sold off again in 2018 after like 5 acquisitions, and has been an "independent business unit" for a while now.


TL;DR: https://www.eqtgroup.com/Investments/Current-Portfolio/suse/

But we are mostly independent now, i.e. we choose the directions ourselves.


> mostly independent

Sounds like something Red Hat could say too...


With RH/IBM, IBM certainly wants to leverage the RH brand and separate it a bit from the IBM brand, while leveraging customers and technologies across the company.

Suse's current owner won't have many synergies between Suse and their real estate business (except that Suse probably rents office space there). You don't do deals like "rent an apartment, get a Suse license for free". Thus Suse, under their oversight, can likely set its business strategy and objectives more freely.


> You don't do deals like "rent an apartment, get a Suse license for free"

Well maybe they should? That'd certainly sweeten the deal for me.


SUSE's owner (EQT) is not doing anything in the IT sector themselves, unlike RedHat's owner (IBM).


SUSE is owned by EQT which is an investment organization. They basically provide the capital and resources to grow a business and then get out when it reaches a target. This is very different from IBM/Red Hat.


Interesting... why would a real estate group invest in a software company?


It's a private equity company.


Why would Red Hat (who created OpenShift) want to buy an inferior competitor?


You may well ask, but you are far, far too late.

Red Hat already bought CoreOS in January 2018, some 18 months before IBM bought Red Hat.

The question would have been more relevant to either of those.


CoreOS was not a direct competitor to RHEL, it was intended to be a "next-gen" rearchitecture of the OS. No one was going to seriously consider using one in place of the other; if you were considering CoreOS, then RHEL was already off the table.


It replaced Project Atomic for them; some parts from Atomic ended up in CoreOS though.


OpenShift is expensive and requires a significant O&M investment. If you want to use standard tools it’d be nice to have a managed standard Kubernetes option without paying for a lot of complexity your teams don’t want.


> OpenShift is expensive and requires a significant O&M investment. If you want to use standard tools it’d be nice to have a managed standard Kubernetes option without paying for a lot of complexity your teams don’t want.

The parent's point though is that this isn't the space IBM wants to be in. They're in the business of selling high margin, enterprise-y stuff that includes all the bells and whistles, so there's no reason for them to gobble up something like Rancher (RHAT's OpenShift solution is what they want to be selling already).


> If you want to use standard tools it’d be nice to have a managed standard Kubernetes option

What standard tool doesn't work on OpenShift? It's certified to have 100% compatibility with Kubernetes, it just adds stuff, doesn't it?


I think there are some things which are either disabled or complicated by policy, not to mention the lag between Kubernetes updates shipping and OpenShift updating, but I was more going at the angle of paying for things you're not using. OpenShift's license costs are enough that you really have to justify it based on those services. The people I know who've avoided it did so because they couldn't justify the price when they mostly wanted Kubernetes but their teams had no interest in going away from their current build tools.


It takes away privileges, which arguably is a good thing, but some things that require root containers won't run. They pass the Kubernetes conformance suite only by removing those constraints.


That's not true at all. You can read their CNCF results yourself, nothing is disabled. And the conformance tooling works around these constraints by defining their own PSPs.


It would help if you provided a link to the CNCF results.

From what I see in https://github.com/openshift/origin/blob/master/test/extende... there are additional policies granted (search for "Disable container security").


Yes, to run tests that root your whole cluster, the test runner for conformance grants “root your cluster” permissions.

I occasionally regret the defaults we picked because people get frustrated that random software off the internet doesn’t run.

That said, every severe (or almost every) container runtime vulnerability in the last five years has not applied to a default pod running on OpenShift, so there’s at least some comfort there.

To grant “run as uid 0” is a one-line RBAC assignment. To grant “run as uid 0 and access host” is a similar statement.
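Concretely, that one-liner is along the lines of (project/service-account names illustrative):

    # Let pods using the default service account in myproject run as any UID
    oc adm policy add-scc-to-user anyuid -z default -n myproject
    # The "and access host" variant is the same statement with the privileged SCC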

https://github.com/openshift/origin/blob/master/test/extende...


And you can do the same for your environment. You can run root containers on OpenShift; it's a setting, not a baked-in compiled choice or something similar.


That is true. It's a tradeoff when you consider turning off SELinux too, however.


OpenShift Container Platform removes the need to build your own platform around Kubernetes, which would also require a significant O&M investment. If you don't want that, there's OpenShift Kubernetes Engine: https://www.openshift.com/products/kubernetes-engine


It's also open source.


Yes, but then you’re still supporting the additional platform components. If you’re using those services, that’s reasonable but if you have other tools you might reasonably want something smaller which doesn’t require you to learn and support things which you aren’t using and which delay upstream k8s releases shipping.


It wouldn't be Red Hat as such... it would have been IBM. And the same question can be asked there with IBM Cloud Private and the UrbanCode suite.

Look at it from IBM's perspective - battling business units are good and one of them will certainly be the best.


... and a potential competitor is no more


You can buy an "inferior competitor" and integrate their feature set into your own products, and end up with a much better integrated solution for your customers, which will strengthen your business.

But switch "inferior" to "superior" and your question makes sense. Red Hat's products mostly suck balls. I can't think of a single one which has been an enjoyable experience to use. People paid for them because they were the IBM of the Linux world.... And now they're the Linux of the IBM world.


Or people just trust Red Hat for always being the "heart, mind and soul" of the community of contributors that makes open source great, and for keeping up the same approach that inspired many others such as SUSE, Rancher, etc... and IBM :)


Red Hat is great - they have done some great things, however -

Heart, Mind and Soul of the Open Source community is possibly a bit of hyperbole.

and - SUSE is (slightly) older than RedHat, so I am not sure you can say RedHat inspired them :)

IBM was inspired by RedHat's earnings (pre acquisition, open source in IBM was .... interesting ... ) and their ability to have a relevant product in the cloud space.


If anyone else is wondering how much: I tried to do some research and found this:

"The companies announced the deal Wednesday but didn’t disclose the terms. Two people familiar with the deal said SUSE is paying $600 million to $700 million."

Source: https://www.cnbc.com/2020/07/08/suse-acquires-rancher-labs.h...


Just enough to pay off their debt


What do you mean? Rancher has raised only $100 million in funding from what I can tell.


Yeah, I did not really understand that comment either. According to the same article they had more than 1/3 of their raised funds still in the bank (1/3 of $95 million).


I see it more as an acqui-hire. SUSE was missing a savvy technologist at the VP level; Sheng Liang[1] balances that out. Besides him, all upper management are ex-SAP with boring strategy, no innovation.

Disclosure: I work for SUSE

[1] https://en.wikipedia.org/wiki/Sheng_Liang


Acquihire usually indicates the company failed to make a sustainable business and you are just buying the talent. I can say that is not the case here. Rancher's business was/is phenomenal on all counts. But regardless I look forward to working with you in the future.

Disclosure: Rancher Labs CTO


That's right, Rancher's business was/is phenomenal, but not extremely necessary for SUSE. On the other hand, skilled people like you all are what will help SUSE in the long term!

Welcome!


Are you saying the upper management of SUSE is boring or the upper management of Rancher is boring? It’s unclear from the wording.


Sorry, at SUSE.


This is great. I've always felt that Rancher was underappreciated in the DevOps world probably because it's deceptively simple and easy to use and we tend to gravitate to complexity. I know a number of companies that have switched to it after trying to roll their own Kube management unsuccessfully.


Congratulations!

K3s is something that I think could be a big impactful product in the kubernetes space.


k3os as well. But yeah, k3s is really something that should be kept alive.


I really liked the idea of RancherOS, but somehow it never quite lived up to its promise. In particular, the need to distinguish between root and non-root containers was surprisingly confusing in practice. It effectively broke the promise of “just worry about docker”.

Has anyone here adopted it over the long term? What made it stick?

Any ideas why SUSE would need/want this?


> Any ideas why SUSE would need/want this?

Momentum and credibility in Kubernetes land. Rancher has more of it. SUSE has a lot of experience in Kubernetes but hasn't gotten much credit for it or, I am guessing, many sales from it.

Disclosure: I work for VMware.


I tried it briefly for homelab use and came to the same conclusion— I ended up feeling like I had a lot more of a safety net with a conventional Ubuntu/Docker/Portainer type setup than I did with Rancher.


Yup. I should have specified that mine was a homelab setup as well (and was several years ago).

It’s a shame because I still feel there’s a gap between docker-compose and Kubernetes. I don’t have k8s-size problems, but I do have well-beyond-docker-compose-size problems.


I can vouch for Nomad, coupled with Consul and Vault. You can start simple, it scales well, and with the recent and ongoing integrations between Consul Connect and Nomad you can go service mesh with mTLS if you want.


Docker Swarm fills that gap pretty well in my experience. The best part is that it's an almost trivial migration to move from Compose to Swarm. Swarm to K8s is not so easy, even with tools like kompose.
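The whole migration is basically (sketch; the Compose file needs to be in v3 format):

    # Turn a single Docker host into a one-node swarm...
    docker swarm init
    # ...and deploy the existing Compose file as a stack
    docker stack deploy -c docker-compose.yml myapp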


Not sure about RancherOS and how much it factored into the sale. It could end up merged/transitioned into some SUSE container-OS offering.

The enterprise K8S business is compelling, especially all the shops using metal/on-prem. I settled on using Rancher and RKE for production clusters just because it was the simplest way to get HA clusters up within minutes without a PhD in K8S.

But I think a lot of the work they are doing on the other parts of K8S are really interesting: K3S, for example, could become very popular for running on IoT and ARM. K3S really put a smile on my face. You just run it and boom, you have a K8S cluster.
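For reference, "just run it" really is the whole install, per the k3s docs:

    # Server: control plane and worker in one binary
    curl -sfL https://get.k3s.io | sh -
    # Additional nodes join using the server's token:
    #   curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -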


SUSE already has a container offering, CaaS Platform:

https://www.suse.com/products/caas-platform/

(Source: I worked on the documentation for v3.)

The partly-SUSE-sponsored openSUSE Project also has a container-centric distro, Kubic MicroOS:

https://kubic.opensuse.org/

So it is already active in this area, and yes, I agree, there's a good chance that RancherOS will end up merging or even replacing this.


K3S is nice, but I have found MicroK8s to be even easier to get set up and configured for the IoT and small ARM server cluster scenarios.


SUSE just bought a golden ticket into the upper echelons of the CNCF. Rancher is also very profitable, way more than their kube offering.


I think everyone and their dog is realizing that there isn't all that much money in the traditional OS business anymore, and are betting on cloud platforms of one kind or another.

Redhat invested quite a bit into openshift, IBM bought them (and mentioned Redhat's cloud strategy in the announcement as one of the major reasons).

Microsoft is doubling down on Azure, now SUSE wants a piece of the pie.

I'm not familiar enough with Rancher to tell you why they chose exactly them, but they had to do something.


> I think everyone and their dog is realizing that there isn't all that much money in the traditional OS business anymore

I wish my dog was as smart as yours; he continues to pay for Windows and runs IE, since no one on the internet knows he's a dog.


RedHat has Openshift, now SUSE has Rancher.


Rancher was the last independent kubernetes distribution (that was company backed) as far as I can tell.

There was also CoreOS, which has since been bought by RedHat, and Deis, since bought by Microsoft.

So now it's been turned back into an OS war: RedHat, SuSE, and Microsoft. This is fitting because kubernetes feels like an operating system for container clusters. After all, operating systems are just resource managers and schedulers, like kubernetes is.

(For those interested, there are several kubernetes distributions that are not company backed and open source. Two of my favorites are Kubespray[1] and Typhoon[2].)

1: https://github.com/kubernetes-sigs/kubespray 2: https://typhoon.psdn.io/


What about KOPS? That's distro independent. I use Ubuntu but others are supported


Curious about the price.

This appears to be the first significant acquisition of a k8s-ecosystem startup. (Remember the Hadoop frenzy.) The price might set the sentiment for the entire market segment for a while.


Heltio or CoreOS both predate this.


Deis too. And Deis was acquired twice, first by Engine Yard and then by Microsoft.


Heltio AFAIK is mostly an educational and consulting business? They don't have a kubernetes product, right?

CoreOS doesn't have a kubernetes product either, right?


I assume that was a typo for "Heptio", who were acquired by VMware in 2018.


CoreOS had Tectonic


Oh right, I forgot that. I think at the point they were acquired, Tectonic was at a very early stage. I could be wrong.


Yes, Heptio, sorry.



Is Rancher profitable? How did it rank in the "Managed-Kubernetes" business? Anyone using Rancher in production for large-scale applications? If so, what's your feedback?


Anyone else using Rancher's Rio, their PaaS offering? It's still early days; we're using it for a microservices project and it seems good so far.


I've used it and liked it. I hope they continue to develop it and get it to be production-ready, but I'm not betting on it.


The real news for me here is that SUSE still exists.


SUSE is still a moderately large outfit. Their last valuation was $2.5b in 2019: https://techcrunch.com/2019/03/15/suse-is-once-again-an-inde...


They're still quite popular in Europe and especially in Germany whereas Redhat has utterly dominated North America.


I’m in the UK, and even though I have a SLED boxed media set somewhere from circa 2005 (YaST was cool, their Xen user experience beat any other distro at the time), it was news to me that they’re still alive.

Anecdotally, I feel like RHEL dominates the UK enterprise linux market as well as NA.


> Redhat has utterly dominated North America

nods head Yup, sounds about right.

SUSE is used pretty widely in the European financial sector, right? That’s what I remember hearing the most about it.

EDIT: I’m referring to non-personal workloads, i.e. enterprise. Pretty much everything I’ve come across in a working environment has been Red Hat based, I’m not talking about personal local or VPS environments. Ubuntu and Debian do have a presence, but not at the Red Hat scale from my experience.


SuSE is big on HPC and big beasts.


Redhat is bigger on both.

Scientific Linux, which was built specifically for scientific computing and HPC, was a rebuild of RHEL (like CentOS).


Luckily it's not winner takes all, nor is it "second place is the first loser".

SUSE may be overlooked from a US perspective, which is why they get much less coverage than they deserve on sites like HN. They are huge in Europe and employ some exceptionally good people, and have been making probably the most solid distro out there since ~1994.


Oh I totally agree with you. Suse Linux 6.0 was my very first linux distribution around 1998 or 1999. I've professionally managed SLES, SLED, and even migrated a netware server to open enterprise server. It is great stuff and I'm glad to see it still alive.

The market is big enough for multiple large Linux distributions (Redhat, SUSE, and Debian^WUbuntu*). The market continues to grow as more things transition to computers.


> SUSE is used pretty widely used in the European financial sector, right? That’s what I remember hearing the most about it.

SLES is recommended for SAP installs, a huge enterprise market.


Afaik one big reason for this is that SAP supports only Suse with on-prem installations.


Support is also available for RHEL and (to some extent, e.g. not for HANA) even for Oracle Linux.

https://wiki.scn.sap.com/wiki/display/ATopics/Supported+Plat...


Every now and then I try a Linux distro other than SUSE (currently, openSUSE) and am left disappointed.

Mostly it is the size of the repo. openSUSE has everything. I guess that shouldn't be too surprising, the Build Service also builds packages for distributions not their own: https://build.opensuse.org

And of course there's really nothing as good as YaST for system configuration.

They also provide a full aarch64 OS for the Raspberry Pi, something which is surprisingly still quite rare.


SUSE 6.4 was my very first Linux distro; my dad had bought the full CD set because he wanted to try out "that Linux thing". We never did get it to work with our modem, but I had fun playing around in KDE and Window Maker.

I've been through a number of distros; SUSE as mentioned, Mandrake, Debian, Gentoo, Arch, Mint, KDE Neon, probably others that I've forgotten. I installed openSUSE Leap 15.2 on my laptop last week and immediately I felt right at home. KDE as the default desktop with no weird modifications nor excessive vendor branding, plus a well thought out default Btrfs partitioning scheme with good use of subvolumes (and CoW disabled on /var, nice detail) and snapshots for easy backups and rollback in case of botched upgrades or config changes.

The only minor gripe I have is their choice to ship Firefox ESR, but I understand why they do it, and it was easy to add the official repo for the latest release version.


Aren't they 2nd place after Red Hat? I'm talking about distributions that are made for enterprises.

I heard for example that Intel was/is a huge user and used it on their computing farm.


I talked with someone from a North American company which runs their product on SuSE, and asked why SuSE was chosen instead of the more popular Red Hat. Apparently Red Hat wasn't providing the premier level of support the company needed, so they went with SuSE.


For me the secondary news was "Telstra has a venture arm".


What's amusing to me is that Rancher's Support Matrix makes no mention of compatibility with SLES or openSUSE: https://rancher.com/support-maintenance-terms/all-supported-...


That's a link to an old version. The latest versions do officially support SLES. I think it was added in the 2.4 release. https://rancher.com/support-maintenance-terms/all-supported-...


Weird. That "old version" is where e.g. https://rancher.com/quick-start/ points (if you click on the "supported Linux distribution" link).


Another great product acquired by a mediocre behemoth. Here's hoping they can maintain enough independence to continue innovating.


I think that this is a bit unfair to SUSE. We (I am an employee here) have a long tradition of innovation, and of failing at communication.

We had OBS, a kind of build-system-as-a-service that guaranteed reproducible builds and traceability of packages before that was a thing. We develop an automatically and deeply tested (openQA) rolling distribution (Tumbleweed), at the same time that others were saying in the forums that this was simply impossible to do. We have crazy ideas like MicroOS with transactional updates, together with good old classics like YaST, Zypper or linuxrc.
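(Transactional updates, for example, install into a new btrfs snapshot that only becomes the running system after a reboot; roughly, with an illustrative package:

    # Changes go into a new snapshot; the running system stays untouched
    transactional-update pkg install vim
    # The new snapshot becomes the active root on the next boot
    reboot

so a botched update is just a rollback to the previous snapshot.)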

We are just a few, but we have tons of contributions in the kernel, gcc, btrfs, qemu, runc, openstack, saltstack, kubernetes and whatnot.


This is fair, and I'll admit it's a knee-jerk reaction to a product I like disappearing into a larger organization and possibly being neglected or shut down, as I've seen happen many times before. I hope it means bigger and better support for Rancher.


One thing I would say about SUSE - they are never mediocre. (nor what I would usually consider a behemoth)

The engineering team inside SUSE are exceptional - they do amazing things, and build really interesting features. The product planning / joined up thinking / visionary direction is where they fall down. As a sibling pointed out they have worked on some really interesting stuff, but (when I was there in any case) failed to pull it together into something that could have been outstanding.


There are many words I'd use to describe SUSE, but neither "mediocre" nor "behemoth" are among them.


Ex-SUSE engineer here. Without a doubt SUSE is the best place that I have worked for in my career of 15+ years, across multiple companies.

They are neither mediocre nor a behemoth. They have some excellent engineers. SUSE worked on Ceph even before RedHat acquired it. OBS and SUSE Studio had envisioned containers even before the market was ready. SUSE has some prime contributors to the Linux kernel, GCC, Linux HA, etc. Greg KH was a SUSE employee once, before moving to the Linux Foundation. Technologically, they are far from mediocre.

In my personal experience, SUSE always felt like they had good engineers but somehow lacked the knack for making enterprise sales or generating positive news. The company being headquartered in the EU and not in California may also be a reason for the lack of news. During my time, they were going through multiple rounds of acquisitions and nothing was stable in the vision of the company. The SUSE management did not feel like a behemoth because they were busy trying to satisfy their investors at Novell, Micro Focus, etc.


This is a good acquisition.

I was thinking that AWS would acquire Rancher to make inroads into multi-cloud and hybrid kubernetes.


I'm doubtful.

AWS has a case of not-invented-here syndrome that is so severe that it doesn't technically qualify as NIHS. For one thing, NIHS requires you to accept that there is such a place as "not here".


Agree. All the while, others are happily studying the ideas/services from AWS and building better experiences.


TIL: Suse is still in business


We changed the URL from https://rancher.com/press/suse-to-acquire-rancher/ to what appears to be the most substantial third-party article.

It's true that the guidelines call for original sources (https://news.ycombinator.com/newsguidelines.html) but we sometimes make an exception for corporate press releases, which tend to use obscure language, omit relevant information, and so on.


Makes a lot of sense to sell now since Rancher doesn't offer a lot of value anymore compared to vanilla Kubernetes and a few Helm charts.


How are you using vanilla Kubernetes? I've tried provisioning vanilla K8s on bare metal clusters and I found it to be a pure PITA, even with Kubespray.

Rancher's RKE is the first installer I've come across that "just works". Run rke up against a cluster.yml and within minutes you have a HA cluster with ingress ready to rock. K3S is also looking quite good.
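For anyone who hasn't seen it, the entire cluster definition is about this much (addresses/users made up):

    # cluster.yml: three nodes carrying all roles, then just run `rke up`
    nodes:
      - address: 10.0.0.1
        user: ubuntu
        role: [controlplane, etcd, worker]
      - address: 10.0.0.2
        user: ubuntu
        role: [controlplane, etcd, worker]
      - address: 10.0.0.3
        user: ubuntu
        role: [controlplane, etcd, worker]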

In contrast I've spent days staring down the abyss of vanilla K8s. If you have good alternatives for launching K8S on bare metal/on-prem clusters, would be game to try.


I stood up a vanilla bare-metal cluster (on the latest Ubuntu) a few days back using kubeadm ... it was fairly trivial to do. I used the NGINX Ingress and it was also generally straightforward (maybe took an hour or two to understand what was going on). Curious what we did differently?

I saw Rancher's offering afterwards and it does look really slick .. the UI is bloody awesome. Wish I could get it for regular kubernetes.


Kubeadm is the "official" and certified tool, not Kubespray. It is easily scripted as well. If you're used to graphical installers, though, and don't like automation, then by all means continue to use Rancher.


Over half a billion USD looks like real value to me; I wish I could make that much with vanilla Kubernetes and Helm charts. Let's be respectful of Rancher's fantastic exit.


They recently went through a Series D round for $75 million, and yet you believe they have half a billion in the bank? They were on their last legs, begging for seed money to keep them afloat.


That's a lower-bound estimate of the acquisition price, based on the info about this deal linked elsewhere in the thread and on the net.


Did they publicly announce the finances? They wouldn't have sold to SUSE unless their investors were getting desperate for some return.


The real value is the install / lifecycle orchestration - vanilla K8S has really marked that firmly as "not their problem" - which is the correct thing for them to do.


Cluster API should hopefully obsolete that problem, sooner rather than later.


Hopefully, but Cluster API relies on something like Rancher (or AKS/EKS/GKE) to do the deployment underneath it - it still kind of outsources the life cycle.


What does life cycle orchestration mean?


Replacing nodes, helping repair broken things, upgrading the control plane, upgrading etcd, etc.


All easily done using Helm and kubeadm.
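e.g. a control-plane upgrade boils down to (version illustrative):

    # See what an upgrade would change
    kubeadm upgrade plan
    # Apply it on the first control-plane node
    kubeadm upgrade apply v1.18.5
    # Then drain each node and upgrade its kubelet, one at a time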


Some of that is easily done with Helm and kubeadm, but not all of it, and definitely doesn't scale as you grow the number and size of the clusters running.


You clearly haven't used or heard of kubeadm, which is a certified installer and lifecycle tool from CNCF.


Rancher only sells support, which includes all things related to Kubernetes, not only their products


This sounds really good. Rancher has some cool products, but to be honest I've been uneasy about the link of their name to cattle farming. I hope they change the name and continue the focus on bringing more efficiency to Kubernetes. For example, I don't think the kubelet of a brand-new, almost empty Kubernetes cluster should frequently be using 4% of a CPU doing who-knows-what. (I've tried profiling it but with little luck - most time seems to be spent on futexes [for Go channels?], and there are also heaps of system calls to gather data from cgroups.)
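(If anyone wants to retry the profiling: the kubelet exposes Go pprof handlers, reachable through the API server's node proxy; something like the following, assuming kubectl proxy on port 8001 and that debugging handlers haven't been disabled:

    # Tunnel to the kubelet's debug endpoints via the API server
    kubectl proxy --port=8001 &
    # 30-second CPU profile of the kubelet on <node>
    go tool pprof "http://localhost:8001/api/v1/nodes/<node>/proxy/debug/pprof/profile?seconds=30"

)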


Why?!? Why did it have to be the worst enterprise distro out there?!? Canonical would have been perfect -________-


Rancher and Canonical were partners for a while, and Rancher was supposed to be their frontend for a Kubernetes solution.

I was talking to both of them about on-prem solutions, and found that Rancher support covered Ubuntu hosts and Canonical support covered Rancher. I was trying to understand which support contract we would need.

But something happened between the companies and they parted ways. And neither party would comment.

In the end, we ditched Rancher support. The price almost doubled from one year to another and covered very little. I was also unimpressed by the technical chops of the Rancher solutions architect they gave us, who didn't seem to know anything beyond the basic documentation on their site.

But we are using Rancher 2 and Rancher 1.6 in production and have been happy with the solution itself.

We are migrating our on-prem from VMware to OpenStack and may stick with Rancher as k8s provisioner if charmed k8s doesn't live up to the sales pitch.

We are a team of 4 people doing on-prem datacenters on 10+ sites around the world, so we need a little bit of plug and play.



