Hacker News
Show HN: Automated Pipelines to Your Kubernetes Clusters (distelli.com)
81 points by arrsingh on Dec 12, 2016 | 45 comments



What's the preferred workflow when continuously integrating and deploying in containers? At what step do you run your automated tests? Do you run them in the same image that will go into staging and then production? If using the same image, do you ship to staging and production with test dependencies included, or how do you strip away test dependencies first?


There are many ways to do this, but we (distelli) recommend the following:

1. Run automated tests during container build (maybe in the AfterBuildSuccess step)

2. Have a single image that goes to both staging and production. Pass in environment variables or configs to operate the image differently in staging or prod

3. Don't include test dependencies in the image, so the image stays smaller. If you're running tests, don't add the tests and their dependencies in the Dockerfile. Instead have your CI system run the tests.
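The three recommendations above can be sketched as a generic CI pipeline config. This is illustrative only (the image name, registry URL, and step names are placeholder assumptions, not distelli specifics): build one image, run tests from CI rather than baking them in, and deploy that same image with environment-specific variables.

```yaml
# Generic CI pipeline sketch: build once, test in CI, deploy the same image
# everywhere. All names and URLs below are placeholders.
steps:
  - name: build
    run: docker build -t registry.example.com/myapp:$GIT_SHA .
  - name: test
    # Tests run from the CI system; they are not installed into the image.
    run: |
      pip install -r requirements-test.txt
      pytest tests/
  - name: push
    run: docker push registry.example.com/myapp:$GIT_SHA
  - name: deploy-staging
    # Same image as production; only the injected config differs.
    run: kubectl set image deployment/myapp myapp=registry.example.com/myapp:$GIT_SHA --namespace=staging
```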


> Instead have your CI system run the tests.

Is "your CI system" typically distelli or another vendor?


We (distelli) do provide our own CI system but you don't have to use ours. You can use your own CI system and kick off the pipeline from that.

Some of our customers use Jenkins while others use one of the many hosted CI systems.


I like the flexibility, since it minimizes the initial configuration work when we already have a CI vendor in place.

To be precise: can distelli work with GitHub's CI API, so that the distelli container build kicks off only after GitHub reports a CI pass?


Yes. Based on the pipeline rules you specify, we can kick off a Kubernetes deployment from any CI server that can make an API call to our service.

So you can use Jenkins or Gitlab CI or any other CI you like and when that system reports a CI pass we can kick off the deploy to your k8s cluster.


Thanks for explaining!


Our strategy is to have two Dockerfiles per repo: the main Dockerfile, and a Dockerfile-test that builds FROM the main one and contains the test dependencies.

During CI we first build from the main Dockerfile and then from the second one. Since the test image builds from the main one, there is no significant overhead and it's usually a very fast build. We run the tests on the test image and, if they pass, we push the main image to the registry. This means that in practice we do not deploy the exact image we test, but it's pretty close. It just requires some discipline to make sure the test Dockerfile adds only test dependencies and nothing more, so the test image stays as similar as possible to the main one.

We use Circle for continuous integration, but distelli looks really cool. Something that Circle and Travis don't give you is a pipeline feature. Having a system that is aware of your cluster technology enables some nice pipelines and better control.
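The two-Dockerfile layering described above might look like this minimal sketch (the base image, file names, and `myapp` tag are illustrative assumptions, not taken from the comment):

```dockerfile
# Dockerfile -- the main (production) image; no test dependencies.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

```dockerfile
# Dockerfile-test -- builds FROM the main image and only adds test deps.
# Assumes the main image was built and tagged locally as myapp:latest.
FROM myapp:latest
COPY requirements-test.txt .
RUN pip install -r requirements-test.txt
CMD ["pytest", "tests/"]
```

Because every layer of the main image is already in the local cache, the test build only pays for the extra dependency-install layers, which is why it is fast.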


I'm using Circle as well. The big problem with Circle is that they don't have a ready solution for caching layers across builds.


Very true. Our builds are very slow, and they could be much faster if Docker caching worked properly. It seems like these issues will be solved in the next major release of the platform. Let's hope!


The general idea is to test and develop on the same image that will go to prod. Environment-specific configuration should generally be injected at container runtime.
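In Kubernetes, that runtime injection usually takes the form of an `env` (or `envFrom`) block on the container spec. A sketch, with illustrative names and the Deployment API group as it stood in late 2016:

```yaml
apiVersion: extensions/v1beta1   # Deployment API group circa late 2016
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:abc123   # same image in every env
          env:
            - name: APP_ENV
              value: staging            # only this value differs per environment
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:        # pull env-specific config from a ConfigMap
                  name: myapp-config
                  key: db_host
```

Staging and production then run byte-identical images; only the injected values change.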


We use CircleCI. The build container is provisioned with docker/gcloud/kubectl. The repo webhook fires on commit, the image is built, a test entrypoint is executed on the image to run tests. If the tests pass the image is pushed to the project repository, and then kubectl is used to update the kubernetes deployments with the new image ref.
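In CircleCI 1.0 terms, the flow described above looks roughly like the following `circle.yml` sketch. The project name, test entrypoint script, and deployment name are assumptions for illustration:

```yaml
# circle.yml sketch: build, run a test entrypoint, push, roll the deployment.
machine:
  services:
    - docker
test:
  override:
    - docker build -t gcr.io/my-project/myapp:$CIRCLE_SHA1 .
    # run-tests.sh is a hypothetical test entrypoint baked into the repo
    - docker run --entrypoint ./run-tests.sh gcr.io/my-project/myapp:$CIRCLE_SHA1
deployment:
  production:
    branch: master
    commands:
      - gcloud docker -- push gcr.io/my-project/myapp:$CIRCLE_SHA1
      - kubectl set image deployment/myapp myapp=gcr.io/my-project/myapp:$CIRCLE_SHA1
```

The `deployment` section only runs after `test` passes, which gives the "tests gate the deploy" behavior described above.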


> The build container is provisioned with docker/gcloud/kubectl.

What does this mean? I thought one of the key things that Circle provides is the build container.


Yes, circle provides the build container and docker, and our scripts install gcloud and kubectl and authorize a service account that can push to the associated project image repository.


Gotcha. We're doing the same thing here. The only problem is that Circle doesn't cache the image layers, so we do a full build every time, which takes about 10 minutes.


Yep same here, that's one of the things we need to improve on to get our deploy times down.


How is this product different from hosting your own CI, like Drone, which can automatically run your CI tests, then (with the help of plugins) create a Docker image for you based on conditionals, and finally upload the generated image to your public or private registry?

BTW, I can confirm your site is still not loading. Time again for a kubectl scale.


Distelli offers a visual dashboard around Kubernetes that makes working with Kubernetes extremely simple. It allows users to set rule-based visual pipelines that make automatic updates to one's Kubernetes cluster while also allowing one to work with the Kubernetes YAML system.

By looking at the distelli UI one can easily see who did what build or deploy and when. One can even restrict certain actions from certain users by placing users in permission groups.

Additionally, distelli provides a dashboard to provision and manage clusters on multiple cloud providers.

Finally, while distelli offers a build system for CI, it also works nicely with other CI systems so one can use Drone and Distelli together.

edit: grammar


We do this by checking a `kubernetes.yaml` in with the source code. That way it gets bundled with the image as part of a Jenkins build. Deployment is just a docker pull, extract kubernetes.yaml, and kubectl patch.
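A sketch of that deploy step, assuming the manifest is bundled at a known path inside the image (the path, image name, and use of a throwaway container are illustrative; this assumes docker and kubectl are already authenticated):

```shell
# Deploy sketch: pull the image, extract the bundled manifest, apply it.
IMAGE=registry.example.com/myapp:abc123
docker pull "$IMAGE"
# Copy kubernetes.yaml out of the image via a throwaway container
CID=$(docker create "$IMAGE")
docker cp "$CID":/app/kubernetes.yaml ./kubernetes.yaml
docker rm "$CID"
kubectl apply -f kubernetes.yaml   # or `kubectl patch` with just the new image ref
```

Bundling the manifest with the image keeps the deployment spec versioned alongside the exact code it deploys.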



Hi @kt9, in the Kubernetes dashboard, when adding an existing cluster, under "Select a Provider" there is AWS. Does that mean it also supports ECS? I am not aware of AWS supporting Kubernetes directly. I have a k8s cluster running on an AWS autoscaling group, but I guess in that case I should just click "Other". What's the AWS option for, then?


Hi,

AWS doesn't support kubernetes natively. We allow you to either launch a new kubernetes cluster on AWS by launching new VMs and installing kubernetes, weave and etcd or we allow you to sync an existing kubernetes cluster running on AWS by providing the k8s master endpoint.

We don't currently support ECS clusters but that is on our roadmap.


Just out of curiosity, what do you use for creating a k8s cluster on AWS? Do you use any of the available open source tools like kube-up, CoreOS on AWS, kops, etc.? And if I decide to stop using distelli and preserve the cluster it created, will I be able to do so?


We have our own software to create k8s clusters in AWS using Weave for networking. We looked at kube-up etc., but our customers wanted functionality that didn't force them to create a brand-new VPC and instead let them launch multiple clusters in an existing VPC.

FYI, we are going to release our kube launcher as open source in the next few months. If you sign up for our mailing list or email me at rsingh@distelli.com (or follow us on Twitter/FB), I'll notify you when that's available, if you're interested.


Hi, I'm an engineer at Distelli. The "AWS" option is a UI utility. When selecting this option while adding an existing cluster, you will be able to easily identify where your cluster is situated by inspecting the Distelli UI under the Clusters tab. This helps users visually separate where their clusters are located/hosted in the event they are running clusters in multiple clouds (as some of our enterprise customers do).


But that option requests a Key Name. What would that key be? There is no ARN identifying a Kubernetes cluster on AWS. It also requires AWS credentials, but if distelli manages the Kubernetes cluster, you can't do much with the AWS credentials.


Only provisioning a new cluster in AWS requires a key name. For adding an existing cluster, we need the client certificate and key information. Message us on Intercom (the small badge at the bottom of our site); we are happy to help.


Hi everyone,

We found that the site was slow because we were getting throttled by a table in DynamoDB which explains why kubectl scale wasn't helping as much as we had hoped. We were adding capacity, just not in the right place.

We've added read capacity to our table and things are faster now.


Perhaps your marketing pages should be static? Rookie mistake?


They are static pages served from the same webserver, but there was a bug where we were erroneously hitting a DynamoDB table.


Was that bug, now another bug?

CloudFront is your friend. :-)


I'm the founder at Distelli. I'm happy to answer any questions.


Hi. I know Distelli was originally less targeted and could deploy to bare metal, snowflake VMs, k8s, etc. Is this latest version a narrowing of focus or is all that just less promoted now?


We still support the VM and bare metal use cases. We've just created a new landing page around k8s because it's our latest offering, and it's on our list to add additional pages around our existing VM use cases.


Your pricing page loaded horribly slowly and then was too complicated to "know at a glance" what I'd want.


Sorry, we're scaling up under the load from HN traffic. Email me at rsingh@distelli.com and I'll be happy to help you with pricing or any other questions you might have.


No problem ... I just thought you'd want to know. We're in the beginning stages of container orchestration and have to keep parts of our pipeline and server infrastructure on premises. I'll email you if we have portions that can realistically be put on the cloud.


Thanks for letting us know. BTW, we also offer our service as an on-prem solution, so ping me and we can get you set up if you prefer that.


Looks like a cool product, but the site is really slow right now. Time to 'kubectl scale' up...


Yeah, the site's not even loading for me...


Right after I make the comment, it loads.


Hehe. Yup, we just scaled up.


What all does distelli do to make image builds fast? For example, where do you store cached layers so they are most rapidly pulled by the build worker?


You can use your own build servers with distelli and any images and layers pulled down during one build will be cached and available on subsequent builds. So even though distelli is a hosted service you don't have to run builds on our shared build machines. You can fire up your own build servers and connect them to our service and you'll get all the benefits of a dedicated build server including caching.

https://www.distelli.com/docs/kb/using-your-own-build-server


Gotcha. I am seeking a vendor that handles this for us somehow (even if that means we must pay for dedicated resources managed by the vendor). CircleCI has spoiled me so that I'll never maintain a Jenkins or other build server again if I can avoid it. CircleCI has hinted that their v2 (which is in alpha) will address layer caching.



