
The best way to describe a service, rather than 1000 words of buzzword-heavy text, is a service diagram, a hello-world example of the config, and a more complex example showing off the best/main features.

I don't want to hear that 'it's a bit like Kubernetes but not quite'. I want to try it inside 30 seconds and see for myself.




> I want to try it inside 30 seconds and see for myself.

So let me get this straight: you want to try something as complicated as Kubernetes inside of 30 seconds?

I'm thinking you might have some unrealistic expectations.


I think it is a realistic expectation. Think of EC2 in 2006. Getting started was almost immediate if you already had an Amazon account. You downloaded a CLI tool and your credentials, and started a VM. Prior to this, it might have seemed unrealistic to provide such a streamlined experience.

I would love to see the same done for Kubernetes. What I want is a kubeconfig file that links me to a paid account somewhere, and whenever I run `kubectl apply -f foo.yml`, I pay by the millisecond for whatever resources get created. Zero ops for me the customer, and all the complexity will be on the side of the company offering this service.
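Something like this, where the provider, the downloaded kubeconfig, and the per-millisecond billing are all entirely hypothetical:

```
# Hypothetical zero-ops flow: the provider hands you a kubeconfig
# scoped to your account. Nothing here is a real service.
export KUBECONFIG=~/Downloads/pay-per-use-account.yaml

kubectl apply -f foo.yml    # billing starts when the resources exist
kubectl delete -f foo.yml   # billing stops when they're gone
```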


I think Okteto is an example of what you're describing. (It's for K8s; I just learned about this service yesterday, and it does what you're describing, kind of.)

okteto.com

I'm not sure this is relevant, but it's a great example of a service with a tutorial that covers multiple use cases, takes less than about 2 minutes, and leaves you with a pretty clear understanding of what you're meant to do next.

This is how all elevator demos should be.


Thanks for the link, I will check it out.


See also k3sup, since it's already easy to get a cloud machine on the Internet!


> I would love to see the same done for Kubernetes. What I want is a kubeconfig file that links me to a paid account somewhere, and whenever I run `kubectl apply -f foo.yml`, I pay by the millisecond for whatever resources get created.

Have you tried DigitalOcean's Kubernetes offering? It takes about 10 clicks to create a cluster and download a config.


What I'm envisioning wouldn't involve creating a cluster. As soon as I set up the account, I could download the kubeconfig and start creating resources. The cluster would be invisible to me, aside from the namespace I was given.
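A minimal sketch of what such a kubeconfig might look like (server URL, names, and token are all made up for illustration); the `namespace` field on the context is what would pin me to my slice of the shared cluster:

```
# Roughly what the provider-issued kubeconfig could contain
# (all values here are hypothetical).
cat > kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: shared
  cluster:
    server: https://k8s.provider.example  # provider's shared API endpoint
contexts:
- name: mine
  context:
    cluster: shared
    user: me
    namespace: customer-12345  # the one namespace I can see
users:
- name: me
  user:
    token: issued-by-provider
current-context: mine
EOF

KUBECONFIG=./kubeconfig.yaml kubectl get pods  # only my namespace is visible
```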


I have built this on DigitalOcean's DropletKit API client gem for Ruby, but without the "we send you a bill" part.

Two or three clicks to kubeconfig, then your cluster is deleted in about 4 days. I call it Hephynator and it's not open source yet, but I would definitely consider open-sourcing it. (This model works for me because I received Open Source credits from DO. :thanks:)

I don't know how much that helps, but DigitalOcean's built-in interface for creating clusters is about that easy. It's nicer than my stripped-down version. Things like "how long until your cluster is ready" -- I didn't take care of that in my DropletKit client, but DOK8s does in its web interface.

My driver for building this little widget was the fact that the kubeconfig files issued by DigitalOcean's interface expire after 7 days by default. So I either needed a way to be sure that the OSS contributors I hand these clusters out to could get another kubeconfig when theirs expired, without waiting for me... or their cluster would not live as long as the expiration date, which seemed to be the more reasonable, economy-driven decision.

I decided to make the clusters last 4 days and then delete themselves. It was a fun project, and now we can use it to make more fun Open Source.

I should open source it. It's a very simple Rails app. It does exactly what you describe: I just push "Create cluster", confirm some parameters, then get a "Download Kubeconfig" button, which is the last step where you interact with anything other than the K8s API. It needs to be made pretty before I'd consider publishing it. But for now, it does the job and my team is using it fruitfully :)
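If you only want the create-and-fetch-kubeconfig part without a Rails app in front, DO's own CLI covers it; a rough sketch with `doctl` (cluster name and node size are placeholders):

```
# Create a small DOKS cluster and pull its kubeconfig.
doctl kubernetes cluster create demo-cluster --size s-2vcpu-4gb --count 2

# Merge the new cluster's credentials into ~/.kube/config.
doctl kubernetes cluster kubeconfig save demo-cluster

# The 4-day self-destruct would be a scheduled job running something like:
doctl kubernetes cluster delete demo-cluster --force
```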


I don't think he meant 30 secs to "evaluate" the service. But you should be able to get enough from a pitch page to know whether or not it would be worth clicking through to "Getting Started" and installing the prereqs.


Yep. For new AWS services, I always end up having to open the web interface and click through the GUI before any of the rest of the explanations make sense.


Yes, I generally will read about a service, then click around the GUI to make "something". Then I'll put something in Terraform (roles, needed buckets, images) and then try to make it go. As someone else said, a diagram with a quick-starter implementation would go far in getting people interested.


K8s is a software suite that allows you to partition and use a set of resources (e.g. partition CPU/memory/storage to run Docker containers). Naively, it consists of a master app and a bunch of agent apps. Each agent app needs to run on the resources you want to pool together as a cluster. If you want to run your own K8s cluster, not only do you need the actual resources (e.g. a bunch of computers), but you also need to run the master app and the agents on each computer you want in the cluster, with all the work that entails: keeping things up to date, making sure the master is available, etc.

EKS and GKE are the "managed" K8s solutions offered by Amazon and Google. I'm not sure how much of the management they do here, I never researched the subject; the advantage is that your automation, which uses the K8s "language", can work with both. I repeat, I don't know the low-level details, e.g. how you get resources into such a cluster.
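To give a taste of that shared "language": a minimal manifest like the sketch below (name and image are just examples) applies unchanged to EKS, GKE, or a self-hosted cluster:

```
# Hello-world K8s deployment: 2 nginx replicas (names/image are examples).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels: { app: hello }
  template:
    metadata:
      labels: { app: hello }
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF
```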

ECS is the pure-AWS alternative to K8s, i.e. a software solution to manage Docker containers. It's not compatible "language"-wise with K8s tools, AFAIK. But, surprise surprise, it's very well integrated with a host of AWS services.

ECS can work with two different types of resources: EC2 clusters and Fargate clusters.

EC2 clusters are sets of EC2 instances. You start them in whatever way you want and make them as big or small as you want; you just need to run the ECS agent on them. If you use the Amazon AMI optimised for ECS, this is trivial. The drawback is that you need to manage these instances: software updates, starting new ones when you need more capacity, stopping them when you are wasting resources, etc. Obviously, AWS has a bunch of other services you can use to make this easier (CloudFormation etc). The advantage is that you can customise the AMI if you need to, and you can leave data behind after a container terminates. For example, all my tasks in this particular ECS cluster need access to the same S3 bucket of data, so I'm precaching it on the machines and then simply mounting a volume into each Docker container that needs it.
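Joining an instance to a cluster really is about one line if you use the ECS-optimised AMI; roughly (the cluster name is an example):

```
#!/bin/bash
# EC2 user data for an instance launched from the ECS-optimised AMI.
# The preinstalled ECS agent reads this file on boot and registers the
# instance with the named cluster (cluster name is an example).
echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config
```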

A Fargate cluster takes away all of this complexity; you just need to specify a name. Then you tell ECS to start tasks/services (services are just tasks that need to run in perpetuity) on the Fargate cluster, and the thing will scale up and down, and scale your price up and down, as you need it. Two drawbacks: you can't customise the underlying host, and you can't have persistent data left on the host after a container exits. Therefore, the project I'm working on atm would end up costing more to run, since I can't cache the S3 data and every container that needs that data will end up re-reading it from S3.
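For comparison, a rough sketch of how thin the Fargate path is via the AWS CLI (all names and IDs are placeholders):

```
# A Fargate "cluster" is just a name; capacity is allocated per task.
aws ecs create-cluster --cluster-name demo

# Run a task on Fargate; task definition and subnet are placeholders.
aws ecs run-task \
  --cluster demo \
  --launch-type FARGATE \
  --task-definition my-task:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],assignPublicIp=ENABLED}'
```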

Best I could do :)

/edit: GKS -> GKE


Hello,

I have a different question, if you don't mind.

Under what conditions or use cases do you think it may make sense to migrate from EC2 to ECS? What are the advantages or disadvantages that one needs to take into consideration?

Thanks


What do you mean by that? ECS can work with clusters made out of EC2 instances. You need to give me more details on how you are running things now.


So... I have a couple of services (Sentry, Logstash, and some others) which are running on their own EC2 servers. I am deliberating whether it may make sense to move them to container instances.

I guess what I am asking is: what points should I take care of or give thought to if I decide to migrate from plain EC2 instances to EC2 instances inside ECS?

Does that clarify it?


Well, I'm 4 pints in and still typing this (started about 3 hours ago), so I have to apologise that I didn't really answer your question. I'm really fascinated by this kind of tech, and it makes me sad when people get hung up on buzzwords and fail to understand the underlying problems/solutions. You could solve all of the above using very old tech like PXE boot with PoE and some custom code. We could have a chat at some point, when I can learn more about your use case, your choices and why you picked them, and then I might be able to come up with a proper strategy. But trying to solve a very generic thing is not working, and I'm afraid the very best answer I can give you is to RTFM and play with all the toys you can find: learn what they do, how they do it, how they differ, and apply all of that to your problem. I'm going to leave what I typed in the past 3 hours; I did try to answer your question, but I'm afraid I can't. There is no one-size-fits-all answer here.

-----

I have just a very, very shallow knowledge of these two services, so I'm not going to comment on them. Also, to make sure there are no misunderstandings: clusters of EC2 instances and Fargate clusters are the things you put stuff in, like barrels; ECS is the tool you use to get water out of the river and put it in the barrels.

At this point, I don't really see any benefit in running 3rd-party apps, as I assume Logstash and the other one are for you, directly on the EC2 instance.

As a note, I would not run my own DB server/cluster or Redis or what have you if I can avoid it. I would pay for it (and do atm; we are using RDS DBs).

The question is whether you can benefit from using ECS to start Docker containers or not.

You don't need ECS to run a Docker container on an EC2 instance. All you need to do is SSH into the instance, install Docker, docker pull and docker run. That, assuming you are building your own Docker images with all the configs required, is all it takes to start a new instance of your app. You can even automate some of this with a launch template. Two big advantages of doing this: the setup is fairly simple, and it can get cheap -- if you prepay for your instance, say for 1 year or more, you can get huge discounts.
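Spelled out, that manual flow is roughly this (image name and ports are examples; assuming Amazon Linux 2):

```
# On a fresh Amazon Linux 2 instance (image name/ports are examples).
ssh ec2-user@<instance-ip>
sudo yum install -y docker
sudo systemctl enable --now docker
sudo docker pull myorg/myapp:latest
sudo docker run -d --restart unless-stopped -p 80:8080 myorg/myapp:latest
```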

There is a drawback to this method, though it depends on your use case: you can't react quickly to spikes. The way to increase capacity with this setup is to simply replace the EC2 instance with a bigger one, then put the smaller one back when the spike is over. Since you have to do this manually, I think it's fairly obvious why this is a drawback.

So far, the assumption is that your app can only scale vertically, i.e. to get moar power you need a bigger server. Then there is the case where you can scale horizontally, i.e. if you need more power, you just spin up another instance of the same app and share the load.

If you can scale horizontally -- instead of replacing a t2.medium with a t2.2xlarge, you simply start 3 more t2.mediums so all 4 can cope with the load, then shut the extra 3 off when the spike is over (I have no idea if 4 t2.mediums can do the job of a t2.2xlarge, just giving an example) -- you can do this manually, automate it via autoscaling/CloudFormation, or even use your own scripts that read metrics from CloudWatch and use the API to do things.
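A rough sketch of the automated version with an Auto Scaling group (names, subnet ID, and target value are placeholders):

```
# Auto Scaling group over instances from a launch template,
# tracking average CPU. All names/IDs are placeholders.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-template LaunchTemplateName=app-template \
  --min-size 1 --max-size 4 --desired-capacity 1 \
  --vpc-zone-identifier subnet-abc123

# Add/remove instances to keep average CPU near 60%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name app-asg \
  --policy-name cpu-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'
```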

You'll notice that I haven't said when to use ECS yet, and this is because ECS is the tool you use to automate most of the above. The truth is you can live a happy ops-guy life without ever touching ECS, or EKS, or GKE, or K8s, or Chef or Ansible or what have you. But knowing how these tools work is going to make your life so much easier.


This is really helpful, thank you!



