Hacker News

I had the impression that the $1.23 was for 10 workers?



My mistake:

> For example, your service uses 5 ECS Tasks, running for 10 minutes (600 seconds) every day for a month (30 days) where each ECS Task uses 1 vCPU and 2GB memory.

5 workers, at 10 minutes each. Run full-time, that's still $35 vs. my $10, i.e. 350% of the cost, and these tasks aren't even full-time. The point was that Fargate is not priced to be the cheapest option, unless your product is designed specifically with Fargate in mind.
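For the curious, the arithmetic behind those numbers works out like this (a sketch; the rates are the us-east-1 numbers from AWS's own pricing example at the time, so check current pricing):

```ruby
# Reproducing AWS's pricing example: 5 tasks x 10 min/day x 30 days,
# 1 vCPU + 2 GB each. Rates are assumptions taken from that example.
VCPU_HOUR = 0.04048    # $ per vCPU-hour
GB_HOUR   = 0.004445   # $ per GB-hour

task_hours = 5 * 600 * 30 / 3600.0                          # 25.0 task-hours/month
puts (task_hours * (1 * VCPU_HOUR + 2 * GB_HOUR)).round(2)  # => 1.23

# The same 1 vCPU / 2 GB task running full-time (~730 h/month):
puts (730 * (1 * VCPU_HOUR + 2 * GB_HOUR)).round(2)         # => 36.04
```

So the AWS example really does land at $1.23 a month, but only because the tasks run 0.7% of the time; kept warm around the clock, a single such task already costs more than a small cluster.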

(And ok, not specifically Fargate, really any architecture that permits the workers to come and go, but mostly go.)

Fargate is a tool for a job, and combined with the rest of Amazon's offerings (Lambda, S3, CloudFront), you really can build some cool stuff for cheap, if you pay attention to what kind of resources your product needs, whether it _really needs_ them, and whether there's a cheaper offering that would be a better fit, or could at least do the job just as well.

Don't get me wrong! But if I want to take, say, my Ruby on Rails app and host it somewhere without reinventing my whole stack from scratch, I definitely won't use Lambda because any Rails app just won't fit the model very well, and I won't use Fargate because it's [N] integer times as expensive as comparable offerings.

If Fargate were capable of auto-scaling tasks to zero workers during periods of inactivity while remaining mostly responsive, like Lambda's "cold start" vs. "warm start", it would be a much easier sell for me.


Sure, I never had the impression it was cheaper for the same workload.

My take was that hobby projects probably don't run 24/7, so at some point you're paying while not using them.


Yup, you're both right. I use my $10 kube cluster for 2 apps, one of which has 3 users, the other a dozen. So it's idle >99% of the time, but it needs to be available 24/7. And they're Rails apps, so it's not easy to "scale to 0", and even if it were, the startup times would probably be horrendous enough that it wouldn't be worth saving that fraction of $10...


Keep an eye on Knative and/or Project Riff. They _can_ scale pods to zero during periods of inactivity; that's one of the bigger selling points, and the startup times might not be as horrendous as you think. (YMMV depending on your runtime and your requirements.)

Riff is really a Function-as-a-Service library built on Knative, but Knative can run arbitrary workloads. Riff will soon have support for Ruby again (I promise, I'm working on it[1]), and Knative has this capability now[2].
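For a flavor of what that looks like, here's a minimal Knative Service manifest (a sketch; the API surface has shifted across releases, and the service name and image are placeholders). Scale-to-zero is the default behavior, and the minScale annotation just makes it explicit:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-rails-app                       # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale-to-zero (the default)
    spec:
      containers:
        - image: registry.example.com/my-rails-app:latest   # placeholder image
```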

So, there's a convergence coming and it's going to make a big difference in this cost/benefit trade-off. You need a cluster to run your Knative workloads on, and that cluster has to be persistent and always available to make the magic trick work.

But say you have a small to medium enterprise that runs 75 custom apps, and any 10-30 of them could be in use at any given time. You'd like to take advantage of office hours and turn things off when they're not in use, but you can't guarantee that nobody is going to need some app, some time after 5pm.

The cluster itself can autoscale up and down to a smaller or larger size, depending on how many balls are in the air. Your apps remain responsive even at nighttime, when your cluster footprint is a tiny fraction of what supports the company's daytime operations.
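To make that trade-off concrete, here's a back-of-envelope sketch; every number in it is a made-up assumption, not a measurement:

```ruby
# Illustrative only: pods-per-node and app counts are assumptions.
pods_per_node = 10                              # rough binpacking guess
always_on     = 75                              # every app running 24/7
active        = { daytime: 30, nighttime: 3 }   # concurrently-used apps

nodes = ->(apps) { (apps / pods_per_node.to_f).ceil }

puts "always-on: #{nodes.call(always_on)} nodes"   # => 8 nodes, day and night
active.each do |period, n|
  puts "#{period}: #{nodes.call(n)} node(s)"       # => daytime: 3, nighttime: 1
end
```

Under those (invented) assumptions, scale-to-zero plus a cluster autoscaler means paying for 1 node overnight instead of 8, while every one of the 75 apps stays reachable.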

(By "I'm working on it" I really mean, they've made it as easy as they possibly can for new runtimes to be added with their buildpack v3 framework. I riffed on Ruby back in v0.0.7 and it was possible to reuse the work in v0.1.0, after they ported their stack over to Knative. Now in v0.2.0, the work I did on my Ruby invoker is not wholly reusable, but I'm hoping it will be a pretty smooth transition.)

And you didn't say you're using Ruby, but the point again is that anyone can add support for any language, and it's no big deal. I'm basically nobody here, and I can do it...

So that may sound a little crazy, but keep in mind that there are also K8s "virtual node" solutions in the works or already out there. So binpacking your containers into nodes could soon be a thing of the past, and as clunky as what I'm proposing sounds, it may not always be that way.

Sure, AWS could do it with Fargate too, and it might turn out to be just as good, but right now that's speculative. This is pretty much stuff that is all out there now. It's just parts that need to be put together.

[1]: https://github.com/projectriff/riff/issues/1093

[2]: https://github.com/knative/docs/tree/master/serving/samples/...



