So you’re avoiding “proprietary features” of the cloud. Congratulations! Now you have the worst of all worlds. You’re spending more on resources than you would at a colo, spending just as much on babysitting infrastructure, and you’re not saving any money or time by outsourcing “the undifferentiated heavy lifting”.

You can’t imagine how many times I have seen “software architects” use the “repository pattern” and interfaces, and carefully avoid any “proprietary features”, just in case one day their CTO decides to move away from their six-figure-a-year Oracle installation to MySQL.

If being cross-platform or cross-cloud is a requirement from the beginning, you have to constantly test on all of your supported platforms. If it isn’t, it’s usually a waste of time to design for some unknown eventuality.

In reality, at a certain size you’re always tied into your infrastructure, and few companies move from one cloud provider to another. The expense and the chance of regressions are too high, and the rewards are too low.

But to address the Consul+Fabio use case explicitly, here’s how I’ve duplicated the functionality:

The KV Store: A DynamoDB table behind an API. The API can be used from anywhere (a rough sketch follows below).

Health checks/restarts/service discovery: Fargate (serverless Docker) behind a load balancer with target groups that do the health checks and automatically launch new containers, or just regular EC2 instances behind an autoscaling group.

Nomad: we used Nomad because I didn’t want to introduce Docker and it can orchestrate anything. These days I would just use Docker, or, if it was simple enough, a Lambda triggered by either scheduled events or other AWS events like queues or SNS messages.
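
To make the KV point concrete, the DynamoDB side really is thin. A minimal boto3 sketch, not our actual code; the table name and attribute names here are made up:

    import boto3

    # Hypothetical table "config-kv" with a string partition key "k" and a value attribute "v".
    table = boto3.resource("dynamodb").Table("config-kv")

    def put_key(key, value):
        table.put_item(Item={"k": key, "v": value})

    def get_key(key, default=None):
        return table.get_item(Key={"k": key}).get("Item", {}).get("v", default)

Wrap that in whatever HTTP layer you like and the result is a KV store reachable from anywhere.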




I tend to avoid cloud deployments in the first place, exactly because they are usually far more expensive than even managed hosting, even if you don't go all the way to a colo. And I've yet to do a migration off cloud services that didn't result in substantial cost reductions (including when factoring in devops costs).

But sometimes it's impossible to convince people of this, even with detailed cost breakdowns, and/or people have other reasons for using cloud providers, some more rational than others. My current system is hosted on AWS because it's convenient and it won't grow large enough (in terms of resource use) anytime soon to be a cost issue relative to the engineering resources put into developing it. But what you can do in those instances is to plan out an architecture that ensures you maintain flexibility and minimizes the impact of changes while taking advantage of what you can.

That does not imply that avoiding all proprietary solutions is always the right choice. It means avoiding solutions that are particularly onerous to transition off, and limiting each service's exposure to the proprietary aspects. How far you take this depends greatly on how likely you are to migrate.

My experience is different from yours - a very substantial proportion of the systems I've been contracted to work on have involved migrations between providers sooner or later.

(But part of that may well have been architectures that made migrations simple and cheap. In one case the company in question was explicitly and constantly chasing the cheapest deal, and so was on AWS and then GCP thanks to huge amounts of free credits, and moved to Hetzner the moment they'd used them up; the credits added up to at least two orders of magnitude more than the cost of the migrations.)

E.g. for anything deployed in AWS I'll happily use RDS, because it's no harder to migrate off an RDS Postgres instance than off any other Postgres instance, and the servers talking to it do not need to know whether they're talking to an RDS-based instance or something that has been moved to another cloud provider.
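
Concretely, the application only ever sees a connection string, so whether that points at RDS, a managed Postgres elsewhere, or a box in a colo is purely configuration. A trivial sketch (the env var name is just an example):

    import os
    import psycopg2

    # The host behind this URL could be an RDS endpoint, a Hetzner box, or localhost;
    # nothing in the application changes when it moves.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])

    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])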

And I'll happily use many services that are fully AWS-proprietary too, like SNS and SQS, because their interfaces are narrow enough that migration tends to be relatively easy, and because the actual interaction with AWS can be narrowed down to a limited set of services.
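
What "narrowed down" means in practice: one small module is the only place that knows SQS exists, and everything else talks to that module. A sketch with illustrative names, not a drop-in implementation:

    import json
    import boto3

    # The only module in the codebase that imports boto3 for queueing.
    _sqs = boto3.client("sqs")

    def send(queue_url, message):
        _sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(message))

    def receive(queue_url, max_messages=10):
        resp = _sqs.receive_message(QueueUrl=queue_url,
                                    MaxNumberOfMessages=max_messages,
                                    WaitTimeSeconds=10)
        for m in resp.get("Messages", []):
            yield json.loads(m["Body"]), m["ReceiptHandle"]

    def ack(queue_url, receipt_handle):
        _sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)

Swapping SQS for RabbitMQ or anything else then means rewriting one file, not chasing queue calls through the whole codebase.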

The point is not to be able to move on a moment's notice, but to avoid pointless coupling.

Being cross-platform/cross-cloud is rarely a requirement; not making decisions that make it harder to migrate in the future, when faced with choices where the cloud-agnostic solution is about the same cost/complexity as the cloud-specific one, often is.

Using things like Fargate can fit perfectly fine into that kind of architecture.



