Hacker News

Moved from Google Cloud -> Digital Ocean -> OVH.

Running our own stuff on high-powered servers is very easy and less trouble than you'd think. Sorting out deploys with a "git push" and build container(s) meant we could just set it and forget it.

We have a bit under a terabyte of PostgreSQL data. Any cloud is prohibitively expensive.

I think some people think that the cloud is as good as sliced bread. It does not really save any developer time.

But it's slower and more expensive than your own server by a huge margin. And I can always do my own stuff on my own iron. Really, I can't see a compelling reason to be in the cloud for the majority of mid-level workloads like ours.




> Really, I can't see a compelling reason to be in the cloud for the majority of mid-level workloads like ours.

I work on a very small team. We have a few developers who double as ops. None of us are or want to be sysadmins.

For our case, Amazon's ECS is a massive time and money saver. I spent a week or two a few years ago getting all of our services containerized. Since then we have not had a single full production outage (they were distressingly common before), and our sysadmin work has consisted exclusively of changing versions in Dockerfiles from time to time.

Yes, most of the problems we had before could have been solved by a competent sysadmin, but that's precisely the point—hiring a good sysadmin is way more expensive for us than paying a bit extra to Amazon and just telling them "please run these containers with this config."


> None of us are or want to be sysadmins.

It's such a huge misconception that by using a cloud provider you can avoid having "sysadmins" or don't need those skills. You still need them, no matter which cloud and which service you use.


Which skills specifically do you think we might be missing that we would need to run an app on a managed container service and managed database?

I know how to configure firewalls, set up a (managed) load balancer, manage DNS, and similar tasks directly related to getting traffic to my app.

What I no longer have to know how to do: keep track of drive space, manage database backups, install security updates on a production server without downtime, rotate SSH keys, and a whole bunch of other tasks adjacent to the app but not actually visible to incoming traffic at all.


You still need to do backups; a database backup is just one part of that. If you are not following the 3-2-1 rule and don't test your restore mechanism, you don't have reliable backups.

Those things you listed are still sysadmin tasks in my eyes, and you are doing them, which validates my point.

You still have to track storage space: either because you are paying for it and need to expand when necessary, or because you have to manage costs at some point. That's not completely out of the picture, though it can certainly be easier than building your own storage hardware.

You still need to keep systems up to date: either you are using Docker, so you are doing it at the application level, or you are using Linux VMs and need to upgrade those systems/images. Even if you are using something like Functions or Lambda, those have their own environment which you need to be aware of, and they usually support specific versions of programming languages, so you need to upgrade your own stack when they drop support for older versions.


I tell you that ECS has eliminated a ton of extra work for my team for a bargain price, and your response is "but you still have to do x, y, and z!" It's like saying that I shouldn't buy a dishwasher because I'd still have to wash the pots.

Yes, we still need to do some sysadmin-y tasks. But ECS handles so many of them that we actually have the time, energy, and knowledge to take care of the few that remain.

(As an aside, keeping language and OS versions up to date becomes a development task rather than an ops task when running Docker + ECS. We increment a version number in the repository and test everything, the same as we do for any library or framework that we depend on.)
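For illustration, that version bump can be as small as a pinned tag in a Dockerfile (the image, tag, and commands here are made up, not the commenter's actual setup):

```dockerfile
# Upgrading the runtime/OS = editing the FROM line, then running the normal
# test suite and deploy pipeline, like any other dependency bump.
FROM node:20-bookworm-slim
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
```

Because the base image is pinned, the upgrade shows up as a reviewable one-line diff in the repository rather than ad-hoc work on a live server.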


> As an aside, keeping language and OS versions up to date becomes a development task rather than an ops task when running Docker + ECS.

It's a development task with a proper bare metal setup too.


If you use purely PaaS offerings (or FaaS as well), then you also don't really need sysadmins.

That's not to say that you can get away with knowing _no_ sysadmin skills in these scenarios, but you don't need to have someone on staff who knows the ins and outs of Cassandra or Mongo or whatever you're using. In awful workplaces with high turnover, it's worth it for management to opt for these managed services so that when the overworked tech lead decides to rightfully bail on them, she/he doesn't leave them in the lurch. (Note: I'm not defending these workplaces, but just explaining that when they can't keep adequate in-house talent to manage their own services, it makes financial sense to outsource it, and pay the "cloud tax").


I think the problem with cloud environments is you do not "need" sysadmins - it is not obvious you need them, so what you end up with is a bunch of systems glued together without much thought, and then crazy things like HTTP logs not being turned on for your various services, insane service costs b/c of not understanding pricing tiers, etc..


The difference doing ops setting up a couple of Lambdas or Fargate containers vs provisioning your own servers is substantial.


In fact, if you're using Linux on your workstation you'll use the same skills locally as you do on the VPS/bare metal (depending on your scale.) Arguably "cloud" services need more sysadmin skills, not less.


That's a very big if.

I have yet to work with a $corp that uses Linux for workstations.

Overwhelming majority uses Windows. Some use macOS.

The occasional developer who uses Linux will usually be in a VM or, if IT policies allow, WSL.

So yeah, running cloud services doesn't require sysadmin skills, unless you count copy-pasting from official documentation as "sysadmin skills".


That's funny... every team I've been on in the last 10 years has used Linux workstations almost exclusively, with a few Macs here and there.


In 27 years, I've had exactly two jobs where I didn't have Linux on my desktop, for a total of 5 out of those 27 years. In both cases, I still did all of my dev work on Linux.

It boils down to what kind of jobs you look for.

> So yeah, running cloud services doesn't require sysadmin skills, unless you count copy-pasting from official documentation as "sysadmin skills".

If that's the extent of how you're managing your cloud setup, then I could equally argue running bare metal servers doesn't require sysadmin skills either. When I did contracting, a large part of my income was to come in and clean up after people had relied on "copy pasting from official documentation" as a substitute for actual ops.


It's far easier to maintain my own Linux workstation than an internet-facing server used daily by customers.


Absolutely but most of the knowledge translates, it's the procedures that differ.


It's those different procedures that I'm trying to avoid. It's not that I couldn't do those things or learn to do them, it's that my time is best spent building and improving our applications, not keeping servers running, secured, and up to date.

At some point we hope to get to the scale where it makes sense to pay a human to do that, but at this point the additional cost incurred by an ECS instance over an equivalent server is negligible.


Very similar experience here. I work on a two-person "DevOps" team. Without AWS ECS we would have to have a much higher headcount. I get to spend most of my time solving real problems for the engineers on the product team rather than doing sysadmin work.


What are “real problems” for the engineers or product team?


Things like automating manual workflows, building small infrastructure debugging tools, or providing infrastructure consultation to an engineer trying to decouple two parts of a legacy code base.


Managed container services (like Amazon ECS) are a sweet spot for me across complexity, cost, and performance. Mid-size companies gain much of the modern development workflow (reproducibility, infrastructure as cattle, managed cloud, etc.) using one of these services without the need to go full-blown Kubernetes.

It's lower level than functions as a service, but much cheaper and more performant, and it matches local developer setups more closely (allowing for real local development instead of debugging AWS Lambda or Cloudflare FaaS against an approximation of how the platform would work).


Very much agree - due to a coworker leaving recently, I'm looking after two systems. They're both running on ECS and using Aurora Serverless.

My company takes security very seriously so if these two systems were running on bare-metal I'd probably be spending one day a week patching servers rather than trying to implement new functionality across two products.


I can bet our team is smaller than yours.

And yet… Sysadmin tasks take up maybe 2 hours a month.

Your theory is right, though, if no one on your team knows how to set up servers. In your case the cloud makes sense.


To the peeps running ECS? Why not just straight up AKS or GKE? Have you compared ECS to Cloud Run on GCP?


In my case, mostly because it was easier to get buy-in from the rest of the team on ECS than Kubernetes.


"Infrastructure is cheaper than developers(sysadmins)" all over again.


I also found that running a PostgreSQL database is really simple. Especially if most of your workload is read only, a few dedicated servers at several providers with a PostgreSQL cluster can deliver 100% uptime and more than enough performance for pretty much any use case. At the same time, this will still be cheaper than one managed database at any cloud provider.

I've been running a PostgreSQL cluster with significant usage for a few years now, never had more than a few seconds downtime and I spend next to no time maintaining the database servers (apart from patching). If most requests are read only, clusters are so easy to do in Postgres. And even if one of the providers banned my account, I'd just promote a server at another provider to master and could still continue without downtime.

I recently calculated what a switch to a cloud provider would cost, and it was at least 10x of what I pay now, for less performance and with lock-in effects.

But I understand that there are business cases and industries where outsourcing makes sense.


Can you share more details? I'm in the process of doing the same, since having a few terabytes of PostgreSQL / DynamoDB is stupidly expensive.

For a lot of big organizations it's a matter of accountability. Saying "AWS went down" vs. "our dedicated servers went down" matters a lot for insurance and clients.

What I don't get are 4-man startups paying thousands to AWS... because everybody does it.


As I said, if most queries are read-only it's really simple. Streaming replication works very well out of the box, just make sure you keep enough WAL segments on master so that slaves can catch up after some downtime.
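A minimal sketch of the settings involved (values are illustrative, not the commenter's actual config; `wal_keep_size` is the PostgreSQL 13+ name, older releases use `wal_keep_segments`):

```ini
# primary: postgresql.conf
wal_level = replica
max_wal_senders = 5
wal_keep_size = '16GB'        # retain enough WAL for standbys to catch up

# standby: postgresql.conf (plus an empty standby.signal file on PG 12+)
primary_conninfo = 'host=primary.example.invalid user=replicator'
```

With enough retained WAL on the primary, a standby that was down for a while can resume streaming from where it left off instead of needing a full re-sync.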

I have a 1-1 relationship between application servers and databases. The application queries the replication delay and marks itself as unhealthy (reporting an error) if the delay is too high. You can also enforce limits on the Postgres side, but I found this approach allows for more graceful failovers.
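The health check described above can be sketched roughly like this (a minimal sketch; the threshold, function names, and the SQL used to obtain the lag are assumptions, not the commenter's actual code):

```python
# On a PostgreSQL standby, replication delay can be estimated as wall clock
# time minus the commit timestamp of the last replayed transaction, e.g.:
#   SELECT now() - pg_last_xact_replay_timestamp();
# The logic below then decides whether the node should report itself healthy.
import datetime

MAX_LAG = datetime.timedelta(seconds=30)  # assumed threshold

def lag_from_replay_timestamp(now, last_replay):
    """Delay = wall clock minus the last replayed commit timestamp."""
    if last_replay is None:  # nothing replayed yet: lag is unknown
        return None
    return now - last_replay

def healthy(lag, max_lag=MAX_LAG):
    """Unknown or excessive lag marks this node as unhealthy."""
    return lag is not None and lag <= max_lag
```

A load balancer polling this health endpoint then stops routing reads to a lagging standby, which is what makes the graceful failover possible.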

With streaming replication, servers are completely identical, so you can easily provision a new server. Failover is done with just one command on a slave. I don't have automatic failover, as I've only needed it once in several years (and that was on purpose); I'd rather accept downtime than have an unwanted failover.

With that setup you can always failover and can scale read operations really well. There are solutions for postgres if you need more complicated setups, but I never looked into them.

If you're in Europe, it's really cheap to get a dedicated machine from Hetzner with a few TB of NVMe. Just pay the extra money for a 10 Gbit link, otherwise replication will take forever. There are also some decent providers in the US; it's just more expensive. With Hetzner, a two-machine setup will be <$500 per month for really beefy servers.

I'd just be careful with using block storage, I often found that to be a bottleneck with database servers. Local storage is almost always much faster.

But in the end it depends on your use case. Your database will usually go down because of a bug in the application or some misconfiguration, both of which can happen on any service. It's really rare these days to lose a server without notice. And Postgres is really stable; I've never seen it crash.


Maintainability is much easier with a well-working cloud setup, even for people who have potentially less knowledge.

One company had 6 servers and used AWS snapshots for backups plus a managed MySQL.

Backup and recovery of that DB is possible for more people on the team than if it ran as a non-managed service.



