Hacker News

I have a variation of this as well. Boring is good. Docker for deploying whatever means I can deploy to any Docker-capable platform, which is typically some set of VMs with a load balancer in front. Our setup runs in two separate and very different cloud environments; one of our customers insisted on this. Same stuff, provisioned differently.

We don't use Terraform on purpose, because it's overkill to automate something that comes up less than once a year. Doing it manually and documenting that is good enough. And at these kinds of intervals, the automation tends to break down anyway due to changes. You get funny issues like "the last time I ran this script was two LTS versions of Ubuntu ago".

We do have quite a bit of build and deploy automation via GitHub Actions. That pays for itself because we use it every day.
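The comment doesn't show the actual workflow, but this kind of daily-use automation typically looks something like the sketch below: build a Docker image on every push to main and push it to a registry. All names (registry URL, image name, secret name) are illustrative, not from the original post.

```yaml
# .github/workflows/build.yml -- hypothetical sketch, names are placeholders
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Push image
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" \
            | docker login registry.example.com -u ci --password-stdin
          docker push registry.example.com/app:${{ github.sha }}
```

Because the workflow runs on every push, breakage surfaces immediately, unlike the once-a-year provisioning scripts mentioned above.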

Otherwise, the choice is between managed and self-hosted databases, search clusters, etc. Managed is more expensive, but it massively frees up my time, which is more valuable to me. So in one of our data centers we use managed services, and in the other we self-host, because the cloud platform in question is just not great (obsolete, unmaintained versions of all the stuff I care about).

You might conclude based on the above that I know nothing about devops and have never worked in teams where it is done properly. The opposite is true. I've been there and seen it all. The reason I do this is that I don't have the budget to do it properly, so I'm literally trading off my own time here. It's a calculated choice. So no Terraform, because it doesn't make sense at our scale. No Kubernetes, because the idle cluster cost with nothing in it is larger than the total cost of my current setup. And I have a monolith, so Kubernetes is just a really convoluted way to run a single service.




I was once a consultant at a mid-sized company. One of the developers openly admitted to pushing for new technology so that he could include it on his resume for his next job application...

I believe this is not an isolated incident, but rather part of a larger trend.


It's not a trend... I've seen it for decades. For example, in my heavy network consulting days you could tell when someone was studying for their CCIE, because they'd implement something that's on the test and technically correct, but that almost no one would do in practice. Like "why in the world are they using IS-IS instead of OSPF like the rest of the world??" 95% of the time, it turned out the person who implemented it (who as often as not had moved on to their next gig) was getting ready for the IRP lab exam when they did it.


In my circles it is called Resume Driven Development.

My favorite case was when someone moved a single component from on-prem ("ground") into a Lambda... which called back down to a ground web service anyway.

But hey, they were able to add AWS Lambda to their resume, right?

I came on board after the guy left, and got props from the business for "lowering our cloud spend" when I ripped the Lambda back out.


Now you can add cloud resiliency to YOUR resume.

Win-win.


I billed it as cloud optimization, but at least I have receipts for that (and have done more notable work in that area as well).


"No Kubernetes because the idle cluster cost with nothing in it is larger than the total cost of my current setup" has been the case for quite a while. I wonder whether it is anywhere on the k8s priority list to fix this. Even K3s consumes a lot more than, say, Docker Swarm for more or less the same job.
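For a monolith like the one described upthread, the Swarm equivalent really is small: a single stack file deployed with `docker stack deploy`. A minimal sketch, with the image name and ports as illustrative placeholders:

```yaml
# docker-stack.yml -- hypothetical example; deploy with:
#   docker swarm init
#   docker stack deploy -c docker-stack.yml app
version: "3.8"
services:
  app:
    image: registry.example.com/app:latest  # placeholder image
    deploy:
      replicas: 2          # two copies behind Swarm's built-in routing mesh
      update_config:
        order: start-first # rolling update: start new task before stopping old
    ports:
      - "80:8080"          # publish container port 8080 on the node
```

The idle overhead here is just the Docker daemon itself, which is part of why the comparison with an empty Kubernetes control plane comes out so lopsided.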




