Hacker News

Really, "a million other supporting services and all the systems that come with it" ?

You get a server with SSH; then you need something to expose your container stacks over HTTPS, like Traefik (which auto-configures), and something for alerting, such as Netdata (which also auto-configures!). Both are just a single binary to set up, and it probably won't take long before you have scripts to automate that, like we do[0]
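As a sketch of how little configuration that takes: a minimal docker-compose file, assuming a placeholder domain and email (not from the original comment), where Traefik watches the Docker socket and issues Let's Encrypt certificates for any container that opts in via labels:

```yaml
# Hypothetical minimal setup; app.example.com and ops@example.com are
# placeholders. Traefik reads container labels from the Docker socket
# and requests certificates automatically.
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=ops@example.com
      - --certificatesresolvers.le.acme.storage=/acme/acme.json
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./acme:/acme

  app:
    image: nginx:alpine        # stand-in for your own service
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le
```

With that running, putting a new service behind HTTPS is just a matter of attaching the same few labels to its container.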

Not only do you get amazing prices[1] but ...

You get to be part of an amazing community running the world on Free Software

But yeah, maybe I'm "missing the bigger picture" by "not locking myself in proprietary frameworks"

[0] https://blog.yourlabs.org/posts/2020-02-08-bigsudo-extreme-d...

[1] https://oneprovider.com/




I'm a huge fan of open source and open standards, in fact I always push for and expect portability and avoid proprietary systems where possible. Abstractions like Kubernetes are a fantastic middle ground to provide portability across platforms whilst taking advantage of cloud provider services where they exist. The same for apps and frameworks built on open standards like Kubeflow and Apache Beam.

The supporting services and systems come when you run services that require strong guarantees for reliability and resiliency, and meeting the needs of different lines of business.

If I think of a mid-size company that wants to run these kinds of workloads and demands minimal downtime, resilience against local disaster and minimal data loss:

- Customer-facing applications run in a reasonably scalable manner, meeting peaks and troughs of demand without needing to size for peak demand

- CRM/ERP systems to manage customer data, payments, sales processes and inventory that CANNOT have corrupted or lost data

- Data platforms for running reasonable-scale analytics and analysis on reasonably sized volumes of data (say a few petabytes accessible online, analysing tens of terabytes per query)

- Capability for mid-level machine learning and access to modern acceleration hardware: up-to-date GPUs and maybe some NVIDIA Ampere-class equipment

- Tools and platforms for operations and security that can capture, store and analyse all the logs produced by all those systems, plus some half decent cyber services - network level netflow analysis, maybe IDS if you are feeling fancy, endpoint scanning and analysis, threat intelligence capabilities to correlate against all of that data

- Tooling and platforms for developers - source control, artifact repositories, container registries, CI platforms like Jenkins ideally with automated security scanning integrated, CD for deployments like Spinnaker to canary and deploy your releases safely

- Networking for all of that equipment, ideally private backbones, leased Ethernet or MPLS - and all of that needs to be resilient, redundant and duplicated

- Storage for all of the above that meets performance and cost needs, replicated, and backed up offline

Yes, you can do all of that yourself! But let's be clear: buying a server on eBay is not even a fraction of 1% of the reality of running real infrastructure for real systems for real businesses. There ARE reasons why you might do this, but they are increasingly the exception, driven by extreme scale, regulatory and privacy requirements (typically data sovereignty) or very unusual hardware requirements.


I'm not talking about /buying/ a server, but renting one as a service.

99.9% uptime is more than enough for 99.9% of projects, and that's easy to achieve with one server; k8s is not necessary here. You're not concerned with MPLS or whatnot when you rent a server.
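For a sense of scale, the downtime budget behind those numbers is simple arithmetic (a quick back-of-the-envelope sketch, not tied to any particular project):

```shell
# Downtime allowed per year and per 30-day month at a given availability.
for a in 0.999 0.9999; do
  awk -v a="$a" 'BEGIN {
    printf "%s -> %.1f hours/year, %.1f minutes/month\n",
           a, (1 - a) * 365 * 24, (1 - a) * 30 * 24 * 60
  }'
done
```

Three nines buys you roughly one working day of outage per year; it's chasing the extra nines that starts to justify the multi-node machinery.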

I can tell because I'm actually running governmental websites on this kind of server, with over a thousand admins managing thousands of user requests. I've been deploying my code on servers like that for the last 15 years and it has worked out great; I also have fintech/legaltech projects in production, and much more.

I guess the project you're describing falls more in the 0.1% of projects than 99.9%.


The issue for me is that 99.9% uptime isn't a useful or meaningful metric. End users only care about the experience, and if the application isn't performant, reliable and durable it doesn't matter if the lights are flashing - they will tell you that it's not working as intended. And when you rely on SLAs from third-party providers, the liability is not equally shared; they might credit you some % of your bill if it's offline, but your reputational impact and opportunity cost are likely orders of magnitude greater. You also can't control when that 0.1% will happen, and more often than not it's going to happen at the worst possible time (payroll dates, reports due, the board needing statistics, etc.).

Mitigating these failures will always lead you down the path of replication, load balancing and high availability, or at the very least frequent backups and restore strategies. And all of that is going to need to be done across multiple physical locations, because I am never going to stake my reputation on a single physical site not losing power, connectivity or cooling. Now you are in the realm of worrying about network reliability and bandwidth availability for those replication and backup services, in a way that doesn't impact user applications. And monitoring all of that, managing failures, etc. etc.

As someone who helps organisations with their IT strategy and overall budget allocation, the focus is always on delivering reliable applications to customers and business users. Using a cloud provider lets us ignore all the complexity behind the scenes that requires significant investment in people and resources to manage once you hit a non-trivial scale. Paying a premium for that is absolutely worthwhile compared to the downside of it going wrong and the opportunity cost of wasting time on minute details that do not add value.

[Edit] And for context, I DID use to buy servers on eBay for testing and development, then migrated to bare-metal colo, and all the while I thought I was winning and it was cheaper. But over the years I've experienced enough issues and worked with enough companies to understand it was a false economy; I now see the error of my ways and try to help others avoid them.


Have you seen link [0] in my comment? Automated backups are of course a big part of the plan, but replication is not an alternative to backups in my book anyway.
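For the record, the backup step itself only takes a few lines of shell. A generic sketch (placeholder names and paths, not the actual bigsudo scripts): a helper that writes a dated, compressed copy of a dump into a destination directory, which in production would point at a second disk or an offsite mount:

```shell
# backup_copy SRC DEST_DIR: write a dated, gzip-compressed copy of SRC into
# DEST_DIR and print the resulting path. DEST_DIR is a local directory here
# for illustration; in production it would be a second array, an offsite
# mount, or a staging area that rsync then ships to another site.
backup_copy() {
  src=$1
  dest=$2
  out="$dest/$(basename "$src").$(date +%Y%m%d).gz"
  gzip -c "$src" > "$out"
  echo "$out"
}

# Typical use after a database dump (pg_dump here is a placeholder command):
#   pg_dump mydb > /var/backups/mydb.sql
#   backup_copy /var/backups/mydb.sql /mnt/backup-array
```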

I'm not talking about buying servers and colo, but about renting servers as a service[1], where you get the benefits of dedicated hardware without the inconvenience.

The added security we get by "not sharing our hardware"[2] also deserves a mention here.

[0] https://blog.yourlabs.org/posts/2020-02-08-bigsudo-extreme-d...

[1] https://oneprovider.com/dedicated-servers-in-north-america

[2] https://media.ccc.de/v/33c3-8044-what_could_possibly_go_wron...


Yes, I read your blog, and it seems like you've written plenty of shell scripts and utilities to abstract and automate away infrastructure, but it feels a lot like reinventing the wheel when all of this, and much more, is available from major cloud providers as a service.

One item that stands out for me is that your backup is a couple of shell scripts, and you mention that you would dump your database to a different RAID array. That means you are now on the hook to procure or rent, manage, update and monitor that RAID array. And you even call out that you don't include offsite backups, so you are at risk of total loss, because you are using a single physical site for your prod data AND your backup data.

You mentioned above that you are "not locking myself in proprietary frameworks" - but in the process you have built bespoke, one-off scripted systems. If you leave or your consulting engagement ends, it will be very hard for someone else to take over and maintain them, because your design, configuration and implementation are effectively lock-in to YOU as a person and your consulting company.

Personally, I would rather trust a cloud provider to offer something like backup as a service, where they handle geographic replication, snapshots and restores for me and deal with all the disk replacements, hardware monitoring and network fun that comes with it. The human cost of moving to another cloud provider is not that large, and I can easily hire a person or consultancy that knows Cloud Provider A and Cloud Provider B to make that transition, because their services and systems are well documented, conform to a contract, and there are training and certifications for how they work.

I still hold my opinion that taking advantage of services offered by cloud providers is value add in the context of running a business.

Also I would much rather trust a cloud provider with a big team of security experts to run my infrastructure than a random company renting me some servers. If you are getting them as a service then there is still a shared admin control plane, likely management type networks and infrastructure around it that are managed for you by a third party. Trusting their team, processes and security capabilities is a very high bar to meet.



