
There's a middle ground here that's far more reasonable, and it seems to be a critical missing piece at OP's company: disaster recovery. Ask: what's the minimum we need to do to keep the business running when our provider(s) disappear or go down for an extended period of time? Then implement those steps.

When you pay someone else to handle your data, there is a lot that can go wrong. GitHub could go down, they could lose (or corrupt) your repos, they could accidentally delete your account. The nice thing about git is that it's absolutely trivial to clone repositories. There is _zero_ reason not to have a machine or VPS _somewhere_ that does nightly pulls of all of your repos. When GitHub goes down, you'll lose a lot of functionality, but at least you have access to the code and can continue working on the most urgent things.
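A minimal sketch of what that nightly job could look like, assuming a plain text file of clone URLs and a mirror directory on the backup box (both paths are made up for illustration):

    #!/usr/bin/env python3
    """Nightly mirror of every repo listed in repos.txt (one clone URL per line)."""
    import subprocess
    from pathlib import Path

    MIRROR_ROOT = Path("/srv/git-mirrors")  # hypothetical destination
    MIRROR_ROOT.mkdir(parents=True, exist_ok=True)

    for url in Path("repos.txt").read_text().splitlines():
        url = url.strip()
        if not url:
            continue
        name = url.rstrip("/").split("/")[-1].removesuffix(".git")
        dest = MIRROR_ROOT / f"{name}.git"
        if dest.exists():
            # existing bare mirror: fetch all refs and prune deleted branches
            subprocess.run(["git", "-C", str(dest), "remote", "update", "--prune"], check=True)
        else:
            # first run: create a bare mirror clone
            subprocess.run(["git", "clone", "--mirror", url, str(dest)], check=True)

Run it from cron (or a systemd timer) and you have an off-provider copy of every repo that's never more than a day old.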

I'm not clear on the details, but OP's issue seemed to be a broken CI system. At its heart, CI is just the automatic execution of arbitrary commands. Every repo (or project consisting of multiple repos) _should_ have documentation for building/testing/deploying the code outside of whatever your CI system is. If your source of truth for how to use your code is the CI system itself, then your documentation is lacking and, yes, you are susceptible to outages like these.
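One way to keep that documentation from rotting is to make it executable: put the actual build/test commands in a small script that lives in the repo, and have the CI config do nothing but call it. A rough sketch (the step names and commands are placeholders, not anything from OP's setup):

    #!/usr/bin/env python3
    """build.py -- single source of truth for build/test/deploy steps.
    CI invokes this script; developers can run exactly the same thing locally."""
    import subprocess
    import sys

    STEPS = {
        "build":   ["make", "all"],            # placeholder build command
        "test":    ["pytest", "-x", "tests/"], # placeholder test command
        "package": ["make", "dist"],           # placeholder packaging step
    }

    def run(step: str) -> None:
        cmd = STEPS[step]
        print(f"==> {step}: {' '.join(cmd)}")
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # run the requested steps, or all of them in order
        for step in sys.argv[1:] or list(STEPS):
            run(step)

If your CI provider disappears for a day, the same script runs on any laptop or spare VM that has the right credentials.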




CI build processes often require credentials, sometimes ones that sit in a sort of twilight zone¹ for devs. IIRC, GitHub doesn't provide a straightforward way to pull those credentials back out.

¹e.g., the devs "don't" have access to the credentials, except they're in the CI workflow, so technically they do. But I've worked at a number of companies where security will happily bury their head in the sand on that point.
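For what it's worth, the asymmetry is visible in the API itself: you can list Actions secret names and overwrite their values, but there's no call that returns a value. A quick sketch against the REST API (owner, repo, and token handling are placeholders):

    #!/usr/bin/env python3
    """List GitHub Actions secret names for a repo -- names only, never values."""
    import os
    import requests

    OWNER, REPO = "example-org", "example-repo"  # placeholders
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/secrets",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for secret in resp.json()["secrets"]:
        # the API exposes name/created_at/updated_at; the value itself is write-only
        print(secret["name"])

So for disaster-recovery purposes, the secret values have to live somewhere you control as well, with GitHub treated as a consumer of them rather than the source of truth.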


To the extent that works, it's not really a middle ground so much as choosing "run it locally" for the things you really need, and "kick back and relax" for the nice-to-have but nonessential stuff.

Because if the things you really need actually keep running when your provider(s) disappear or go down for an extended period of time, you're running them locally anyway, and might as well get the benefits of that effort all the time.


Heck, your entire host could go down; what do you do then? If it's not a major infrastructure provider like Azure, AWS, or GCP (I forget if that's what they call theirs), then you're kind of SOL. Outages can and will happen; the question is, how bad is the next one? If they're too frequent, you have to evaluate whether it's really the provider or your application. If it's the former, you may want to consider a new host, or get with your hosting provider and have them figure out why the outages keep happening.




