
To be fair, 99.77% uptime isn't very good.



That isn't fair at all.

It's 99.770% for a single month, immediately following a major event. If you sampled yesterday (or tomorrow, assuming no further issues), it would be higher. If you just look at today, it's much lower, at 95.871%. If you assume no other availability issues over the last 12 months (not true, but the point remains), then it's 99.981%. During the actual outage, availability was an unacceptable 0%.
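If you want to sanity-check those figures, the arithmetic is straightforward. Here's a rough sketch in Python, assuming the status page simply computes uptime divided by window length; the ~99-minute outage duration is backed out of the monthly number, not an official figure:

    # Back-of-the-envelope availability math.
    # Assumes availability = 1 - downtime / window, with the recent
    # outage as the only downtime (duration inferred, not official).
    MIN_PER_DAY = 24 * 60

    def availability(downtime_min, window_days):
        return 100 * (1 - downtime_min / (window_days * MIN_PER_DAY))

    outage_min = (1 - 0.99770) * 30 * MIN_PER_DAY  # ~99.4 minutes

    print(availability(outage_min, 30))   # 99.770  (this month)
    print(availability(outage_min, 365))  # ~99.981 (12 months, no other issues)
    print(availability(59.5, 1))          # ~95.87  (today: about an hour of trouble)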

Unfortunately, they don't provide 12-month stats, which is what you typically want if you're going to start calculating nines of availability.


Hey, are you seriously defending 3 9's of uptime? That's abysmal.

Github, if they're honest about their 12-month uptime levels, would be lucky to be a single 9 service. Their uptime is Terrible with a capital T. But you know what? Until there's something better, everyone is going to keep using them, right?

Great services with value that's hard to find elsewhere become damn near irreplaceable even with terrible uptime. This is an obvious place to compete: if you made a Github clone that simply stayed online, you could win market share during every outage. However, cloning Github would not be trivial.

And therein lies the problem and the answer to why we accept their terrible uptime levels. They give us something we can't get elsewhere: social coding and easy centralization.


Are you seriously incapable of distinguishing "hang on, you are getting numbers that look bad using statistical chicanery" from "Github = teh awesome"?

Pointing out that someone whose point I agree with is using bad math as evidence is not disagreeing with the point; it's asking that people who agree with me behave like honest, civilized human beings. I don't care that you've already gone through the hassle of getting your pitchforks out of storage.

Speaking of which... your accusation that they are lying means that Github has had nearly 37 days of total outage this year, that is, that they're down for two and a half hours a day, every day, for a year straight. And by "honest," I assume you mean "they are lying," as opposed to "they are using a different definition of uptime than I would like." Naturally, you have some evidence for these claims, right?


Lying is a bit of a strong word. I think calling anything a single 9 service should be taken in jest. It's pretty hard to be down almost 40 days and still be in business.

Also, technically something with 98.9999% uptime would still be a single 9 service...


I agree with the rest of your comment, but... a single nine service? You think they have 36.5 full days of downtime yearly, 3 full days monthly, 16.8 hours weekly, or 2.4 hours downtime every day? That's certainly not the case.

Not to mention that often when they have issues it only affects a subset of customers.

https://en.wikipedia.org/wiki/Nines_(engineering)
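For anyone who wants to check the conversion, here's a rough sketch in Python of how that table falls out, and of why 98.9999% still counts as a single nine, as the comment above notes (the small epsilon and the rounding are my additions to dodge floating-point noise at exact boundaries, not part of any standard definition):

    import math

    HOURS = {"year": 8760, "month": 720, "week": 168, "day": 24}

    def nines(pct):
        # Nines = number of leading 9s: 90% -> 1, 99% -> 2, 99.9% -> 3.
        return math.floor(1e-9 - math.log10(1 - pct / 100))

    def allowed_downtime_hours(pct):
        return {k: round(h * (1 - pct / 100), 3) for k, h in HOURS.items()}

    print(nines(98.9999))  # 1 -- nearly 99%, but still a "single 9"
    print(nines(99.77))    # 2
    print(allowed_downtime_hours(90.0))
    # {'year': 876.0, 'month': 72.0, 'week': 16.8, 'day': 2.4}
    # 876 hours/year = 36.5 days; 72 hours/month = 3 days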


3 9s is abysmal?


It's about a quarter of the downtime my company's little project server has had in the past month, and that's with SVN, not git, so (with no local commits possible while the server is down) workflow was even more seriously disrupted.



