Didn't GitHub post on their blog recently [1] about how they were improving and reducing failure detection times? It doesn't speak well for them that the outage hit Hacker News before their own status page.
tl;dr: > One of the biggest customer-facing effects of this delay was that status.github.com wasn't set to status red until 00:32am UTC, eight minutes after the site became inaccessible. We consider this to be an unacceptably long delay, and will ensure faster communication to our users in the future.
EDIT: "We're investigating some issues with our databases".
I have the feeling that this is happening more and more these days. And it's a major problem when a large part of your infrastructure depends on services like GitHub (Composer, etc.)
It's kind of ironic when one realizes that one of the major design goals of git was to be distributed, to reduce dependency on a single point of failure.
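And git itself does make routing around a dead host fairly painless. A minimal sketch (repo URLs are hypothetical) of registering a second push target so one host going down doesn't block your pushes:

    # Register both hosts as push URLs on "origin", so a single
    # "git push" updates both. Note: the first --add --push replaces
    # the default push URL, hence listing GitHub explicitly too.
    git remote set-url --add --push origin git@github.com:user/repo.git
    git remote set-url --add --push origin git@gitlab.com:user/repo.git
    git push origin master

Clones, history, and branches survive an outage just fine; it's everything else hosted on github.com that doesn't.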
That's definitely true, but PRs, comments, and bug reports are not distributed, nor are many of the bridges between GitHub and external tools (issue trackers like JIRA/Trello, build servers). This might seem pedantic, but it creates an asymmetry: commits and branches are distributed, while PRs and comments are not.
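There is a partial mitigation for the code half of that, at least: GitHub exposes each pull request's head commit as a ref, so you can mirror those locally too. A sketch:

    # Fetch every pull request head into a local ref, e.g. origin/pr/42,
    # using GitHub's refs/pull/*/head namespace. Review comments and
    # issue threads aren't stored in git at all, so they can't be
    # mirrored this way.
    git fetch origin '+refs/pull/*/head:refs/remotes/origin/pr/*'

The discussion and review metadata, though, live only in GitHub's database, which is exactly the asymmetry you describe.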
I dunno, I'd see it as the opposite. The bigger the scale, the more cost-effective it is to build in redundancy. I can't remember the last time Google Search wasn't working, or the last time I saw news about it being down.
The fact that GitHub keeps going down while being a huge business shows they still need to invest in redundancy, something that's expected of a service at their scale.