GitHub is down (githubstatus.com)
410 points by tomduncalf on July 22, 2019 | 202 comments



I sometimes look forward to outages like this just so I can read the post-downtime-resolution blog post that almost always follows. I find reading about how companies deal with issues when shit hits the fan to be really interesting.


Agreed. I almost always dial into any large-scale events at my company - regardless of whether they concern me directly - to listen to how people critically evaluate data, triage errors, and determine how to mitigate the problem(s).

Whatever skill level I have in this area, I cannot nail down whether it comes from multiple "baptism by fire" situations, what I learned naturally via university courses in the sciences, strategy video games (half joking), or other sources I can't think of.

I've listened to some very impressive people handle serious crisis situations, and I'm both in awe of and curious about how they achieved that level of deductive reasoning.


The same way anyone gets good at anything: practice. Knowledge of the system + experience in high pressure situations and you get pretty good at solving those problems over time.


I agree. I made a website which breaks down failures and resolutions.


link?


Probably not what he's talking about but there's a nice collection here: https://github.com/danluu/post-mortems


Although it's nice that Git is Git and we can all mostly still work, it still seems foolish to rely on a single point of failure like Github. I've been toying with the idea of creating a tool that would map the Git API to work with two or more hosting services at the same time. The effect would be something like: run "git push" and it pushes to Github, Bitbucket, and Gitlab. I can't imagine something like this would be too difficult, and it would eliminate having to twiddle your thumbs while you wait for things to come back up.


You mean like?

    git remote set-url --add --push origin git@github.com:Foo/bar.git
    git remote set-url --add --push origin git@gitlab.com:Foo/bar.git
:-)

see: https://git-scm.com/docs/git-remote#Documentation/git-remote...


I will be the one to remind everyone that a Git server doesn't have to be gitea/gogs/gitlab/onedev/pagure/git-remote-{keybase,s3,codecommit,...} - you can provide a path to a WebDAV server[0][1][2] if you need a very simple Git server (e.g. served internally over 192.168/16 on an office network, say a Raspberry Pi with an external USB drive, or a temporary repository share from your own laptop to your colleagues).

[0]: https://cets.seas.upenn.edu/answers/git-repository.html

[1]: https://blog.osdev.org/git/2014/02/13/using-git-on-a-synolog...

[2]: https://git-scm.com/book/en/v1/Git-on-the-Server-The-Protoco...


Or you can just provide a URL to any host with an SSH server and Git installed. You only need `git init --bare /some/path` on the server and `git remote add origin myserver:/some/path` on clients. If the repo is used by multiple users, you'll need the `--shared` flag.
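
For example, a minimal sketch (hostname and path are just placeholders):

    # on the server
    ssh myserver 'git init --bare --shared=group /srv/git/project.git'

    # on each client
    git remote add origin myserver:/srv/git/project.git
    git push -u origin master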


TIL how easy this is! Thank you.


Yes, but you can also push/pull from a filesystem location. To be able to push to it, it's simpler to init the repo with `git init --bare`.

I've used this on NFS drives, but also SMB shares from windows, and just about anything that can be mounted to a folder. Having an external hard disk drive or usb stick also works.

And lastly, git also comes with a daemon mode which makes it easy to temporarily host a server for a repo. Just connect multiple laptops through Wi-Fi and work together (with a pull workflow rather than a push workflow). That's quite useful [1]

[1]: https://stackoverflow.com/questions/377213/git-serve-i-would... Further reading: https://git-scm.com/book/en/v1/Git-on-the-Server
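
For the ad-hoc case, something roughly like this should do (the IP is whatever your laptop gets on the Wi-Fi):

    # serve every repo under the current directory, read-only, over git://
    git daemon --base-path=. --export-all --reuseaddr --verbose

    # colleagues on the same network can then pull from you
    git clone git://192.168.0.42/myrepo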


The number of alternatives to the popular Git repository hosting services is indeed awesome.

> Yes, but you can also push/pull from a filesystem location. To be able to push it, it is simpler to init the repo with `git init --bare`.

Personally, I have built my own simple solution to sync encrypted files over the Internet using Git with a git remote that uses a filesystem location. The implementation evolved over time, but the initial idea[0] was to combine restic, pass, and git with simple scripts to pull/push the git-remote repo (located in /tmp/repos) to an S3 bucket via restic, which takes care of upload, deduplication, and encryption. Thanks to restic I also don't care much if I commit to a stale (outdated) master branch, because it uses snapshots and it's quite easy to navigate between them.

[0]: Year-old PoC of encrypted repository share with B2 as a storage: https://gist.github.com/piotrkubisa/dece2fc71399efa56e2d0b8f...


Hah, today I learned! It's a great feeling when you don't have to reinvent a wheel, thanks.

Now I'm just annoyed that more teams I've been on haven't set this up!


Now that you've solved your problem, let me guess your next question:

I have two git repositories which somehow got into an inconsistent state: How can I reconcile changes in both repositories and resolve conflicts between mutable-metadata (branches, tags) in a sane way?


Tags are going to be ugly, but branches can simply be merged like any other.

Alternatively, you decide that one of the repositories is the primary one, set up a remote called `mirror` and set up a `post-receive` hook to run:

    git push --mirror mirror
Now just ensure no one pushes into the mirror directly. Of course this only works if you control the primary repository.
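
Concretely, in the primary (bare) repository that would be something like this (a sketch; it assumes the `mirror` remote is already configured there):

    cat > hooks/post-receive <<'EOF'
    #!/bin/sh
    exec git push --mirror mirror
    EOF
    chmod +x hooks/post-receive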


Wasn't the original point to be able to push to the second copy when the first is down? What's the point (other than backup) of a second "working" copy that you can't use?


A repo admin or script could enable push permissions to the mirror while the primary is down. Then, when the primary is back up, fast-forward it and change back. Or just allow pulling from it and wait to merge until the primary is fixed.


Branches: Let each branch owner deal with that. They likely have the most information about it. Create a new temporary branch, merge both sides into it and see what happens.

Tags: Don't have a process which can result in tags pushed into different places. It's a path to madness. Same applies to master/release branches.


> Same applies to master/release branches.

Yes. This was the case I had in mind. Path to madness.


You can just push and pull directly to/from your colleagues' computers. The main advantages (for an established team) of github/gitlab/bitbucket are pull requests, issue management, CI, etc., and those aren't easily synchronized across multiple providers.


Me too. That's awesome: I've just suggested it to our team. Thanks for sharing GP!

One serious question though: how do you deal with PRs when you do this? That's one area where it feels like things could be quite messy, especially if you have quite a few PRs going in throughout the day.


There have been various proposals over the years for how to integrate issues and reviews in the distributed git tree itself (http://dist-bugs.branchable.com/software/), but I don't think any of them have really gone anywhere, certainly not in terms of support by the hosted git vendors.

Having looked briefly into it now, git-dit does look promising in its approach. I'd be interested to hear from someone who had actually used it and bumped up against the limitations: https://github.com/neithernut/git-dit/blob/master/doc/datamo...


There is git-appraise to fill that gap [1]. I am personally waiting for a federated "forge" for federating PRs across platforms, such as the one developed in [2]. Maybe via e-mail? [3].

[1]: https://github.com/google/git-appraise

[2]: https://github.com/forgefed/forgefed

[3]: https://drewdevault.com/2018/07/23/Git-is-already-distribute...


Pull requests could be done through email using the git format-patch, git send-email, and git am commands.
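
Roughly like this (addresses and paths are made up):

    # contributor: turn the last 3 commits into a patch series and mail it
    git format-patch -3 --cover-letter -o outgoing/
    git send-email --to=maintainer@example.com outgoing/*.patch

    # maintainer: apply the received series
    git am --3way ~/incoming/*.patch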


Merge the PR on github, pull to your local copy (now you're ahead of one of the urls of origin), push (and it should just push to the origin that's behind)

If you have any discrepancies between them, you'll need to merge locally of course.


You could merge the feature branch into master (or whatever), resolve conflicts, commit, push and deploy.


> Now I'm just annoyed that more teams I've been on haven't set this up!

For most people, it would be just a read-only copy. And the value of that is fairly small.


Wait, what?! You can have 2 different urls for the same remote so a single push will push to both?


Yes, exactly. Here's an example for a repository hosted on my server and in Keybase Git. Pulls / Fetches will use the repository on my server. Pushes go to both.

    [timwolla@/s/xxx (master)]g remote show origin
    * remote origin
      Fetch URL: git@git.example.com:xxx.git
      Push  URL: git@git.example.com:xxx.git
      Push  URL: keybase://private/timwolla/xxx
      HEAD branch: master
      Remote branch:
        master tracked
      Local branch configured for 'git pull':
        master merges with remote master
      Local ref configured for 'git push':
        master pushes to master (up to date)


Neat! I was also unaware of this.


Whoa, cool, Keybase should definitely have this mentioned on their own page.


Is this safe? The Git docs explicitly say not to do this:

> Note that the push URL and the fetch URL, even though they can be set differently, must still refer to the same place. What you pushed to the push URL should be what you would see if you immediately fetched from the fetch URL. If you are trying to fetch from one place (e.g. your upstream) and push to another (e.g. your publishing repository), use two separate remotes.

which seems to imply that weirdness might happen if the two happen to get out of sync, or if one (specifically, the one pointing to the repository you're fetching from) fails.

For something that may be a bit safer, I believe it's possible (but haven't tested) to have multiple values for branch.whatever.pushRemote-- that should do the same thing, and has the added bonus of making the secondary remote easily fetchable.


I don't see how the parent comment does anything different from what's advised there? It's setting two push URLs for the same remote, not a push and a fetch URL. Presumably for fetch you would have a separate remote. I think the idea is that every time you push you push to both.


Cool.

But why not just have different remote names other than the default of “origin”? Somebody else on the thread mentioned that it might be a bit complicated to clean things up after an outage on such a “multiplexed” remote.


Git is fine, and the outage does not affect you and your team if you already have the source tree anywhere.

What it does affect is the ability to do code reviews, work with issues, maybe even do releases. All the non-DVCS stuff.


Actually the code review / issues are not necessarily non-dvcs. For example https://github.com/dspinellis/git-issue


> I can't imagine something like this would be too difficult

Pushing to all three isn't that difficult. The hard part is reconciling after one of them suffers an outage or partition.


git natively supports having multiple pushurls per remote[1], so you should be able to do this OOTB

[1] https://stackoverflow.com/a/14290145
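
After a couple of `git remote set-url --add --push` calls, the config ends up looking something like this (URLs are just examples):

    [remote "origin"]
        url = git@github.com:foo/bar.git
        fetch = +refs/heads/*:refs/remotes/origin/*
        pushurl = git@github.com:foo/bar.git
        pushurl = git@gitlab.com:foo/bar.git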


Isn't this the idea of multiple origins? You can already set up different origins, including filesystem origins (like Dropbox).


There's not much more to do here than create a quicker wrapper around remotes. The Git flow is 2+1 steps: add another remote, then push to that remote (the +1 is creating the remote). It would be cool to see it built into a Git plugin or wrapper!


At my last company, when our internet connection went down, a bunch of team members said they couldn't work because they couldn't get to GitHub. They were shocked to learn they could still collaborate by pushing their changes back and forth with other colleagues.

Perhaps what they really meant is that they couldn't get to stack overflow :-(


Github: the premier collaborative platform for FLOSS development, which is itself closed-source and made under a closed development process. Based on a distributed VCS, it has become a massive SPOF. As if that weren't strange enough, it was acquired by Microsoft. Was that the victory of FLOSS or its defeat? How can you tell?

WTF is going on?

Mood: "Am I going crazy, or is it the world around me?" ~ https://fishbone.bandcamp.com/track/drunk-skitzo


This falls under a very specific genre of Hacker News comment that has always struck me as profoundly uninteresting.

Github built something that inarguably made development better.

- They host some huge number (tens of thousands? hundreds of thousands? millions?) of repositories for free.

- They built Github Pages for people to ship websites and host those for free.

- They built an intuitive flow for managing issues and pull requests, which is included with every repository for free.

- They integrate freely and openly with all sorts of third-party services, some of which compete directly with them. That's quite uncommon for a for-profit software platform of their size.

- They have given back immeasurably to the programming community, to the open source community, and to developers including sponsoring conferences, donating office space for events, etc.

- Their developers widely share the work they do at/for Github and contribute upstream.

There is some argument that all of the above are self-serving for Github, which is proven false if you talk to a single developer at Github.

Do they adhere to every idealistic principle of FLOSS? No. But, honestly, who gives a crap?

So, to answer your question of WTF is going on: real, positive progress.


Except GP wasn't making any of the points you hand-waved away with "who gives a crap" (hint: all the people who complain about those things "give a crap"). They were saying that the fact that we even care GitHub went down at all is a problem with how much we've built our workflows around GitHub, when Git was specifically designed to avoid this problem.

Linux kernel development doesn't shut down if kernel.org is down (heck, when kernel.org was completely pwned it only delayed the kernel release by a week or two). People still send each other emails and even if a maintainer is AWOL, other maintainers will pick patches and route them through their own trees. The only problem with the kernel development system is that it isn't easy to on-board people. But if we had a better UX for this system it would be far superior in every respect. There was a recent post by a kernel dev on how this might be improved by building on top of Secure Scuttlebutt and having a nice implementation on top[1].

[1]: https://people.kernel.org/monsieuricon/patches-carved-into-d...


* This falls under very specific genre of Hacker News comment that has always struck me as profoundly uninteresting. *

Not sure why you had to lead with an irrelevant comment like that as the rest of your comment has interesting counter arguments.

I think Github did make development easier, but the fact that it's closed source and now owned by Microsoft gives me an uneasy feeling.

With that being said, the only thing that does make it ok is the fact that Git is decentralized.

If Github was an svn host I definitely wouldn't have been ok with it or hosted anything FOSS on it.


> Git is decentralized. If Github was an svn host I definitely wouldn't have been ok with it or hosted anything FOSS on it.

The most sticky part of Github is the social network that lives in issues, pull requests, and repository permissions. This is all entirely centralized, and it is scary how heavily the community relies on it.

So in practice, this results in just as closed a system as SVN. And it doesn't matter if you're "ok with it". GH is where all the people and projects are. If you want to participate, you have no choice.


* Github being closed-source doesn't really matter. Restrictive software licensing hurts the user of the software which in this case is Github Inc. and only Github Inc. As long as the protocol is open (which it is) whether hosted services that speak that protocol are open or closed source is immaterial. [1]

* Git isn't distributed. It's decentralized -- big difference. Distributed means there (usually) aren't any single points of failure. Decentralized means there are many single points of failure but failures are localized. Combine that with the reality that in most markets a "best option for most people" emerges and we get bigger and bigger points of failure.

* Microsoft has been trying to get into the social networking game for a while now and found their match made in heaven with one based on software. Time will tell if the recent Microsoft will be a good steward -- so far it's been pretty good.

* Definitely a victory. The dominant VCS is open. More people than ever are contributing to OSS. It's easier than ever to share code.

[1] Github's on-prem offering is a completely different story and is very much a problem that it isn't OSS.


Good that git is a distributed VCS. At least our master branch history is available on our local PCs, and we can freely work. Yupi.


"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." -Leslie Lamport


Do you do PRs? or have CI/CD pipelines hooked with GitHub?


Not being able to submit or review PRs for a day has much less impact than not being able to check out for a day.


I guess it depends on the definition of done of each team.

We normally can't complete tasks until they are code reviewed, so checking out locally would be analogous to working offline in a CVCS.


You can finish the first task as much as you can, then start on the next one...


The CI/CD pipeline could also be done with a post-receive hook on another remote.
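
As a toy sketch (the build script path is made up), a `post-receive` hook on that remote could look like:

    #!/bin/sh
    # post-receive gets "<oldrev> <newrev> <refname>" lines on stdin
    while read oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ]; then
            /usr/local/bin/run-ci.sh "$newrev" &   # kick off your usual build/deploy
        fi
    done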


Hope you don't have any dependencies hosted on GitHub :)


Java and JavaScript programmers solved this years ago. Maven Central and npm do the job :)


This is the perfect time to take a break. Kick back, have a coffee, contemplate your life choices. That commit can wait, that PR (i was about to merge) can wait too. It's not the end of the world.


Why not just keep working? That next commit will work just fine without a central server.


At some point there has to be centralization. We are deploying one product, not N products for N commits. If I can't push, I can't have CI build my branch, and I can't submit a PR for review.


But you could continue working and only have to apply minor patches forward if you later find your starting point needed to be fixed.

You don't have to wait for CI, just create a new branch and continue your work as though CI passed, and your PR was accepted. If that comes true you don't need to worry much, just rebase and continue.

On the other hand, if you find out you needed to make changes, do those on the branch where you made them and finish your PR/CI cycle. Then go to the new branch where you continued working, rebase, and continue.

Is there something I am missing?


Heaven forbid we find a reason to take a break


In the old days at least we had compiling. Not so much anymore.


No, today we have something else to wait for, it is called the runtime.


But users wait for that, I can get on with finding a new framework instead.


npm install


And when you need to delete your node_modules folder for the umpteenth time, it's a nice 5 minute break.


He said a break, not a vacation


But you can technically spin up another git server - you can spin up as many git servers as you need, and the same code will be there. I do understand what you mean though; been there. Organizations may not be happy to hear that they need multiple places to store code.


You can totally push to each other's dev machines as needed, so that covers reviews. And just get the person responsible for maintaining CI to fetch from everyone's machines and push to the build server, or you can configure it to pull from that person's PC if needed, depending on your CI.


It should be easy to move the center. Having a single 3rd party be the only central point you have available is asking for trouble.


Because we forget how to let go of all this work stuff and get sucked into it. There is a life out there too. Some of us can balance; some find it hard. This is an opportunity to just pause for a few minutes. Mentally pause.


Right. There's an entire world out there of personal projects and open source contributions you can be doing when not working.

People get so caught up in their daily coding they forget about all the other fun coding they could be doing outside of work.

It's a shame.


Or, not coding.


I think there was a joke in there ;-)


Of course, you know that, I know that, we all know it - but our bosses don't ;)


They are living in the technical stone age then. Time on keyboard does not equate to good productivity. Happiness, balance, freedom - that increases output.


Because programmers always need a break. Don't go giving our bosses ammo. It's compiling, it's uploading, Github is down. Have some compassion.


merging work with contractor right now...


That works too, believe it or not. Just add another remote and pull it.


I only coordinate my remotes through github. I don't use git in "decentralized mode".


That's not something you decide once at the beginning and get stuck with it. You can reevaluate that decision at any point and easily add more remotes to a git repo on the fly.


So, uh, it's not clear what exactly people are expecting me to do here. Wire up my repo to my contractors temporarily (not even sure how I'd do that as i don't have access to their machines or filesystems) then detach when github is back (it's already back)?


A single repo can have many remotes. Adding additional remotes does not prevent you from also using old remotes, nor do you ever have to remove a remote that you intend on using again in the future.

Obviously it's a moot point for you now, but getting comfortable with how git remotes works is something that could pay dividends for you in the future.


I have multiple remotes already and know how they work. What I'm asking is, if github is down, and I don't have ssh or other access to my contractor's repo on their machine, what is my remote supposed to attach to?

Also, the additional effort to manage multiple remotes is entirely nontrivial.


I believe git was originally designed to share commits via email.

Here is one of the top hits searching "git email patch"

https://thoughtbot.com/blog/send-a-patch-to-someone-using-gi...


They can always tar up their repo and send it over. You then unpack it on your end, then set it as a remote, pushing/pulling from it as you see fit. Admittedly, not the greatest workflow, but it doesn't require any new tools if you are already familiar with remotes.

As another alternative, git-format-patch/git-apply or git-bundle may be a quicker way of shipping changes around.
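
For the git-bundle route, something along these lines should work (branch names are just examples):

    # contractor: pack up everything on their branch that you don't have yet
    git bundle create feature.bundle origin/master..feature

    # you: verify the bundle and fetch from it as if it were a remote
    git bundle verify feature.bundle
    git fetch feature.bundle feature:contractor-feature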


1. Send a series of patches in an email.

2. Send a branch in whatever file exchange service you use. (Also formatted as patch series)

3. Setup ad-hoc vpn (zerotier?) and setup remote via SSH.

4. Push the branch into a private repository on a different service (private gitlab?)

5. Spin up a free t2.micro on AWS and push the branch there.

There are lots of options.


You don't need write access to a remote for that remote to be useful. Two or more people can collaborate using public repos that others may fetch from but not push to. Then, in lieu of github PRs, you request other people pull from you with email or however you like.

If you need access control without VPNs or whatnot (or if you simply want something closer to the github workflow), there are other services similar to github that can almost certainly provide what you need, such as gitlab.

> "Also, the additional effort to manage multiple remotes is entirely nontrivial."

With practice, I'd say it asymptotically approaches trivial.




A lot of people do that too.


Well, actually I spent the holiday weekend not working, and now I have a time window to work with our east coast contractors before they stop for the day (they don't work on weekends or after 5 PM), so having the system down isn't really the end of the world, but it will impact our productivity.


And reflect on why we keep using Git when we still work in the good ol' centralized workflow style of SVN and the like.


You've got to wonder what the regex that caused _this_ downtime is going to be


It's funny - I thought Pi-hole was somehow blocking GitHub, but dig returned good responses.

  dig raw.githubusercontent.com +noall +answer                         0 < 12:54:48

  ; <<>> DiG 9.14.3 <<>> raw.githubusercontent.com +noall +answer
  ;; global options: +cmd
  raw.githubusercontent.com. 6 IN CNAME github.map.fastly.net.
  github.map.fastly.net. 19 IN A 151.101.20.133

Naturally the first thing I did was check here, and nothing was posted. After 20 minutes of troubleshooting I thought it was DNSSEC screwing up, since after disabling it everything worked.

And now I come back here and see this.. :D


Thankfully, like virtually all major tech companies, neither GITHUB.COM nor GITHUBUSERCONTENT.COM are DNSSEC-signed, nor are they ever likely to be.


Just came here to post this. I was reading through the css-modules docs[1] and am now stuck. I can't see anything other than the main README. Anyone know of a mirror?

[1]: https://github.com/css-modules/css-modules



Kinda irritating when a binary you need is only hosted on github (...together with the files to build from source).


I'm in the same position trying to build Neovim, but I have all the sources downloaded already since libmpack always times out. Maybe someone browsing this thread who can't work right now can help: does anyone know how to force/suggest to CMake that the dependencies it needs are already downloaded and sitting in some src directory?


Even worse if libraries / packages are only hosted on Github. Then the main ecosystem of a language dependent on those packages can grind to a full stop from there.

I'm looking at all of you CocoaPods, SwiftPM, npm, Cargo, vgo.


Insanity. Having an emergency mirror on standby is not hard when it's critical infrastructure. Surely npm/Cargo/etc.. could do this.


I think I'll set this up now https://github.com/local-npm/local-npm


The npm registry itself does not depend on GitHub.

It's true that most developers rely on GitHub for pushing to the registry, but there are a handful that use GitLab or self host that would be totally unaffected.


AdoptOpenJDK builds are hosted there too; found out about the outage trying to download one.


I heard that Maven is also considering moving its central repo to github.com.


I'm just wrapping up the work to migrate my company away from Gitlab to Github and this happens. I did it because I figured Github has to have better reliability / uptime than Gitlab. Someone joked that as soon as the migration is done Github will have some major downtime.

sigh


I highly recommend running at least a local, self-hosted git mirror at any tech company, just in these cases. Gitolite + cgit are extremely low maintenance, especially if you host them next to your other production services.

Not to mention, if you go the self-hosted route you can use Gerrit, which is still miles better for code review than GitHub, Gitlab, Bitbucket and co.


You don't even need gitolite, if you're going the self-hosted route:

    apt install git-all

is enough to host your own git server. Put it behind a firewall to limit access and use standard linux users with ssh keys for access control if you don't need anything fancy. For small companies I'm not sure you need anything else. Of course if you need different levels of access etc then you'll need more sophisticated tools, but many people won't.

Code review I do using local tools (the editor) face to face, again not sure you need an online service for that unless you're a larger company with lots of developers coordinating (in which case it becomes pretty essential).


> Code review I do using local tools (the editor) face to face, again not sure you need an online service for that unless you're a larger company with lots of developers coordinating (in which case it becomes pretty essential).

I mostly like online code review services because they offer an audit trail and semantic history that's easier to navigate than email. And of course, to let CI automation check tests, coverage and lint. Not because I don't trust my coworkers, but because otherwise I would forget to run tests and lint myself.


Lots of different ways to do it, and of course github and online code review is tremendously useful to people, particularly on large projects with lots of collaborators, and where a history of reviews is required.

For lots of small projects though, it's perhaps not as necessary as people think. I run tests and linting locally on save and don't really use the code review/CI features of online hosts much. That won't suit everyone of course, but it is one possible path.


Could you explain why Gerrit is better than the rest for code reviews? I have not used Gerrit in years (before I ever used github PRs) and I guess I don’t miss it, but also don’t know what I am missing :)


For me there are two killer features:

1) dependent reviews/change requests. I will work on some feature, submit it for review as one CR, and then I can immediately start working on a feature that depends on that. When I submit this one for review, it will be always shown as dependent on the first, and show a diff against master after the first is merged. This also means you can split large changes into multiple CRs, have them reviewed (possibly independently), then submit them all at once. It makes changes across large repos fantastically easy.

2) very powerful rule engine for approvals. It's based on Prolog, and basically allows you to define arbitrary, Turing-complete rules on what labels, added by whom, must be present on a CR for it to be submittable. Using the 'owners' plugin, you can also make it depend on OWNERS files that define ownership in subtrees of the repository. This can lead to rules like 'product A must be approved by an owner of A but cannot be self-approved; in addition, someone who is fluent in the languages used must approve it, but that can be self-approval'.

Without those two working in Git monorepos is painful. And since I like monorepos for other reasons (like ease of deployment and testing), I like Gerrit, too :).

It also offers, in my opinion, a much better UI for actually reading and commenting on code. High contrast, fast keyboard navigation, marking of files as reviewed and a very readable history of patchsets, comments, approvals, etc.

The learning curve is much steeper than a GitHub PR, as it's a somewhat weird abstraction (CR/patchset vs git commit/branch), but in my opinion it's worth it. I guess it's my general tendency to use less beginner friendly but more powerful tools. ^^


Github's PRs are pretty bad at letting you comment on code near the diff lines (you can do it if it's within 5 lines, but if you have to click to expand the entire file, you can't comment on the expanded parts). I also like how Gerrit lets you comment on specific parts of the line, rather than the entire line.

Finally, I'm a big fan of the various labels that are common. +2 Code Review means I reviewed the code, +1 Verified means that I ran it and it worked. Those are different things and having to have both makes the responsibility clear, even if the author is adding +1 Verified.


> dependent reviews/change requests

This is really nice in Gerrit. On GitHub you can simulate this by changing the base of the PR yourself, but it's not as smooth an experience.


you can also self-host the GitLab community edition :D


>I did it because I figured Github has to have better reliability

Have you looked at the GitHub incident history? https://www.githubstatus.com/history


[flagged]


HN is not typically receptive to snide corrections/remarks. There are several more productive ways to address a minor capitalization error - the best of which is to not correct it at all because it's so trivial and unimportant.


Besides, why would I care if how I capitalize a company's name isn't perfectly in line with their marketing style guide? Github is lucky if I even feel like capitalizing the first letter.

What a weird thing to hold over others.


If we are being pedantic they were both spelled correctly.


>I did it because I figured Github has to have better reliability / uptime than Gitlab

Why the hell would you think that after the Microsoft acquisition?


Github goes down on Monday around 9am pacific time - must be totally random.


The funny thing about this is the idea that west coast engineers actually start work at or before 9am


I'm rarely in before 9AM. The 10 minute standup at 10:30 AM is the only daily requirement for most staff.


I bet a bunch of PR's were built up from the weekend which weren't deployed, and some guy who came in at 9 decided to deploy them and broke things. Always scary to be the one to deploy if no one has deployed in a while.


Brings into perspective how essential Git is for my workflow - I am waiting for `scp` to transfer my files from my laptop to my work computer, and I can't push anything into the CI/CD pipeline.


For what it's worth, the "D" in "Distributed Version Control System" is useful here. You can `git init --bare foo` on your work computer, `git remote add workcomp username@hostname:path/to/foo` on your laptop, and `git push workcomp master` to push everything over using pure Git. (And the first steps only have to happen once for these two machines.)

(This creates a bare repo on your work computer, meaning there's no associated working directory -- you'd probably want to add that same repo as a remote from whichever existing repository on your work computer you have. The bare repo, in this scenario, is just a means of passing commits from your laptop to your work computer in a Git semantically-meaningful way.)
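
To spell out that last step, roughly (paths are just examples):

    # on the work computer, pull the commits out of the bare drop-box repo
    cd ~/projects/foo                        # your existing working clone
    git remote add laptopdrop ~/repos/foo    # the bare repo you pushed into
    git fetch laptopdrop
    git merge laptopdrop/master              # or cherry-pick / rebase, as you prefer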


Your comment is probably going to inspire a lot of people to start self hosting remote repos during the downtime today.

As with most things, it'll start great, but a lot of those people will be in tears in a few weeks.

Great power; great responsibility.


True! It's best to consider that kind of repo as a downstream fork, with all the responsibilities of upstreaming changes once the canonical repository comes back up.


Having a bare repo is certainly one option. Some may find having two copies of a repo on one system too confusing.

Here's an alternate setup that doesn't use a bare repo. It does require some git hygiene/discipline though.

Setup, on the desktop: `git config --local receive.denyCurrentBranch updateInstead`. This will let the laptop push to the desktop, updating its files, as long as the desktop doesn't have uncommitted changes (i.e. the working dir is clean). On the laptop: `git remote add desktop ...`

Working with it: on the desktop, commit everything. On the laptop: commit everything, then `git pull --rebase desktop` and `git push desktop` (assuming there are no issues with the pull).

Not saying either workflow is better, merely providing an alternative.

IMO the reason Git took over has something to do with it being unopinionated about workflow - there's some tooling, but whatever workflow is manageable with that tooling is "supported", as long as the team can agree to use said workflow.


This is awesome, thank you!!


Assuming you're transferring a bunch of small files with `scp -r`? Try piping tar over ssh instead; it can be much faster for such use cases.

https://unix.stackexchange.com/questions/10026/how-can-i-bes...
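
Something like this, for instance (host and paths are placeholders):

    # stream the whole tree in one go instead of one scp round-trip per file
    tar -czf - -C ~/src/myproject . \
      | ssh workhost 'mkdir -p ~/src/myproject && tar -xzf - -C ~/src/myproject'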


This was a great thing to learn as SCP seems to perform poorly when you have many small files. Thank you!


I feel like this whole year has served as one big reminder of how fragile the internet really is...


It's only fragile when you centralize all of the information in giant websites. It didn't use to be like that. And now there are a ton of decentralization technologies that we should be taking advantage of, but almost everyone ignores them.

For example https://medium.com/@alexberegszaszi/mango-git-completely-dec...


Absolutely. I'm (cautiously) optimistic that some good can come from all this downtime in that sense.


It certainly helps make the argument not to move some stuff to the cloud; there is still a case for on-premise. As always, the answer is "it depends".


2017 S3 outage was a big reminder as well

https://aws.amazon.com/message/41926/


The great AWS crash of 2017 convinced me to distribute a bit more, thankfully


Well, the internet didn't stop working, so it's not so fragile after all.

You probably mean centralized internet based services.


Over the past 2 or so weeks, I’ve been getting 500 errors from GitHub quite regularly. A page will take a while to load, then finally fail with an octocat error page. Reloading the page usually succeeds with the correct page, so it seems to be some sort of intermittent issue. In the years I’ve used GitHub, I can’t remember this ever happening – I wonder if this incident is related? Has their user base expanded a lot lately and they’re having difficulty scaling their system to meet demand? I know that I have been noticing a seeming increase in activity and new users as GitHub becomes more and more mainstream, I wonder what their growth looks like.


GitHub has a really bad track-record on reliability.

I worked on our devops systems for a while. Every `git clone` had to have multiple retries, and even then there were multi-minute outages multiple times a month that caused things to turn red and caused distrust in our CI pipeline.

I tried hard to get my company to not rely on github as part of our CI process (as others in these comments indicate) but that's an expensive proposition for many - similar to relying on other third-party cdns like dockerhub, npmjs.org, etc.


Bitbucket isn't really any better, they just surf the line of "right, that's it I'll just run Gitea on a local server, I've had enough" and convenience.


Took this opportunity to set up my own Git server on a VPS I have. It was actually extremely simple to set up the server, limit the git account (ssh only) to only using git-shell, and adding gitweb with basic auth in an nginx setup with letsencrypt, HTTP/2, and FastCGI. Took maybe 20 minutes, and most of it was looking up the necessary commands.

That being said, I am, and probably will still be a massive GitHub user, because I value its community aspect. But I also now have unlimited, truly-private Git hosting for peanuts (cheap DigitalOcean server), which is always nice.
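
For the curious, the git-over-SSH part (ignoring gitweb/nginx/letsencrypt) is roughly this on a Debian-ish box - a sketch, with usernames and paths made up:

    # as root on the VPS
    adduser --disabled-password --gecos "" git
    usermod -s "$(command -v git-shell)" git
    install -d -m 700 -o git -g git /home/git/.ssh
    cat your_key.pub >> /home/git/.ssh/authorized_keys
    chown git:git /home/git/.ssh/authorized_keys
    sudo -u git git init --bare /home/git/repos/project.git

    # on your machine
    git remote add origin git@vps.example.com:repos/project.git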


don't even need that much... just SSH + Files.

On server:

    mkdir -p ~/repos/foo.git
    cd ~/repos/foo.git
    git --bare init
On desktop:

    git remote add backup ssh://user@server/~/repos/foo.git
    git push --all backup
Or something similar; it's been a while. You can use an absolute path too if you're sharing; that path needs to be set up +rwx for the user/group. I used to do this as a quick and dirty remote for git usage. You can also set up a git user and add your public key, but I find that isn't really necessary.

edit: looked up commands, made a couple minor tweaks.

note: this doesn't include any kind of LFS support if you're needing it, consider a more complete install, unsure on gitea or gitlab etc.


You can also use Gitea, which is set up very quickly and uses almost no resources.


I definitely considered it, but it struck as more of a "host your own Github," where really all I needed was "host your own Git," which of course comes out-of-the-box.


Time to start going back to using a decentralized git archive, like git-ssb.

https://github.com/noffle/git-ssb-intro


It's a hassle, for sure. If this incident cost you time, I would advocate for you to invest a brief amount of time to hedge against GH. The next downtime may not be as brief.

These steps will cost you < 15 minutes but you could end up with significant savings.

1. make an account on gitlab

2. create a new repo

3. configure it to mirror your existing GH repo. The mirror will periodically pull from GH without any interaction from you, so you always have another option.

To be clear, I don't have any opinions about the operations of Github vs Gitlab. I have no idea which has superior procedures or equipment to avoid downtime. But I am sure that this small effort to diversify is worthwhile.

EDIT: wowsers downvotes for practical advice? Downvoters, please chime in on how my advice could be improved.


How will this save you from outages like this one?

The problem is usually not that people lose access to their code (you usually have a local copy); it's that you can't open or review PRs, and that you can't trigger CI builds.

Using _both_ GitHub and GitLab simultaneously for more advanced workflows is a big hassle.


Please don't break the site guidelines by going on about downvotes.

https://news.ycombinator.com/newsguidelines.html


It's up and running for me now, everything works. Location: Sweden.



Damn. Thought I was going nuts when I couldn’t browse certain repos. Everything I checked said GitHub was up


There was a certain repo I was trying to access and I was concerned the content had been taken down. I pulled up the repo in Google Search cache, copied the git URL, and cloned it from my terminal. That worked just fine, only the webpage was completely down.


Skimming this thread I found only two important takeaways, are there any more?

1) It's really, really, easy to setup a second mirror that you can switch your CI/CD processes to in the event of downtime in order to avoid being affected

2) Unless you don't vendor your builds and some of your third-party dependencies rely on GitHub as their sole distribution mechanism, in which case you are SOL


The incident says it was resolved but I still can't get to my profile page.


Could be that the fix has been deployed but it takes time to propagate through.


Then it hasn't been resolved.


Clearing all Github cookies seems to have fixed the issue for me. Otherwise I end up with "my least favorite unicorn".


You're not crazy, it was only finally just resolved at 19:47 UTC


I wonder if all of these downtimes are related to the summer holiday season...


They're probably still moving stuff to Azure after the acquisition. If my memory serves, there was an uptick in outages ~6 months after the buyout.


6 months after an IPO is often the conclusion of a typical employee lockout period on shares, which can also be a time of more frequent team members leaving.

I wonder what the retention incentive period was for Github employees; it's often a year, and the Github acquisition was mid-2018...


I think the comment is referring to the other outages like Cloudflare, Stripe and so on.


And all the interns pushing to prod...


That explains a sudden access error when I tried to open an issue in a repo.


Can we talk about how many services have had outages recently?


We can talk about the Poisson distribution, yes :-) https://en.m.wikipedia.org/wiki/Poisson_distribution


push/pull seems to be working for me. Getting 500 server error while trying to browse repositories on the website


Push and pull doesn't work for me. For a second I thought this was my company's subtle way of terminating my employment.


Working now.


Definitely isn't.


Seems to be working for me.


As of this post it was working as of 1 minute ago.

Is this going to be a PR stunt like Hotmail - are they going to say the fix was migrating to Azure or upgrading from Linux to Windows 10?


Can people stop with the "it's working" or "it's not" comments? There are intermittent, unevenly distributed failures. I still have some pages not loading for me.


It's what makes these posts cute. Thanks to multi-region and load balancing it will work for some and be broken for others


Very nice page. Does anyone know of an open source project similar to this githubstatus page?


3 hours later still down. That's a pretty long outage by most standards. Ouch.


Yup, I can not push my commit.


Is my project lost now?


This is why I run my own internal development repository, so I don't have to worry about online failures.


It gives me the chuckles every time I hear Github or Gitlab goes down.

Because one of the main points of Git was to provide a "distributed" version control system.

Meaning that you are probably using it wrong if your entire company grinds to a halt just because Github/Gitlab/Bitbucket is down.

Maybe what we need is a service that automatically consolidates them for you so your repos are always online?


I don't really understand your point. Everyone still has their copy of the repo, and they can keep working on their code. That's the distributed part. It's still important to be able to share that code, trigger builds and tests, etc.


These organizations are not distributed hiveminds. Someone in authority has to approve the code change, and someone above that person has to verify that the code is being approved appropriately, and someone above that person has to monitor everything below them. If you are working on a solo project it doesn't matter if the remote repo goes down, just push to another Linux server


> a service that automatically consolidates them for you

What is the "them" you're referring to? Perhaps you're talking about consolidating multiple hosting services? This seems dangerous. What happens if Susan can access GitHub but not Gitlab, and the reverse for Thiago? Susan pushes updates to GitHub, Thiago pushes updates to Gitlab, and now you have to figure out how to reconcile the two.

Of course you have a similar problem if both commit different changes locally, but in that case it's generally agreed upon that the hosting service holds the version of record and that developers should reconcile changes against this.


Do you think Github is nothing more than a 'central git' repo? Have you even used the site in the last few years?


> Have you even used the site

That's a variant of "did you even read the article", which the guidelines ask you not to do (see https://news.ycombinator.com/newsguidelines.html). That's partly because it's a putdown, but mostly because there's no information in it. Readers who know less than you do are here to learn, so a better version of this comment would share some of what you know.


I disagree, and it was definitely not intended to be a put down. It's a question, because it appears the OP was speaking of GitHub as it was, not as it is. I don't question their intelligence or ability to read. But whatever.


Gitpocalypse


git is distributed, so no worries


Just when it was time for beergarden. Thank you!


Did Microsoft switch GitHub to AGI as part of the OpenAI deal?


Q: Can it automatically reject bad code or would that be impossible because of the halting problem?


it could automatically reject bad-looking code perhaps


Nobody brought up radicle yet? HN what's up?

There you go: fully IPFS based version control and collaboration. https://radicle.xyz


I thought Radicle was cool too, but as I understand it (in its current state), it has a much "larger" SPOF in that changes can only be submitted when the single authoritative repo is online.


How so? If everything is stored in IPFS, then it should always be online. The IPFS daemon is local, so you can commit things without any network connection. Am I misunderstanding what IPFS is?


"Currently the owner of the project must be online in order to receive any proposed RSM updates from a contributor. Once received and processed, these updates will be written to IPFS by the project owner, and made available to all users who follow that project." -- http://radicle.xyz/docs/#faq


What the hell? I get downvoted for linking an authoritative answer to the question I was asked? I'm so god damn sick of participating here in good faith and getting wordlessly crapped on for it. The answer supports exactly what I said, and I provided a link.

Pretty obvious someone didn't like one of my other comments and then proceeded to downvote the others they could since I commented in multiple threads at the same time. Why is this nonsense allowed? It would be easy to detect.


If you are sick of participating, don't, and your problem is solved.

Also, if you've been here long enough to get sick of anything you should have learned that dumb downvotes happen, they mostly get reversed over time, and nothing good comes of reacting to them at all, much less throwing a expletive-laden fit about them.


Such great advice, had never thought of that. Thanks!



