SSH Key Audit on Github (required) (github.com/settings)
292 points by ericelias on March 7, 2012 | 105 comments



What makes me the most happy about this is that they ask for the password in order to add a key now.

I was always very afraid of XSS attacks (I know - there shouldn't be any - but there could and were, though not for this) that would add another key, so I always hoped they would add that additional bit of protection.

As such: Another huge thanks to @homakov for forcing the issue.


Note to sibling comment by SaltwaterC: a previous negatively-voted comment (https://news.ycombinator.com/item?id=3593799) got your account auto-killed.

SaltwaterC 5 hours ago [dead]

The API token is still there, in the "plain": https://github.com/settings/admin

Fetching it via XSS should be fairly trivial. Grabbing it with a simple script on that page is straightforward. Still have to see if I can get it via XHR :).


This change is orthogonal to @homakov's attack. Silently changing the user id on a key wouldn't generate an email.


The change is good and was quite clearly precipitated by Homakov. Thus it seems entirely reasonable to thank him. So I don't see what your point is beyond irrelevant pedantry.


Nothing in these changes will help if a security flaw surfaces again. They only inform you if someone changes or adds a key to your account while normally signed in, that is, using the standard procedure. This is why it is mainly security theatre.

A non-security-theatre approach would be to keep track of all key additions, updates and removals in an append-only log and, on a regular basis, reconcile it: check that the keys actually on the account match what the log says should be there. This is internal to the system, you do not show it, and it is real added security/control.
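
For concreteness, here is a minimal sketch of that reconciliation step in plain Ruby (the event format and all names are hypothetical, not anything Github actually exposes): replay the append-only log into the key set the account should have, then diff it against what the live system reports.

    require 'set'

    # Replay an append-only event log into the key set the account *should* have.
    def expected_keys(events)
      events.each_with_object(Set.new) do |event, keys|
        case event[:action]
        when :add    then keys.add(event[:fingerprint])
        when :remove then keys.delete(event[:fingerprint])
        end
      end
    end

    # Diff the replayed state against the keys the live system actually reports.
    def reconcile(events, live_keys)
      expected = expected_keys(events)
      { unexpected: live_keys - expected,   # on the account, but never logged
        missing:    expected - live_keys }  # logged as added, but no longer present
    end

    log  = [{ action: :add, fingerprint: 'aa:bb' }, { action: :add, fingerprint: 'cc:dd' }]
    reconcile(log, Set['aa:bb', 'ee:ff'])
    # => { unexpected: #<Set: {"ee:ff"}>, missing: #<Set: {"cc:dd"}> }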


I don't think it's pedantry. I think it's good to point out that while this change is good, and is in the same realm, if you give the impression that it would prevent an attack like the one this weekend, it's security theater.


It doesn't and I really hoped that I was clear enough in my initial comment. I know that this has nothing to do with the attack on Sunday.

But ever since that XSS vulnerability was shown maybe a year ago, I was scared that there could be another one of them as they are very, very easy to miss, especially when you have to rely on filtering instead of escaping.

Github has to rely on filtering because you can freely use HTML in Markdown, which is the main markup language on Github.

So there we are: with Github relying on filtering bad stuff out of user content instead of blindly escaping it, at least one XSS vulnerability had already happened, and the public key interface still allowed adding keys without confirming the password.

It would really suck to be hit by an XSS attack that silently adds a key to my account, not only giving the attacker the ability to impersonate me in commits, but, even worse, giving access to my private repositories.

Regardless of the fix for the problem on Sunday, I have always hoped Github would add the check for the password. Not to mitigate the mass-assignment problem, but to prevent a possible XSS attack from being used to deal much worse damage.

I see the package of changes now happening at Github as a direct result of the Sunday hack, so while we got the mass-assignment fix (of course that had to be fixed), we also finally got the password recheck, which we likely would not have gotten without the hack.

Hence my "thank you" to @homakov. Not just for uncovering the mass-assignment thing (which is a simple code fix), but also for forcing a change of policy (with negative usability repercussions, and thus probably not universally popular inside Github) elsewhere.


You are right. Same with asking for the password. But these additions are very sane.


Here is the command you use to obtain your fingerprint for this audit:

`ssh-keygen -lf ~/.ssh/id_rsa.pub`


Thanks for this. I found it kind of weird that the email they sent didn't include any instructions on what to do. The linked webpage didn't have any guidance either. I guess they assume we all know what to do.


This. In fact, it creates an even more dangerous situation, as users could go to the site, see their keys, and say "I dunno. Looks fine?" and approve all of the keys, without actually confirming that the keys are legitimate.

Not giving instructions on the page on how to verify the info was weak. Github people, if you're reading this, please update that page.


This is a good point. I was really impressed with their GitHub Bootcamp [1], which made setting up the keys and getting started a breeze.

It's a shame that hasn't carried through to the key audit page but there is (now) a link on how to verify keys (http://help.github.com/verify-ssh-redirect)

[1] http://help.github.com/set-up-git-redirect


I very much agree they should have added instructions to the page. However when I went through the process there was a prominent note saying that when in doubt, you should reject keys and upload new ones. So the "I dunno. Looks fine?" case seems like it would be a problem only for the careless.


Sort of like how the Rails default settings only caused problems for the careless....

I bet a lot of people would have verified their keys with instructions but didn't bother without.


Disagree for anyone with more than one key. The problem with verifying all your keys at once is that I'm not going to find all my devices (I don't practice falconry). It would have been better if you could delay answering for some keys. I'm not sure whether you could, but it didn't seem that way when I was performing my audit, so I accepted them all; they all had recognizable hostnames.


It looked like you could put off dealing with keys by just not doing anything to them. Anything you didn't approve or deny would stick around. However, I didn't actually test this, and I only had one key which is now approved so it's too late.


That is correct; I did exactly that. I got the message at home, and I had a key for a work computer on there, so I confirmed the home keys and left the work key disabled.


How is that "more dangerous" than having the users never look at the keys?

Most users will not have many keys (probably just one), and will be alerted if they see more.


Honestly, I did that. Just went right to the page and clicked "Approve" to all of them. I couldn't remember the command to get my fingerprint, I was lazy, and that was really stupid of me but it does go to show you should never trust a user. Even one who is a programmer and understands the risks.


Very weird. What I consider to be a "public key" is one of those long strings of characters in my `.pub` files. What they asked me to verify turns out to be a fingerprint (from what I understand elsewhere in HN comments).

I'll be honest - I didn't understand what those MAC-address-like fingerprints were, and I accepted each key. Their email made me trusting, since it said that probably no account was affected. I do feel bad saying this in public, but it is what I did when there was no more information immediately available to me.


When I did it just now they did have instructions for Mac OS, Windows and (default) Linux linked off the page. Maybe they added them after the original email?

The Linux instructions were

ls ~/.ssh/*.pub | xargs -L 1 ssh-keygen -l -f


Yes, that link to instructions was only added later this afternoon.


I mailed them as soon as I received their notification to check my keys, saying that any kind of help on how to actually view the fingerprint would be useful. As a bonus, I also suggested that they detect the OS from the browser and display the appropriate instructions. I assume I wasn't the only one asking for more detailed guidance. Funnily enough, four hours later, they did exactly that and notified me again via email. What a great response!


If you actually look around the page, there is the link "Need help verifying fingerprints?", which takes you to the correct page.


As I noted above (5 hours before your post), these comments all took place before they posted a link to some instructions. That link was added in the afternoon, eastern time.


Some people might be running DSA keys - this covers that case:

`ssh-keygen -lf ~/.ssh/*.pub`


> Some people might be running DSA keys

Yep, or use a non-default key name.


Fair point but if someone is using non-default names then they presumably had to deal with ssh config files too.

I'd expect such users to have just enough familiarity with ssh keys to know how to check them (though I could be wrong).

Edit: Also, GitHub's setup guide explicitly leads to RSA keys, so anyone using DSA keys consciously made that choice: http://help.github.com/mac-set-up-git/



You can also iterate over all the public keys in ~/.ssh:

    for f in ~/.ssh/*.pub; do ssh-keygen -lf "$f"; done


I had the same problem. I emailed them to invite them to add that exact command to the audit page. Glad to see I wasn't the only person who had trouble.


You can also use 'ssh-add -l' to see all the keys you've added to ssh-agent.


The accompanying email:

  A security vulnerability was recently discovered that made it possible
  for an attacker to add new SSH keys to arbitrary GitHub user accounts.
  This would have provided an attacker with clone/pull access to
  repositories with read permissions, and clone/pull/push access to
  repositories with write permissions. As of 5:53 PM UTC on Sunday,
  March 4th the vulnerability no longer exists.

  While no known malicious activity has been reported, we are taking
  additional precautions by forcing an audit of all existing SSH keys.

  # Required Action

  Since you have one or more SSH keys associated with your GitHub
  account you must visit https://github.com/settings/ssh/audit to
  approve each valid SSH key.

  Until you have approved your SSH keys, you will be unable to
  clone/pull/push your repositories over SSH.

  # Status

  We take security seriously and recognize this never should have
  happened. In addition to a full code audit, we have taken the
  following measures to enhance the security of your account:

  - We are forcing an audit of all existing SSH keys
  - Adding a new SSH key will now prompt for your password
  - We will now email you any time a new SSH key is added to your
    account
  - You now have access to a log of account changes in your Account
    Settings page

  Sincerely, The GitHub Team

  --- https://github.com support@github.com


Why are ONLY keys at risk, as this email implies?

Presumably someone could have added a key, done evil, then removed the key. Evil includes all sorts of interesting things, like checking in code under the name of an existing contributor. This could potentially be really subtle and would be difficult to find in an audit later.

(Remember the stink over OpenBSD potentially having backdoors in the IPsec stack, revealed in late 2010? http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd)


There's no way to delete keys on another account. The bug only let someone move their key to another account.


Then you move the key back afterwards?


My guess is the last thing Github wants to do is make its users aware of that distinct possibility.


I disagree. It is github's duty to provide a realistic view of the situation.

Such an attack is unlikely, yes, but still possible.


No, it's not possible... how do you delete a key with a mass-assignment hack?


Deleting the keys afterward isn't really a key (heh) part of the attack, just a nice way to cover your tracks afterward. Enough accounts have multiple expired or forgotten keys to make the mere presence of multiple keys and potentially some unknown keys not always an absolute indicator of compromise, too.


It's likely that your ssh key only authenticates your account anyway. If that were the case, you could move your key to a different account, make a change, and then move it back, and no one would be the wiser (any logs showing an identical key across users would identify it, but it's unlikely they exist in a form worth pursuing). Security is hard.


I don't see any way to remove a key after you've dropped it onto the target.


Ah, I didn't check that. It still means a bad key could have been used for evil, just that there would be clear evidence left behind.

The issue is that I've had lots of ssh keys, and might not have my fingerprints for all of them. If I see a bad fingerprint, it's entirely probable it's just an old key of mine from an old laptop, cellphone, script, or whatever, and NOT an attacker's key.

But in light of this attack which you revealed, any account containing keys that aren't 100% accounted for could have been compromised by an attacker. (In fairness, someone who stole the Github user's password could have done the same thing, but that's an obvious attack route.)

Key management is such a pain!


I thought git itself provides authentication and integrity? In fact, if someone modifies the history of a branch, that will raise all sorts of red flags the next time a legitimate user pushes.


Not modifying stuff on disk outside git, but using substituted keys to masquerade as a legitimate user and contribute evil code. It leaves an audit trail, but it means you need to audit all contributions, even those ostensibly from project owners.


Not really. If you make a new commit on top of the history, then nobody will notice unless you sign every commit (which I don't think is common practice).

Only modifying (existing) history will cause git to complain.


It's possible that there's a plan to review or alert repo-owners to commits introduced by keys that are rejected in this process.


You could just take a gander at your activity and see if there are any commits that you didn't make.


They also did a notification when you tried to push:

    ERROR: Hi andrewjshults, it's GitHub. We're doing an SSH key audit.
    Please visit https://github.com/settings/ssh/audit/<removed> to approve
    this key so we know it's safe. Fingerprint: <removed>
    fatal: The remote end hung up unexpectedly

A little weird to see when you're doing a push but good that they put it in there. Their email got flagged as bulk in gmail so until I saw this I didn't know they were doing the audit.


> "good that they put it there"

I think this was handled very badly. This broke a lot of automated jobs for many people. The email actually arrived after some of our automated jobs had stopped running. There should have been advance warning that such an audit would take place. I'm glad we had monitoring on our automations, but many people won't, and they might still be unaware that their jobs are not running until they authorize their keys.


AND when you tried to pull, which broke my current project's build. First time I've seen current news directly affect my work...


Sadly the link in the email isn't direct (it's a tracking link through "news.github.com"), so Thunderbird flags it as a possible phishing attempt =(

Edit: github send out an email with a link to the ssh audit page; that's the email to which I refer


As an interesting side effect, they will have pretty exact stats on how many active users they have; might help them sunset old accounts or move them to the slowest servers.

(Because of the offline nature of most git actions and different habits on pushing/pulling, it's probably hard to otherwise estimate how much a user cares about their github.)


Why is this more effective than, say, "number of pushes/pulls in the last month"?


Some people try to pool local commits into larger, less frequent pushes and pulls so the number of push/pulls is perhaps less relevant than their cumulative size. But pushes/pulls will never correspond well with user involvement because people use github for all kinds of scenarios. For instance, I might be developing branches that I don't want to push or pull to github yet--maybe I don't even intend to ever make them public. However, I may still want people to clone from my github repo and report issues to me.

The amount of time between someone getting an email from github and re-activating their account is probably the best metric github will ever have on users' attachment to their accounts.


From the way their servers are laid out in the presentation I saw, they wouldn't want to move them all to a slow server, but would want them evenly distributed to help decrease load on the servers.


We have stats on this from other sources, actually. But it did provide some interesting graphs this morning :)


Correct me if I'm wrong, but the nature of the vulnerability was that someone who's not you had to submit a request, while logged in, with certain POST variables - variables that could be determined after the fact to be malicious.

So the fact that they're sending out this E-Mail tells us that they either don't keep logs on requests + POST contents, or that they haven't had the time or inclination to analyze this data if they have it.


> So the fact that they're sending out this E-Mail tells us that they either don't keep logs on requests + POST contents, or that they haven't had the time or inclination to analyze this data if they have it.

No.

Github is primarily a B2B company. They're not making their big bucks off of individuals.

Businesses understand that problems arise. What they want to see is immediate action taken to rectify the problems.

Business 101. Even if the problem can be easily fixed by flipping a switch on your end that the customer never has to know about, always show the client "you did something to fix the problem". This is an in-your-face-we-are-taking-charge action. Even though it is completely unnecessary from a security standpoint, it is necessary from a business one.

They get it.


Or they realize that psychology is just as important to security (and customer confidence) as logs, analysis, and good code.

Every person that got this email now feels more secure about Github. They audited their own private keys. They were reminded that they can remove keys at will. And they know Github has improved its code and given users more power (email alerts, etc) to be in control of content.


I for one am feeling way less confident in them with every announcement they make. They clearly have no idea if and how they were exploited, and their communication only asks users to check for one attack vector, while in actuality the attack was not limited to just adding SSH keys.


This isn't the only thing we're doing. Most of the work is going on behind the scenes though.


Do people keep POST contents?


Rails logs POST and GET parameters by default.
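
For what it's worth, what ends up in those logs is configurable; a freshly generated Rails app of that era filters :password out of the logged parameters, and the list can be extended. A minimal sketch of the Rails 3-style configuration (the :key entry is only an illustration, not something Github is known to use):

    # config/application.rb -- keep sensitive request parameters out of the log.
    config.filter_parameters += [:password, :key]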


Still, I think it's unrealistic to expect GitHub to parse through all of their logs. First, it would be non-trivial to detect the malicious behaviour in the first place, and secondly, keeping logs that go back multiple years is certainly non-standard, particularly at the info level.


GET parameters I understand, but POST as well? As in the form contents? I'm finding it hard to believe you. By default?


Why is that so shocking? It logs both.


Shocking because it will produce extremely large amounts of data, makes the logs extremely security-relevant, and probably breaks all kinds of privacy laws. E.g. in the EU a user has a right to request that a company delete all data they have collected about him. So you'd have to go through your logs and purge all request data from that user - possible, but likely to be overlooked.


GET params are a part of the URL. POST params aren't.


I guess this answers my questions about how long this vulnerability existed (a long time) and whether or not they could verify no other accounts were compromised (no).


Um, is anybody else having the experience that their keys really do seem to be different?


You should probably tell GitHub


I thought I deleted this comment. Sorry. Turns out I had an old key still in my .ssh folder. Two current and valid keys and no others. My bad.


Each fingerprint listed on my audit page was exactly what I expected it to be.


Sure you aren't confusing the fingerprint with the actual key contents?


I'm missing a whole bunch of keys – I previously had about 4 keys registered, now it's just one.


Did you contact support@github? We definitely want to know about that.


It would be interesting to know the details of the vulnerability. Given that they've patched it, it would be good to see what the error was in case others are affected.

Was this Rails-related and what was it?


It was a mass assignment vulnerability in our code.


There are a number of articles that surfaced on Sunday/Monday, but in short, yes - it was a Rails mass-assignment vulnerability.


I'd like to point out that this is not a Rails vulnerability, but a mistake Github engineers made, which happens to the best of us. Mass assignment is a feature, and I guarantee the problem has been known for years and Github engineers were probably well aware of it.


It most certainly is a vulnerability in Rails, by encouraging bad practice by design. Mass assignment should work by defaulting attributes to protected, if it should exist at all.
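
To make the class of bug concrete, here is a rough Rails 3-era sketch (model and attribute names are hypothetical, not GitHub's actual code). Without a whitelist, a forged user_id in the POST body gets written along with the legitimate fields; attr_accessible closes that off.

    # Vulnerable pattern (sketch): every posted attribute is assigned, so a forged
    # params[:public_key][:user_id] lands in the row alongside the real fields:
    #
    #   @key = PublicKey.new(params[:public_key])
    #   @key.save
    #
    # Rails 3-era mitigation: whitelist the attributes mass assignment may touch.
    class PublicKey < ActiveRecord::Base
      attr_accessible :title, :key   # user_id can no longer be mass-assigned
    end

Later Rails versions moved this kind of whitelisting into the controller layer (strong parameters), but at the time the model-level attr_accessible call was the standard defence.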


And yet they still fell prey to it. Insecure by default is not secure, even if you can fix the insecurities.


The same way magic_quotes was a "feature".


http://unix.stackexchange.com/questions/2116/given-keys-in-s...

This script is very useful when doing this audit, because you can turn your .ssh/authorized_keys file into a list of key names and fingerprints to check against what github is showing you.
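
If you'd rather not grab a script, the fingerprint ssh-keygen prints is just the MD5 digest of the base64-decoded key blob, so a few lines of Ruby do the same job. A minimal sketch, assuming plain "type blob comment" lines without an options prefix:

    require 'base64'
    require 'digest/md5'

    File.readlines(File.expand_path('~/.ssh/authorized_keys')).each do |line|
      next if line.strip.empty? || line.start_with?('#')
      type, blob, comment = line.split(' ', 3)
      next if blob.nil?
      # Classic MD5 fingerprint: MD5 of the decoded blob as colon-separated hex pairs.
      fingerprint = Digest::MD5.hexdigest(Base64.decode64(blob)).scan(/../).join(':')
      puts "#{fingerprint}  #{comment.to_s.strip} (#{type})"
    end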


I just registered on Github yesterday (it's suggested for Coursera's SaaS class I'm attending), but they sent it to me too, even though the vulnerability had already been resolved before my account was created. Maybe they didn't check account age.


I suspect that since they only closed the hole late Sunday or early Monday, they decided the number of new accounts was so small that it wasn't worth introducing the complexity (and potential bug/insecurity sources) to handle such a small percentage of the user base. A reasonable tradeoff, since getting this audit done in a timely manner is more important than waiting to handle edge cases such as yours.


Why are you guys praising GitHub? They basically screwed up thrice: first by not catching such an obvious flaw (granted, it should have been changed in Rails, but still), second by breaking half the scripts that rely on their service, and finally by sending such an obnoxious email (really, required action? Who the hell do you think you are?).

Anyway, it is pretty moot at this point since I long ago forgot my password, and changing the origin to somebody else is pretty easy.

That said, can anybody recommend alternatives? I know Bitbucket and they seem pretty great, especially as they allow private repositories, but it seems the consensus here doesn't like them for some reason?


I love Bitbucket and have been using them exclusively for a long time. I have many private repos hosted there for free. I have several private repos there shared between a few users, also free. I'm unnerved by how they haven't asked me for money, but grateful. Free JIRA, wiki, and HipChat, too.

I'm a Mercurial guy, but they do git too.


Is it a good idea to check created_at != updated_at?

People update public keys very rarely. I would even say NEVER.

Just run a SQL query against your table to see which keys are most likely to be malicious.

(I see no reason the timestamps would be updated when doing 'the trick'. I believe attackers didn't.)
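
A sketch of that check (table, model and column names are guesses at a typical Rails schema, not GitHub's actual one):

    # Raw SQL:   SELECT * FROM public_keys WHERE created_at <> updated_at;
    # ActiveRecord equivalent:
    suspicious = PublicKey.where('created_at <> updated_at')
    suspicious.find_each do |key|
      puts "user #{key.user_id}: #{key.title} (updated #{key.updated_at})"
    end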


The vulnerability probably also allowed the attacker to edit the created_at and updated_at columns in the ssh key table.


Several comments below praise the Github team response to this vulnerability. I agree. But it should also be mentioned that the first email I sent to my company this morning read, "should [our product] source code be in the cloud?"


At the very least, I think it is irresponsible for your product's source code to ONLY be in the cloud. Luckily git provides an easy way to keep a mirror (it's kind of automatic), but some kind of regular off-github backup, signing, etc. would make a lot of sense.


This is a vulnerability where an attacker would be able to add his SSH key to a private repository and pull proprietary source code he was not authorized to see. This is why we pay for Github, not use a free account. We don't want people to be able to walk off with the intellectual property of our company.

That being said, we don't have the resources to deploy a more secure alternative without hamstringing our development capabilities (e.g. no internet connectivity).


Oh, I agree. I'm just saying the baseline for everyone should be backups and integrity checking. A lot of companies don't have great value in their code remaining confidential, a lot do, so that has to be factored in. Other infrastructure (including runtime environments/hosting) need to be factored in, too, and there's confidentiality plus a lot of other concerns like availability.

Why isn't Github Enterprise an option for you? Too expensive? (plus of course you have to run it; if you don't have a good VPN or premises network, sysadmin resources to run it, etc., it's entirely possible a self-hosted thing could be less secure than a SaaS solution)

(The irony of my running a cloud tech startup and not trusting "the cloud" for our source control, email, file storage, compute, ... is not lost on me. It definitely adds costs, but I think this is an appropriate level of paranoia. The providers of business services need to provide convincing arguments why their services are secure enough to use, at least for b2b.)


You hit the nail on the head. Github Enterprise is too "expensive" in attention required. While we are security-minded about our proprietary code, we also recognize that we have a limited budget for "distraction overhead." We chose infrastructure largely based on how little we have to think about it. In this case, the distraction cost would be significantly higher than the risk-weighted cost of IP theft. I still would prefer to minimize that risk, but without the additional staff and systems you mention, the only reasonable alternative we have is a local server with no internet connection. Alas, connectivity is a fundamental requirement.


It was easier for me to just delete all of the keys. I had some I didn't need anymore. I also didn't pick great names for the keys I had. It's easy to add a key so instead of checking the fingerprints I can just create a new key.


I've just seen it and I headed to Hacker News to verify if it was legit :)


Same here; I wasn't sure, seeing no reference to github.com in the mail's headers, even though the link pointed directly at their domain.

I guess that's the precise moment when I should feel I'm being too paranoid.


Damn I envy the GitHub guys. They can send a mail to their users about SSH Keys and nearly all users simply understand it and get it over with.

In any other business, the result of a similar mail would be an overloaded helpdesk, a significant reputation hit and a massive bucketload of competitor FUD.


They also added an audit log so you will be able to track and address any future issues: https://github.com/settings/security


You've got balls, guys. It is hard to force everyone to do something, but you did it. Kudos.

Also, if we go back a few years, this would have been a somewhat more secure way to handle keys: @key.body = params.. @key.title = params.. I'm sure update_attributes is a good choice when you've got 5+ fields and update the database schema pretty frequently. Just my 2 cents.
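
A sketch of the trade-off being described (attribute names are the ones mentioned above, everything else hypothetical): assigning fields one by one is verbose but nothing unexpected can sneak in, while update_attributes stays convenient only if the model whitelists what it will accept.

    # Explicit, field-by-field assignment: verbose, but only these columns can change.
    @key.title = params[:public_key][:title]
    @key.body  = params[:public_key][:body]
    @key.save

    # Convenient bulk update: safe only if the model restricts mass-assignable attributes.
    @key.update_attributes(params[:public_key])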


We sent someone back in time to make this change. Somewhere there's an alternate universe with a secure GitHub by default. Thanks!


While this was a good response to their security issue, a little heads-up would have been good. They broke all of our auto builds, and by the time we figured it out, the guy whose key was used for the builds had left on his vacation. Luckily, we got ahold of him prior to him turning his phone off.


Did this change just disable re-use of deploy keys across multiple repos?



