Sidejack Prevention (github.com/blog)
72 points by abraham on Oct 27, 2010 | 29 comments



The Chrome extension I wrote today not only redirects to HTTPS for the sites you specify in options, but also rewrites all the cookies to set the Secure flag.

Works perfectly on Facebook and Twitter without your session cookies ever leaking (including on the initial HTTP request), and it covers embedded Like buttons, etc.

Add any other site in options (Twitter and Facebook are on by default - I added github.com to mine and it works perfectly; you may just need to log in again)

See: http://github.com/nikcub/Fidelio
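For illustration, the cookie-rewriting half of such an extension boils down to appending the Secure flag to every Set-Cookie header. A minimal sketch (function and listener names are my own, not taken from Fidelio):

```javascript
// Append the Secure flag to a Set-Cookie header value unless it is
// already present, so the browser will only send the cookie over HTTPS.
function addSecureFlag(setCookieValue) {
  if (/;\s*secure\s*(;|$)/i.test(setCookieValue)) {
    return setCookieValue; // already marked Secure, leave untouched
  }
  return setCookieValue + "; Secure";
}

// In a Chrome extension this would run from a webRequest listener,
// roughly like so (hypothetical wiring, shown for context):
//
// chrome.webRequest.onHeadersReceived.addListener(details => ({
//   responseHeaders: details.responseHeaders.map(h =>
//     h.name.toLowerCase() === "set-cookie"
//       ? { name: h.name, value: addSecureFlag(h.value) }
//       : h)
// }), { urls: ["*://twitter.com/*"] }, ["blocking", "responseHeaders"]);
```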


We do exactly the same thing -- data-sensitive requests sent over HTTPS with a secure cookie, normal browsing over HTTP. At this point it seems difficult to implement site-wide SSL (CDN issues, browser bugs, overhead in SSL).

It seems like an important area of development for the web community to improve the tools and services available to make SSL on regular sites easier to implement. Yes, it's easy to turn on SSL, but there's a lot of nuance in a good implementation.


That's basically what we are going to have to do. There will be trade-offs (carving time out of our roadmap, etc.) and it's going to be a PITA, but our space is one where some moron is going to sniff out a bunch of logins at a conference, pull some epic trolling, and leave us spending time deleting the crap.

It's a shame that we can't go all SSL, but that's just the way it will have to be. The best we can do is make it difficult to hijack access to our tools that require elevated permissions.


Not to diss the OP but most of the proposed solutions to FireSheep I've seen display a distinct lack of knowledge of security. And that's one of those fields where enthusiastic amateurs tend to get things badly wrong.

The solution to FireSheep is something akin to SSL (robust, fast encryption with authentication). Could be SSL, or a VPN, or an SSH tunnel doing SOCKS.

Beware the MITM attack if you think all you need is encryption without authentication.


I think engineers (and computer scientists, perhaps even more so) have an unduly binary view of security.

If you are specifically targeted, you're probably screwed no matter how good your security is. If the attack isn't getting through remotely, it may escalate into social engineering, or even into physical attack, up to and including physical keyboard bugs and monitor signal sniffing.

If you're not specifically targeted, and are merely part of the crowd, you don't have so much to worry about, because you can play the numbers game. All you need to be is a couple of standard deviations more secure than average. In these circumstances, security through obscurity is statistically measurable: yes, in the binary sense it's wrong to think of obscurity as giving you security, but in the practical sense, it measurably decreases your risk of being compromised.


Honorable mention to client certificates, which I have implemented in two projects now where security and user authentication were important.

This means that the client is not only authenticating the server, but the server is also authenticating the client.

Client cert interfaces in browsers are fairly user-friendly, and GitHub already has user keys, so they may as well make it an account option.
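Most of the work to enable client-certificate authentication lives in server configuration rather than application code. A minimal nginx sketch, assuming a self-managed CA (file names and the upstream address are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate        server.crt;
    ssl_certificate_key    server.key;

    # Ask the browser for a client certificate signed by our CA
    ssl_client_certificate ca.crt;
    ssl_verify_client      on;

    location / {
        # Pass the verified certificate subject to the app,
        # which maps it to a user account
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_pass http://127.0.0.1:8080;
    }
}
```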


What's wrong with GitHub's solution? As I understand it, only insensitive stuff is available in the unencrypted session, while writes and sensitive stuff go via unsidejackable HTTPS. Stealing a read-only Facebook/Twitter session is much less critical, and this allows for CDNs etc.


Can someone still sidejack a session and view a private repo?


The secure cookie is meant to prevent this.
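The rule the Secure flag enforces can be modeled in a few lines (a toy sketch of the browser's behavior, not browser code; the cookie name is illustrative):

```javascript
// Toy model of the browser's rule for the Secure flag: a cookie
// marked Secure is only attached to requests whose scheme is https,
// so a sniffer on the wire never sees it in cleartext.
function cookieIsSent(cookie, requestUrl) {
  const isHttps = requestUrl.startsWith("https://");
  return cookie.secure ? isHttps : true;
}

const session = { name: "_gh_sess", value: "secret", secure: true };
cookieIsSent(session, "http://github.com/");  // false: never sent in the clear
cookieIsSent(session, "https://github.com/"); // true
```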


Yes, I see this right in the post now. I completely overlooked that SSL was used for browsing private repositories.


Once again, VPN and SSH tunnels will only solve a certain class of problems. End-to-end encryption is a good start. Restricting the session token to one or a few IP addresses could help solve a small subset of issues. Server authentication is a good thing to have as well. What am I missing?


I have a basic proof-of-concept workaround to prevent Firesheep and other HTTP Sniffers from seeing data sent from Client -> Server and Server -> Client. I will announce it here when I get enough time to get it to properly work.

Since the concept is so ridiculously simple, I'm sure anyone can trivially implement it. The only issue is that it needs HTML5 Local Storage to work.

    1. HTML rendered by the server has a small piece of code to generate an RSA Key Pair on the client using Javascript.
    2. The randomly generated keypair is stored on the Client using the HTML5 Local storage mechanism.
    3. For every request that the client sends the server, it sends the RSA Public key along with it. Also, the Server's Public Key is known by the client, and all outgoing data is encrypted.
The heavy work is already done: by leveraging Local Storage and enhanced JavaScript capabilities, it is trivial to create pseudo-SSL functionality. This also deals with the requirement for static IPs and other limitations of SSL.

Of course, this won't mean that the entire website will be secure. It'll be quite trivial to create a small script to force all form data to get encrypted with the server's public key - thus making sure that at least your passwords aren't transmitted in plaintext.

In fact, with my current startup's application, the user's Password would never even _leave_ the client machine. Javascript would perform an MD5 hash, and the server treats that as the client's password. The issue with this is that you'll need JS active on all browsers for it to work.


Even assuming a secure implementation of this system (unlikely), what's to stop a man in the middle attack? When you're browsing at the coffee shop, I just lie to you about the server's public key.

Now sure, you could access the server once from a trusted connection to get the public key. But then when someone spoofs the public key, you get a large and nasty error message saying the public key has changed. And history tells us that security can't safely be reduced to that type of user interface problem.

Long story short, you're massively underestimating how valuable it is that web browsers ship with certificates installed. When in doubt, don't re-implement SSL. ;)


You're right. I didn't consider that the initial page can be hijacked as well. My small hack was just to try and work around the problem, while I understand it's best to rely on proven solutions like SSL when things get serious.

Thank you for pointing out where I was wrong.


Ah, nice timing with your message. I think we wrote the same concerns.

It's not just that the server's public key can be replaced on the victim's machine, but also that an attacker can send their own public key to the server instead of the victim's. During a MITM attack, all traffic would be fully viewable to the attacker without anyone being the wiser.


You're making several hefty assumptions:

1. That your HTML used to generate the RSA Key Pair hasn't been intercepted and modified.

2. That sending the public key is equivalent to encrypting the document with the public key.

3. That there's nothing sensitive in the response.

4. That RSA crypto in Javascript is going to be quicker than SSL for either party (encryption client side, decryption server side).

There are other problems I can see but don't really have time to articulate.

Crypto is really, really hard. Fast crypto is harder. Secure, fast crypto is even harder still.

Security in general is really hard too. SSL is the way to go when no user changes are required; some sort of crypto tunnel (SSH, VPN, SSL) is the way to do it client-side on an untrusted network.


It was a very naive implementation and it clearly has several faults. Thanks for pointing some of them out. There isn't any solid use case, and we're all better off letting nginx/apache/$SERVER handle the encryption rather than doing it inside the application.

I was just trying to jump on the "Look mommy! Look at what I can do with Javascript and HTML5" bandwagon :)


No worries :)


Hmm, I like the idea, but I'm curious how you're handling a few things.

If an attacker is able to steal your session cookies, there's a reliable chance that they're already on your network. So what's to stop an attacker from sending the server their own public key in lieu of the victim's, or sending the victim a fake server public key? Either way, all the data passing between the victim and server would be viewable to the attacker. Also, the server's public key would have to be mutable, so couldn't an attacker simply "correct" the victim at any point in the transaction, even after missing the initial handshake?

Edit: While I think that security around session cookies is important for websites to implement, there is a sharp line where it becomes too much. In this instance, given a perfect implementation of your idea with no legitimate potential for a MITM attack, then it would be far easier to simply strip the protection. Rather than trying to send fake keys around, simply remove the necessary things when the user is connecting to the page. (Think SSLStrip.)


>In fact, with my current startup's application, the user's Password would never even _leave_ the client machine. Javascript would perform an MD5 hash, and the server treats that as the client's password.

In some respects it makes you wonder why browsers never had this natively in the first place - why should a third-party need to know my password when it could just send a hash instead?


Because if you just send the MD5, others can log in as you when they know the MD5, which they can just sniff. It doesn't gain anything in security, except for not knowing your actual password.


That wasn't the reason for hashing the password before sending it to the server. It's common knowledge that a lot of people use the same password for many lower-rung services.

So if I send MD5(Password + Salt String), even if the attacker sniffs this and logs on as the user, the original password string isn't compromised, despite the fact that the salt string is publicly visible.


You can send `nonce + MD5(nonce MD5(username realm pass))`, and then it's not sniffable.


At that point, the server will have to have received MD5(username realm pass) at least once in order to verify the hash. You're better off not building your own schemes and instead trusting existing solutions like SSL.


They do have it. Digest HTTP authentication has MD5-based challenge-response, but (just like any JS solution) it's not secure when an attacker can modify the request/response.


The way I see it - forms, multipart POST, etc. came first, and then people found a way to use them to create login forms. Actual "HTTP Authentication" does a Base64 encode to make it unreadable to the naked eye. It would be nice if I could tell the browser to hash the data before sending it to the server, like so:

    <input type="password" name="secret" hash="yes" />


That's an interesting question, and it may have to do with the age of JavaScript. It's really only been around for 15 years now, according to Wikipedia. At that point, the web was already starting to expand quickly, and browser support has typically been shoddy for new technology.


You'd need HTTPS to safely bootstrap the RSA code and the server's public key to the client anyways.


So basically, Firesheep's existence has directly led to GitHub becoming more secure. Excellent.

If you use a vulnerable service, just write a Firesheep handler for it, publish it, and then let the provider of that service know.



