Cookies Lack Integrity: Real-World Implications (usenix.org)
128 points by adamnemecek on Sept 25, 2015 | 63 comments



VU#804060 is talking about "cookie forcing". This is not a new discovery. Here's Chris Evans (not the actor) talking about it in 2008: http://scarybeastsecurity.blogspot.com/2008/11/cookie-forcin...

The best solution is to preload HSTS on a domain and include all subdomains, and we've been saying that for years. That prevents any HTTP connections, although it's obviously not an easy solution in many cases.
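For concreteness, a minimal sketch of what serving that policy might look like (a toy WSGI app; the one-year max-age is just an example, and the includeSubDomains and preload directives are what preload-list submission expects):

  import wsgiref.simple_server

  def app(environ, start_response):
      # includeSubDomains and preload are what make a domain eligible for
      # browsers' built-in HSTS preload lists; max-age here is one year.
      headers = [
          ("Content-Type", "text/plain"),
          ("Strict-Transport-Security",
           "max-age=31536000; includeSubDomains; preload"),
      ]
      start_response("200 OK", headers)
      return [b"hello\n"]

  # Note: wsgiref serves plain HTTP; browsers only honour the HSTS header
  # when it arrives over HTTPS, so this is purely illustrative.
  wsgiref.simple_server.make_server("", 8443, app).serve_forever()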

The USENIX paper does suggest some unilateral changes to cookie semantics to address this issue, but any such changes have eye-watering compatibility concerns and could only be deployed after a lot of testing.


"The best solution is to preload HSTS on a domain and include all subdomains"

That's the great thing about HTTPS/SSL security. Every attack, every vulnerability, every performance problem is met with "just make sure your server enables XYZ, and blah blah blah is updated, and make sure the clients are only connecting from Chrome while standing on one leg and singing the national anthem while looking at a picture of the Pope." So yeah, it's actually really secure.

When are we going to accept that it's a fool's errand, that SSL is a hopeless case, and design something better for today's world?


We'd all love to see the plan.



Thanks.


What do you suppose TLS 1.3 is?


Please explain how the OP's cookie hijacking attack will be mitigated by TLS 1.3?


"man in the middle can do nasty thing, here is a castle of vulnerabilities we built on top of already having pwnd the communication channel"

Related: http://blogs.msdn.com/b/oldnewthing/archive/2006/05/08/59235... "It rather involved being on the other side of this airtight hatchway"


I can't for the life of me find it, but there was a paper a few years ago about evading App Store policies by purposely introducing vulnerabilities into your app. The idea being that, e.g., you have a buffer overflow and the app makes a specific network request that exploits it, resulting in code execution free from Apple's policies. So that Old New Thing post is a bit short-sighted in thinking bugs that don't result in privilege escalation aren't "security holes."


Coincidentally I heard about this at a recent talk - is it this:

http://www.imore.com/jekyll-apps-how-they-attack-ios-securit...

Jekyll apps in the App Store


That's it, thanks!

For anyone wanting to read the paper itself, the link in the imore.com article is broken. Use this one instead: http://www.usenix.org/system/files/conference/usenixsecurity...


Evading Apple's policies isn't a security thing, though. What keeps the system secure from malicious activity by your app is the sandbox that your app is executed in. Putting in exploits that you then use yourself is just a weird way of obfuscating your code to bypass non-security restrictions, and it's one that involves way more effort than is necessary. Much easier and just as effective to embed a small interpreter and make an HTTP request back to your server on launch to see if it has any code for you to run.


My new favorite: "I have discovered a critical security vulnerability in Windows which I intend to present at the XYZ conference. It allows any user with administrator privileges to perform operation Q, something that should be available only to SYSTEM."

http://blogs.msdn.com/b/oldnewthing/archive/2015/09/23/10643...


This was my first reaction too, but I took the time to read through most of the paper. The end result is that someone visiting "unsecure.com" means they can no longer trust Gmail's chat interface. That's pretty obviously a vulnerability.

True, it's more of an issue with the web apps themselves and with not using HSTS.

Perhaps the real problem is the terrible clickbait title. The actual authors had much better sense and taste: "Cookies Lack Integrity: Real-World Implications". Perhaps HN mods could change it.


Ok, we changed the title and the URL from http://www.kb.cert.org/vuls/id/804060 to the original source which it points to.


I might be nitpicking, but I disagree with Raymond Chen here.

Regarding his example: IMO, the process of saving a file has a platonic security abstraction; it is a very limited computation engine. When one is able to escape this abstraction, it is a security concern.

The distinction between a security bug and a non-security bug is very subtle.

An example of how this violation may happen in real life: assume a filesystem that may contain long filenames; an attacker who controls it may cause remote code execution. Another scenario might be a site that tells you to see an easter egg in Notepad by firing up Notepad and saving a file named <payload>, which will do some trick and will also run arbitrary code on your machine.


Right. What Raymond is trying to say is that if the attacker is someone else (in your case, the attacker is the person getting the victim to use a specific long filename), then it's escalation and hence an issue. Otherwise his post would mean that even opening a bad Word doc isn't a security hole.

Thus, if you are trying to show Windows is broken, and YOU are the attacker making up this long filename to inject code into your own process, then a buffer overflow isn't a vuln.

Still, as I mentioned on Raymond's original post, this doesn't quite work, as Windows has things like Software Restriction Policies (and AppLocker). With that in mind, it is a vuln if an app lets you inject code, since you couldn't do so otherwise.


That's not a fair comparison, since HTTPS is supposed to prevent MITM attacks.


It is fair, because the paper's injection works by either pwning a subdomain or being the MITM; hence it involves already being on the other side of the airtight hatchway.


Being the MITM isn't hard ("free public wifi"). That's one of the reasons we have HTTPS. If being MITM can get you around HTTPS then... that's not good.


The MITM is not "getting around HTTPS". It's exploiting vulnerabilities in web apps. Hence the paper's name of "Cookies Lack Integrity: Real-World Implications" instead of the nonsense "bypass HTTPS" title used here.


I'm on a cafe's wifi all day long. The channel cannot be trusted.


I am more afraid of the man at the browser and the man in the browser than of man-in-the-middle attacks.

For instance, a user might be trying to use your paid service for free or otherwise get information they are not supposed to. If you are using cookies for authentication, authorization, or application state, the user could modify the cookie and break your system. Not to mention the cookie is a vector for XSS, buffer overflows, and other troubles.

So if you are sending anything in a cookie that you don't want people to tamper with, you should cryptographically sign it, or alternatively send them a single opaque random identifier that points to a session or request record inside the server. There are way too many cookies on web requests now, and just from the viewpoint of speed, the opaque reference is a performance win in the age of Hazelcast.
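A minimal sketch of the opaque-identifier option (an in-memory dict stands in for the server-side session store purely for illustration):

  import secrets

  sessions = {}  # session_id -> server-side record; stand-in for a real store

  def create_session(user_id):
      # 256 bits of randomness; the cookie value itself carries no data for
      # anyone to tamper with, it only points at server-side state.
      session_id = secrets.token_urlsafe(32)
      sessions[session_id] = {"user_id": user_id}
      return session_id  # set this as the cookie value

  def load_session(cookie_value):
      # Unknown, guessed, or tampered values simply resolve to nothing.
      return sessions.get(cookie_value)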

Replay attacks can be made but there are many countermeasures, such as adding a timestamp and a nonce.

This defends against the major threat (the would-be user who wants to abuse your web site) vs. a more hypothetical case (a sophisticated outsider who wants something).


What you're talking about is defending the site. That's great, and of course you should do that. What the paper talks about is defending the user. The example they give is being able to inject cookies and then override the Gmail chat widget with the attacker's account.

Such an attack doesn't put the site at risk - Google is fine. But it puts Google's users at risk, as they are signed in, yet chatting on another account.


If you can't authenticate the platform, you can't defend the user, end of story.

The web "works" insofar as it does because of the margins of abuse are tolerable.

I'm sure those working in "ad tech" have different ideas of how tolerable the abuse is.

I've always wondered about how banks deal with it, but having read through my bank's online ToS, I'm happy with my level of liability, because they're clearly willing to take on more risk than I would given the situation.


The point the GP was trying to make, I think, is that if the site's operator has cared enough about their own security to cryptographically sign their cookies, then this provides security to the users as a free benefit, because a MITM wanting to attack the user doesn't have the site's signing key either.


Cookie signing doesn't fix this. Attacker will just log in, take his own signed, valid session cookies, and shove them into Victim's browser. Now Victim uses whatever.com and Attacker can see what Victim does.

The example they gave was being able to do this to Gmail. Victim is logged into Gmail, but the Gmail Chat widget is logged in as Attacker.


Is there really a difference between attacking the user and attacking the system?


I haven't dug around my browser config or the Extensions store in a while. Does anyone happen to know if IE/Chrome/Firefox can be configured to not accept cookies from non-HTTPS sites?


Chrome plug-ins have a lot of control over network requests through the webRequest API[1]. They're easy to write - email me if you have questions.

[1] https://developer.chrome.com/extensions/webRequest

"Use the chrome.webRequest API to observe and analyze traffic and to intercept, block, or modify requests in-flight"


Doesn't look like you can in Chrome, at least without a custom extension. They control it based on hostname:

https://support.google.com/chrome/answer/3123708


Browsers should consider not showing a green lock unless a site uses HSTS.


But even with HSTS, if it wasn't preloaded, an attacker could inject cookies on the first visit (then take the user to the real HTTPS site). Or, if the domain doesn't have full HSTS for all subdomains, there's still a potential vuln, as you can inject from non-HSTS subdomains. The paper notes that Google and some other big properties have technical issues in deploying global HSTS.


Major browsers have given up on user indicators anyway. They've decided that all they can do is pass/fail, and if fail, make it very hard for the user to access the site at all.

(I hate the trend but that's how it is.)


This exposes a weakness in the "double-submit cookie" CSRF defense technique.

https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...
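For anyone unfamiliar with the technique: double-submit sets a random token as a cookie and echoes the same token in each form, then checks that the two match. A minimal sketch (hypothetical helper names) of the check, and of why an attacker who can inject cookies defeats it:

  import hmac, secrets

  def issue_token():
      # The server sets this value both as a cookie and as a hidden form field.
      return secrets.token_urlsafe(32)

  def csrf_check(cookie_token, form_token):
      # The whole defense rests on only the genuine site being able to set the
      # cookie. An attacker who can plant cookies (as in the paper) injects a
      # token of their own choosing and puts the same token in the forged
      # request, so this comparison happily passes.
      return hmac.compare_digest(cookie_token, form_token)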


Here's the research paper presented at USENIX https://www.usenix.org/system/files/conference/usenixsecurit...


Another great argument for using signed cookies.


Here's a simple scenario and I don't see any mitigation.

A site exists, let's say "search.com", and having your searches leaked is bad. A victim opens up a new browser (no HSTS from existing connections) and goes to search.com. An attacker MITMs this HTTP request and injects their own session cookies and then redirects to https://search.com. At this point, let's assume full-domain HSTS kicks in and no more HTTP/MITM are possible.

Now the victim is on https://search.com, lock icon and cert are a-OK. But the user is signed in as attacker. The user might not notice this, and makes searches, which are now available to the attacker, in the attacker's history.

Problem: Fix this, without overly relying on the victim noticing they are signed in as the attacker. After all, they just opened search.com for the first time and aren't expecting to be signed in (though they are wary of HTTPS and do notice the green URL bar).

Perhaps it is slightly contrived and the answer really is "Well, always look on a site to see if you're logged in as someone else". But otherwise it seems rather difficult to fix, outside of HSTS preload. (And this ignores all the other issues noted in the paper.)


The user couldn't be signed in as the attacker if search.com required their cookie to be signed. Only search.com would have the secret key.

EDIT: Just read https://news.ycombinator.com/item?id=10279794 ... NM, you are correct.


Unfortunately no. An attacker can inject a signed cookie just as easily as an unsigned one. A signed cookie can prove integrity, that its data hasn't been tampered with; it can't prove that it is coming from the browser it was issued to.


That is not a thing a cookie can do, so while it may be sad, there's not much to be done about it.


That's a main point of the paper: taking legit sessions from Attacker and shoving them into Victim, then being able to spy on Victim even when Victim is on HTTPS. Apps aren't handling this case well, as in the example of being signed into Gmail as Victim but seeing Attacker's chat widget.


Exactly. A better, or at least more general, mitigation than enabling HSTS (though that's a good idea anyway), is to not design your web application in such a way that a modified cookie in the client creates a vulnerability. Since cookies are stored in the client, they are always going to be susceptible to malware on the user's machine. So, trusting that the contents of a cookie were authored by the service receiving them is a bad idea in general. Cookies should be stored along with some additional information that verifies their authorship.

A relatively simple way to accomplish this is to have your application include an HMAC in the cookie contents, and verify it whenever the cookie is received. E.g., if you are storing $session_id in a cookie, change your cookie contents to be "$session_id:$hmac_of_session_id", and verify the HMAC every time a cookie is presented.

Now a user, or malware, or a MITM, is not in the position to take over or modify a different user's session simply by altering the cookie, since they will not be able to produce a valid HMAC (the key is never shared with the user).
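A minimal sketch of that "$session_id:$hmac_of_session_id" scheme (key handling simplified; in practice the key lives in server-side config):

  import hmac, hashlib

  SECRET_KEY = b"server-side key, never sent to clients"  # illustration only

  def make_cookie(session_id):
      mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
      return session_id + ":" + mac

  def read_cookie(cookie):
      session_id, _, mac = cookie.rpartition(":")
      expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
      # Constant-time compare; anything not produced with the key is rejected.
      return session_id if hmac.compare_digest(expected, mac) else None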

If even storing the key in your web frontends is too risky, you could use RSA or DSA signatures, only store the public key in the web frontend that verifies cookies, and store the private key in a more hardened cookie-signing service that isn't directly exposed to external networks. This service can be invoked when new sessions are created or upon user login, if applicable.

On top of this, if the client supports ChannelID, you should include the ChannelID in the message that is HMAC'd, so that stolen cookies cannot be reused on other machines.


> Now a user, or malware, or a MITM, is not in the position to take over

I fail to see how this fixes this issue. I can just set my cookie to $their_session_id:$hmac_of_their_session_id, or I can set their cookie to $my_session_id:$hmac_of_my_session_id

Sure, I can't modify signed cookies. But I'm still in a position to take over their session.


> I can just set my cookie to $their_session_id:$hmac_of_their_session_id

If you can steal somebody else's cookies (which are not Channel-bound) then that's true. If you can only steal or predict somebody else's session ID's, the HMAC provides protection.

It's not atypical for session IDs to be simple counters that get incremented for each new session. If your session ID is 100042, it's a pretty good bet that 100041 and 100043 are valid session IDs as well, and without HMAC, a user could take over these sessions trivially.

The even better mitigation to cookie theft, which I also mention, is TLS ChannelID. ChannelID creates a unique private/public keypair for each new TLS connection, and sends the public part along in the TLS handshake. Then, when you resume sessions from the same machine, you can prove that you have the private part and the server can accept your existing cookies. With this approach, cookies are no longer bearer tokens and stealing cookies becomes worthless.

This can be hardened even against local malware running as the same principal as the user doing the browsing if the browser's ChannelID implementation generates and stores the private key inside a TPM or HSM.


> If you can steal somebody else's cookies (which are not Channel-bound) then that's true. If you can only steal or predict somebody else's session ID's, the HMAC provides protection.

Session fixation. You don't need to steal any cookies. The attacker can plant his own session ID cookie in the victim's browser using the OP exploit. Using signed cookies doesn't change this attack at all.


Presumably this could be done transparently by the web server? Incoming cookies could be validated and then the underlying cookie value passed on to the app. Meanwhile, cookies set by the app get rewritten by the web server to include the signature. This would mean that no server-side code would need to be changed in order to support the cookie signing.

Are there any web servers out there that do this?
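A sketch of what such a transparent layer could look like at the WSGI level (the cookie name and helper names are assumptions; this illustrates the idea rather than any particular server's feature):

  import hmac, hashlib

  SECRET_KEY = b"kept in server config"   # illustration only
  COOKIE = "session"                      # assumed cookie name

  def _sign(v):
      return v + "." + hmac.new(SECRET_KEY, v.encode(), hashlib.sha256).hexdigest()

  def _verify(v):
      inner, _, mac = v.rpartition(".")
      good = hmac.new(SECRET_KEY, inner.encode(), hashlib.sha256).hexdigest()
      return inner if inner and hmac.compare_digest(good, mac) else None

  class CookieSigningMiddleware:
      """Signs the session cookie on the way out and strips any incoming
      copy that fails verification, so the app only sees values it issued."""

      def __init__(self, app):
          self.app = app

      def __call__(self, environ, start_response):
          kept = []
          for part in filter(None, (p.strip() for p in
                                    environ.get("HTTP_COOKIE", "").split(";"))):
              name, _, value = part.partition("=")
              if name == COOKIE:
                  value = _verify(value)
                  if value is None:
                      continue              # forged or tampered: drop it
                  part = f"{name}={value}"
              kept.append(part)
          environ["HTTP_COOKIE"] = "; ".join(kept)

          def wrapped_start_response(status, headers, exc_info=None):
              out = []
              for k, v in headers:
                  if k.lower() == "set-cookie" and v.startswith(COOKIE + "="):
                      name, _, rest = v.partition("=")
                      value, sep, attrs = rest.partition(";")
                      v = f"{name}={_sign(value)}{sep}{attrs}"
                  out.append((k, v))
              return start_response(status, out, exc_info)

          return self.app(environ, wrapped_start_response)

As noted elsewhere in the thread, this only proves the value was issued by the server; it doesn't stop an attacker from planting their own validly signed cookie in the victim's browser.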


I think of the following:

- store only session ID in cookie

- regenerate session ID upon privilege escalation (login, what else?)

- destroy session upon logout

That being the case, is this really capable of doing much damage? Especially once you enable HSTS.
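A minimal sketch of that checklist (in-memory store purely for illustration):

  import secrets

  sessions = {}  # session_id -> {"user_id": ...}; stand-in for a real store

  def new_session(user_id=None):
      sid = secrets.token_urlsafe(32)   # only this opaque ID goes in the cookie
      sessions[sid] = {"user_id": user_id}
      return sid

  def login(old_sid, user_id):
      # Regenerate on privilege escalation: any previously planted ID dies here.
      sessions.pop(old_sid, None)
      return new_session(user_id)

  def logout(sid):
      # Destroy the server-side record so the old cookie value becomes worthless.
      sessions.pop(sid, None)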


This is still vulnerable to the same kinds of attacks you can do with login CSRF: http://seclab.stanford.edu/websec/csrf/csrf.pdf

Though the attack scenarios for that are always very tenuous.


Solution: deploy HSTS on the top-level domain.

Yeah, easy for you to say.


Why? What's hard about HSTS? Although many deployments don't include subdomains.


  What's hard about HSTS?
Getting everyone in a large organisation to do it properly.

Look at [1] - in particular, look at the list of Google affiliated domains that don't include "include_subdomains": true, "mode": "force-https"

blogger.com, youtube.com, gmail.com, google.co.uk, doubleclick.net

Google isn't short of competent employees, and they obviously acknowledge the importance of HSTS, having played a big part in creating it, yet they don't have it turned on. Evidently, getting a single header turned on in a large organisation must be a complicated matter!

[1] https://chromium.googlesource.com/chromium/src/net/+/master/...


Not really hard, but a limit of HSTS in this particular scenario is that it doesn't automatically redirect you to HTTPS on the first visit to the web site. That means if the HTTP channel is compromised (the scenario calls for a MITM), HSTS can't do anything to fix it (you can simply strip the 302 Location headers or tell the server that the other endpoint is talking HTTPS).

Only returning visitors will be redirected client-side, and only if they have at some point seen the HSTS header over HTTPS.


Please see the above comment from toomuchtodo about HSTS preloading.


hey that's great! didn't know there was such a process. will list all my stuff. thanks!


There is still the issue of trust on first use. Unless the domain is preloaded in the browser as HSTS there is no assurance that there won't be at least one HTTP request.


https://hstspreload.appspot.com/

Took five minutes to configure the preload directive for HSTS in our nginx headers and submit our domain.


DNSSEC with TLS DANE is the proper solution for this.


DNSSEC means creating irrevocable CAs that'd be under essentially direct control of major governments. No thanks. At least with the current system, if a CA fails to act properly, they can get smacked back. With DNSSEC, if .com starts issuing *.com certs, there's no recourse.


Huh? Your site can be already completely hijacked by the same actors. Your browser trusts a lot of CAs so VeriSign (or whoever operates the .com zone) can already issue .com certs. And your only hope is to preload your cert (which is a Chrome only thing, and pretty inefficient and inflexible).

The Convergence Project with notaries is an even better solution.

But using the DNS as the authoritative source of data and using external parties to keep an eye on that would both lead to efficiency (performance, flexibility) and security (as in from the State).


No, Namecoin is the proper solution to this.

DNSSEC is not safe:

1. It requires centralized trust which can be exploited by governments.

2. It still leaves your domain and PKI in the hands of incompetent rent-seeking registrars who can and do get socially engineered all the time.


So is this a real issue or a theoretical one? Are people actively using this to do harm, or is it something someone could do?


Why isn't cert.org using HTTPS by default?



