
You do that, and basically you end up treating everything as tainted.

Also: yes, you are missing something. Namely that you don't need concurrency for this sort of timing attack. Even something as simple as "X does work or doesn't" and "Y measures execution time" leaks timing info.
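To make that concrete, here is a minimal sketch of such a single-threaded timing channel (all names are illustrative, and the busy-loop size / threshold are arbitrary assumptions): X conditionally does work based on a secret bit, and Y observes nothing but elapsed wall-clock time.

```javascript
// X: conditionally burns CPU depending on a secret bit.
function transmitBit(secretBit) {
  const start = Date.now();
  if (secretBit === 1) {
    let acc = 0;
    for (let i = 0; i < 5e7; i++) acc += i; // observable busy-work
    if (acc < 0) console.log(acc); // keep the loop from being optimized away
  }
  return Date.now() - start; // Y's only observation: elapsed time
}

// Y: recovers the bit with a simple threshold -- no concurrency,
// no shared state, just "did X take long or not".
function receiveBit(elapsedMs, thresholdMs) {
  return elapsedMs > thresholdMs ? 1 : 0;
}
```

Note that `transmitBit` never hands the secret to `receiveBit` through any data path a taint tracker would see; the bit travels entirely through time.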

For another example: X either reflows the page, or doesn't. Y keeps track of the refresh rate of the page. Or does a timing attack to determine if the page was reflowed or not. Or injects JS into the page to ping website Z when a reflow happens. (Note that it can inject this JS before it grabs the sensitive data.)

For another example: Depending on sensitive data, I either allocate a large array or don't. And then detect how much is allocated before a GC happens.
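A rough sketch of that allocation channel (illustrative names; the measurement here uses Node's `process.memoryUsage()`, whereas an in-browser receiver would have to infer heap growth from GC pressure or timing):

```javascript
// Sender: allocates (or doesn't) based on a secret bit.
function sendBit(secretBit) {
  if (secretBit === 1) {
    // ~80 MB, kept reachable so it survives until the receiver looks.
    globalThis._leak = new Array(1e7).fill(0);
  }
}

// Receiver: never touches the secret, only observes heap growth
// since a baseline taken before the sender ran.
function receiveBit(heapBefore) {
  const grewBy = process.memoryUsage().heapUsed - heapBefore;
  return grewBy > 10 * 1024 * 1024 ? 1 : 0; // ~10 MB threshold (arbitrary)
}
```

Again, no tainted value ever flows from sender to receiver; the bit is encoded in how much memory happens to be live.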

Note that this can be part JS / part "trusted" extension. It can also even take advantage of parts of existing pages.

The more I consider this scheme, the more I think it's an elegant scheme against accidental leaks, but is fundamentally flawed against malicious leaks. Unfortunately, since the entire point of it is to protect against malicious leaks...




I'm not sure I follow your examples - you seem to ignore that anything happening inside a conditional that depends on a tainted variable becomes tainted itself.

So if you read my cookies and then access the DOM (to trigger a reflow or inject JS), then all future accesses to the DOM will be considered tainted.

This doesn't taint "everything", but it should taint exactly what we want to be tainted (all possible paths to our sensitive data).
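The propagation rule I mean is roughly the classic "pc taint" for implicit flows. A toy sketch (class and function names are mine, not from any real implementation):

```javascript
// A value paired with a taint flag.
class Tainted {
  constructor(value, tainted) {
    this.value = value;
    this.tainted = tainted;
  }
}

// Entering a branch whose condition is tainted raises the taint of the
// control-flow context ("pc taint"); the branch body runs under it.
function branchOn(cond, pcTaint, thenFn) {
  const innerPc = pcTaint || cond.tainted;
  if (cond.value) thenFn(innerPc);
}

// Any value produced under a tainted pc is itself tainted -- this is how
// "DOM access inside a cookie-dependent branch" ends up tainted.
function assign(value, pcTaint) {
  return new Tainted(value, pcTaint);
}
```

So a DOM write made inside a branch on a tainted cookie comes out tainted, while the same write under an untainted condition stays clean.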

Yes, the challenge would be to find and treat all covert channels (time, GC, PRNG etc.), but that seems surmountable. The very exotic channels (like your GC example) are best handled by trimming down the general API for trusted extensions.

I.e., most extensions don't need any kind of GC access to begin with. If your extension does, then all bets for fine-grained taint checks are off and it must be marked as "fully trusted" by the user before proceeding.


"Unfortunately, since the entire point of it is to protect against malicious leaks..."

That's actually not the entire point. At least in this paper, we do not claim to address attacks that leverage covert channels. But the attacker model assumption is weaker (i.e., the attacker is assumed to be more powerful) than that originally assumed by the Chrome design (e.g., that only pages are malicious and will try to exploit extensions). And this is important. Particularly because the design that we end up with will be more secure than the current one. So, at worst, the new system addresses the limitations of the existing system under their attacker model. Then, depending on how far you are willing to hack up the browser, underlying OS, or hardware you can also try to address the covert channel leaks.



