One upside to giving branding to these otherwise complex and obscure vulnerabilities: it wouldn’t be nearly as effective to say “Intel has detected your computer is vulnerable to a cache side channel attack. Click here to install the fix.”
The flip side is that people are more likely to update if their computer sounds like it's going to "melt down" than if it's merely vulnerable to a "cache side channel attack".
As I sardonically stated on Twitter, it's not a real vulnerability until it has a catchy name and logo. It's the newest node.js hipster pentesting tech.
Can someone please help me clear something up, as I have not been able to figure it out for sure myself...
If I understand correctly, it is possible to perform these exploits with JavaScript. What about without?
Let's say I set javascript.enabled=false, is it possible to break out of the browser's sandbox with just HTML5 + CSS? I have read that today it is "Turing-complete"...
As I understand it, the only way to exploit this from javascript relies on access to a very precise timing API, which is trivial for browser vendors to make less precise. I wouldn't worry too much about javascript, at least as far as this vulnerability is concerned.
Because of javascript's wide surface area, it's nigh-impossible for browser manufacturers to be sure that they've disabled indirect access to timing data.
The original proof of concept didn't even use a "precise timing API"; it used `while(true){ i++; }` to increment a counter and pulled timing information out of that side channel.
It is trivial for the browser vendors to disable access to a specific API, but we're in for a game of whack-a-mole.
Dismiss the exploitability of javascript at your own peril - sure, WebWorkers and SharedArrayBuffer are this week's blocked timing attack, but smart money says there are other ways to get timing information that are unpatched.
Direct use of timing APIs isn't the only channel; it's also possible to make your own timer by having another thread increment a shared variable in a tight loop. Firefox has banned SharedArrayBuffer to block one way of arranging this, but there may be others. https://security.stackexchange.com/questions/177033/how-can-...
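To make that concrete, here's a minimal sketch of the shared-counter timer idea (it assumes SharedArrayBuffer is still available, which is exactly what vendors have been restricting):

```javascript
// Sketch of a "roll your own timer": a worker increments a shared counter in a
// tight loop (the moral equivalent of `while(true){ i++; }`), and the main
// thread reads that counter before and after the operation it wants to time.
const sab = new SharedArrayBuffer(4);
const ticks = new Int32Array(sab);

// Inline worker via a Blob URL so the sketch is self-contained.
const workerSrc = `
  onmessage = (e) => {
    const ticks = new Int32Array(e.data);
    while (true) { ticks[0]++; }   // tight increment loop, never yields
  };
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSrc])));
worker.postMessage(sab);

// On the main thread: elapsed "ticks" stand in for a high-resolution clock.
function measure(fn) {
  const start = ticks[0];
  fn();
  return ticks[0] - start;
}
```

The point isn't this specific arrangement; it's that any mechanism for running two things concurrently with shared state can be bent into a clock.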
If that's the case, how is the virus going to infect my computer unless I run untrusted code I downloaded on purpose? And if I'm doing that then I am accepting I'm likely going to get hacked, before and after Spectre/Meltdown, so how exactly are they making the situation worse?
An attacker needs the ability to compute on your local machine. Javascript is the way to do that in a browser.
With just CSS this should be impossible, or at least very unlikely. I guess it is probably technically possible, but I do not expect to see exploits using just CSS.
I've assumed since the exploits were published that viable attacks existed to exploit them. Does that make me too cynical?
Of what advantage is it for attackers to publish that they are using a particular exploit? Especially due to the nature of these exploits, they would more than likely be silent attacks.
Yeah, the white papers are pretty clear on how to replicate it, though it seemed it would still take no small effort to get it working at a higher level (IIRC they used kernel modules to pull it off).
Could someone with PHP and a remote page reloader access data belonging to the other accounts on a shared host? I was told a shared hosting machine can sometimes have thousands of web accounts and domains. Seems like a limitless opportunity for stealing backend information. I hope I'm wrong.
Countercounterpoint: once a proof of concept shows it is possible to use something maliciously, in a way that isn't impractically inefficient, it will soon enough be used that way by someone.
So: don't panic, but do take precautions as soon as possible.
That looks good on paper, but it doesn't seem to be happening. Researchers aren't having trouble reproducing the exploit independently, and it didn't take long to expand and improve it, e.g. https://mobile.twitter.com/aionescu/status/95126147034336051... from almost a month ago. But malware authors just don't seem interested so far. Since it's a read-only attack and would have to be chained with other vulnerabilities for most malicious use cases, maybe there's lower-hanging fruit or they just don't see the potential.
> malware authors just don't seem interested so far
Possibly because the issues are mostly mitigated in the wild and there are other, easier-to-exploit holes out there too (particularly the water-bag problem: social engineering can be a great attack vector). So they are picking the lower-hanging fruit instead. As soon as there is a PoC that seems to have a decent ROI for the implementation time, exploits will appear in the wild.
I can assure you this is 100% false. These techniques were being used in the wild on cloud provider(s) at least as early as June 2017. I witnessed this myself.
> A successful attack could expose passwords and other secrets.
Perhaps this is a good moment for the IT world to start eradicating the use of passwords.
EDIT: Of course this will not solve every problem related to these vulnerabilities, but it might go a long way, especially if possible exploits are taken into account when designing new systems.
Passwords have nothing to do with this. The attacker can potentially do anything to the target system, voiding any security mechanism the operating system can provide.
And replace it with what exactly?
Physical devices like the Yubikey are a neat idea, but not everything supports them. Biometrics are not a password replacement; they're a username replacement.
Also, it doesn't matter. Passwords, secret hashes, SSH keys: any of them can be acquired this way.
> Yubikey - are a neat idea, but not everything supports them.
It's a fair point, but in general security has no silver bullets. Build a better lock, and attackers will build a better lockpick (or go through a window instead).
Lack of universal U2F support (a broader login-security spec that Yubikeys support) is Yubikey's weakness, but being a security nihilist only helps the attackers.
More and more sites are starting to support U2F. The biggest is Google (which covers Gmail, Google Cloud, Google Docs, etc.), but GitHub, Facebook, and Dropbox support it as well, among others.
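For anyone curious what "supporting U2F" roughly looks like from the site's side, here's a sketch using the WebAuthn browser API (the standardized successor to the original U2F JavaScript API); the rp/user/challenge values are placeholders, and a real site would generate the challenge server-side and verify the response there:

```javascript
// Hypothetical sketch of registering a security key via WebAuthn.
async function registerSecurityKey() {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-supplied in practice
      rp: { name: "example.com" },                           // placeholder relying party
      user: {
        id: new Uint8Array(16),                              // opaque user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  });
  return credential; // sent to the server for verification
}
```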
The trade-off between usability and security applies here - prior to Meltdown/Spectre, authentication cookies were happily isolated, so logging in once every 30 days seemed reasonable. Now, cycling login cookies every day or so is arguably a better, more aggressive policy, but unfortunately, automatically logging users out after a short period hurts site adoption.
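The knob being discussed is really just the session cookie's lifetime. A minimal sketch, assuming an Express-style Node server (the route and the `issueSessionToken` helper are hypothetical):

```javascript
const express = require("express");   // assumes an Express-style server
const app = express();
app.use(express.json());

app.post("/login", (req, res) => {
  const token = issueSessionToken(req.body.user);  // hypothetical helper
  res.cookie("session", token, {
    maxAge: 24 * 60 * 60 * 1000,  // ~1 day instead of ~30 days
    httpOnly: true,               // not readable from page javascript
    secure: true,                 // HTTPS only
    sameSite: "strict",
  });
  res.sendStatus(204);
});
```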
To be fair, maybe around half of all websites with a login and password don't really need them at all, and only seem to have accounts in order to obtain a valid email address for marketing.
Fingerprints are not passwords or usernames. They're another form of authentication.
Remember: MFA combines two or more of: something you know (password, PIN, etc.), something you have (keycard, token, fob, etc.), something you are (fingerprint, iris scan, voice recognition, etc.), and potentially even somewhere you are (geolocation).
If the flaws can expose passwords, there's no particular reason they couldn't also expose biometric data. You can hash it and put it in a secure enclave, but you could do that with passwords too.
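For what it's worth, the "hash it" part looks the same whether the secret is a password or a biometric template; a sketch using Node's built-in crypto (parameters are illustrative, not a recommendation):

```javascript
const crypto = require("crypto");

// Salt the secret and run it through a slow, memory-hard KDF before storing it.
function hashSecret(secret) {
  const salt = crypto.randomBytes(16);
  const hash = crypto.scryptSync(secret, salt, 64);
  return { salt: salt.toString("hex"), hash: hash.toString("hex") };
}

// Recompute with the stored salt and compare in constant time.
function verifySecret(secret, stored) {
  const hash = crypto.scryptSync(secret, Buffer.from(stored.salt, "hex"), 64);
  return crypto.timingSafeEqual(hash, Buffer.from(stored.hash, "hex"));
}
```

None of which helps if the attacker can read the secret out of memory before it's hashed, which is the whole point.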
A popup that says "Intel has detected your computer is vulnerable to Meltdown. Click here to install the fix."