Meltdown-Spectre: Malware is already being tested by attackers (zdnet.com)
145 points by okket on Feb 5, 2018 | hide | past | favorite | 47 comments



Here's how it can be exploited:

A popup that says "Intel has detected your computer is vulnerable to Meltdown. Click here to install the fix."


Dear Sir, you forgot to include a download link for this fix. Please advise


I've clicked this popup each day for the last week, installing a new update each time...

When will this horrible Meltdown end???


One drawback to giving branding to these otherwise complex and obscure vulnerabilities. It wouldn’t be so effective to say “Intel has detected your computer is vulnerable to a cache side channel attack. Click here to install the fix.”


The flip side being that people are more likely to update if their computer sounds like it's going to "melt down" than if it's vulnerable to a "cache side channel attack".


Careful, you'll get downvoted for suggesting that marketing security vulns is a stupid idea on HN.


As I sardonically stated on Twitter, it's not a real vulnerability until it has a catchy name and logo. It's the newest node.js hipster pentesting tech.


It's as useless as bashing node.js in an unrelated conversation.


Bonus points for sending the user to a legitimate link explaining what Meltdown is.


Can someone please help me clear something up, as I have not been able to figure it out for sure myself...

If I understand correctly, it is possible to perform these exploits with JavaScript. What about without?

Let's say I set javascript.enabled=false, is it possible to break out of the browser's sandbox with just HTML5 + CSS? I have read that today it is "Turing-complete"...


I understand the only way it can be exploited in javascript relies on access to a very precise timing API, which is trivial for browser vendors to make less precise. I wouldn't worry too much about javascript at least as far as this vulnerability is concerned.
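A minimal sketch of the mitigation described here: rather than removing the timing API, browsers made it coarser. The 20µs clamp below is an illustrative value applied to a nanosecond clock; the real browser granularities (and the jitter they add) differ.

```python
import time

GRANULARITY_NS = 20_000  # hypothetical 20µs clamp, illustrative only

def coarse_now_ns() -> int:
    """Return the current monotonic time, rounded down to the clamp boundary."""
    t = time.perf_counter_ns()
    return t - (t % GRANULARITY_NS)

# Any two reads now differ by a multiple of the granularity, so an attacker
# can no longer distinguish events closer together than 20µs from one read.
assert coarse_now_ns() % GRANULARITY_NS == 0
```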


Because of javascript's wide surface area, it's nigh-impossible for browser manufacturers to be sure that they've disabled indirect access to timing data.

The original proof of concept didn't even use a "precise timing API": it used `while(true){ i++; }` to increment a counter, and pulled timing information out of that side channel.

It is trivial for the browser vendors to disable access to a specific API, but we're in for a game of whack-a-mole.

Dismiss the exploitability of javascript at your own peril - sure, WebWorkers and SharedArrayBuffer are this week's blocked timing attack, but smart money says there are other ways to get timing information that are unpatched.


The while(true) counter relied on a very specific memory API that was disabled in all browsers the week the bug was disclosed.


Direct use of timing APIs isn't the only channel; it's also possible to make your own timer by having another thread increment a shared variable in a tight loop. Firefox has banned SharedArrayBuffer to block one way of arranging this, but there may be others. https://security.stackexchange.com/questions/177033/how-can-...
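The "homemade timer" idea can be sketched as follows: one thread increments a shared counter in a tight loop, and another thread uses counter deltas as elapsed time, with no timing API involved. In a browser this is a Web Worker bumping a SharedArrayBuffer slot; the translation to Python threads below is for illustration only.

```python
import threading
import time

class CounterClock:
    """A clock built from nothing but a busy-looping counter thread."""

    def __init__(self):
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._spin, daemon=True)

    def _spin(self):
        # No call to any timing API: the counter itself is the clock.
        while not self._stop.is_set():
            self.ticks += 1

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

clock = CounterClock()
clock.start()
before = clock.ticks
time.sleep(0.05)  # stand-in for the operation being measured
elapsed_ticks = clock.ticks - before
clock.stop()
# elapsed_ticks grows with wall-clock time, so the shared counter leaks
# timing information even if every explicit timer is coarsened or removed.
```

This is why banning SharedArrayBuffer blocks one arrangement of the trick but not the underlying idea: any mechanism that lets two execution contexts share mutable state fast enough can be turned into a clock.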


Precise timers are not needed, but they improve the data rate. For an efficient technique to bypass timer mitigation and a proof of concept see https://weblll.org/index.php/spectre-cascade-there-may-be-no...


If that's the case, how is the virus going to infect my computer unless I run untrusted code I downloaded on purpose? And if I'm doing that then I am accepting I'm likely going to get hacked, before and after Spectre/Meltdown, so how exactly are they making the situation worse?


An attacker needs the ability to compute on your local machine. Javascript is the way to do that in a browser.

With just CSS this should be impossible, or at least very unlikely. I guess it is probably technically possible, but I do not expect to see exploits using just CSS.

With HTML5 idk, that's really outside of my area.


I've assumed since the exploits were published that viable attacks existed to exploit them. Does that make me too cynical?

Of what advantage is it for attackers to publish that they are using a particular exploit? Especially due to the nature of these exploits, they would more than likely be silent attacks.


Yeah, the white papers are pretty clear on how to replicate it, though it seemed it would still take no small effort to get it working at a higher level (IIRC they used kernel modules to pull it off).

Here's the meltdown repo from the paper if you're interested: https://github.com/IAIK/meltdown


I've assumed since before the exploits were published that viable attacks existed to exploit them. Does that make me too cynical?


Could someone with PHP and a remote page reloader access data belonging to the other accounts on a shared host? I was told a shared hosting machine can sometimes have thousands of web accounts and domains. Seems like a limitless opportunity for stealing backend information. I hope I'm wrong.


You're almost certainly right. And a lot of those hosts do not reliably do a good job of staying current on patching.


Infosec experts have pointed out this article contains some inaccuracies: https://www.virusbulletin.com/blog/2018/02/there-no-evidence...


Counterpoint: "There is no evidence in-the-wild malware is using Meltdown or Spectre" https://www.virusbulletin.com/blog/2018/02/there-no-evidence...


>> the Flash Player patch Adobe will release next week.

Part of me thinks you could read that sentence this week, next week, a month or 6 months from now and it will still hold true...


that would only stay true until 2020, when they stop Flash Player support


Countercounterpoint: once a proof of concept shows it is possible to use something maliciously, in a way that isn't impractically inefficient, it will soon enough be used that way by someone.

So: don't panic, but do take precautions as soon as possible.


That looks good on paper, but it doesn't seem to be happening. Researchers aren't having trouble reproducing the exploit independently, and it didn't take long to expand and improve it, e.g. https://mobile.twitter.com/aionescu/status/95126147034336051... from almost a month ago. But malware authors just don't seem interested so far. Since it's a read-only attack and would have to be chained with other vulnerabilities for most malicious use cases, maybe there's lower-hanging fruit, or they just don't see the potential.


> malware authors just don't seem interested so far

Possibly because the issues are mostly mitigated in the wild and there are other, easier to exploit, holes out there too (particularly the water-bag problem: human engineering can be a great attack vector). So they are picking the lower hanging fruit instead. As soon as there is a PoC that seems to have a decent ROI for the implementation time, exploits will appear in the wild.


That's Meltdown, not Spectre. And Meltdown relies on native code running with very little interference from other things that would throw off the timing.

If somebody could already run that code, they'd choose another attack method.


There is no *publicly disclosed* evidence of in-the-wild malware using Meltdown or Spectre *yet*.

The real nightmare targets are cloud providers. Many smaller providers have not patched yet.


I can assure you this is 100% false. These techniques were being used in the wild on cloud provider(s) at least as early as June 2017. I witnessed this myself.


Do you have any further info about this?


> A successful attack could expose passwords and other secrets.

Perhaps this is a good moment for the IT-world to start eradicating the use of passwords.

EDIT: Of course this will not solve every problem related to these vulnerabilities, but it might go a long way, especially if possible exploits are taken into account when designing new systems.


Passwords have nothing to do with this. The attacker can potentially do anything to the target system, voiding any security mechanism that can be accessed by the operating system.


Spectre and Meltdown don't give you the ability to _write_ memory, only _read_ it.

So they're "only" a problem to the extent that there is information you do not want the attacker to read. Passwords are the first obvious target here.

Of course the same attack could expose other credentials, so it's not a password-specific issue.


And replace them with what, exactly? Physical devices like the Yubikey are a neat idea, but not everything supports them. Biometrics are not a password replacement; they're a username replacement.

Also, it doesn't matter. Passwords, secret hashes, ssh keys; anything can be accidentally acquired.


> Yubikey - are a neat idea, but not everything supports them.

It's a fair point, but in general security has no silver bullets. Build a better lock, and attackers will build a better lockpick (or go through a window instead).

Lack of universal U2F support (a broader login security spec that Yubikeys support) is Yubikey's weakness, but being a security nihilist only helps the attackers.

More and more sites are starting to support U2F. The biggest is Google (which covers Gmail, Google Cloud, Google Docs, etc.), but Github, Facebook, and Dropbox also support it, among others.

The trade-off between usability and security applies here: prior to Meltdown/Spectre, authentication cookies were happily isolated, so logging in once every 30 days seemed reasonable. Now, cycling login cookies every day or so is arguably a better, more aggressive policy, but automatically logging users out after a short period hurts site adoption.


To be fair, maybe around half of all websites with login and password don't seem to require them at all and only seem to have accounts in order to obtain a valid email address for marketing.


Can you give an example?


HN for all the people who read and don't comment comes to mind.


Not replacing them, but MFA and good identity management is a start.


Exactly, fingerprints (and more generally biometrics) are usernames, not passwords.


Fingerprints are not passwords or usernames. They're another form of authentication.

Remember: MFA is a two-or-more combination of: something you know (password, pin, etc.), something you have (keycard, token, fob, etc.), something you are (fingerprint, iris scan, voice recognition, etc.), and potentially even somewhere you are (geolocation).


Even if you used pubkey authentication, these exploits could exfiltrate your private key just as easily as a password.


If the flaws can expose passwords, there's no particular reason they couldn't also expose biometric data. You can hash it and put it in a secure enclave, but you could do that with passwords too.


You could keep a private key in a secure enclave and use it to sign each request, or even just to sign a password with a nonce.

Though if your attacker has root they can change what is on the screen anyway.
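The nonce idea above can be sketched as a simple challenge-response: the server sends a random nonce, and the client returns HMAC(secret, nonce), so the long-term secret never crosses the wire. The function names and the in-memory secret below are illustrative; in practice the signing step would happen inside the enclave (and, as the parent notes, an attacker who can read client memory or root the machine still wins).

```python
import hashlib
import hmac
import os

SECRET = b"example shared secret"  # would live in a secure enclave/keystore

def make_challenge() -> bytes:
    """Server side: generate a fresh random nonce per login attempt."""
    return os.urandom(16)

def sign_challenge(secret: bytes, nonce: bytes) -> str:
    """Client side: prove knowledge of the secret without sending it."""
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def verify(secret: bytes, nonce: bytes, response: str) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = sign_challenge(secret, nonce)
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
response = sign_challenge(SECRET, nonce)
assert verify(SECRET, nonce, response)
# A response replayed against a different nonce is rejected:
assert not verify(SECRET, make_challenge(), response)
```

The nonce prevents replay: capturing one response tells the attacker nothing useful for the next login, unlike a captured password.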



