Hacker News
Spectre Mitigations in Microsoft's C/C++ Compiler (paulkocher.com)
163 points by ENOTTY on Feb 15, 2018 | 40 comments



> I've been in touch with the Microsoft compiler team ... Given the limitations of static analysis and messiness of the available Spectre mitigations, they are struggling to do what they can without significantly impacting performance. They would welcome feedback – should /Qspectre (or a different option) automatically insert LFENCEs in all potentially non-safe code patterns, rather than the current approach of protecting known-vulnerable patterns?

The fact that this isn't a full mitigation is mentioned in the original post from Microsoft, but only as a footnote. I think it would be better if it were spelled out more clearly, so people don't think they're safe when they're not.

From https://blogs.msdn.microsoft.com/vcblog/2018/01/15/spectre-m...

> It is important to note that there are limits to the analysis that MSVC and compilers in general can perform when attempting to identify instances of variant 1. As such, there is no guarantee that all possible instances of variant 1 will be instrumented under /Qspectre.


Yeah, even this footnote really oversells it, IMO. I have trouble imagining a person who would want the level of (non-)protection delivered by this flag.

Kind of reminds me of https://digitizor.com/internet-explorer-9-caught-cheating-in... (sorry for the non-primary source, the original blog post has since been taken down).


Hi, I'm the guy who wrote the original VCBlog post. I would assert that we've been consistent in our messaging: there is not an automatic fix available for Spectre. The /Qspectre switch offers help in mitigation. It doesn't offer, nor does it claim to offer, complete protection.

We--as an industry--are learning as we go with Spectre. There's a lot of data that went into the decision to release this switch with its current implementation. And this isn't the last iteration: note in Paul Kocher's writeup that we've asked for feedback as to whether anyone would use a switch that was sound (for known variants) but incurred very large performance regressions.

As evidence that this is an industry-wide issue, I ask that you reread Paul Kocher's post. Also, see Chandler Carruth's tweet this morning about this topic: https://twitter.com/chandlerc1024/status/963995521705627648. Chandler is the guy driving Spectre in LLVM.

Lastly, understand that Microsoft has a lot of customers who rely on our technologies. For their benefit it's often better to say less than to say more, especially when talking about security vulnerabilities.


> Hi, I'm the guy who wrote the original VCBlog post.

Hi from one compiler engineer to another!

> Chandler is the guy driving Spectre in LLVM.

I actually sit about 25ft away from him; we talked about his tweet before lunch today. :)

> there is not an automatic fix available for Spectre. The /Qspectre switch offers help in mitigation. It doesn't offer, nor does it claim to offer, complete protection.

When I read this sentence and the footnote in the VCBlog post my takeaway is that /Qspectre offers incomplete protection that is nonetheless useful for a nontrivially broad class of applications. That is, I understand "incomplete mitigation" to be a stronger statement than "there exists a program in which the spectre attack is mitigated".

But when I read Paul's post, what I understand is that the level of protection offered is not useful for applications that do not look extremely similar to the original Spectre PoC.

I wonder if you think I'm being unfair in my reading of either of these documents?


> understand that Microsoft has a lot of customers who rely on our technologies. For their benefit it's often better to say less than to say more, especially when talking about security vulnerabilities.

I'm also genuinely curious how telling customers less about a security fix might be better for them than telling them more.


I wonder why they aren't using the approach others are taking.

That is, compile an array access a[i] into an array of size N as a[i&M] where M is one less than the next power of two larger than N.

The logical AND is very fast; it can be applied everywhere without a big performance hit. And this removes almost all the attack surface, as the real vulnerability is when the attacker forces speculative access to go way outside the array to unrelated memory that probably even belongs to another process.
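
For concreteness, a sketch of what that could look like (illustrative only; a compiler would do this during codegen, and the names here are made up):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t a[1000];

    /* M = (next power of two >= N) - 1. For N = 1000, M = 1023, so a
       speculative out-of-bounds index is clamped to within 1024 bytes
       of the array instead of reaching arbitrary memory. */
    enum { M = 1023 };

    uint8_t read_clamped(size_t i) {
        return a[i & M];   /* branch-free, one cheap AND per access */
    }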

Edit: actually there is an even better masking approach the linux kernel is using -- see https://lwn.net/Articles/744287/


> That is, compile an array access a[i] into an array of size N as a[i&M] where M is one less than the next power of two larger than N.

Doesn't that still permit speculatively fetching an out-of-bounds element, where the problem gets worse with larger arrays?

The following LWN article describes a more sophisticated masking approach by Alexei Starovoitov:

https://lwn.net/Articles/744287/

Of course, the same question still applies: why not use this cheaper masking approach? Are there known problems with it?
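
The core of it, modeled on the kernel's array_index_mask_nospec (simplified here, so treat it as a sketch): compute, without any branch, a mask that is all-ones when the index is in bounds and zero otherwise, then AND it into the index.

    #include <stddef.h>

    /* Returns ~(size_t)0 if index < size, else 0. Branch-free, so
       there is nothing for the CPU to mispredict. Assumes size is
       nonzero and below LONG_MAX; note also that arithmetic right
       shift of a negative value is implementation-defined in ISO C,
       though every relevant compiler does what the kernel expects. */
    static inline size_t index_mask_nospec(size_t index, size_t size) {
        /* If index < size, both index and (size - 1 - index) have the
           sign bit clear, so ~(index | ...) is negative and the shift
           smears its sign bit into all-ones. If index >= size, the
           subtraction wraps and sets the sign bit, yielding zero. */
        return (size_t)(~(long)(index | (size - 1 - index))
                        >> (sizeof(long) * 8 - 1));
    }

Usage is i &= index_mask_nospec(i, size) just before the access; the kernel wraps this pattern in its array_index_nospec() macro.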


I don't think the compiler always knows N. This would then be something the programmer (and/or a library implementer) should provide.


You can't read another process's memory with Spectre. And your solution would only work for statically sized arrays. In that case it does seem like a reasonable thing to do, though it's not a full solution.


Spectre can read from other processes! Meltdown can even read across privilege levels (userland to kernel). But within userland, Spectre lets you read anything. See: https://isc.sans.edu/forums/diary/Meltdown+and+Spectre+clear...


No, that'd be like claiming buffer overflow attacks let you exploit any other process on the system. It's technically true-ish but misleading.

You can only read from a process that you have managed to compromise, and Spectre is a new category of exploit vectors, not an exploit in and of itself.

So you need a process that takes external inputs (you need some way to influence it), and then you need to figure out how that input can resolve to a viable exploit via Spectre. If your input validation code is Spectre-free (LFENCE, whatever) then you're probably good to go. You don't need everything in your app to be Spectre-free. In the blog's example, for instance, he just assumes that somehow an attacker can influence the input x. But if there's no way for an attacker to do that, because x never comes from any externally influenced thing, then you didn't need any LFENCEs on it.

Ensuring that all your external-touching code is completely spectre-free is non-trivial, of course, but it's not like you can just arbitrarily exploit anything you want, and you don't need every array access to be LFENCE'd or mask'd.


IMHO it's worth mentioning that this discussion is about Spectre variant 1 (as is the original article). I.e. we should not write "Spectre" when we talk about a specific Spectre variant.

I'm mentioning this because (at least to my understanding) in Spectre variant 2 the entire address space of the victim process can be used to find the "gadget", i.e. a usable target for the indirect branch. This means that making only your input validation code "spectre-free" is not good enough for variant 2. (This is why e.g. OpenSSH recently started using the (Spectre variant 2!) retpoline compiler flags of GCC/LLVM if available. See this thread for details: https://lists.mindrot.org/pipermail/openssh-unix-dev/2018-Fe...)
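
(For reference, the flags in question are -mindirect-branch=thunk for GCC and -mretpoline for Clang.)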


True, but variant 2 isn't as gloomy as it sounds, because there are two major challenges with it. The first is that you need detailed knowledge of the binary you're targeting as well as its memory layout; ASLR makes that challenging, to say the least. You then also need a side channel of some sort to observe the effects, such as shared memory.
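
For anyone unfamiliar: the side channel in the public PoCs is cache timing (Flush+Reload). A rough sketch of the measurement primitive, using GCC/Clang x86 intrinsics (simplified; thresholds and noise handling omitted):

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

    /* Evict a probe line from the cache before the victim runs. */
    static void flush(const void *p) { _mm_clflush(p); }

    /* Time a single load after the victim runs. A "fast" reload
       means the victim's speculative execution touched this line. */
    static uint64_t time_load(const volatile uint8_t *p) {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                      /* the measured load */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }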


Is this true though? I'm no spectre expert, but my layman understanding is that since you can poison branch predictors, you can direct the other process to execute a gadget that wouldn't normally be executed. This means you can conceivably direct the other process to perform speculative operations on inputs you control, even if those code paths would never touch those inputs during normal execution.

I suppose if you mitigate the branch predictor poisoning (retpoline perhaps?) then this is not a concern any more.


You can only influence which branch it takes, you can't force it to jump to an arbitrary place. And there's a limit to how far down that branch it will go, of course.

But consider the example function in the article:

    void victim_function_v01(size_t x) {
        if (x < array1_size) {
            /* If the branch mispredicts, array1[x] is read out of
               bounds, and its value leaks via which array2 cache
               line the second load touches. */
            temp &= array2[array1[x] * 512];
        }
    }
If you can't control x as the attacker, you can't really get this to do anything useful no matter which way you manage to get the if to predict. Simply forcing the if to speculate one way or the other does not result in arbitrary memory reads. You need to force the if and control x.


> You can only influence which branch it takes, you can't force it to jump to an arbitrary place.

Assuming you've mitigated spectre v2, right?

If you are vulnerable to v2, I can take any other indirect branch in your program (which may appear after some "y" I can control as an input) and have it speculatively branch from there to this "temp &= ..." code, leaking the value of "y".

If you are not vulnerable to spectre v2, then I agree, the paths are much more limited and tied to speculative execution that is related to attacker-controlled values.


You can only read memory that is mapped in the address space of vulnerable processes with Spectre. Usually, that is only memory from the current process and maybe a small bit of shared memory (and code segments of shared libraries, but those don't contain any interesting secrets). So while you can read memory from other processes, those processes need to be vulnerable and you cannot read memory from processes that applied mitigation.


Is this true? I'm no spectre expert but I thought that if you could detect cache changes correctly, you could infer the value of memory that you did not have mapped (but the other process did) as long as the other process changes its cache-touching behavior based on the value of the memory it had access to.


Here's the big takeaway.

> At first, I thought I had the wrong version of the compiler, since I wasn't seeing any LFENCEs. Finally, I tried compiling my example code from the Appendix of the Spectre paper, and saw LFENCEs appearing. Still, even small variations in the source code resulted in unprotected code being emitted, with no warning to developers.

There's some explanation for why this is the case, but it's not looking all that good for Microsoft's implementation, IMO.


The actual conclusion is also fairly clear: "Developers and users cannot rely on Microsoft's current compiler mitigation to protect against the conditional branch variant of Spectre."

"Speculation barriers are only an effective defense if they are applied to all vulnerable code patterns in a process." Which they are not, since that would be a huge performance hit.


> It's important that no exploitable paths get missed. An attacker needs only a single vulnerable code pattern, which can be anywhere in a process's address space (including libraries and other code that have nothing to do with security).

Hmm, I don't understand this statement. My (simplistic) understanding of this attack is that speculative execution produces detectable side-effects that result in memory being leaked from another address space that the attacker isn't supposed to be able to read. Surely if Spectre is used to leak the contents of something useless then that's not going to help the attacker compromise the application. Spectre is read-only, right? Am I missing something?


Spectre can be used to leak arbitrary memory from a process, IIRC. So if you have a process with nothing interesting in it, it should be fine.


That's my understanding as well. If you are writing a new Notepad.exe, you probably don't care about Spectre.


Until someone uses it to keep a plaintext file with their passwords in it open.


Hope your users do not try to open an SSL private key...


Well, if they already have malware on their machine, they are screwed anyway. Since the browsers have mitigations built in already, I'm not worried about a remote attack over the web. What's left?


How about games?


I thought for Intel processors it leaked across all processes, including outside user-space.

Or was that meltdown?


Meltdown allows information to leak across ring boundaries. Spectre by itself doesn't. Meltdown is relatively easy to fix whereas Spectre is a more fundamental issue with how most modern chips are designed.


Been looking for an answer to a related question on this.

DRM technologies like "white box crypto" seem to be designed for the same use case. Is encoding sensitive operations in an application using these technologies not a viable medium-term mitigation as well?


No. White-box and obfuscated encryption cores are extremely slow, can't protect program logic (unless you encrypt the program and run it as a VM, which is even slower), and are themselves failure-prone. They work in DRM because DRM is an economic defense intended mostly to protect the new-release window for a title and to impose greater costs on copiers than a title is worth. They're not general-purpose countermeasures.


I can't actually see it, but I believe this ticket[0] in the Chromium bugtracker is for Spectre mitigations (in V8). If you search the V8 commit log for BUG=chromium:798964, you'll find a number of commits affecting most of the backends (and if you look further, you'll also see, later in the log, related or similar commits that are no longer tagged with the bug number).

[0]: https://bugs.chromium.org/p/chromium/issues/detail?id=798964


What an absolute disaster Spectre has been, and what a shame it is that Intel won't be punished for it.


And all other chipmakers? Unlike Meltdown, Spectre applies to pretty much all modern processors.


Except most low-end chips (mostly ARM) are immune.


Insofar as they are immune, is it because of some sort of foresight and/or better engineering practices, or because the immune chips are simply cheaper, less sophisticated designs?

Hint: The fact that ARM ranges from "completely immune" to "affected by the less-common Meltdown" gives away the answer.


ARM chips are vulnerable to Spectre, and some even to Meltdown:

https://developer.arm.com/support/security-update

and MIPS to Spectre:

https://www.mips.com/blog/mips-response-on-speculative-execu...


People make mistakes, "punishment" isn't the way forward here. There are alternatives out there to Intel, but kindly remember that other chip makers also released hardware that is susceptible. Also, you can trust this won't be the last time issues such as this come up.


The only thing Intel deserves to be punished for here is their "nothing to see, move along" posture.

At the technical level, these are subtle bugs that affect most of the industry (with the Meltdown bug affecting a more restricted set of chips). It's a mess we'll be paying for for years to come, but not one that Intel is especially culpable for.


I think the big unspoken thing here is that the Windows operating systems themselves are compiled with this compiler. MS devs make tradeoffs for performance, and their statement that updated Win10 shows no significant performance degradation should be taken as: "We are delivering vulnerable operating systems because otherwise they would be unacceptably slow."

Scary times.



