Is this true though? I'm no Spectre expert, but my layman understanding is that since you can poison branch predictors, you can steer the other process into speculatively executing a gadget that wouldn't normally be executed. That means you can conceivably get the other process to perform speculative operations on inputs you control, even if those code paths would never touch those inputs during normal execution.
I suppose if you mitigate the branch predictor poisoning (retpoline perhaps?) then this is not a concern any more.
You can only influence which branch it takes; you can't force it to jump to an arbitrary place. And there's a limit to how far down that branch it will go, of course.
If you can't control "x" as the attacker, you can't really get this to do anything useful no matter which way you manage to get the "if" to predict. Simply forcing the "if" to speculate one way or the other does not result in arbitrary memory reads; you need to both force the "if" and control "x".
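For context, the gadget being discussed looks like the bounds-check-bypass example from the Spectre paper. A minimal sketch, with names roughly as in the Kocher et al. proof of concept (the exact code quoted upthread isn't reproduced here):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of the Spectre v1 gadget shape: "x" is the attacker-influenced
       index and the bounds check is the "if" being mistrained. */
    uint8_t array1[16];
    uint8_t array2[256 * 512];
    unsigned int array1_size = 16;
    uint8_t temp;  /* keeps the compiler from optimizing the load away */

    void victim_function(size_t x) {
        if (x < array1_size) {                /* the "if" that gets mispredicted  */
            temp &= array2[array1[x] * 512];  /* the "temp &= ..." dependent load */
        }
    }

With a mistrained "if" and an out-of-bounds "x", the dependent load leaves a cache footprint keyed on array1[x]; without control of "x", forcing the speculation doesn't get you anywhere interesting, which is the point above.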
> You can only influence which branch it takes; you can't force it to jump to an arbitrary place.
Assuming you've mitigated Spectre v2, right?
If you are vulnerable to v2, I can take any other indirect branch in your program (which may appear after some "y" I can control as an input) and have it speculatively branch from there to this "temp &= ..." code, leaking the value of "y".
If you are not vulnerable to Spectre v2, then I agree: the paths are much more limited, and tied to speculative execution that already involves attacker-controlled values.
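To illustrate the v2 point, here is a hypothetical dispatch site (handler_t, ops, noop_handler, and dispatch are illustrative names, not from the thread): with branch-target injection, the indirect call's predicted target can be trained to point at the "temp &= ..." gadget, so it runs speculatively with "y" where the gadget expects "x", even though no architectural path ever goes there.

    #include <stddef.h>

    typedef void (*handler_t)(size_t);

    static void noop_handler(size_t y) { (void)y; }

    static handler_t ops[4] = { noop_handler, noop_handler,
                                noop_handler, noop_handler };

    void dispatch(size_t y, unsigned op) {
        /* Indirect call: under Spectre v2 its *speculative* target can be
           poisoned, e.g. to point at the gadget above, which then indexes
           with "y" and leaves a measurable cache footprint. */
        ops[op & 3](y);
    }

Retpoline (or hardware controls like IBRS/eIBRS) constrains where an indirect call like this can speculate to, which is why the distinction matters.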