For cryptographic applications, yes. That is why people have spent significant effort implementing constant-time algorithms to replace standard math and bitwise operations.
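For example, the standard trick is to replace an early-exit comparison with a branch-free one that always does the same amount of work. A minimal sketch (TypeScript purely for illustration - a JIT-compiled language can't strictly guarantee constant time, so real implementations do this in carefully audited native code):

```typescript
// Constant-time byte comparison: instead of returning at the first
// mismatch (which leaks the mismatch position through timing), OR all
// the XOR differences together and check once at the end.
function constantTimeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false; // lengths are treated as public
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // branch-free accumulation, no early exit
  }
  return diff === 0;
}
```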
At the hardware level, any optimization that changes performance characteristics locally (how long the crypto operation itself takes) or non-locally (in this case, the secrets leak via the cache timings observed by the attacker's untrusted code) is unsafe.
Intel DMPs already have a flag to turn off the same behavior that was exploited on the M1/M2, which may suggest that the risk of this type of optimization was understood previously.
Mixing crypto operations with general-purpose computation and memory accesses is a fragile balance. Where possible, try using HSMs, Yubikeys, secure enclaves - any specialized hardware that has been hardened to protect key material.
> Where possible, try using HSMs, Yubikeys, secure enclaves - any specialized hardware that has been hardened to protect key material.
Are there any circumstances where this hardware is accessible in the browser? As I understand it, it's not generally available (if at all) for any cryptography you might want to do in the browser.
The browser doesn’t expose direct access to JavaScript, but it can use this hardware for supported features. This already happens for FIDO/WebAuthn using a hardware root such as a Yubikey or Secure Enclave, and I believe SubtleCrypto uses hardware acceleration in some cases, though I don’t remember if it makes it easy to know when.
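For the FIDO/WebAuthn case, the flow looks roughly like this sketch (all identifiers below are illustrative placeholders): the page asks the browser for a credential, the browser brokers the request to the authenticator, and the private key never becomes visible to script.

```typescript
// Hedged sketch of a WebAuthn registration: the browser forwards this
// request to a hardware authenticator (Yubikey, Secure Enclave, or a
// TPM-backed platform authenticator); JavaScript only ever receives
// the public key and an attestation, never the private key.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "example.com" },                         // placeholder RP
    user: {
      id: new Uint8Array(16),                            // placeholder handle
      name: "alice@example.com",                         // placeholder
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
  },
});
```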
One thing to remember here, though, is that there isn’t anything special about key material in this attack other than it being a high-value target. If we move all crypto to purpose-made hardware, someone could just start trying to target the messages to/from the crypto system.
> If we move all crypto to purpose-made hardware, someone could just start trying to target the messages to/from the crypto system.
This is one of the technical advantages of a blockchain-based system: as long as the keys are protected and signatures are generated in a secure environment, the content of the message doesn't need to be secret to be secure.
It's not a solution for situations where privacy is desired, but if the reason for secrecy is simply to ensure that transactions are properly authorized (i.e., to avoid leaking passwords and session information), then keeping the signature process secure should be sufficient even where general secrecy cannot be maintained.
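As a rough sketch of that model using WebCrypto (assuming the signing side runs in the protected environment), the transaction bytes themselves stay public and verification needs only the public key:

```typescript
// Minimal sketch: the message is public; only signing needs protection.
const keys = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false, // non-extractable: the private key can't be exported to script
  ["sign", "verify"],
);
const tx = new TextEncoder().encode("transfer 5 units to account 42"); // public
const sig = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" }, keys.privateKey, tx,
);
// Anyone can check the authorization; the message was never secret.
const ok = await crypto.subtle.verify(
  { name: "ECDSA", hash: "SHA-256" }, keys.publicKey, sig, tx,
);
```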
Is the untrusted code observer able to see the cache-timing implications of fetches to addresses that the MMU considers off-limits for the process? This is what keeps surprising me; it doesn't align well with what I think I know about processors (not much).
For this type of attack to work, the algorithm being run needs to be very well understood, and the runtime of the algorithm needs to depend almost entirely on the secret key.
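The textbook example of that kind of dependence is naive square-and-multiply exponentiation, where the work done per bit varies with the bit's value - a hypothetical sketch:

```typescript
// Naive modular exponentiation: the extra multiply only happens for the
// 1 bits of the secret exponent, so total runtime (and the pattern of
// memory/cache activity) is a function of the key itself.
function modPow(base: bigint, secretExp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (secretExp > 0n) {
    if ((secretExp & 1n) === 1n) {
      result = (result * base) % mod; // key-dependent branch: the leak
    }
    base = (base * base) % mod;
    secretExp >>= 1n;
  }
  return result;
}
```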
In contrast, the timing of virtually any email operation is not dependent on the contents of the email, other than the size. That is, whether you wrote "my password is hunter2" or "my password is passwor", the timing of any operation running on this email will be identical.
Perhaps those could be attacked. It's possible, though, that it's not feasible: the possible inputs leading to a certain timing signature are just too many to get any data out of it.
Consider that those programs are not making any effort whatsoever to run in constant time, and yet no one has shown any timing attack against them. OpenSSL has taken great pains to have constant execution time, and yet subtle processor features like this still introduce enough time differences to recover the keys.
> It's possible, though, that it's not feasible: the possible inputs leading to a certain timing signature are just too many to get any data out of it.
That's plausible, but it's a very different argument from the original, which read:
> In contrast, the timing of virtually any email operation is not dependent on the contents of the email, other than the size.
Check out https://www.qubes-os.org/ for an operating system that tries to put as many layers of isolation as possible between an end user's applications.
It's infuriating that all modern computers have a secure crypto TPM, but you're explicitly not allowed to use it for your own important keys; it's only for securing things against you, like the DRM in certain drivers.
I’ve been using the TPM 2.0 chip on my ASUS-based Linux box to store various keys. Tooling for this on the Linux side has improved significantly [0], and it’s been supported since kernel 3.20 (2015) [1].
How effective this is at improving one’s security posture is another question, and it’s probably not a huge security upgrade, but it does mitigate some classes of attack.
I’m curious why you’re saying it’s explicitly not allowed? At least for standard TPM 1.2/2.0 chips, that isn’t the case.