I found the paper interesting in that I hadn't heard of cache way prediction before, but I kind of have to agree with AMD's assessment here.
The attacks outlined in the paper all take the form of setting up an L1 cache structure in some way to induce collisions with other threads (or with the kernel running in the same thread), and then measuring when collisions occur in order to deduce bits of the memory addresses accessed by the other thread (or the kernel).
This type of attack has been known for a long time: you can do it just by making sure to evict all of the other thread's cache lines. It seems to be generally agreed upon that it is software's responsibility to guard against this kind of attack.
What's new in the paper is that instead of just bits 6 to 11, additional bits of the virtual memory addresses accessed by the other thread can be leaked. That's an interesting result, but I find it questionable how critical it is in practice. Making it easier to break ASLR feels like the biggest potential problem here, and I'm not sure it really is one.
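To make the "bits 6 to 11" point concrete, here is a minimal sketch (my own illustration, not code from the paper) of why a classic eviction-based attack only recovers those bits: with 64-byte lines and 64 sets, a typical L1D set index is computed from address bits 6..11, so any two addresses that agree on those bits collide in the same set regardless of their higher bits. The constants and function name are assumptions for illustration.

```python
# Hedged sketch: assumes a conventional L1D layout with 64-byte cache
# lines (bits 0..5 = line offset) and 64 sets (bits 6..11 = set index).
LINE_BITS = 6
SET_BITS = 6

def set_index(vaddr: int) -> int:
    """Cache set selected by an address: bits 6..11."""
    return (vaddr >> LINE_BITS) & ((1 << SET_BITS) - 1)

# Two addresses 4 KiB apart share the same set index, so a pure
# eviction-based attack cannot distinguish them: bits >= 12 stay
# hidden. The paper's contribution is that the way predictor leaks
# some of those additional virtual-address bits.
a = 0x12345040
b = a + 0x1000
assert set_index(a) == set_index(b)
print(hex(set_index(a)))  # both map to set 0x1
```

This is exactly why leaking bits above 11 matters for ASLR: they are the bits an eviction attack couldn't previously see.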
We are aware of a new white paper that claims potential security exploits in AMD CPUs, whereby a malicious actor could manipulate a cache-related feature to potentially transmit user data in an unintended way. The researchers then pair this data path with known and mitigated software or speculative execution side channel vulnerabilities. AMD believes these are not new speculation-based attacks.
AMD continues to recommend the following best practices to help mitigate against side-channel issues:
Keeping your operating system up-to-date by operating at the latest version revisions of platform software and firmware, which include existing mitigations for speculation-based vulnerabilities
Following secure coding methodologies
Implementing the latest patched versions of critical libraries, including those susceptible to side channel attacks
Utilizing safe computer practices and running antivirus software
"Additional funding was provided by generous gifts from Intel" I want to know more! Was Intel tired of this team finding a new CVE every 6 months, so they sent free AMD gear?
Edit: Intel is funding some of the Graz students (to work on anything) https://mobile.twitter.com/lavados/status/123608333055623168...
Google has Project Zero (a team dedicated to finding exploits in competitor products). The end result is safer products all around. I think it's better that your competitor finds flaws in your products and uses them for self-promotion than that a 3rd party sells exploits.
Meh, seems like this would be a wet dream if it's the only issue found on an Intel chip these days. Pretty easy exploit but pretty limited exposure from doing so.
Traditionally, conference proceedings are given out at the conference itself.
Looking at the conference in question (ASIACCS'20), the camera-ready deadline for a paper is March 15. Most authors probably have that camera-ready version at this point.
This could just be SOP for these authors, but it's also possible they are worried that the conference will be cancelled, as so many others have been. Perhaps they figuratively blurted out their material early.