> A system called TEMPEST that can tell if part of a computer has been compromised just from its electromagnetic emissions.
this is basically the only way you really can at this point, right?
with an increasing number of components per board assembly coming from a variety of sources, in a field where you can stuff an entire malicious payload inside a large capacitor, and accounting for all the counter-counter^n measures a nation-state can take...
all you can really validate is whether something under test functions identically to known-good copies by watching its every move: will it behave the same way as 100 other copies we've built, once you factor out thermal noise, environmental factors, and TOP SECRET parameters?
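As a rough sketch of what that fleet comparison might look like (the function names and the z-score threshold here are made up for illustration, and real systems have to deal with trace alignment and drift that this ignores):

```python
import numpy as np

def build_baseline(known_good_traces):
    """Per-sample mean and std across traces from known-good units.

    known_good_traces: array of shape (n_units, n_samples), e.g. averaged
    EM/power traces captured while each unit runs the same test vector.
    """
    traces = np.asarray(known_good_traces)
    return traces.mean(axis=0), traces.std(axis=0)

def looks_compromised(trace_under_test, mean, std, z_threshold=6.0):
    """Flag the unit if any sample deviates too far from the fleet baseline.

    The z-threshold is an illustrative value; in practice it has to be tuned
    against thermal noise and environmental variation, which is exactly
    where the false-positive problem comes from.
    """
    z = np.abs(trace_under_test - mean) / (std + 1e-12)  # avoid divide-by-zero
    return bool((z > z_threshold).any())
```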
Where does it all end? I have a feeling that in the limit this turns into thermodynamics somehow... Let's see: a component can behave correctly in only one way, but be compromised in many possible ways. Taken together, compromise is a higher-entropy state.
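A toy way to make that intuition concrete (the uniform distribution over compromised states is my own simplifying assumption):

```latex
% Toy model: one intended state vs. N equally likely compromised states.
% The intended behavior carries no surprise:
H_{\mathrm{correct}} = -1 \cdot \log_2 1 = 0 \ \text{bits}
% while a uniform distribution over N compromised behaviors carries:
H_{\mathrm{compromised}} = \log_2 N \ \text{bits}, \quad N > 1
```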
Only if you label everything but the intended deterministic state "compromised", but when you put operator error and programming bugs into the "compromised" bucket, it sort of loses its meaning.
Except that PowerGuard looks at the power side channel rather than the RF emissions side channel (although there is likely some correlation between the two). The company eventually dropped the product and pivoted to other products. If I remember correctly, Kevin Fu said the reason was the number of false positives: the embedded systems were complicated enough that there were often little-used subroutines that would very occasionally get called, setting off the anomaly detector. The timing of when those subroutines got called was often unpredictable.

Strange as it may seem, even embedded computers are no longer deterministic, due to interrupt and thread handlers, external trigger events, user interfaces, and noise. Unless the hardware is really simple and you can exercise all branches of all the software, this kind of anomaly detection does not seem likely to work well, especially for an FPGA running machine learning software, where there are likely to be all kinds of fuzzy boundary conditions that can cause changes in outcomes.

A 1.83 percent false positive rate may sound good, but if you are doing hundreds of integrity tests per day, then the system would be indicating the FPGA is compromised several times per day.
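To put numbers on that last point (the per-day test count is my own illustrative assumption, not from the talk):

```python
# Expected false alarms per day from a 1.83% false-positive rate.
fpr = 0.0183
tests_per_day = 300  # illustrative assumption: "hundreds" of tests per day
print(f"Expected false alarms/day: {fpr * tests_per_day:.1f}")  # ~5.5
```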
Just curious, where did you hear Kevin Fu speak about this? I heard about PowerGuard a while ago but never heard what became of it. If he has given talks about the failures they encountered, I'd love to hear them, because it's a really cool concept.
I think he gave a talk at Dartmouth College not long after his company pivoted to a new product. The talk was mostly about other research, but he also mentioned what was going on with his company. I don't remember the date or location; it might have been hosted by Dartmouth's ISTS. You could contact him directly; he's generally open to discussions about his work. I'm sure his contact info is available online.
I'm a little confused about whether this is really called "Tempest". If so, isn't that a terribly confusing name, because of the similar government codeword related to emissions security?
Ermm, yeah. TEMPEST (as an acronym for Transient Electromagnetic Pulse Emanations Standard) has been around for many decades, and so have the various means actors use to mitigate it. It relates to the RF emissions of hardware and ways to decode usable information from them. The name for this project is confusing: it uses TEMPEST characteristics to spot tampering, which is an interesting use case in itself, but they should've chosen another name. This will confuse the hell out of a lot of folks in this specific niche of infosec.
You are correct except for one tiny thing. TEMPEST is a code name [1], not an acronym. At some point, the acronym was fitted to the code word rather than the other way around, a case of a "bacronym" [1]. To my knowledge, no government document has ever used the acronym.
"Project called SHEATH": that's what it actually is. Every piece of software will have a different EM signature because (what a surprise) it will execute differently based on which instructions it contains.
I stumbled upon a talk from GRCon18 a few weeks ago where they did this [1], though on a much more restricted test case.
I find the idea interesting, but I'm curious how resistant it is to attackers who know you are using it, and how you'd design things to make it hard to insert a backdoor in a way that isn't noticeable.
Of course, it would be trivial for an enemy to put a transmitter aboard a commercial jet that mimicked military RFI, just to spoof the system and intentionally cause a shoot-down "by mistake".