Hacker News
Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs (arstechnica.com)
108 points by gabemart on Sept 18, 2013 | 31 comments



No, researchers could theoretically slip a difficult-to-detect trojan into Ivy Bridge if they controlled the manufacturing process for the chip. The paper is extremely interesting and important, but this headline does it a grave disservice. There is actually nothing specific to Ivy Bridge about the technique; Ivy Bridge is simply a case study in the paper for how the technique might be valuable to an attacker.


Exactly: the researchers haven't done anything with schematics of a real Intel processor. It's "what we believe we could do if we had control of the production of a processor with an RNG like the one Intel describes as existing in its CPUs starting with the Ivy Bridge architecture."

Still, it's interesting to see how such things could be done.


The choice of BIST over scan/ATPG is what makes this attack theoretically possible on Ivy Bridge. If Intel had used scan/ATPG, then this attack could not have worked even in theory. Unfortunately, I don't have the insight to understand why scan/ATPG could itself introduce a backdoor.

Disclaimer: I don't think that this attack is even theoretically feasible given just a few real-world constraints. I am just nitpicking.


Scan chains, by design, give you the ability to load each flop with whatever value you want. Then you can toggle the clock and see the result. If you were very persistent about this, you might be able to gather information about the structure. Without a GDS or netlist this is all very far-fetched and unlikely. As I've said before, it is far more likely that someone would bribe a rogue designer than introduce back doors at this level.


Alarmist title: you can't execute code undetected, as the title would have you think; rather, you can alter the manufacturing process to poison the random number generator. The alteration is not detectable using current testing procedures.

It's not any big deal; people are already treating the RDRAND instruction as poisoned.


Before: "it's no big deal, how likely is it that someone could compromise RDRAND without anyone noticing?"

After: "what's the big fuss, everyone who counts already treats RDRAND as compromised."

Why does this always happen with stories like this one?


Possibly because almost every single post like this brings out the same apologists. The same apologists get voted sky high.

The proper question is how do the same apologists get voted to the moon and posts such as this get deleted?

We already know of corporate astroturfing. Could that be happening on HN? Nah, unpossible.


Are you suggesting NSAstroturfing?

The government has engaged in that sort of thing before. http://rezn8d.net/2012/10/19/us-defense-contractors-sock-pup...


Worst headline ever. Ars is very bad at reporting on security issues.


> In an e-mail, Paar stressed that no hardware trojans have ever been found circulating in the real world and that the techniques devised in the paper are mere proofs of concept.

No need to get all worried about this type of hack. First of all, it is just an idea, not something actually found in the wild. Secondly, the hack would be all too easy to detect: run RDRAND 2^31 + x times and check whether the last x outputs repeat an earlier sequence.


It is not "too easy" to distinguish a bad PRNG from a good one.


Well, this particular change to the PRNG could be detected that way. Detecting arbitrary tampering in a black-box fashion is, as you say, very hard, but black-box detection is not what is being discussed; optical inspection of the die is also allowed.


I agree. Sounds like a number theory halting problem.

Generate sequences of numbers continually, ad infinitum, and determine if distribution rates are (?).

How would one begin to fill in that question mark? A PRNG that spits out three million 'zeros' and a single 'four' is as reliable as a PRNG that spits out an even distribution of random numbers.


PRNGs are ultimately machines: they produce data from an input, even if that input is obscured from whoever receives the output. Statistical tests for randomness are still very valid and very important for PRNGs, and your dismissal of them is premature. Three million zeroes is so wildly improbable that any PRNG that produced such a sequence for a reasonable seed value would be intensely scrutinized to determine whether other seeds exhibited similar behavior. If Dual_EC_DRBG exhibited that behavior, the cryptographic community's reaction would be swift.

The issue here is that the entropy of such a sequence is extremely low, and while low-entropy PRNGs do exist, they are not considered good for cryptographic purposes. A PRNG whose algorithmic weaknesses led to long, stable, highly compressible sequences (that is, sequences with very low entropy, as measured by, say, Kolmogorov complexity) would be considered suspect.

Determining Kolmogorov complexity is itself a difficult problem: it's the problem of finding the shortest program (in the theoretical computer science sense) that produces a given output. And the complexity of the three-million-zeroes sequence is trivially low, which is a huge problem for a PRNG. If every other seed had an extraordinarily high complexity with lots of hidden state, then maybe it would be considered acceptable, but I suspect not.


Isn’t the problem with detecting bad randomness that you can’t be sure about your decision?

That is, if you accept a certain probability of falsely identifying an RNG as biased, I don’t see why this should be a fundamentally difficult problem, at the very least for the generator with four million 0s and a single 1: the probability that a good RNG spits out fewer than a thousand 1s in four million queries is negligible compared to the probability that it returns between 1.5 and 2.5 million 1s.

Of course, it will be harder to detect more subtly broken generators, but still, without requiring certainty, it should be possible to make some sensible claims?
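That reasoning can be sketched as a simple z-score on the count of ones. This is an illustrative toy (sample sizes scaled down from the "four million" above, names invented), not a serious randomness test:

```python
import math
import random

def bias_z_score(bits):
    """Standard score of the observed number of ones against a fair coin.

    For a fair generator, ones ~ Binomial(n, 0.5): mean n/2, standard
    deviation sqrt(n)/2. A fair source stays within a few units of zero;
    an almost-all-zeros source produces an enormous negative score.
    """
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / (math.sqrt(n) / 2)

rng = random.Random(42)                       # stand-in for a healthy RNG
fair_bits = [rng.getrandbits(1) for _ in range(40_000)]

broken_bits = [0] * 39_999 + [1]              # "millions of 0s and a single 1", scaled down

# bias_z_score(fair_bits) lands within a few sigma of 0;
# bias_z_score(broken_bits) is about -200, far past any sane threshold.
```

So for this particular failure mode, accepting a tiny false-positive rate (say, flagging anything beyond 5 sigma) does make the decision easy, as the comment suggests.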


For background info, read The Art of Computer Programming, chapter 3.

For practical help, see http://www.stat.fsu.edu/pub/diehard/ (linked from http://en.wikipedia.org/wiki/Diehard_tests)


Couldn't you run the same test (say) a few thousand times, then look at the distribution of numbers? Statistically they should be distributed evenly and be equally likely to appear for large enough runs.

Any number or group of numbers that are consistently over/under represented should point to a bias as far as I can tell.


For large enough runs, 0, 1, 2, ..., max, 0, 1, ... etc. exhibits that behaviour. Thus, a PRNG that fails your proposal is bad, but passing it doesn't mean that the PRNG is good. A test suite like diehard (or its more modern successor, dieharder) runs tests for uniform distribution as well as a lot of others, but even then, a pass only says "any glaring nonrandomness can't be detected by this suite".
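The counter example can be made concrete: a plain 0, 1, 2, ..., max sequence passes a bucket-frequency (uniformity) check perfectly, yet a trivial serial check exposes it immediately. A minimal sketch, not part of any real test suite:

```python
from collections import Counter

def frequency_test_passes(samples, buckets=16, tolerance=0.2):
    """Check that each bucket's frequency is within `tolerance` of uniform."""
    counts = Counter(s % buckets for s in samples)
    expected = len(samples) / buckets
    return all(abs(counts[b] - expected) / expected < tolerance
               for b in range(buckets))

counter_output = list(range(10_000))   # 0, 1, 2, ...: maximally non-random

# The counter is *perfectly* uniform, so the frequency test passes...
assert frequency_test_passes(counter_output)

# ...but a serial test catches it instantly: every successive difference is 1.
diffs = {b - a for a, b in zip(counter_output, counter_output[1:])}
# diffs == {1}
```

Which is exactly why suites like dieharder stack many different tests rather than relying on uniformity alone.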


Thanks, that's true but my comment was mostly a response to the "three million zeroes and a single four" as a valid random sequence comment. I wasn't suggesting it as a generic method to test the quality of a PRNG.


In fact you can perform a birthday attack, which costs on the order of 2^16 calls when n=32. I'm surprised the researchers don't mention this: they handwave counter-attacks away because of the "very good digital post-processing, namely AES", but since the attack forces a fixed key, that post-processing isn't actually very powerful.
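The birthday argument can be simulated at a smaller scale. The sketch below assumes, as in the attack under discussion, that each wide output of the trojaned DRBG is a deterministic function of only n bits of real entropy; here n is scaled down to 20 bits, SHA-256 stands in for the AES conditioner, and all names are hypothetical:

```python
import hashlib
import random

def drbg_output(seed_bits, rng):
    """Model a DRBG output: 128 bits derived deterministically from a seed
    of `seed_bits` bits of entropy (SHA-256 standing in for AES)."""
    seed = rng.getrandbits(seed_bits)
    return hashlib.sha256(seed.to_bytes(16, "big")).digest()[:16]

def has_collision(outputs):
    return len(set(outputs)) < len(outputs)

rng = random.Random(1234)
n = 20                    # scaled-down trojaned entropy (the paper's attack: n=32)
samples = 1 << 13         # ~2^(n/2 + 3) draws: a repeat is expected w.h.p.

weak = [drbg_output(n, rng) for _ in range(samples)]        # trojaned: collides
strong = [drbg_output(128, rng) for _ in range(samples)]    # honest: no repeats

# By the birthday bound, the weak generator repeats a full 128-bit output
# after roughly 2^(n/2) draws, while the honest one essentially never does.
```

With n=32 the same collision test needs on the order of 2^16 calls, which is why the fixed-key AES post-processing doesn't hide that variant of the trojan.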

However, the attack is essentially undetectable with n=128, and it can still be very useful because they force the AES key to be constant: between two reseedings of the conditioner, the output of RDRAND will be entirely deterministic and computable by the attacker. According to http://www.cryptography.com/public/pdf/Intel_TRNG_Report_201... , under heavy load there could be up to 44 64-bit outputs between consecutive reseedings, even assuming a healthy entropy source. The first 2 outputs are unpredictable, and the remaining (up to) 42 outputs can be predicted by the attacker.

Imagine a scenario where a malicious sandboxed application uses the random generator to monitor RDRAND output, while another application generates a cryptographic key. I think that according to Intel, this is safe because RDRAND is designed to be resistant to malicious processes (see the above paper). But if the n=128 attack is used, the malicious process can detect with high reliability when another process uses RDRAND, and can recover the value received by the other process (the exception is that if another process uses RDRAND just before a reseed, it can't be detected). For a 256-bit key and maximal load, that's a 90%+ chance of recovering the key. All with just a simple sandboxed binary.

And I think the broader point of the researchers is not about any particular attack, which can indeed be countered by specific countermeasures: it's that we now know that it's easy to maliciously alter any chip design to add various vulnerabilities, and it's probably going to be very difficult for any chip designer to predict what vulnerabilities this will introduce. I think RDRAND was chosen just because design details were made publicly available by Intel, not because it's the most interesting target. The paper gives another interesting example, which enables side-channel attacks in a side-channel-resistant chip.


<3 the Trojan Horse from Burning Man 2011 :D


This is interesting, but I wonder what the real world application of this change would be. According to Linus, the Intel RdRand is only used as one of many sources of entropy in /dev/random. So even if it were compromised it wouldn't have a large impact on security.

Source: http://www.change.org/en-GB/petitions/linus-torvalds-remove-...


Somebody posted this link on another discussion: http://cm.bell-labs.com/who/ken/trust.html

There's no way to know for sure, but I don't think this is a problem for the average consumer. I'd like to think serious black hats are not amateur fishermen who use a bait to catch anything that passes by. I picture them more like a sniper waiting for the perfect shot on a valuable target. In other words, yes every system could be compromised, but the ones that are will be the ones holding more valuable information. And those are few relative to the mass of average joes. But then one might get an advantage from controlling a botnet, and use it to attack more valuable targets.

I don't really know :)


It's the same as security always is. "What are my enemies capable of" is an absolutely critical component of the security equation that cannot be overlooked.


Not all applications use Linux. Not all applications go through Linux's /dev/random for their random needs. If you look into the discussion on, say, Theodore Tso's G+ entry on rdrand, you'll find that some people are eager to skip the OS and use rdrand directly.


Most server applications run on the GNU/Linux operating system. Those machines, not Windows-based computers, are usually the strategic targets of such attacks.


Theodore Tso is a Linux kernel developer.


The only caveat is that if RDRAND were made to output the bits about to be sent out XORed with a stream of AES(n++, NSA_KEY), then you'd get seemingly random data that is 100% deterministic once you know the key and completely invisible if you don't know the key.
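A toy version of that scheme, with a keyed SHA-256 in counter mode standing in for AES(n++, NSA_KEY) (the key, sizes, and names here are all hypothetical): without the key the stream looks fresh, but a key holder who syncs to the counter can predict every future output.

```python
import hashlib

SECRET_KEY = b"hypothetical-backdoor-key"   # the NSA_KEY of the comment

def backdoored_rdrand(counter):
    """Deterministic 'random' 64-bit output: keyed hash of an incrementing
    counter (SHA-256 in counter mode standing in for AES(n++, key))."""
    msg = SECRET_KEY + counter.to_bytes(8, "big")
    return hashlib.sha256(msg).digest()[:8]

# The victim sees what looks like fresh 64-bit random values...
victim_stream = [backdoored_rdrand(n) for n in range(10)]

# ...but a key holder who observes one output can find the counter
# position and predict the next value exactly.
observed = victim_stream[5]
pos = next(i for i in range(1_000) if backdoored_rdrand(i) == observed)
prediction = backdoored_rdrand(pos + 1)
# prediction == victim_stream[6]
```

This is the sense in which such output is "seemingly random" yet fully deterministic to the key holder.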


NSA can do much better than that. US tech can no longer be trusted.


Doesn't sound undetectable to me. Isn't this what the die hard tests are for?


No, it's not. The diehard tests don't help against malice.

Consider this PRNG:

`"random"_bitstream` := AES('00000000...', NSA_KEY)
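To sharpen the point: such a keystream sails through a basic frequency (monobit) check even though it is 100% predictable given the key. A minimal sketch, with a keyed hash standing in for AES and an invented key name:

```python
import hashlib

NSA_KEY = b"hypothetical-key"

def keystream(blocks):
    """'Random' bitstream: keyed hash of a counter (stand-in for AES-CTR)."""
    out = b""
    for n in range(blocks):
        out += hashlib.sha256(NSA_KEY + n.to_bytes(8, "big")).digest()
    return out

stream = keystream(256)                 # 8192 bytes = 65536 bits
ones = sum(bin(byte).count("1") for byte in stream)

# Monobit test: for 65536 bits a fair source gives ~32768 ones, with a
# standard deviation of sqrt(n)/2 = 128. A cryptographic keystream lands
# well inside a few sigma of that, so no black-box statistical test flags
# it, yet the key holder can regenerate every single bit.
```

So statistical test suites bound only accidental weakness; they say nothing about deliberate, keyed determinism.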



