Hacker News

I haven’t looked into it, but from what I’ve read from various sources, not every platform can extract entropy from that process. I took that to mean that the CPUs on which the jitter dance is employed are not entirely “deterministic.” Well, of course they are, but they must be carrying black-box state that affects execution and that, so far, is impossible to extract without knowing the full execution trace up to that point.

Can anyone provide a specific explanation for why some platforms can extract entropy from scheduling jitter and others can’t?




Imagine running a Linux kernel on a fully deterministic CPU emulator. The jitter would be the same on every boot.


Wouldn't disk, memory, network, cache, and really anything that could trigger an interrupt also need to be fully deterministic? It seems to me this is really what the jitter dance is measuring to get randomness. It is only reading the cycle counter; all of the above would affect that slightly.
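The measurement the comment describes can be sketched in a few lines. This is a toy illustration only, using Python's `time.perf_counter_ns()` as a stand-in for the CPU cycle counter; the kernel's real jitterentropy code is in C and does far more conditioning:

```python
import hashlib
import time

def collect_jitter_entropy(samples=1024):
    """Toy sketch: fold the timing deltas between consecutive
    clock reads into a hash pool. Interrupts, cache misses, and
    scheduling perturb each delta slightly on real hardware."""
    pool = hashlib.sha256()
    last = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        delta = now - last  # the "jitter" lives in the low bits
        last = now
        pool.update(delta.to_bytes(8, "little", signed=True))
    return pool.digest()

seed = collect_jitter_entropy()
print(seed.hex())
```

On a fully deterministic emulator every delta would be identical from boot to boot, so the digest would be too, which is exactly the concern upthread.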


Memory, disk, and cache could be deterministic in an emulator. Imagine an emulator for embedded software. The network doesn't come online until booting has reached a certain point, and by that time some keys may have already been generated. It is not a showstopper (after all, the jitter algorithm did make it into the actual kernel), but there are cases in which problems would be conceivable.


Elaborate on the threat, please. Who is running the kernel, and who is controlling the cycle-perfect emulator?


I have an embedded product that generates keys very soon after boot. I run it on an emulator for some reason. Oops, now my keys are fully predictable.
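The failure mode is easy to demonstrate with any deterministic generator. Here Python's `random.Random` stands in for a generator that was seeded only from deterministic early-boot state (a hypothetical scenario, for illustration):

```python
import random

def generate_key(boot_seed):
    """Stand-in for key generation seeded only from
    deterministic early-boot state (e.g. replayed cycle counts)."""
    rng = random.Random(boot_seed)
    return rng.getrandbits(128)

# Two "boots" of a deterministic emulator replay the same state...
key_first_boot = generate_key(0xC0FFEE)
key_second_boot = generate_key(0xC0FFEE)

# ...so the "secret" key is identical every time.
assert key_first_boot == key_second_boot
```

The key looks random to an observer of a single boot; the predictability only shows up when the whole boot is replayed.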


My honest take is that these dumb "but what if I paint myself into a corner" scenarios are why it takes so fucking long for anything to get better.


How would you get a random number on a fully deterministic emulator?


You'd ask the emulator to give you a seed. (This was mentioned in the article, and a way for VMs to do this has been added in this kernel.) You could also refuse to produce any random numbers until you had entropy from some part of the emulator that wasn't deterministic. (Maybe the emulator gives real random numbers for the CPU random instruction, or maybe it lets you connect to a real network.)

Since people don't think of cycle counts as something that needs to be nondeterministic for a computer to work, seeding from them makes it possible for people running the program to accidentally run it in a way that breaks the assumptions of the program authors. Note that this is true of many random sources - but the objection was not to including cycle counts, but to emitting random numbers while that was all the driver had collected.
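One way a guest might combine those sources could look like the sketch below. This is a hypothetical policy, not the kernel's actual logic: the seed-file path is invented, and it refuses to produce output rather than seed from nothing:

```python
import hashlib
import os
import time

def gather_boot_seed(vm_seed_path="/run/vm-seed"):  # hypothetical path
    """Sketch: mix a host-injected seed (if present) with
    cycle-count-style jitter; refuse to seed from no source."""
    pool = hashlib.sha256()
    have_source = False

    # Source 1: a seed the hypervisor injected for the guest.
    if os.path.exists(vm_seed_path):
        with open(vm_seed_path, "rb") as f:
            pool.update(f.read())
        have_source = True

    # Source 2: timing jitter, counted only if the timings vary.
    deltas = []
    last = time.perf_counter_ns()
    for _ in range(256):
        now = time.perf_counter_ns()
        deltas.append(now - last)
        last = now
    if len(set(deltas)) > 1:
        for d in deltas:
            pool.update(d.to_bytes(8, "little", signed=True))
        have_source = True

    if not have_source:
        raise RuntimeError("no entropy source available; refusing to seed")
    return pool.digest()
```

The design choice worth noting is the last branch: blocking (or failing) with no usable source is what the objection above asks for, as opposed to emitting output seeded from cycle counts alone.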


How do you know you got real entropy instead of a replay of an already recorded stream of events?


That's not the threat model (and it can't be a threat model; it's impossible to defend against). The threat model is people accidentally replaying an exact boot, which is more likely than accidentally replaying everything and also failing to seed the VM.


Probably expose a random device.




