At boot time, on a server sitting in a rack beside thousands of others ... how are these going to help any? They aren't moving, and the RF/energy environment around them should be steady-state, or at least well within characterizable bounds of noise.
"Random enough" is a metaphysical question when you get into it. If an RTL-SDR stick and a site-customized munger script can't provide enough entropy for the entire data center, you've fallen into a purity spiral and will never be happy anyway.
There are true hardware random number generators. IIRC, one example is based on a reverse-biased diode. Due to random quantum effects, an electron occasionally flows backwards, and measuring that gives you a source of real randomness.
The best RNG solution for the paranoid would have been a standardized internal header/connector with an analog-to-digital converter input and a power supply, like the connector that exists on most motherboards for the front-panel audio (though preferably with a higher-frequency, lower-resolution ADC than audio requires, even if an audio ADC would also be acceptable).
Had such a connector been standardized, very small analog noise-generator boards that plug into it would cost a few dollars at most, and they would not contain any device more complex than an operational amplifier.
This solution cannot be back-doored: it is trivial to test the ADC without a noise generator attached, to verify that it really is an ADC, and the small PCB with the analog noise generator can also be easily inspected to verify that it contains only the specified (analog) devices.
All this could have been very simple and cheap had it been standardized, and no more difficult to use than the unverifiable CPU instructions.
As it is, the paranoid must have electronics experience to design and build their own analog noise generator, to be used either with the microphone input of the PC audio connectors (which includes a weak power supply) or, better, with the ADC of a small microcontroller board connected via USB (preferably to an internal USB connector on the PC motherboard).
> standardized, very small analog noise generator boards
The following design[1] uses _two_ pluggable analog noise generator boards (since you don't trust one). The writeup will be of interest to the paranoid in this thread.
This is a good example of how you can make a RNG using a microcontroller board connected to an internal USB connector of the motherboard.
However, what they have is not perfect, because the RNG boards include the ADC and some simple digital post-processing, providing an RS-232 serial output. For better auditability, the RNG boards should have been simpler, containing only the analog part of the RNG, and they should have fed an ADC input of the microcontroller instead of an RS-232 input. If you compile from source and write the microcontroller's flash yourself, then it is secure enough.
Because such boards are only seldom available for purchase, many people have built something like this only for themselves.
However, the problem is that this is a non-standard solution. A connector like the 3-pin header shown at this link should have existed on every motherboard (but with an analog input, not an RS-232 input). All software should have expected a standard RNG input on the motherboard, just as it expects HD Audio input/output or temperature/RPM sensors. Had the ADC been provided by the motherboard chipset, which already provides many other ADCs, there would have been no need for a microcontroller and no need for microcontroller firmware.
Had they wanted, Intel could have easily standardized an RNG input for the chipset, like they standardized HD Audio, SMBus, and countless other chipset features. Everyone else would have followed.
It is very likely that standardizing such a solution would actually have been much cheaper for Intel and AMD than implementing RNG instructions inside the CPU, which will always remain unrecommendable for any serious application; those instructions waste die area and testing time during manufacturing, and they may also slightly reduce the yield of good dies.
Here's another iteration: a user-supplied board with a high-gain op-amp, a comparator, and a latch -- accepting a clock line -- could produce a well-defined noise-derived bit sequence. This bit sequence could be observed both at that level and at the software level, to confirm that no alteration had taken place in between, in the motherboard/chipset etc.
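The cross-check that makes this design auditable can be sketched in a few lines: capture the bit stream at the latch output (say, with a logic analyzer) and compare it against what the OS-level driver reports, plus a crude bias sanity check. All names here are illustrative, not part of any real driver API.

```python
# Sketch (hypothetical): verify that the software-visible bit stream is
# the hardware bit stream, unaltered by the motherboard/chipset path.

def streams_match(hw_bits, sw_bits):
    """True only if software reports exactly the bits seen at the latch."""
    return hw_bits == sw_bits

def bias(bits):
    """Fraction of ones; a healthy raw noise source should hover near 0.5."""
    return sum(bits) / len(bits)

hw_trace = [0, 1, 1, 0, 1, 0, 0, 1]   # captured at the latch output
sw_trace = [0, 1, 1, 0, 1, 0, 0, 1]   # read back through the driver

assert streams_match(hw_trace, sw_trace)
print(bias(hw_trace))  # prints 0.5
```

In practice the bias check needs far more than eight samples, and a real audit would add run-length and autocorrelation statistics; the point is only that both observation levels are checkable against each other.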
For the rest of the chip / hardware there are at least some chances to test what they do and discover any suspicious behavior.
Any well-designed back-door in a black-box RNG cannot be discovered by testing.
Except for the RNG, the only other thing that cannot be trusted at all is the possibility that your computer/smartphone allows remote connections to a management component of its hardware, regardless of how you configure it.
Wired connections are not very dangerous, because you can pass them through an external firewall and block anything suspicious.
The problem is with WiFi connections, e.g. those of the Intel WiFi chipsets in laptops; and obviously the devices least under your control are mobile phones.
However, even a smartphone can be put in a metal box to ensure that nobody can connect to it (even if that defeats the main use of a mobile phone, it can make sense if you are worried about remote control only sometimes, not permanently).
On the other hand, a RNG that cannot be verified to be what it claims to be, is completely useless.
> For the rest of the chip / hardware there are at least some chances to test what they do and discover any suspicious behavior.
That there are some chances to test them does not by itself provide any measure of trust... you actually have to perform the audit to achieve that.
>... On the other hand, a RNG that cannot be verified to be what it claims to be, is completely useless.
If we are going to take such an absolute line over RNGs, then, to be consistent, we should take the same attitude to the rest of the hardware and software we use - but, per my previous point, that means actually evaluating it, not just having the possibility of doing so.
One might, instead, argue that we should use only verifiable RNGs because that is actually feasible (at least for non-mobile equipment), but that does nothing to bring the rest of the system up to the standard of your last paragraph.
As I have said, besides the RNG, the only other problem is the possibility of remote connections to the computer chipset.
Any other malicious behavior is implausible, being either easy to detect or requiring too much information about the future (to determine what to do and when to do it, without a remote-control connection). Self-destruct is something that could happen, e.g. after a certain number of active hours, but this would make sense only if you are a target of someone able to load special firmware into your own computer. If it were a general feature of computers, it would be easily detected after it happened at random.
So if you do not trust the HW, you must prevent remote connections. This is easy for desktop/server computers, if you have no WiFi/Bluetooth/LTE and you do not use Intel Ethernet interfaces (or other chipset Ethernet interfaces with remote-management features) connected to the Internet or to any other untrusted network. Towards untrusted networks, you must use Ethernet interfaces without sideband management links; e.g. you may securely use USB Ethernet interfaces.
Unfortunately, there is currently no way to completely trust laptops when WiFi connections are possible, even if the vendor claims that e.g. Intel vPro is disabled. In any case, it is still better if the manufacturer claims this is true (as with my Dell laptop), even if you cannot verify the claim with certainty.
Even if someone were able to connect remotely to your computer and spy on you, they would have access only to your unencrypted or active documents.
If you use a bad RNG for encryption purposes, then the spies could also access any encrypted and inactive documents, which is a much greater danger.
In conclusion, the RNG is still in the top position of the hardware that cannot be tested and cannot be trusted. Nothing else comes close.
I can design a CPU that reads the operating system’s clock (easy-peasy) and is actually a time bomb that does something at a predefined time.
You could now argue that we can use some test bench that simulates these conditions, but then I will catch you on the fact that the random output can also be made to change (statistically, for example) depending on some parameters of the system.
Either way enumerating all of these parameters is intractable due to the size of the search space.
The same goes for the fact that you can't definitively say whether your CPU will always do what it says it does, purely because you can't enumerate all possible execution paths, as they are infinite.
* I see a lot of LUTs, adders, and common operations (not a theoretical problem).
* There's memory and IO, but that shouldn't matter for defining a single clock tick.
* Thermal/power monitoring - not sure how important that is, but that's surely outside the definition of an FSM.
I have written simple software and FPGA CPU cores, and I would describe all of them as FSMs. It's possible that newer CPUs wouldn't qualify because they rely on, e.g., metastable circuits for randomness, power monitoring, etc., but most of a CPU should be an FSM, and the exceptions are nothing like "infinite execution paths".
Yes, you are repeating yourself without addressing the issue that the ability to verify does not, by itself, confer any trust. Even if we accept your conclusion, it does not mean the other risks are inconsequential.
I think his point is that the most consequential issue is the RNG, which to me makes a sort of sense.
You can never be truly secure and eliminate all potential concerns (what if there are invisible people with nanotech in your house? not a known concern, but you can always manufacture an artificial one), so you try to address concerns in order of priority.
This seems like a very reasonable thing to do; in fact everyone does it - it is why we even bother to call anyone "paranoid" to begin with, i.e. we all have our own list of concerns and try to address what we can...
Deniable backdoors are a much bigger risk than reproducible backdoors.
I trust my hardware manufacturers to be afraid of putting a backdoor into their chips if a binary captured via network surveillance could be used to show that a backdoor existed. This would be devastating to their business. Therefore, I trust them to not do anything that risks this occurring.
This is why people were so uneasy when internally-accessible unique serial numbers were added to the microcode engines of Intel processors.
You can trust a chip to correctly do cryptographic computations by comparing with another, more trusted system (an FPGA, if you want to go to absurd lengths).
You can protect yourself against faulty key generation by generating the key offsite or on a HSM.
However, a flaw in a RNG that allows a third party (hello NSA) to break cryptography - you cannot defend from that, you can't even detect it.
> However, a flaw in a RNG that allows a third party (hello NSA) to break cryptography - you cannot defend from that, you can't even detect it.
You always put bad randomness through enough iterations of one-way functions that reversing them is computationally infeasible for your adversary for the lifetime of the secret.
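A minimal sketch of that idea, using SHA-256 as the one-way function (the round count and seed here are illustrative, not a recommendation):

```python
import hashlib

def stretch(seed: bytes, rounds: int = 10_000) -> bytes:
    """Feed weak seed material through many one-way iterations.

    Note: this adds no entropy -- it only makes walking the chain
    backwards computationally expensive for an adversary.
    """
    h = seed
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return h

key_material = stretch(b"possibly-bad-randomness")
```

If the adversary can guess the original seed, no amount of hashing saves you, which is why this is a mitigation for suspect randomness rather than a replacement for a good entropy source.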
A datacenter seems like a good fit for a centralized source of entropy: a server with a dedicated high-quality entropy source (maybe some kind of Geiger-counter/nuclear-decay-based source?). Very early in the boot process, query the entropy server for a truly random seed and go from there to initialize your random algorithm, much like NTP and network time sources. Security would be something to pay attention to, as you wouldn't want an attacker ever to gain control of providing entropy.
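The mixing step at boot can be sketched as follows. The transport (an authenticated request to the entropy server) is elided and simulated here; the local source names are placeholders. Hashing means the result is at least as unpredictable as the strongest single input, so a compromised server alone cannot fully control the seed.

```python
import hashlib

def mix_seed(server_seed: bytes, *local_sources: bytes) -> bytes:
    """Combine the entropy server's seed with whatever local entropy
    exists at early boot.  SHA-256 acts as the mixing function."""
    h = hashlib.sha256(server_seed)
    for src in local_sources:
        h.update(src)
    return h.digest()

# server_seed would really come from the entropy server over an
# authenticated channel; simulated here with fixed bytes.
server_seed = b"\x42" * 32
boot_seed = mix_seed(server_seed, b"boot-counter=7", b"mac=aa:bb:cc")
```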
Assuming you fully trust your PRNG algorithm, you really only need to do this once: generate a seed, then hash it with the current time (assuming you have a trusted source), whatever other entropy you have, plus the untrusted hardware RNG.
A backdoored RNG is unlikely ever to repeat patterns - that would be obvious - so it should be trustworthy enough to create a unique number.
It also probably, but not definitely, can't phone home without someone noticing (unless it can target specific people who aren't looking), and if it can, it can also send anything else it wants anyway.
An insecure but unique seed hashed with a secret machine-specific value should be safe; it's not like they can manipulate the final value without a way to break the hash, right?
You could even reuse the secret between machines as long as the attacker doesn't know it and everything else in the hash is unique.
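One way to express the unique-seed-plus-secret construction is a keyed hash, e.g. HMAC-SHA256 (the secret and seed strings below are placeholders): without the key, steering or predicting the output would require breaking the HMAC.

```python
import hashlib
import hmac

def derive_seed(machine_secret: bytes, unique_seed: bytes) -> bytes:
    """Keyed hash of a possibly attacker-known but unique seed.
    The secret may be shared between machines, as long as the
    attacker does not know it and each machine's seed is unique."""
    return hmac.new(machine_secret, unique_seed, hashlib.sha256).digest()

seed = derive_seed(b"provisioned-secret", b"hw-rng-output|boot-42")
```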
Whatever network boot thingy or ansible job could provision random seeds when it updates the OS.
They don’t use them (read the details); they could fall back to using them. And it’s a stupid publicity stunt. And even then, they would use them via an ipcam - something probably way less secure than any rdrand or lava lamp.
There are many other more practical sources of random number generation, including ones that are non-deterministic due to quantum effects. SGI did it for fun, and obviously someone at Cloudflare was a fan.
I'm also confused by the second statement. You believe that more people have seen lava lamps in video games and similar than ones in person, or photos/videos of real ones? This seems unlikely to me.
Yes, as a matter of fact, I do believe people see lava lamps in digital media more often than in real life. I am one example of such a person. Lava lamps are largely an American thing, so if you want to see one you need to bring one from America or go to America. What is the alternative? A lava lamp in the media, where you can look at it many times, but it is always a rerun.
The lava lamps are completely impractical. You'd get randomness of equal quality for orders of magnitude less energy by turning the lamps off and putting a lens cap over the webcam they use to record them.
Ah, but is your sample still live enough to be "cryptographic grade" random? Is the hardware that measures the source and the software that reports it subject to any periodicity that you don't know about but your attackers might?
(Some) People who study this often get lost down the rabbit hole and come out thinking the universe is deterministic.
Any distribution with a sufficient amount of entropy can be turned into a "cryptographic-grade" randomness source using randomness extractors [1]. These work independently of any outside factors that might be trying to sneak signal (e.g. periodicity) into the noise -- as long as you can prove there's sufficient entropy to start with, you're good to go.
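The simplest classic example is the von Neumann extractor, which removes bias from independent flips by considering them in pairs. A sketch:

```python
def von_neumann(bits):
    """Von Neumann extractor: for each pair, 01 -> 0, 10 -> 1,
    and 00/11 are discarded.  Removes bias (but not correlation)
    from independent flips, at the cost of throughput."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 0, 0, 0, 1, 1]))  # prints [0, 1]
```

This is far weaker than the seeded extractors in the literature (it assumes independent flips), but it shows the idea: even a heavily biased source yields fair bits, just fewer of them.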
Low-intensity radiation is random enough, but it's slow: your device is necessarily twiddling its thumbs between one detected event and the next, and entropy is mostly proportional to the number of events (for example, almost n bits from which of 2^n identical detector units is hit by the next particle).
Or, as one of my ex-NSA buddies told me: we almost never break the encryption, we break the implementation, because that's where the errors are.
The same can assuredly apply to capturing entropy.