Hacker News

The dedicated RNG scares the paranoid the most because it is an obvious target for corruption.



The best RNG solution for the paranoid would have been a standardized internal header/connector with an analog-to-digital converter (ADC) input and a power supply, like the front-panel audio connector that exists on most motherboards (preferably with a higher-frequency, lower-resolution ADC than audio uses, though an audio ADC would also be acceptable).

Had such a connector been standardized, very small analog noise-generator boards that plug into it would cost only a few dollars at most, and they would not contain any device more complex than an operational amplifier.

This solution cannot be back-doored: it is trivial to test the ADC without a noise generator attached, to verify that it really is an ADC, and the small PCB with the analog noise generator can also be easily inspected to verify that it contains only the specified (analog) devices.

All this could have been very simple and cheap had it been standardized, and no more difficult to use than the unverifiable CPU instructions.

As it is, the paranoid must have electronics experience, to design and build their own analog noise generator, to be used either with the microphone input of the PC audio connectors (which includes a weak power supply) or, better, with the ADC of a small microcontroller board connected via USB (preferably to an internal USB connector on the PC motherboard).


> standardized, very small analog noise generator boards

The following design[1] uses _two_ pluggable analog noise generator boards (since you don't trust one). The writeup will be of interest to the paranoid in this thread.

[1] http://nosuchlabs.com/


Thanks for the link.

This is a good example of how you can make an RNG using a microcontroller board connected to an internal USB connector on the motherboard.

However, what they have is not perfect, because the RNG boards include the ADC and some simple digital post-processing, providing an RS-232 serial output. For better auditability, the RNG boards should have been simpler, containing only the analog part of the RNG, and they should have fed an ADC input of the microcontroller instead of an RS-232 input. If you compile the firmware from source and write the microcontroller's flash yourself, then it is secure enough.
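As a sketch of the software side of such a design, here is how raw samples from a hypothetical 12-bit microcontroller ADC might be conditioned into a seed. The sample values below are synthetic placeholders, not real noise, and the function names are made up for illustration:

```python
import hashlib

def condition(samples):
    # Hash-condition raw ADC noise samples into a 32-byte seed.
    # SHA-256 only concentrates whatever entropy the analog source
    # actually provides; it cannot create any.
    raw = bytes(s & 0xFF for s in samples)  # keep the noisy low byte
    return hashlib.sha256(raw).digest()

# Synthetic stand-ins for 12-bit ADC readings centered around mid-scale
samples = [2048 + ((17 * i * i + 5) % 251) - 125 for i in range(4096)]
seed = condition(samples)  # 32 bytes, suitable for seeding a CSPRNG
```

The point of keeping the board purely analog is that everything after the ADC, including this conditioning step, runs in software you compiled yourself and can audit line by line.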

Because such boards are only seldom available to buy, many people have built something like this only for themselves.

However, the problem is that this is a non-standard solution. A connector like the 3-pin header shown at this link should have existed on every motherboard (but with an analog input, not an RS-232 input). All software should have expected a standard RNG input on the motherboard, just as it expects HD Audio input/output or temperature/RPM sensors. Had the ADC been provided by the motherboard chipset, which already provides many other ADCs, there would have been no need for a microcontroller and no need for microcontroller firmware.

Had they wanted, Intel could have easily standardized an RNG input for the chipset, as they standardized HD Audio, SMBus and countless other chipset features. Everyone else would have followed.

It is very likely that standardizing such a solution would actually have been much cheaper for Intel and AMD than implementing RNG instructions inside the CPU, which will always remain non-recommendable for any serious application; those instructions waste die area and testing time during manufacturing, and they may also slightly reduce the yield of good dies.


Here's another iteration: a user-supplied board with a high-gain op-amp, a comparator, and a latch, accepting a clock line, could produce a definite noise-informed bit sequence. This bit sequence could be observed both at that level and at the software level, to confirm that no alteration had taken place in between, in the motherboard/chipset etc.
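To make the software-level check concrete: the raw latch output of such a board would typically be biased, and a classic von Neumann extractor can debias it while remaining fully auditable. A minimal sketch, with the biased stream simulated rather than read from real hardware:

```python
import random

def von_neumann_debias(raw_bits):
    # Classic von Neumann extractor: map bit pairs 01 -> 0, 10 -> 1,
    # discard 00 and 11. Removes bias from independent but biased bits.
    out = []
    for i in range(0, len(raw_bits) - 1, 2):
        a, b = raw_bits[i], raw_bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Simulated raw stream with a 70% bias toward 1, standing in for the latch
random.seed(0)  # seeded only to make this demo reproducible
raw = [1 if random.random() < 0.7 else 0 for _ in range(10000)]
clean = von_neumann_debias(raw)
ones = sum(clean) / len(clean)  # close to 0.5 after debiasing
```

Because the extractor is a few lines of obvious code, the same raw bits can be captured at the header with a logic analyzer and replayed through it, confirming that nothing in between altered them.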


That would just give an attacker an easy way to control the entropy source.


Paranoid implies some aspect of unjustified fear. In this case the fear is quite justified.


What’s the point of not trusting the hardware entropy source while still trusting the rest of the chip / hardware?


For the rest of the chip/hardware there is at least some chance to test what it does and discover any suspicious behavior.

Any well-designed back-door in a black-box RNG cannot be discovered by testing.
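A sketch of why testing cannot help: a backdoored generator that emits SHA-256 of a secret and a counter will sail through any statistical battery, yet whoever holds the secret can replay every byte. The key below is obviously a made-up placeholder:

```python
import hashlib

SECRET = b"vendor-backdoor-key"  # hypothetical; known only to the planter

def backdoored_rng(counter):
    # Looks like a perfect RNG to any black-box statistical test,
    # but is fully deterministic given SECRET.
    return hashlib.sha256(SECRET + counter.to_bytes(8, "big")).digest()

stream = b"".join(backdoored_rng(i) for i in range(1000))

# Monobit frequency check: the output is statistically unremarkable...
ones = sum(bin(b).count("1") for b in stream)
bias = ones / (len(stream) * 8)  # very close to 0.5

# ...yet anyone holding SECRET reproduces the stream exactly.
replay = b"".join(backdoored_rng(i) for i in range(1000))
assert replay == stream
```

No amount of output testing distinguishes this from a true RNG; only inspecting the construction itself would reveal the backdoor, which is exactly what a black box prevents.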

Except for the RNG, the only other thing that cannot be trusted at all is the possibility that your computer/smartphone allows remote connections to a management component of its hardware, regardless of how you configure it.

Wired connections are not very dangerous, because you can pass them through an external firewall and block anything suspicious.

The problem is with WiFi connections, e.g. the Intel WiFi chipsets in laptops, and obviously mobile phones are the least under your control.

However, even a smartphone can be put in a metal box to ensure that nobody can connect to it (even though that also defeats the main use of a mobile phone, it can make sense if you are worried about remote control only sometimes, not permanently).

On the other hand, an RNG that cannot be verified to be what it claims to be is completely useless.


> For the rest of the chip / hardware there are at least some chances to test what they do and discover any suspicious behavior.

That there are some chances to test them does not provide any measure of trust... you actually have to perform the audit to achieve that.

>... On the other hand, a RNG that cannot be verified to be what it claims to be, is completely useless.

If we are going to take such an absolute line over RNGs, then, to be consistent, we should take the same attitude to the rest of the hardware and software we use - but, per my previous point, that means actually evaluating it, not just having the possibility of doing so.

One might, instead, argue that we should use only verifiable RNGs because that is actually feasible (at least for non-mobile equipment), but that does nothing to bring the rest of the system up to the standard of your last paragraph.


As I have said, besides the RNG, the only other problem is the possibility of remote connections to the computer chipset.

Any other malicious behavior is implausible, being either easy to detect or requiring too much information about the future (to determine what to do and when to do it without a remote-control connection). Self-destruct is something that could happen, e.g., after a certain number of active hours, but this would make sense only if you are a target of someone able to load special firmware into your computer. If it were a general feature of computers, it would be easily detected once it happened at random.

So if you do not trust the hardware, you must prevent remote connections. This is easy for desktop/server computers, if you do not have WiFi/Bluetooth/LTE and you do not use Intel Ethernet interfaces (or other chipset Ethernet interfaces with remote-management features) connected to the Internet or to any other untrusted network. Towards untrusted networks, you must use Ethernet interfaces without sideband management links; e.g., you may safely use USB Ethernet interfaces.

Unfortunately, there is currently no way to completely trust laptops when WiFi connections are possible, even if they claim that, e.g., Intel vPro is disabled. In any case, it is still better if the manufacturer claims this is true (as with my Dell laptop), even if you cannot verify the claim with certainty.

Even if someone were able to connect remotely to your computer and spy on you, they would have access only to your unencrypted or active documents.

If you use a bad RNG for encryption purposes, then the spies could also access any encrypted and non-active documents, which is a much greater danger.
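To illustrate the danger, suppose (hypothetically) that a flawed RNG puts only 16 bits of real entropy behind a 256-bit key. Anyone who knows about the flaw recovers the key offline in a fraction of a second; the numbers here are made up for the sketch:

```python
import hashlib

def weak_keygen(seed):
    # Hypothetical flawed generator: a 256-bit key derived from a seed
    # space of only 2**16 values, i.e. 16 bits of actual entropy.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

victim_key = weak_keygen(12345)

# An attacker who knows the flaw simply enumerates all 65536 seeds.
recovered = None
for s in range(2**16):
    if weak_keygen(s) == victim_key:
        recovered = s
        break
```

The attacker never needs to touch the victim's machine: captured ciphertexts can be attacked at leisure, which is why weak key generation endangers stored, non-active documents too.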

In conclusion, the RNG is still in the top position of the hardware that cannot be tested and cannot be trusted. Nothing else comes close.


I can design a CPU that reads the operating system’s clock (easy-peasy) and is actually a time bomb that does something at a predefined time.

You could now argue that we can use some workbench that simulates these conditions, but then I will point out that the random output could likewise be altered (statistically, for example) only for some parameters of the system.

Either way, enumerating all of these parameters is intractable due to the size of the search space.

The same goes for the fact that you can't definitively say whether your CPU will always do what it says it does, purely because you can't enumerate all possible execution paths, as they are infinite.


> you can't enumerate all possible execution paths, as they are infinite.

How do you design a CPU that isn't a finite state machine?


As soon as you try to tell us which kind of FSM the CPU is, you'll realize why it isn't any of them


I used to have a friend who worked at AMD, and he said they simulated one of their in-progress CPUs on five Virtex-5 FPGAs.

Looking here: https://docs.xilinx.com/v/u/en-US/ds100

* I see a lot of LUTs, adders, and common operations (not a theoretical problem).

* There's memory and IO, but that shouldn't matter for defining a single clock tick.

* Thermal/power monitoring - not sure how important that is, but sure, that's outside the definition of an FSM.

I have written simple software and FPGA CPU cores, and I would describe all of them as FSMs. It's possible that newer CPUs wouldn't qualify because, e.g., they rely on metastable circuits for randomness, power monitoring, etc., but most of a CPU should be an FSM, and the exceptions are nothing like "infinite execution paths".
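A toy illustration of the FSM point: one clock tick of a tiny accumulator machine is a pure function from state to state, and with finite register width the machine has finitely many states, even though a looping program yields infinitely many execution traces. This machine is invented for illustration, not any real ISA:

```python
def step(state):
    # One clock tick: the next state is a pure function of the current
    # state, which is exactly what makes the machine an FSM.
    pc, acc, mem = state
    op, arg = mem[pc]
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc = (acc + arg) % 256   # finite register width
    elif op == "JMP":
        return (arg, acc, mem)    # a loop: infinite traces, finite states
    return (pc + 1, acc, mem)

prog = (("LOAD", 1), ("ADD", 2), ("ADD", 3), ("JMP", 3))
state = (0, 0, prog)
for _ in range(3):
    state = step(state)
# acc is now 1 + 2 + 3 = 6; further ticks just spin on the JMP forever
```

Running the JMP forever produces an unbounded execution path while visiting the same state repeatedly, which is why "infinite paths" does not contradict "finite state machine".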


> Like I have said...

Yes, you are repeating yourself without addressing the issue that the ability to verify does not, by itself, confer any trust. Even if we accept your conclusion, it does not mean the other risks are inconsequential.


I think his point is that the most consequential issue is the RNG, which to me makes a sort of sense.

You can never be truly secure and eliminate all potential concerns (what if there are invisible people with nanotech in your house? you can always manufacture some artificial concern), so you try to address concerns in order of priority.

This seems like a very reasonable thing to do; in fact everyone does it. It is why we even bother to call anyone "paranoid" to begin with, i.e. we each have our own list of concerns and try to address what we can...


Deniable backdoors are a much bigger risk than reproducible backdoors.

I trust my hardware manufacturers to be afraid of putting a backdoor into their chips if a binary captured via network surveillance could be used to show that a backdoor existed. This would be devastating to their business. Therefore, I trust them to not do anything that risks this occurring.

This is why people were so uneasy when internally-accessible unique serial numbers were added to the microcode engines of Intel processors.


You can trust a chip to correctly do cryptographic computations by comparing with another, more trusted system (an FPGA, if you want to go to absurd lengths).

You can protect yourself against faulty key generation by generating the key offsite or on an HSM.

However, a flaw in an RNG that allows a third party (hello NSA) to break cryptography is something you cannot defend against; you can't even detect it.


> However, a flaw in an RNG that allows a third party (hello NSA) to break cryptography is something you cannot defend against; you can't even detect it.

You always put bad randomness through enough rounds of one-way functions that reversing them is computationally infeasible for your adversary for the lifetime of the secret.


Deterministic functions do not increase entropy; bad entropy in is bad entropy out.
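A one-line demonstration of this point: pushing a 1-bit secret through SHA-256 yields 256-bit outputs, but there are still only two possible outputs, so the entropy is still exactly one bit:

```python
import hashlib

# A 1-bit secret hashed with SHA-256 looks like 256 random bits...
outputs = {hashlib.sha256(bytes([b])).hexdigest() for b in (0, 1)}
# ...but only two outputs are possible; an attacker enumerates both.
```

However many rounds of hashing you add, the attacker's search space stays the size of the original seed space, only multiplied by the constant per-guess cost of the rounds.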



