For the rest of the chip/hardware there is at least some chance of testing what it does and discovering any suspicious behavior.
Any well-designed back-door in a black-box RNG cannot be discovered by testing.
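To make this concrete, here is a minimal Python sketch (the embedded SECRET is invented; no real vendor design is implied) of an RNG stream that passes every black-box statistical test yet is fully predictable to whoever holds the key:

```python
import hashlib

SECRET = b"known-only-to-the-manufacturer"  # hypothetical embedded key

def backdoored_rng(counter: int) -> bytes:
    # SHA-256 in counter mode under SECRET: the output looks uniformly
    # random to any statistical test, but anyone holding SECRET can
    # regenerate the entire stream.
    return hashlib.sha256(SECRET + counter.to_bytes(8, "big")).digest()

# A user sampling the output sees only high-entropy bytes; the backdoor
# is invisible without knowing SECRET.
sample = b"".join(backdoored_rng(i) for i in range(4))
print(sample.hex())
```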
Except for the RNG, the only other thing that cannot be trusted at all is the possibility that your computer/smartphone allows remote connections to a management component of its hardware, regardless of how you configure it.
Wired connections are not very dangerous, because you can pass them through an external firewall and block anything suspicious.
The problem is with WiFi connections, e.g. the Intel WiFi chipsets in laptops, and mobile phones are obviously the least under your control.
However, even a smartphone can be put in a metal box to ensure that nobody can connect to it (and even if that defeats the main use of a mobile phone, it can make sense if you are worried about remote control only sometimes, not permanently).
On the other hand, an RNG that cannot be verified to be what it claims to be is completely useless.
> For the rest of the chip/hardware there is at least some chance of testing what it does and discovering any suspicious behavior.
That there is some chance of testing them does not, by itself, provide any measure of trust... you actually have to perform the audit to achieve that.
>... On the other hand, an RNG that cannot be verified to be what it claims to be is completely useless.
If we are going to take such an absolute line over RNGs, then, to be consistent, we should take the same attitude to the rest of the hardware and software we use - but, per my previous point, that means actually evaluating it, not just having the possibility of doing so.
One might, instead, argue that we should use only verifiable RNGs because that is actually feasible (at least for non-mobile equipment), but that does nothing to bring the rest of the system up to the standard of your last paragraph.
As I have said, besides the RNG, the only other problem is the possibility of remote connections to the computer's chipset.
Any other malicious behavior is implausible, being either easy to detect or requiring too much information about the future (to determine what to do, and when to do it, without a remote-control connection). Self-destruction could happen, e.g., after a certain number of active hours, but that would make sense only if you are a target for someone able to load special firmware into your own computer. If it were a general feature of computers, it would be easily detected once it started happening at random.
So if you do not trust the HW, you must prevent remote connections. This is easy for desktop/server computers, provided that you have no WiFi/Bluetooth/LTE and you do not connect Intel Ethernet interfaces (or other chipset Ethernet interfaces with remote-management features) to the Internet or to any other untrusted network. Towards untrusted networks, you must use Ethernet interfaces without sideband management links; e.g., you can safely use USB Ethernet interfaces.
Unfortunately, there is currently no way to completely trust laptops when WiFi connections are possible, even if the vendor claims that e.g. Intel vPro is disabled. In any case, it is still better when the manufacturer makes that claim (as with my Dell laptop), even if you cannot verify it with certainty.
Even if someone were able to connect remotely to your computer and spy on you, they would have access only to your unencrypted or active documents.
If you use a bad RNG for encryption purposes, then the spies could also access any encrypted and non-active documents, which is a much greater danger.
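A toy illustration of that asymmetry (the 20-bit seed and the helper name are hypothetical, chosen only to keep the example fast):

```python
import hashlib

def key_from_seed(seed: int) -> bytes:
    # Hypothetical key derivation from whatever the weak RNG produced.
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

victim_key = key_from_seed(123456)  # only ~20 bits of real entropy

# Offline attack: enumerate every possible seed until the derived key
# matches (in practice, until a known plaintext decrypts correctly).
recovered = next(s for s in range(2**20) if key_from_seed(s) == victim_key)
print("seed recovered:", recovered)
```

No remote access is needed: the attacker only ever touches the ciphertext.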
In conclusion, the RNG remains at the top of the list of hardware that cannot be tested and cannot be trusted. Nothing else comes close.
I can design a CPU that reads the operating system’s clock (easy-peasy) and is actually a time bomb that does something at a predefined time.
You could now argue that we can use a workbench that simulates these conditions, but then I will point out that the RNG's output can likewise be altered (statistically, for example) only under particular system parameters.
Either way, enumerating all of these parameters is intractable due to the size of the search space.
The same goes for the fact that you can't definitively say whether your CPU will always do what it claims to do, purely because you can't enumerate all possible execution paths, as they are infinite.
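For illustration, here is a toy version of the statistical check in question, with os.urandom standing in for the device under test:

```python
import os

def ones_fraction(data: bytes) -> float:
    # Fraction of 1-bits in the sample: a crude monobit frequency test.
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

baseline = ones_fraction(os.urandom(1 << 20))  # output under system state A
suspect  = ones_fraction(os.urandom(1 << 20))  # output under system state B

# For an honest source both values sit very close to 0.5; a large gap is
# a red flag, but a small gap proves nothing.
print(baseline, suspect, abs(baseline - suspect))
```

And that is the point: you would have to repeat this sweep for every combination of system parameters that might trigger the behavior.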
* I see a lot of LUTs, adders, and common operations (not a theoretical problem).
* There's memory and IO, but that shouldn't matter for defining a single clock tick.
* Thermal/power monitoring - not sure how important that is, but sure, that's outside the definition of an FSM.
I have written simple software and FPGA CPU cores, and I would describe all of them as FSMs. It's possible that newer CPUs wouldn't qualify because, e.g., they rely on metastable circuits for randomness, power monitoring, etc., but most of a CPU should be an FSM, and the exceptions are nothing like "infinite execution paths".
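As an example of what I mean, a toy core in this spirit (the three-field instruction format is invented for illustration) is just a pure transition function on a finite state:

```python
def tick(state):
    # One clock tick: a pure transition on (program counter, registers, memory).
    pc, regs, mem = state
    op, a, b = mem[pc]
    if op == "ADD":                  # regs[a] += regs[b]
        regs = {**regs, a: regs[a] + regs[b]}
    elif op == "JNZ" and regs[a]:    # jump to address b if regs[a] != 0
        return (b, regs, mem)
    return (pc + 1, regs, mem)

# Because tick() is a deterministic function of a finite state, "infinite
# execution paths" are just long walks through a finite state graph.
state = (0, {"r0": 3, "r1": 1},
         {0: ("ADD", "r0", "r1"), 1: ("JNZ", "r0", 0)})
for _ in range(5):
    state = tick(state)
print(state[1])  # r0 grows by 1 on each pass through the two-instruction loop
```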
Yes, you are repeating yourself without addressing the issue that the ability to verify does not, by itself, confer any trust. Even if we accept your conclusion, it does not mean the other risks are inconsequential.
I think his point is that the most consequential issue is the RNG, which makes a sort of sense to me.
You can never be truly secure and eliminate all potential concerns (what if there are invisible people with nanotech in your house? not a known concern, but you can always manufacture an artificial one), so you try to address concerns in order of priority.
This seems like a very reasonable thing to do; in fact, everyone does it - it is why we even bother to call anyone "paranoid" to begin with, i.e. we each have our own list of concerns and try to address what we can...
Deniable backdoors are a much bigger risk than reproducible backdoors.
I trust my hardware manufacturers to be afraid of putting a backdoor into their chips if a binary captured via network surveillance could be used to show that a backdoor existed. This would be devastating to their business. Therefore, I trust them to not do anything that risks this occurring.
This is why people were so uneasy when internally-accessible unique serial numbers were added to the microcode engines of Intel processors.
You can trust a chip to correctly perform cryptographic computations by comparing its results with those of another, more trusted system (an FPGA, if you want to go to absurd lengths).
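A sketch of that cross-checking idea, with modular exponentiation (the core of RSA) standing in for the chip's crypto operation and Python's built-in pow() playing the suspect device:

```python
import random

def square_and_multiply(base: int, exp: int, mod: int) -> int:
    # Independent reference implementation of modular exponentiation.
    result, base = 1, base % mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

rng = random.Random(0)
for _ in range(100):
    b, e = rng.getrandbits(256), rng.getrandbits(256)
    m = rng.getrandbits(256) | (1 << 255) | 1  # large odd modulus
    # pow() stands in for the suspect device; any mismatch exposes it.
    assert pow(b, e, m) == square_and_multiply(b, e, m)
print("100 cross-checks passed")
```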
You can protect yourself against faulty key generation by generating the key offsite or on an HSM.
However, a flaw in an RNG that allows a third party (hello, NSA) to break your cryptography - you cannot defend against that, and you can't even detect it.
> However, a flaw in an RNG that allows a third party (hello, NSA) to break your cryptography - you cannot defend against that, and you can't even detect it.
You always put bad randomness through enough rounds of one-way functions that reversing them is computationally infeasible for your adversary for the lifetime of the secret.
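A minimal sketch of that mitigation, assuming you can mix in at least one source the adversary does not control (the names here are illustrative):

```python
import hashlib, os, time

def derive_key(hw_rng_bytes: bytes) -> bytes:
    # Mix independent sources through a one-way function; recovering any
    # input from the key would require inverting SHA-256.
    pool = hashlib.sha256()
    pool.update(hw_rng_bytes)                       # possibly-backdoored source
    pool.update(os.urandom(32))                     # OS entropy pool
    pool.update(time.time_ns().to_bytes(8, "big"))  # low-entropy extra input
    return pool.digest()

# Even a worthless hardware source cannot cancel out the other inputs.
# This helps against a biased or leaky source, NOT against the case where
# every input is predictable to the adversary.
key = derive_key(b"\x00" * 32)
```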