As I have said, besides the RNG, the only other problem is the possibility of remote connections to the computer chipset.
Any other malicious behavior is implausible, as it would be either easy to detect or would require too much information about the future (to determine what to do and when to do it without a remote-control connection). Self-destruct is something that could happen, e.g. after a certain number of active hours, but this would make sense only if you are a target for someone able to load special firmware into your own computer. If it were a general feature of computers, it would be easily detected after it triggered randomly.
So if you do not trust the hardware, you must prevent remote connections. This is easy for desktop/server computers, as long as you do not have WiFi/Bluetooth/LTE and you do not use Intel Ethernet interfaces (or other chipset Ethernet interfaces with remote-management features) connected to the Internet or to any other untrusted network. Towards untrusted networks, you must use Ethernet interfaces without sideband management links; e.g. you can safely use USB Ethernet interfaces.
Unfortunately, there is currently no way to completely trust laptops when WiFi connections are possible, even if the manufacturer claims that e.g. Intel vPro is disabled. In any case, it is still better if the manufacturer makes that claim (as with my Dell laptop), even if you cannot verify it with certainty.
Even if someone were able to connect remotely to your computer and spy on you, they would have access only to your unencrypted or active documents.
If you use a bad RNG for encryption purposes, then the spies could also access all of your encrypted and inactive documents, which is a much greater danger.
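A minimal Python sketch of why a bad RNG endangers everything encrypted with it. This is illustrative only (no real cipher, and the seed size and key-derivation scheme are invented for the example): if keys come from a predictable PRNG with a small effective seed space, an attacker can brute-force every possible seed offline.

```python
import random

def derive_key(seed: int) -> bytes:
    """Derive a 16-byte 'key' from a weak, seeded PRNG (hypothetical scheme)."""
    rng = random.Random(seed)  # deterministic, non-cryptographic PRNG
    return bytes(rng.randrange(256) for _ in range(16))

# Victim's key comes from a tiny entropy pool: only 2**16 possible seeds.
victim_key = derive_key(51_423)

# Attacker enumerates the entire seed space -- feasible in seconds --
# and recovers a seed that reproduces the victim's key exactly.
recovered = next(s for s in range(2**16) if derive_key(s) == victim_key)
assert derive_key(recovered) == victim_key
```

With a proper CSPRNG the seed space would be on the order of 2**256, and this enumeration would be hopeless; the danger is entirely in the RNG, not in the cipher around it.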
In conclusion, the RNG is still in the top position of the hardware that cannot be tested and cannot be trusted. Nothing else comes close.
I can design a CPU that reads the operating system’s clock (easy-peasy) and is actually a time bomb that does something at a predefined time.
You could now argue that we can use some workbench that simulates these conditions, but then I would counter that the random output can also change (statistically, for example) depending on some parameters of the system.
Either way, enumerating all of these parameters is intractable due to the size of the search space.
The same goes for the fact that you can't definitively say whether your CPU will always do what it claims to do, simply because you can't enumerate all possible execution paths, as they are infinite.
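The "time bomb" idea above can be sketched in a few lines. This is a hypothetical illustration (the trigger date and the toy instruction format are invented): hardware that behaves normally until a predefined moment looks perfectly honest to any test bench run before that moment.

```python
from datetime import datetime, timezone

# Arbitrary example trigger date; dormant malicious logic of this kind
# is invisible to any test performed before it.
TRIGGER = datetime(2030, 1, 1, tzinfo=timezone.utc)

def execute(instruction: str, now: datetime) -> str:
    """Simulate one 'CPU' step that misbehaves only after TRIGGER."""
    if now >= TRIGGER:
        return "SABOTAGE"            # malicious behavior, dormant until now
    return f"OK: {instruction}"      # indistinguishable from an honest CPU

# A workbench running today observes only correct behavior:
print(execute("ADD r1, r2", datetime(2025, 6, 1, tzinfo=timezone.utc)))
# The same logic, after the trigger date, misbehaves:
print(execute("ADD r1, r2", datetime(2031, 1, 1, tzinfo=timezone.utc)))
```

You could fake the clock on the workbench, but the trigger could just as well key on any combination of system parameters, which is exactly the enumeration problem described above.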
* I see a lot of LUTs, adders, and common operations (not a theoretical problem).
* There's memory and IO, but that shouldn't matter for defining a single clock tick.
* Thermal/power monitoring - not sure how important that is, but that is surely outside the definition of an FSM.
I have written simple software and FPGA CPU cores, and I would describe all of them as FSMs. It's possible that newer CPUs wouldn't qualify because they rely on e.g. metastable circuits for randomness, power monitoring, etc., but most of a CPU should be an FSM, and the exceptions aren't anything like "infinite execution paths".
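The FSM view above can be made concrete with a toy core. This is a sketch (the three-instruction ISA and state layout are invented for illustration): each clock tick is a pure function from the current state and inputs to the next state, so the same program always yields the same final state.

```python
def tick(state: dict, program: list) -> dict:
    """One clock tick: a deterministic next-state function (the FSM step)."""
    pc, acc = state["pc"], state["acc"]
    op, arg = program[pc]
    if op == "LOAD":
        acc = arg                      # load immediate into accumulator
    elif op == "ADD":
        acc += arg                     # add immediate to accumulator
    elif op == "JMP":
        return {"pc": arg, "acc": acc}  # jump: next state only changes pc
    return {"pc": pc + 1, "acc": acc}

program = [("LOAD", 2), ("ADD", 3), ("ADD", 5)]
state = {"pc": 0, "acc": 0}
for _ in range(len(program)):
    state = tick(state, program)
print(state)  # deterministic: same program, same final state on every run
```

The execution paths are unbounded only because the input (the program) is unbounded; the transition function itself is finite and fully enumerable, which is the whole point of calling it an FSM.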
Yes, you are repeating yourself without addressing the issue that the ability to verify does not, by itself, confer any trust. Even if we accept your conclusion, it does not mean the other risks are inconsequential.
I think his point is that the most consequential issue is the RNG, which makes a sort of sense to me.
You can never be truly secure and eliminate all potential concerns (what if there are invisible people with nanotech in your house? It's not a known concern, but you can always manufacture an artificial one), so you try to address concerns in order of priority.
This seems like a very reasonable thing to do; in fact, everyone does it. It is why we even bother to call anyone "paranoid" to begin with: we each have our own list of concerns and try to address what we can...