
Since all these CPUs are turning out to be so inherently insecure, maybe we should work harder to run only trusted/verified code.



There's a limit to how much improving the car can compensate for the degradation of the road. At some point, repaving is the only correct fix.


Unfortunately, there is only so much trust we can put in hardware that is "known" to be secure. What other exploits are possible that we don't know about?

While I wouldn't want to compel people to run only verified binaries, I think it's important that we make it an option. Right now, it really isn't, especially with proprietary software.


Is retpoline the equivalent of picking another road in this analogy?


No, it's the equivalent of slowing down because you're hallucinating behind the wheel and can't trust what you see.
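More concretely: retpoline replaces each indirect branch with a "return trampoline" that gives the branch-target predictor nothing to steer, trapping any mis-speculation in a harmless spin loop until the real target is known. A rough sketch of the classic x86-64 thunk, written as file-scope GCC assembly (the symbol name here is mine; compilers emit their own, e.g. __x86_indirect_thunk_rax under GCC's -mindirect-branch=thunk):

    /* Caller loads the target into %rax and runs
     * `call my_indirect_thunk_rax` instead of `call *%rax`. */
    __asm__(
        ".text                        \n"
        ".globl my_indirect_thunk_rax \n"
        "my_indirect_thunk_rax:       \n"
        "    call  1f                 \n" /* push spin-loop address   */
        "2:  pause                    \n" /* mis-speculation lands... */
        "    lfence                   \n" /* ...here and goes nowhere */
        "    jmp   2b                 \n"
        "1:  mov   %rax, (%rsp)       \n" /* overwrite return address */
        "    ret                      \n" /* 'return' into *%rax      */
    );

The CPU's return predictor assumes the `ret` goes back to the spin loop, so speculation idles there instead of running attacker-chosen code; architecturally, the `ret` jumps to the real target in %rax. Hence the slowdown: every indirect branch becomes a call, a store, and a return.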


More like picking special tires that can run on the shitty road. The road is the hardware itself and its vendors (I would assume).


I can see that making sense. I still see retpoline as reducing the forks in the road, forcing the "proper route".


No, you still misunderstand. The road in the analogy is the hardware itself, and the car is the OS. The OP's point was that we have to replace the hardware, because at some point you are hacking up your car to an absurd degree just to be able to drive on the road. It has nothing to do with forks in the road or an analogy to branching.

He could be arguing for anything from "we need to replace all our hardware now" to "we need to start over with how we design chips."


Trusted and verified are separate things. Trust is by the user. Verification is by the user or a trusted third party.

Do you trust random JavaScript from the network? No? I didn't think so. So do you trust RandomThirdParty's verification?

Even banking code (run on Secure Elements in credit cards) has levels of verification (by certified bodies) and trust (by banks and card processors). Where do you draw the line?


How does a user decide which third party to trust for code verification?

E.g. do all/some “certified bodies” accept financial liability if their verification is later found to be incomplete? Since Spectre/Meltdown were not widely known, what happens if previously-certified code was exploiting this vulnerability?


I could act like Richard Stallman and refuse to run any JavaScript. That doesn't sound very appealing, though. I would argue, however, that we need a culture shift away from JavaScript.

As far as verification goes, if authors use a GPG private key to sign their code, and distributors (package maintainers, etc.) do the same, they can be "verified" by the web of trust. That verification could also be hardened by a third party certificate body, although I'm personally wary of letting certificate authorities decide which code I can run on my computer.
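As a rough sketch of the mechanical side, here's the check-before-run step as a small C program. It assumes the gpg CLI is installed and the author's key is already in your keyring (i.e., reachable through the web of trust); the file names are hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* gpg --verify exits 0 only for a valid signature from a key
         * we hold; tampering or an unknown signer yields nonzero. */
        if (system("gpg --verify package.tar.gz.sig package.tar.gz") != 0) {
            fprintf(stderr, "Signature check failed; refusing to run.\n");
            return EXIT_FAILURE;
        }

        /* Only reached once verification succeeds. */
        return system("sh ./install.sh");
    }

Package managers already do essentially this (with more care) on every install; the open question is who signs the keys that sign the code.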


One problem with that is JavaScript: users download and run dozens of untrusted programs a day. I have JS mostly disabled in my browser, but that leaves websites partially broken.


Is there a point to running verified code on insecure hardware?


If you verify that it does no funny business with the hardware bugs, it should be no problem, right?


I figure compromising alien/userland code could well be an issue; something like JavaScript in a browser.
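And the hard part of verifying "no funny business" is that the business looks entirely legitimate. The canonical Spectre variant-1 gadget from Kocher et al.'s paper is nothing but a bounds-checked array read (names follow the paper; sizes are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    unsigned int array1_size = 16;
    uint8_t array1[16];
    uint8_t array2[256 * 512];

    void victim_function(size_t x)
    {
        if (x < array1_size) {
            /* Architecturally safe. But if the branch is mispredicted
             * with an attacker-chosen out-of-range x, the CPU may
             * speculatively read array1[x] (a secret byte) and touch a
             * secret-dependent cache line of array2. That cache
             * footprint survives the squash and can be recovered with
             * a timing side channel such as Flush+Reload. */
            volatile uint8_t tmp = array2[array1[x] * 512];
            (void)tmp;
        }
    }

No static verifier flags this as malicious, because the leak lives in the microarchitecture, not in the code's semantics.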


I assumed that the GP included userland, and the browser, when they advised running only trusted code.



