
I think there's always another layer of trust issues no matter how trustworthy your system is.

It may be that you shouldn't even trust the physical environment: even the power supply can be used to do evil things. Or radiation, or ambient temperature.

Don't forget that code itself can affect the environment in unobvious ways.

EDIT: My point is that we need to be aware that security will never be a solved problem. Nor can we treat security as a software-only issue; we have to take a holistic view of the whole system.

There's of course the point where risk mitigation is not worth the cost. That's another matter.




That's true, but if I'm reading the subtext of your post correctly, you're saying that this is pointless, and I don't think it is.

For a lot of people, we're well below the level of security where hardware security is relevant. The average person is still willing to run software that downloads/installs/runs malware on their machine (browsers with Javascript enabled). So in that sense, we've got a lot of work to do to get developers and consumers to practice better security at the software level before the hardware level is even a significant concern.

However, I do hope for a future where software security is both a fairly solved problem and the norm, and if that time comes, I want to be ready for the inevitable arms race when hackers move to hardware vulnerabilities.

Recognize that yes, there are always more layers, but each layer of vulnerability has decreased capabilities. It's a lot easier to read someone's hard drive using an installed app with full filesystem permissions than it is to read the same hard drive using their power supply.

Also recognize that these layers require more capability from your attacker. A Russian hacker can write an app that reads people's hard drives from anywhere in the world, but an attack through the power supply generally requires physical proximity, since the power grid lacks the networking infrastructure for long-distance remote access.


> That's true, but if I'm reading the subtext of your post correctly, you're saying that this is pointless, and I don't think it is.

I'm not saying this is pointless at all! Just that so far there seems to always be another layer that can be used to circumvent the protections.

I guess what I'm really saying is that we should never take security for granted. Instead, we should look at the whole system and try to find the easiest attack vectors, which may no longer be purely software issues.

For example, at some point we might need to consider the physical effects of the code being executed or how the physical environment can affect it.

When the software is sufficiently strong, the attacks will be directly or indirectly against the hardware.


I think we're in agreement, then. I'm sorry for accusing you of saying something you didn't say!


So far though, software has been the soft underbelly of secure systems. The hardware is assumed to be compromised at a level that is shockingly bad and nobody seems to care, we just accept it and move on (IME, PSP, TrustZone).


I once read an article about computer security that talked about why your passwords are displayed as asterisks but omitted mention of Van Eck radiation.

https://en.wikipedia.org/wiki/Van_Eck_radiation


You are right, but within the scope of chip design we can solve all of that. What we cannot solve is supply-chain trust (how do you know your temperature sensor hasn't been tampered with?).


Or your temperature sensor being rendered useless via a clever attack?

Perhaps by something as simple as a clever pattern of temperature-sensor reads causing your code to think the temperature is in the safe range for your application?

Or by causing your code to execute extra multiplies, generating heat to hide a momentary temperature drop.
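
To make that last trick concrete, here is a minimal C sketch of the idea. Everything here is illustrative: read_temp_sensor() is a hypothetical hardware read, and the loop count is arbitrary.

    /* Illustrative only: busywork multiplies warming the die right before
       the sensor is sampled, masking a momentary external cooling attack. */
    #include <stdint.h>

    extern uint32_t read_temp_sensor(void);   /* hypothetical MMIO read */

    static volatile uint32_t sink;            /* keeps the loop from being optimized out */

    static void burn_cycles(uint32_t n)
    {
        uint32_t acc = 0x9E3779B9u;
        while (n--)
            acc *= 2654435761u;               /* each multiply dissipates heat in the ALU */
        sink = acc;
    }

    uint32_t temp_read_fooled(void)
    {
        burn_cycles(1000000u);                /* local heating just before sampling */
        return read_temp_sensor();            /* reads "safe" despite the ambient drop */
    }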

My point is that even if you could 100% trust your supply chain and 100% verify that your silicon matches your design, there would still be residual issues.

For example, power supply glitch attacks [0] are nowadays well known, including techniques to make them sufficiently reliable.

[0]: https://www.google.com/search?q=power+glitch+attack
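
To see why glitching is so effective, consider this hedged C sketch (function names are made up): a single skipped branch defeats the naive check, which is why hardened firmware commonly repeats security-critical comparisons.

    /* Sketch of the classic glitch target and a common mitigation. */
    #include <string.h>

    /* A well-timed supply-voltage glitch can make the CPU skip the branch
       below, unlocking without the correct PIN. */
    int unlock_naive(const char *pin, const char *ref)
    {
        if (memcmp(pin, ref, 4) != 0)
            return 0;
        return 1;                             /* unlocked */
    }

    /* Typical countermeasure: redundant checks, so one glitch can no
       longer flip the outcome on its own. */
    int unlock_hardened(const char *pin, const char *ref)
    {
        volatile int ok1 = (memcmp(pin, ref, 4) == 0);
        volatile int ok2 = (memcmp(pin, ref, 4) == 0);
        return ok1 && ok2;
    }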


You should be able to exhaustively test a temperature sensor; it is typically a two-lead passive device.


It was just a simple example. There are counter-measures WAY more complex than a simple two-lead sensor. I was part of a Common Criteria (CC) certified chip design, and I can tell you we had to implement tests for every single countermeasure in the chip. But you still have no means to check whether your sensor/countermeasure has been tampered with somewhere in the supply chain.


> There are counter-measures WAY more complex than a simple two-lead sensor.

Fair enough.

Has there to your knowledge ever been documented evidence that a mask had in fact been tampered with in a way that caused a system to be compromised?


During a CC certification, the design house, mask shop, and fab are all certified to reduce the chances of the chip being tampered with. The certification ensures that all of those places have decent security practices and protocols. It helps, but it is quite far from completely mitigating the risk.

I don't have a reference to a mask actually being modified to give you, but it is so easy to do that we don't really need evidence to be worried.

If you think about it, just changing the implantation parameters of the transistors that form a ring oscillator used for random-number generation can bias it (this does not even require modifying the mask).
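
A toy calculation (my own illustration, not from any real design) shows how cheap such a bias is for an attacker: skewing the sampled output from 50% to 60% ones leaves the RNG looking alive and random, yet measurably weakens every key derived from it.

    /* Toy model of a tampered ring-oscillator TRNG: a biased coin stands
       in for the skewed duty cycle. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.60;                      /* ones-probability after hypothetical tampering */
        double shannon = -p * log2(p) - (1 - p) * log2(1 - p);
        double minent  = -log2(p);            /* what a guessing attack actually sees */
        printf("Shannon entropy per bit: %.3f (unbiased: 1.000)\n", shannon);
        printf("min-entropy per bit:     %.3f\n", minent);
        printf("a nominal 128-bit seed carries only ~%.0f bits of min-entropy\n",
               128.0 * minent);
        return 0;
    }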


Ah yes, that makes good sense. I once built a hardware RNG, and it was surprisingly hard to keep it stable over the long term to satisfy the certification criteria. In the end I managed, but it took a lot of analog voodoo, and I can see how easy it would be to tamper with it in a way that would not be detectable unless you monitored the device continuously.

RNGs are a weak spot. Thank you for that example.


Randomly pick samples every now and then, at randomized intervals, and test them?
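
Spot checks help, but coarse acceptance tests can pass a subtly biased part, which is the parent's point about continuous monitoring. As a hedged sketch (trng_read_bit() is hypothetical), here is a FIPS 140-2-style monobit count: it accepts 9,725 to 10,275 ones in 20,000 bits, so a part biased to 51% ones (expecting ~10,200) would usually sail through.

    /* Sketch of a sampled acceptance test: FIPS 140-2-style monobit count.
       trng_read_bit() is a hypothetical device read returning 0 or 1. */
    extern int trng_read_bit(void);

    int monobit_pass(void)
    {
        int ones = 0;
        for (int i = 0; i < 20000; i++)
            ones += trng_read_bit();
        /* FIPS 140-2 accepted 9725 < ones < 10275; a 51%-biased part
           (expected ~10200 ones) would usually still pass. */
        return ones > 9725 && ones < 10275;
    }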



