
I completely disagree. Can you give me an example? Perhaps you can change my mind.



We ran a 100-petabyte cluster with all Meltdown/Spectre mitigations turned off, because there was no foreign code running on it that didn't already have access to the data itself.

It's all about the threat model. Engineers at the company were considered trusted actors, and they were the only ones permitted to connect. If that layer failed, there is no way a cache-timing side channel would be the fastest way in.
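
For anyone curious what that trade-off looks like in practice: on Linux the active mitigation state is exposed under /sys/devices/system/cpu/vulnerabilities, and booting with mitigations=off disables the mitigations wholesale. A rough sketch for inspecting the current state (an illustration, not the parent's actual tooling):

    import pathlib

    # Each file under this directory covers one vulnerability class
    # (meltdown, spectre_v1, spectre_v2, ...) and reports whether and
    # how the kernel is mitigating it on this machine.
    VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:25s} {entry.read_text().strip()}")

    # Booting with the `mitigations=off` kernel parameter turns the
    # mitigations off globally; these files then report "Vulnerable".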


A machine which is turned off is much slower and more secure than a machine which is turned on, but for some reason people insist on turning their computers on.

Security mechanisms which prevent you from doing the thing you set out to do are worthless, and making a computer too slow to be useful is one way to get there. In this specific case, if moving Little Snitch's functionality to userland means the performance hit is large enough that I have to turn it off for network-performance-sensitive work (say, video conferences), then it's a net loss in security compared to the status quo of running it in kernel mode.


/dev/random vs /dev/urandom: you could argue that a fresh seed via /dev/random is somewhat better, but you wouldn't block everything constantly just to gather new entropy.


/dev/urandom is better than /dev/random in almost every case, so much so that on macOS they are identical.
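
In practice that means you can just read /dev/urandom (or use your language's wrapper around the kernel CSPRNG) and never worry about blocking. A quick sketch in Python, where os.urandom pulls from that same generator:

    import os

    # os.urandom() draws from the kernel CSPRNG (/dev/urandom on Linux,
    # the Fortuna-based generator on macOS) and does not block once the
    # pool has been seeded at boot.
    key = os.urandom(32)   # e.g. a 256-bit key

    # Reading the device node directly behaves the same way:
    with open("/dev/urandom", "rb") as f:
        nonce = f.read(16)

    print(key.hex(), nonce.hex())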


I can't give you an example, but it's perfectly plausible that many users don't store sensitive data on their computers and/or aren't careless about downloading and running programs. These users might prefer the extra speedup.


Frequently security = correctness: the bug an attacker exploits is often the same bug that silently corrupts your data.



