
The flip side is, if you don't do auto updates, and an exploit the update would have protected you against is published and used against you before you've tested and pushed the patch, you're up the creek without a paddle in that situation as well.

To some degree you have to trust the software you are using not to mess things up.




Since I do mission critical healthcare, I do run into this concept. But it's not as unresolvable as you portray. Consider, for example, HIPAA's "break the glass" requirement: whatever else you implement in terms of security, you must implement a bypass that routinely non-authorised staff can activate to access health information if someone's life is in danger.

Similarly, when I questioned "why can't users turn off Zscaler in an emergency?", we were told it wouldn't be compliant. But it's completely implementable at a technical level (Zscaler even supports this): you give users a code to use in an emergency, they can activate it, and the use is logged and reviewed afterwards. But the org is too scared of compliance failure to let users do it.
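To make the "break the glass" flow concrete, here's a minimal sketch in Python of that code-plus-audit pattern. Everything in it is hypothetical (the code, the log structure, the 60-minute expiry); it's not Zscaler's actual API or any HIPAA-mandated schema, just the shape of "bypass is easy to trigger, every use is recorded for review".

    # Hypothetical "break the glass" bypass: names, storage, and timeout
    # are illustrative assumptions, not any vendor's real interface.
    import datetime
    import hashlib
    import hmac

    EMERGENCY_CODE_HASH = hashlib.sha256(b"code-issued-to-staff").hexdigest()
    AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

    def request_bypass(user_id: str, code: str, reason: str) -> bool:
        """Grant a time-limited bypass if the emergency code matches.

        Every attempt, granted or not, is logged for after-the-fact review.
        """
        supplied_hash = hashlib.sha256(code.encode()).hexdigest()
        granted = hmac.compare_digest(supplied_hash, EMERGENCY_CODE_HASH)
        AUDIT_LOG.append({
            "user": user_id,
            "reason": reason,
            "granted": granted,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "expires_minutes": 60 if granted else 0,  # bypass auto-reverts
        })
        return granted

    # Example: a clinician invokes the bypass during an outage; reviewers
    # later audit AUDIT_LOG and escalate any non-emergency use.
    if request_bypass("dr_smith", "code-issued-to-staff", "EHR unreachable, patient coding"):
        print("Bypass active for 60 minutes; incident will be reviewed.")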


I agree with the requirement, but it sounds like a vault would need to have an unlocked door with a sign.


Well, if the vault says you have COPD, and the devious bank robber is interested in your continued breathing, perhaps we can just review the footage after the fact.

This is one of those cases where you don't disable emergency systems to defend against rogue employees. If people abuse emergency procedures, you let the legal system sort it out.


A vault with firearms in the police station to which every staff member has a key. Sounds reasonable to me.

Users are not prisoners left in the burning building without a fire escape.


> It says that whatever else you implement in terms of security you must implement a bypass that can be activated by routinely non-authorised staff to access health information if someone's life is in danger.

Huh.

I can see why this needs to exist, but hadn't thought of it before. Same deal as cryptography and law-enforcement backdoors.

> logged and reviewed after use

I was going to ask how this has protection from mis-use.

Seems good to me… but then I don't, not really, not deeply, not properly, feel medical privacy. To me, violating that privacy is clearly rude, but how the bar rises from "rude" to "illegal" is a perceptual gap where, although I see the importance to others, I don't really feel it myself.

So it seems good enough to me, but am I right or is this an imagination failure on my part? Is that actually good enough?

I don't think cryptography in general can use that, unfortunately. In other cases, an after-the-fact review process can be too slow to prevent the damage.


Yes. And the vast majority of the time, it doesn’t mess things up.

The notion that you may take on some risk in order to reduce risk overall is somehow lost on a lot of people in these conversations.



