
I've bumped into controls mandating security scans where the people running the scans aren't required to understand anything about the results. One example prevented us from serving public data using Google Web Services because the front end still included 3DES among its offered cipher suites. This raised alerts over a possible Sweet32 vulnerability, which is completely impractical to exploit with website-scale data sizes and short-lived sessions (and modern browsers generally won't negotiate 3DES anyway). Still, it was a hard 'no', but nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
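For what it's worth, a finding like that is easy to reproduce or refute yourself. Here's a minimal sketch, using only Python's standard library, of probing whether a server will still negotiate a 3DES suite (the hostname is a placeholder, and it assumes your local OpenSSL still ships 3DES, otherwise set_ciphers itself will error out):

    import socket, ssl

    HOST = "example.com"  # placeholder for the endpoint under test

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # 3DES suites only exist up to TLS 1.2
    ctx.set_ciphers("3DES")  # offer nothing but 3DES suites, e.g. DES-CBC3-SHA

    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("server accepted 3DES:", tls.cipher())
    except ssl.SSLError as e:
        print("server refused a 3DES-only handshake:", e)

Even when the handshake succeeds, Sweet32 still requires capturing tens of gigabytes within a single session, which is exactly why the finding was noise for short-lived website traffic.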

We also had scans report GPL licenses among our dependencies, which for us was a total non-issue, but security dug in their heels, not because of any legal risk, but for compliance with the scans.
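For context, findings like that usually come from a scanner doing little more than comparing dependency metadata against a policy allowlist. A toy sketch of the whole mechanism (the allowlist here is made up, not a recommendation):

    # Toy license check over installed Python packages; the
    # allowlist is a hypothetical policy, for illustration only.
    from importlib import metadata

    ALLOWED = {"MIT", "BSD-3-Clause", "Apache-2.0"}

    for dist in metadata.distributions():
        lic = (dist.metadata.get("License") or "").strip()
        if lic and lic not in ALLOWED:
            print(dist.metadata.get("Name"), "->", lic)

The scanner can tell you a license string doesn't match the list; whether that's an actual legal problem for your use case is a separate question it can't answer.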




"Why do we have to do X? Because we have to do X and have always had to do X" is a human problem coming from lack of expertise and lack of confidence to question authority.

It's a shame; your story isn't unique at all.


Not just lack of expertise and confidence, but also lack of trust, and possibly the real overhead of running a large org.

Like, IT sec does not trust employees. This burns an absurd amount of money day in, day out, due to broadly applied security policies that interfere with work.

Like, there's a lot of talk about how almost no one has any business having local admin rights on their work machine. You let people have it, and then someone will quickly install a malicious Outlook extension or some shit. So limits are applied and real-time scans are introduced too; this inconveniences almost everyone, but maybe it's the right tradeoff for most of the org's moderately paid office workers.

But then, it's a global policy, so it also hits all the org's absurdly highly paid tech workers, and hits them much worse than everyone else. Since IT (or the people giving them orders) doesn't trust anyone, you now have all those devs eating the productivity loss, or worse, playing cat-and-mouse with corporate IT by inventing clever workarounds, some of which could actually compromise company security.

In places I've seen, my guesstimate is that this lack of trust, and of the ability to issue and monitor exceptions to security policies[0], could easily cost as much as doubling the salary of every affected tech team.

As much as big orgs crave legibility, they sure love to inflict illegible costs on themselves (don't get me started on the general trend of phasing out specialist jobs and spreading the workload evenly across everyone...).

--

[0] - Real exceptions, as in "sure, whatev, have local admin (you're still surveilled anyway)", instead of "spend 5 minutes filling out this form, on a page that's down half the time, to get temporary local admin for a couple of hours; no, that still doesn't mean you can add folders to the real-time scanner's exclusion list".


Another of my favorite examples is companies deciding "everyone needs cyber security training" and applying a single course to their entire global staff with no "test out" option. I watched a former employer with a few hundred thousand employees in the US alone mandate a multi-hour course on the most basic things, which could have been waived with some short knowledge-check surveys.

The same employer also mandated a yearly multi-hour ethics guidelines course that was 90% oriented towards corporate salespeople, and once demanded everyone take what I believe was a 16-hour training set on their particular cloud computing offerings. That one alone must have cost them millions in wasted hours.


> nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.

Isn't it just a burden on the security team & the organization as a whole, if nothing else? If every team gets to exempt themselves from a ban just because they use the thing responsibly, then suddenly the answer to the question "are we at risk of X, which relies on banned thing Y?" can become a massive investigation you have to redo after every event, rather than a simple "no".

Obviously I don't know the details of your situation; maybe there is something silly about it, but it doesn't seem silly to me. More generally, "you can only make an exemption-free rule if 100% of its violations are dangerous" is not how the world works.


This is often the result of poor risk management, or of a poor understanding of risk management.

Compliance assessments, at least the assessments I have worked with, take a risk-based approach and allow for risk-based decisions and exemptions.

If your vulnerability management process takes what the scanning solution says at face value, and therefore assumes ALL vulnerabilities are to be patched, then you're setting yourself up for failure.
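One concrete way to encode that: keep a reviewed waiver file next to the scan, so exemptions are deliberate, justified, and time-boxed rather than ad hoc. A minimal sketch; the waiver structure and the second CVE ID are made up for illustration:

    # Sketch of a risk-based exception filter over scanner output.
    from datetime import date

    # Each waiver is a documented, expiring risk acceptance.
    WAIVERS = {
        "CVE-2016-2183": {  # Sweet32 / 3DES
            "reason": "Public data, short-lived sessions; attack impractical",
            "approved_by": "security-review",
            "expires": date(2025, 6, 30),
        },
    }

    def actionable(findings):
        """Yield findings not covered by a current waiver."""
        today = date.today()
        for f in findings:
            w = WAIVERS.get(f["id"])
            if w and w["expires"] >= today:
                continue  # accepted risk, documented above
            yield f

    scan = [{"id": "CVE-2016-2183", "severity": "high"},
            {"id": "CVE-2099-0001", "severity": "medium"}]  # made-up ID
    for f in actionable(scan):
        print("needs action:", f["id"], f["severity"])

Many scanners support this pattern natively (Trivy's .trivyignore file, for instance), so "the report says severe" never has to be the end of the conversation.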



