Other than the myriad password rules that NIST has killed off in competent circles, what are some other "useless controls"?



>that NIST has killed in competent circles

Just because this is my favorite soapbox - anyone that has to deal with passwords should go read NIST SP800-63B:

https://pages.nist.gov/800-63-3/sp800-63b.html

I was kind of shocked by just how gosh-darned reasonable it was when it came out a couple of years ago. It's my absolute favorite thing to cite during audits.

"Are you requiring password resets every 90 days?"

"No. We follow the federal government's NIST SP800-63B guidelines which explicitly states that passwords should not be arbitrarily reset."

I've been pleasantly surprised that I haven't had an auditor push back yet. I'm sure I eventually will, but it's been incredibly effective ammunition so far.
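
For the curious, the core of the guidance is small enough to sketch. Here's a minimal illustration of my reading of 800-63B's rules for memorized secrets (not official code; breached_hashes is a hypothetical local set of SHA-1 digests, e.g. built from the Have I Been Pwned corpus):

    import hashlib

    def acceptable(password: str, breached_hashes: set[str]) -> bool:
        # 800-63B: at least 8 characters for user-chosen secrets.
        if len(password) < 8:
            return False
        # 800-63B: screen against known-compromised passwords.
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
        if digest in breached_hashes:
            return False
        # Notably absent: composition rules and scheduled expiration.
        return True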


I've done the same thing, with the same results. These guidelines are impressive. 1Password created an excellent summary:

https://blog.1password.com/nist-password-guidelines-update/


Alas, in Australia one of the more popular frameworks in gov agencies is Essential Eight, and they are a few years away from publishing an update with this radical idea.


My understanding is that Essential Eight doesn't require password rotation.


If so then I'll be doubly frustrated - I've been assured by our domain experts that this is a requirement of the model.

Did it used to be a requirement that has since been retracted? I suppose it may be a local or state-based 'implementation augmentation'.

I've just trawled through the Signals Directorate site and can find plenty of references to passwords, but nothing specifically covering this.


It may have been at some point, as password rotation was a requirement that got thrown around, but to my knowledge it hasn't come up in assessments for a long time.


I bumped into controls mandating security scans, where the people running the scans don't need to understand anything about the results. One example prevented us from serving public data using Google Web Services because the front-end still included 3DES among its offered ciphers. This raised alerts because of the possibility of a Sweet32 attack, which is completely impractical to exploit with website-scale data sizes and short-lived sessions (and modern browsers generally don't opt to use 3DES anyway). Still, it was a hard 'no', but nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
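
For what it's worth, the scanner's check is easy to reproduce yourself: offer the server only 3DES suites and see whether it accepts. A rough sketch (example.com is a placeholder; 3DES suites only exist up to TLS 1.2, and some OpenSSL builds won't enable them at all):

    import socket
    import ssl

    HOST, PORT = "example.com", 443  # placeholder target

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # 3DES predates TLS 1.3

    try:
        ctx.set_ciphers("3DES")  # offer nothing but triple-DES suites
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("server accepted 3DES:", tls.cipher())
    except ssl.SSLError as e:
        print("no 3DES negotiated:", e)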

We also had scans report GPL licenses in our dependencies, which for us was a total non-issue, but security dug in, not because of legal risk, but for compliance with the scans.


"Why do we have to do X? Because we have to do X and have always had to do X" is a human problem coming from lack of expertise and lack of confidence to question authority.

It's a shame, your story isn't unique at all.


Not just lack of expertise and confidence, but also lack of trust, and possibly the real overhead of running a large org.

Like, IT sec does not trust employees. This burns an absurd amount of money day in, day out, due to broadly applied security policies that interfere with work.

Like, there's a lot of talk about how almost no one has any business having local admin rights on their work machine. You let people have it, and then someone will quickly install a malicious Outlook extension or some shit. Limits are applied, real-time scans are introduced too, and surely this inconveniences almost everyone, but maybe it's the right tradeoff for most of the org's moderately paid office workers.

But then, it's a global policy, so it also hits all the org's absurdly-highly paid tech workers, and hits them much worse than everyone else. Since IT (or people giving them orders) doesn't trust anyone, you now have all those devs eating the productivity loss, or worse, playing cat-and-mouse with corporate IT by inventing clever workarounds, some of which could actually compromise company security.

In places I've seen, by my guesstimate, that lack of trust and of the ability to issue and monitor exceptions to security policies[0] could easily cost as much as doubling the salary of all affected tech teams.

As much as big orgs crave legibility, they sure love to inflict illegible costs on themselves (don't get me started about the general trend of phasing out specialist jobs and distributing workload equally on everyone...).

--

[0] - Real exceptions, as in "sure whatev, have local admin (you're still surveilled anyway)", instead of "spend 5 minutes filling this form, on a page that's down half the time, to get temporary local admin for a couple of hours; no, that still doesn't mean you can add folders to the exclusion list for the real-time scanner".


Another of my favorite examples is companies going "everyone needs cyber security training" and applying a single test to their entire global staff with no "test out" option. I watched a former employer with a few hundred thousand employees in the US alone mandate a multi-hour course on the most basic things, which could have been avoided with some short knowledge surveys.

The same employer also mandated a multi-hour ethics guidelines course yearly that was 90% oriented towards corporate salespeople, and once demanded everyone take what I believe was a 16-hour training set on their particular cloud computing offerings. That one alone must have cost them millions in wasted hours.


> nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.

Isn't it just a burden on the security team & the organization as a whole if nothing else? If every team gets to exempt themselves from a ban just because they use the thing responsibly, then suddenly the answer to the question of "are we at risk of X, which relies on banned thing Y" can become a massive investigation you have to re-do after every event, rather than a simple "no".

I don't know the details of your situation obviously, maybe there's something silly about it, but it doesn't seem silly to me. More generally, "you can only make an exemption-free rule if 100% of its violations are dangerous" is not how the world works.


This is often the result of poor risk management or lack of risk management understanding.

Compliance assessments, at least the ones I have worked with, take a risk-based approach and allow for risk-based decisions/exemptions.

If your vulnerability management process takes what the scanning solution says at face value and therefore assumes ALL vulnerabilities are to be patched, then you're setting yourself up for failure.
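
As a sketch of the alternative, triage can route findings through documented risk acceptance instead of treating every red item as a patch order (invented finding format; SWEET32 standing in for any signed-off exception):

    # Hypothetical findings: [{"id": "SWEET32", "severity": "high"}, ...]
    ACCEPTED_RISKS = {"SWEET32"}  # exceptions recorded in the risk register

    def triage(findings: list[dict]) -> list[dict]:
        for f in findings:
            if f["id"] in ACCEPTED_RISKS:
                f["action"] = "accepted risk, documented"
            elif f["severity"] in ("critical", "high"):
                f["action"] = "patch now"
            else:
                f["action"] = "schedule within SLA"
        return findings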


I actually wrote blog posts about two of my (least) favorites: VPNs (https://securityis.substack.com/p/security-is-not-a-vpn-prob...) and Encryption (https://securityis.substack.com/p/security-is-not-an-encrypt...). Thank you for pointing out that I don't link to them in the original post.

Password resets are definitely one; every single day I still have to tell prospects and customers that I can't both comply with NIST 800-63 and periodically rotate passwords. Others I often counter include aggressive login requirements, WAFs, database isolation, weird single-tenancy or multi-tenancy asks, and requests for anti-virus in places it doesn't need to be.


In the spirit of this article, can anyone explain why the Linux host-level firewall is a useful control?


It depends a bit on circumstance, but I'd start with "way too much software binds to 0.0.0.0 by default", "way too much software lacks decent authn/z out of the box, or has none at all", and "developers are too lazy to change the defaults".

So it ends up on the network, unprotected.
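
To make the first point concrete, this is the whole difference between a service that ends up exposed and one that stays local (ports are arbitrary):

    import socket

    # Reachable from every interface - the common, risky default.
    exposed = socket.socket()
    exposed.bind(("0.0.0.0", 8080))

    # Loopback only - unreachable from the network even with no firewall.
    local = socket.socket()
    local.bind(("127.0.0.1", 8081))

A host-level firewall is the safety net for the first case.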


Do you mean "why is running a firewall on an individual host useful"? Single-application hosts are quite common, and sadly some applications do not have adequate authentication built-in.

Do you mean "why does Linux allow firewalling based on the source host"? Linux has a flexible routing policy system that can be used to implement useful controls, host is just one of the available fields, it's not meant to be used for trusting on a per-host basis.


It's a catch-all in case any single service is badly configured. This often happens while people are fiddling around trying to configure a new service, which means they are at the most vulnerable.


There's always an edge case; you have to know the various security controls well enough to slice toward the target risk outcome, rather than equating the target outcome with one specific implementation. The security hires who are challenging employees are the latter type.

An edge case, and your answer in spirit: a public-facing server that can't have a hardware firewall in-line, can't do ACLs for some reason, and can't have EDR on it... at least put a Linux host-level firewall on it and hope for the best.



