They want us to be compliant, not secure (go350.com)
198 points by _uvd9 on Dec 26, 2020 | 174 comments



The author doesn’t really try to see things from the regulator’s perspective, and it weakens their argument.

To the state, one of the most important goals is legibility. The state has an enormous burden of regulation in its various enterprises, and the less variance the easier the job. This certainly isn’t ideal in many, many cases, but in this case a person who likely didn’t write the standards quite rationally trusts a vetted standard over one small, trivial test given to them by a company they’re evaluating. FIPS standards are probably much more exhaustive than just a hashing performance test on one piece of hardware.


^^^^^^. This x1000.

Yes, it's bad when bureaucracy leads to individual bad decisions. But casually saying "oh, well, this is idiocy, the government doesn't care about security" ignores the fact that organizing the regulation of human behavior is immensely costly, and the government needs to optimize for those costs.

A good dev analogy is using a library function vs rolling your own. The government's library is the list of things that its own engineers approved. The equivalent of rolling their own is for some individual auditor to make an independent decision to deviate from the list. Now, it might be that you can squeeze out an extra tiny bit of performance[/the government can squeeze out some extra security] by rolling your own sort function or whatever, but most wise devs will balance that off against the risk of introducing extra bugs, the extra writing and maintenance time involved, etc.

Reality is that this sucks. It would be amazing if it wasn't actually super-super costly to organize human beings to all point in the same direction and make sure that people don't cheat. Government would be a lot cheaper and more efficient. But, alas, reality does suck.


I feel bad for the author for publishing this article.

If your (the author's) mindset is that "everyone else is an idiot", be it management, regulators, auditors or some other group that is not your group (engineers?), then unfortunately you have an attitude problem.

If furthermore you maliciously misinterpret the other group, saying "they want us to be compliant, not secure", based on misunderstanding the domain expertise _of your own group_ (!), then you are a liability to your business and should be kept at arm's length from external auditors.

Because the author clearly has no clue what he is doing, and based on his own technical shortcomings he is talking down to others who are _actually_ doing their job correctly: the auditors.

Why is the author comparing bcrypt using 32 iterations with sha512crypt with 5000 iterations? Why not increase the sha512crypt iteration count by orders of magnitude if he wants the hashcat test to fare equal or better under the testing methodology in use? Why is he using hashcat and basing decisions solely on that tool? Why is he using shacrypt at all and not a proper KDF, such as PBKDF2, plugging in SHA2?
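A fairer methodology, for what it's worth, would be to calibrate both algorithms to the same time budget on the defender's hardware before comparing cracking rates. A rough sketch of that idea (Python, assuming the third-party bcrypt package is available; the target budget and starting values are illustrative, not recommendations):

    # Calibrate work factors to a common time budget before comparing anything.
    import hashlib
    import os
    import time

    import bcrypt  # third-party package, assumed available

    PASSWORD = b"correct horse battery staple"
    SALT = os.urandom(16)
    TARGET_SECONDS = 0.25  # whatever per-login cost the defender can afford

    def time_pbkdf2(iterations):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha512", PASSWORD, SALT, iterations)
        return time.perf_counter() - start

    def time_bcrypt(cost):
        start = time.perf_counter()
        bcrypt.hashpw(PASSWORD, bcrypt.gensalt(rounds=cost))
        return time.perf_counter() - start

    # Scale each algorithm up to the same budget, then compare cracking
    # resistance at *those* settings, not at whatever defaults hashcat shows.
    iterations = 10_000
    while time_pbkdf2(iterations) < TARGET_SECONDS:
        iterations *= 2

    cost = 4
    while time_bcrypt(cost) < TARGET_SECONDS:
        cost += 1

    print("PBKDF2-HMAC-SHA512 iterations:", iterations)
    print("bcrypt cost factor:", cost)

Only at matched cost does a guesses-per-second comparison tell you anything about the algorithms themselves.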


Compliance is in fact not the same as security. Just like our non-technology laws are not equivalent to ethics. Laws exist for many reasons, and change over time as society's ideals and requirements change.

For example, just because we have laws against burglary, it doesn't mean you can safely leave your door unlocked. Being "compliant" in no way means you are being secure. This is highly dangerous thinking that many, many, many organizations are guilty of.

As for the regulators, yes, they care about you being compliant not secure. Just like lawyers care about things being legal, not "right." It's their job to make sure you comply with the law, it's not their job to make sure the law is correct, or that you are doing things in your best interest outside of what the laws say.

Also, just like every law, security compliance regulations do become outdated, in some cases even being actively harmful. For instance, look at how password compliance has been overhauled in the last several years. Previous to this, many people knew based on technical data and research that old password compliance rules were not best practice, or even outright harmful. The auditors didn't care, it wasn't their job to evaluate whether the existing compliance rules were right. Eventually standards bodies caught up, but even today you have some compliance people who won't let go of the old ways.

So in my opinion, the author has a very good and correct point: compliance isn't about being secure. It's necessary to have, like many of our laws and regulations, but it's not the same as security. We should strive for laws that match outcomes and our current understanding of best practices, but there is never a 1:1 equivalence.


You're criticizing the author for not adequately understanding the regulatory concerns but letting the regulators off the hook for not adequately understanding the engineering concerns. Frankly the latter feels more important and more beneficial (to everyone) to improve upon.


The author didn't relate how the exact conversation went down. The closest we get is this:

> But the auditors were adamant. They did not care that the approved algorithms were weaker. Nothing would change their decision.

That's 100% the author's perspective. It's quite possible those auditors were aware of the existence of bcrypt and what it does.

Even so, their personal opinions simply didn't matter. And they, quite likely, tried to point this out to the author. Yes, a stronger algorithm - bcrypt - existed, but as long as it wasn't a formally vetted FIPS approved algorithm, it simply wasn't an option for operational use.

Compliance isn't bad in itself. It's a tool. Compliance is implemented to ensure that large and complex organizations can continue to function as a structured system. It helps ensure accountability, auditability, integrity, reliability, sustainability and other "abilities".

There is a trade-off, though, when it comes to day-to-day operations. You simply can't improvise. In a large organisation, that's a good thing, which the author clearly hasn't understood.

While the direct result is that their system's security is now weaker, that same compliance also shields them from any legal repercussions. If those systems get compromised, they can always point to the list of sanctioned algorithms and the oversight. That's how accountability works.

You'll find the freedom and the flexibility to implement your own solutions in smaller organizations. However, that comes at a price: the buck stops at your desk. When something breaks, all eyes will turn to you to fix it. And there will be little to hide behind.

Of course, I'm well aware that there's more nuance to it. There are plenty of stories on both sides of the aisle where the lack of, or very existence of, formalisms either saved the day or directly led to catastrophe. That doesn't imply that one is always better than the other.


The way to approach infosec is, don't trust anyone and verify everything. This is just part of the IT world.

I'm not an auditor, but my impression is they not only don't trust each other, they also hate each other. Auditing is a job for people who don't like other people, which is ok. Just be aware.


I'm going to take flak for this, but I believe it needs to be said. (And for the record, I have been dealing with audits and auditors continuously for the last 5 years. In 2019 I spent 25% of all my working hours directly working on audits or babysitting auditors.)

Auditors are accountants who secretly harbour dreams of being lawyers. With years of disappointment and self-loathing, they do the human thing: take it out on everyone else.


Yeah - utterly agree. The point raised in the article is valid - but the correct thing to do isn't to persuade the auditor they are wrong, it's to push FIPS to include bcrypt instead of SHA2 (although, as noted below, this isn't the right answer either - it should use PBKDF2 or similar).


You could also consider it an M&M test for both FIPS and the auditor. If they get something this simple wrong, what other issues lie hidden?


FIPS is never going to include bcrypt. Not including things like bcrypt is literally part of the point of FIPS.


Why is this by the way?

I've googled it and most of the responses I've found were "NIST doesn't talk about bcrypt and probably won't because it's related to blowfish and not SHA". And the other being "Blowfish is equivalent to SHA so if anything is approved blowfish would be approved, not BCrypt which is similar to PBKDF2, which is approved because it uses SHA".

Is it just bureaucratic inertia?


The point of FIPS is to deliberately narrow down the range of acceptable primitives, to simplify the evaluation of systems sold to the government.

(FIPS is very bad).


Can you elaborate on why this is the point of FIPS? It sounds as if it's specific and intentional.


Which aspects of bcrypt are you referring to by "things like bcrypt"?


Modern, superior alternatives to standard algorithms and constructions. Canonical example: Curve25519. WireGuard, for instance, will never be FIPS compatible.


Don't the drafts for NIST FIPS 186-5 and SP 800-186 both include Curve25519-based algorithms?


They do. This person doesn't know what they're talking about.

The point of FIPS is to provide a set of approved algorithms (and other standards) that are considered to be secure due to thorough analysis. Curve25519 has become widely implemented, and hasn't been broken, and has some advantages over the other NIST-approved curves, so NIST will likely approve it.

Password hashing has been rapidly changing over the last few years. PBKDF2 isn't memory-hard, but can be instantiated with only a FIPS-approved cryptographic hash. Bcrypt is OK, Scrypt has different tradeoffs, Argon2 has different tradeoffs (and even more tweakability with multiple variants), Pufferfish2 is cache-hard instead of just memory hard (so potentially better than any of the only-memory-hard systems), etc. NIST probably don't see any reason to update the standard for a marginal improvement when they can wait for the cryptographic research in the area to stabilize.

There's also a lot of research into ways to avoid having to send passwords to servers in the first place with strong asymmetric password-authenticated key exchange (sAPAKE) algorithms. So NIST might end up standardizing a password-based key derivation function separately from a PAKE system.

Personally I think standards should be updated a bit more frequently than they are, but I can understand NIST's conservative stance here.
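To make the tradeoff concrete, here's a minimal sketch using only Python's standard library (parameter values are illustrative, not recommendations): PBKDF2 buys pure CPU cost out of a FIPS-approved hash, while scrypt adds a memory cost that GPU and ASIC attackers also have to pay.

    # Illustrative parameters only.
    import hashlib
    import os

    password = b"hunter2"
    salt = os.urandom(16)

    # PBKDF2: the knob is an iteration count over a FIPS-approved primitive
    # (HMAC-SHA-256 here), which is why it stays inside the approved list.
    pbkdf2_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    # scrypt: also costs memory (roughly 128 * n * r bytes, ~16 MiB here),
    # which is what "memory-hard" buys you over PBKDF2.
    scrypt_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)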


Wow, I'm wildly wrong about Curve25519 and FIPS. I should probably not write HN comments in bed immediately after waking up. Thanks for the correction. (Ironically, we had an HN thread almost exactly 1 year ago about the same thing; I'd just refer to my comments there, I guess.)

https://news.ycombinator.com/item?id=21744469


How, then, can refusing a superior alternative be anyone's point? Are you merely calling them overly conservative to the point of incompetence (I'd be tempted to), or is there another aspect to it?


One angle to consider here is that FIPS is informed by whatever the NSA knows about cryptography, so not all the evaluation criteria are public. You and I may think djb's inventions are superior, but, for example, the NSA might have an active research project into elliptic curves and refuse to endorse Curve25519 until those results come in.


Incidentally I've noticed the voting bouncing around in this comment but you all should be hammering it, it is a supremely dumb comment (WireGuard will never be FIPS compatible, but Curve25519 is obviously not why).


No, the right answer is to push no one on anything, comply with the FIPS standard and suck up the marginally increased probability of a breach. They made the right decision throughout.


Yep. That which is globally optimal may be locally suboptimal.


I think your analogy misses the most important benefit: you must use the government's library. If people were allowed to choose other libraries, then they might choose wrong. And since the government protects companies from the consequences of making bad choices, the government can't let them make those choices.

The great failing of so many here is that they don't recognize that uniformity and centralization are more important than innovation. If those innovations really are better, then you can vote to elect representatives to nominate regulators to eventually change the requirements. This is how things work in a civilized society.


Great analogy.

It is a frustrating end result in this case. I wonder if there's any possibility of escalating this to a regulator's technical resources? I wonder if this kind of thing is typically left to the auditor's discretion.


You make government cheaper and more efficient the same way you do with code, by removing as much as possible.


Cheaper is not always better.


Likewise, neither is smaller. Code golf is just for fun.


The regulator believing "this system isn't legible to us and so adds burden to our work" is roughly equivalent to the author's conclusion: "They want us to be compliant, not secure".

The correct response to all of this would have been an accepted means for individual departments to submit requests for updates/changes to the standard to be voted on before they are forced to comply. The issue at hand is no mechanism for feedback. You can only comply or leave. This is the consistent failure of top-down governance.


> The correct response to all of this would have been an accepted means for individual departments to submit requests for updates/changes to the standard to be voted on before they are forced to comply

This would result in what's pejoratively called "design by committee", which inevitably comes with bike-shedding, tit-for-tat politics and so on.

NIST is a public institution with a long history that traces back to the origins of the United States. It's a formal part of the state. And its raison d'être is to ensure that decision making around standardization actually happens. [1]

[1] https://en.wikipedia.org/wiki/National_Institute_of_Standard...

The point of all of this harks back to a fundamental principle:

If you want to travel fast, go alone. If you want to travel far, go with many.

The organization of a state catering to many millions follows the latter part of that principle, for better or worse. Standardization doesn't guarantee the optimal solution to a problem; it guarantees a workable solution for the majority of a given problem space.

And that's exactly what public governance is about. Ensuring the continued existence of the state as a legitimate, legal entity that governs and serves public interests at large.

> The issue at hand is no mechanism for feedback.

Well, it's perfectly feasible to... just e-mail them or give them a call. The NIST ITL CSRC lab can easily be reached via their website. [2]

If you want to know the decision making process that went into a standard, you can submit a FOIA request. [3]

[2] https://csrc.nist.gov/about/contact-us [3] https://www.nist.gov/foia


That's cool and all, but this mentality creates animosity in the system's participants where it's unnecessary. You're focused on just the system's outcome and find NIST to be instrumental in delivering those outcomes. I agree.

But that fails to address the unneeded stress on the system by antagonizing these isolated groups that have made decisions for reasons and have their own history. Appealing to this top-down system ignores how participating is dehumanizing even as its purpose is to serve the people.

I'm arguing you can have your cake and eat it too, we're just choosing not to because decision making carries risks. We want to limit decisions to specific people out of fear of that risk, not out of respect for a system that has proven it's correct.


> ignores how participating is dehumanizing even as its purpose is to serve the people

I feel this is a bit strongly worded. I think this gets into the realm of the worker/employer relationship.

Regardless, as an employee, you are always subject to the authority of the owner of the business, or in public context, the interests of the public. If you want agency and freedom, then the way forward would be starting an independent, private venture.

> unneeded

I think that's in the eye of the beholder. I don't contest the merits of bcrypt over other algorithms. And I'm just as sensitive as the next person to the fact that technology often moves faster than large and complex state machinery.

But when you decide to be part of such a machinery, you very much accept that your own role comes with limits and boundaries. And that your sphere of influence only extends so far.

> decision making carries risks

The inherent risk is erosion of the legitimacy of a public institution. That legitimacy is protected through showing public accountability, and transparency in decision making.

Remember, public institutions don't exist on their own accord. The budget they receive comes with public oversight. And that budget isn't private either. OP's salary is paid for by taxpayers, and as such, OP needs to work in such a way that their decision making process can be scrutinized when asked.

Legitimacy is, when it comes down to it, a murky concept based on trust, confidence, integrity, honesty, reliability, equability and so on. But, more important, all of that is always in the eye of the beholder. These are subjective concepts. And they can easily be pulled apart.

A good analogy for how that fails is Boeing's 737 MAX debacle: how the integrity of the R&D process broke down and, ultimately, led to the deaths of hundreds. Boeing lost its most valuable asset: its credibility as a reliable aviation manufacturer.

The difference between Boeing and the U.S. public administration is that the latter literally governs and serves an entire nation of 330 million citizens.

In this specific case, OP's system will be - arguably - weaker, but that's traded off against avoiding a potential scenario where system engineers are left free to make their own decisions with little to no public oversight, which would be far harder to defend to the general public - all 330 million - in terms of accountability in case of a breakdown. The latter would simply perceive that as a violation of their constitutional rights.

The attacks caused by the SolarWinds security failure this month are a good example of that. This is an analysis from the Congressional Research Service stating exactly that. It refers to operational procedures, reiterates where accountability lies, and refers to the complexities and challenges of managing cybersecurity. [1]

[1] https://crsreports.congress.gov/product/pdf/IN/IN11559


I get it, the gov has a hard problem to solve. It's a case of both of them being right - the gov needs a common level of gradable security applied, and these folks are in an uncommon position of being able to test and prove that their choice is more secure despite not being a graded choice that can be approved. The auditors are not subject matter experts, they work for bottom dollar to ensure fiscal efficiency and only check boxes. The best choice is shouted down in the name of efficiency.

This is also how smart folks will nearly always be chased out of government work. It's always safer and less work to make the dumb safe choice rather than the smarter choice that costs less or works better. This explains most of the big problems we have in government. We're never allowed to make smart choices, only mediocre choices at best, and bad choices most of the time.

How long can we sustain this line of behavior, though? We can do it for quite a while yet, but eventually the overhead will kill the state in multiple ways: always making mediocre decisions in order to guarantee mediocre results.


Also, the core technical argument - that hashcat proves bcrypt is better than SHA2 - is weak. I could write a hashing function that looks good if you throw hashcat at it, but I wouldn't recommend you use it.

I believe bcrypt is a better choice, but if I didn't I wouldn't have been persuaded.


> The author doesn’t really try to see things from the regulator’s perspective

The regulator qua regulator is remote from the narrative, which deals only with the auditors, who don't even work for the same agency.

The agency issuing the rules is concerned with compliance in a way that actual security does not mitigate or alter. That is the conclusion the article draws, and it is entirely factually correct.

The whole story is that important decisions are made extremely remotely (in all of time, organizational structure, domain, and concern) from the project.

Now if this was an argument for a specific alternative, or even just generally against the practice without an alternative replacement, it would need to go farther and address whether the current approach deals with different, potentially even more serious problems (corruption and/or ineptitude in over-empowered public agency managers, for instance), whether it was a net positive or negative in that context, and whether, if it was a net positive, either the specific alternative proposed (if any) or any conceivable alternative would likely be a greater net positive.

But that's not what this piece seems to be trying to do.


Sure, it wasn't a particularly convincing test. But for this kind of standard to work, it has to be kept relatively up to date.

It's not like bcrypt is a new one-off algorithm that they invented for this project.


> To the state, one of the most important goals is legibility.

You're making the argument from the book Seeing Like a State. But the thrust of that book is that those practices lead to bureaucratic catastrophes like so-called "scientific forestry" -- which is exactly the author's point here in the context of security.


I can't agree. This goes to show how bureaucracy often gets in the way of better solutions. While I'm sure he can understand why compliance makes the government's job easier, it didn't make HIS job easier, so he had a valid concern. He had to bend to the government's will, however, because they paid the bills. Government is a lot of things, but it's often the lowest common denominator solution.


Very well said, and agreed 100%. It is crucial that this understanding be built to make the lives of both engineers and auditors simpler!


The product I work on is geared towards big corporate IT environments, and I can confirm that this sort of thing is not unusual at all.

A recent support ticket went along the lines of:

Customer: An audit discovered that JDK version X was installed as part of your software. It has a vulnerability and we demand a way to upgrade to JDK X+1 that has the fix.

Our support team: We're already aware of that and the latest point release of our software bundles JDK X+2, which fixes that vulnerability and 2 others. Please upgrade.

Customer: Our compliance team requires JDK X+1. Please provide a way to install this version.

We eventually solved the problem by having them upgrade to the latest major release of our software, which doesn't use Java at all, but it boggles my mind that they wanted a _less_ secure JDK.


After years of being beaten by customers with stories like these, I learnt to treat InfoSec and Compliance teams as finite state machines, particularly at banks and other financial institutions. Learn not to question the sacred spreadsheet, or debate the merits of a request. It's pointless, and if you keep rolling your eyes you'll only end up at the optometrist.

Instead, treat compliance like part of your API. Ensure your product delivers on the expected answer, while continuously improving the security of your products in the parts that are not directly visible.


However, DO get it in writing that the option was offered to them, for possible future court battles, so that the onus is on them for damages from failed security.


Maybe JDK X+1 had gone through a deep and thorough review at some point that got it put on some "OK" list somewhere? And maybe X+2 was too new to have made it through that same deep and thorough review. It makes sense from an auditor's perspective, maybe X+2 has new bugs that X+1 didn't have. They want the good version, not the newest version.


Maybe. Doubtful in practice, though.


Actually it's super-realistic in practice, especially the JDK, given the short-short-long support duration cadence for JDK releases. e.g. I am totally uninterested in someone telling me I need to use JDK 12 rather than JDK 11: the former is already out of support and the latter will be supported until at least 2026.


They could be referring to the list of FIPS-validated crypto modules.


OP's story and the article's author are kind of missing the point. These are both simple stories of a vendor failing to meet a [presumably] written requirement: The customer, or regulator, required X, and vendor decided instead to provide Y, and then were dumbfounded when that was deemed unacceptable. OP's vendor went farther, offering Z instead, and the customer again reminded them that X was required. It doesn't really matter if there are better alternatives than X. Those alternatives are not part of the requirement.

Whether Y=X-1 or Z=X+1 is irrelevant. Customer requires X, you provide X or they'll find another software vendor.


I doubt they cared as much about security as ticking the compliance checklist.


True, but a competent company management / security professional would see version x+2 and go ahead and tick the x+1 item off the list.


That’s not really the case. Version X+2 is not automatically better than Version X+1.

It may have fixed known vulnerabilities a,b, and c but introduced vulnerabilities d, e, f, and g. How would anyone know?


Correct! And for the auditor, version X+2 has not been evaluated and certified, so they cannot really "approve" it. When one realizes the auditors are simply doing their job, and the end goal is more or less the same, the going gets much smoother. :-)


This sort of "it's not been vetted and approved" business can get really silly though. Like one company I worked at a couple of decades ago that mandated Windows 95 for employees. IT staff would actually take new machines shipped with Windows 2000, wipe them, and install the corporate Win95 image.


This is something I would totally understand. Many software packages had compatibility issues when moving from 9x-based Windows to NT-based Windows (like expecting they could do things that NT didn't allow), so the last thing you want is some random person complaining that their computer is broken. Everyone gets the same system, where the issues are at least semi-known.


A common mistake I've seen in the industry is to "look down" upon the IT staff, which makes it much more difficult to get a meaningful conversation going.

Yes, wiping W2K and installing W95 is problematic and insecure, but I always believe in the power of a polite conversation and have had great success in persuading the powers-that-be to change their stance. :-)


Oh, but I was IT staff at the time. I wasn't in charge of desktop support, but I knew the guys. They didn't care for it either, it was policy from a high level in a company that, as a whole, did look down on IT. One of my coworkers actually went through the whole process of talking and negotiating to try to get a Win2K laptop instead of having W95 forced on it. No luck.

And, if anyone has been wondering, yes, there was hardware in the brand-new laptop that W95 didn't support. It "worked" with a generic fallback driver but lost some of the functionality that made the laptop worth buying.


Eh -- for some software packages, maybe. I probably trust that new versions of the JDK are generally better, and probably have fewer security issues, or at least fewer known security issues, but I definitely don't trust every new version of every software library or package. New security bugs are introduced all the time, and if a new version represents a major refactor or a change of maintainer, it also represents a major unknown.


This is true in general. But this also applies to the mandate to upgrade from X to X+1. For most (but not all) software it is fair to assume that a patch version does not represent a major refactor.


If something does go wrong, the people who approved x+2 will be the first ones handed pink slips and the bureaucrats will get a finger wagged at them. That's why it's better to comply and have it noted that x+2 was offered but ultimately poo-poo'd by management.


That was the point of the comment.


Well, how do you know that the newest version doesn't have even worse security issues?

You argue that people should just install the latest versions without thinking? That did not go well in the SolarWinds case.


One of the big companies in corporate security compliance is literally called Checkmarx. They couldn't be any more honest about it.

https://www.checkmarx.com/


I feel kind of betrayed that this domain is not hosting a communist interest group.


"but it boggles my mind that they wanted a _less_ secure JDK."

This should not be hard to understand at all.

A lot of things may have changed in those revs, beyond the 'extra patches' - those affect systems. It's likely the software was not approved in the new operating environment.

You can't just go ahead and use Java 11 on software designed for Java 8; there may be issues.

The 'patch' that goes into one rev is what they want - not more.

Most individuals are not qualified to make versioning decisions.

The larger the company, the more at stake.

It's possible it was due to bureaucratic numbness, and the agent probably should have 'checked harder' about the new versions, but wanting a specific version is reasonable depending on the circumstances.


Our Support Team: We will be happy to comply with your request once you sign this liability release form indicating that you want the less-secure X+1 update, and not the more secure X+2 update which we recommend.


I know this is presented as a sort of ridiculous "government bureaucracy" story, but anecdotally, a non-trivial chunk of the security/compliance industry is built on compliance over security. Not all, of course, but enough that I think it's a big problem.

I've literally had auditors ask me whether I'm actually interested in security or whether I just need the sign-off ("collect dust in a corner").

Not that this is unique to security. Similar things definitely happen with accounting or any industry where you pay the people who audit you (and probably even ones where you don't but the auditors still have an interest in not pissing you off, eg a government entity where officials may want to work for your company some day).


I have worked with a large healthcare firm, at two different agencies now, that outsources compliance checks to overseas teams. There is nothing but a lengthy checklist to move through, and if you do not meet the standards of the checklist, you fail.

There is no critical thought or evaluation of each step; it’s a simple pass/fail. An example would be “are your drives encrypted at rest?” It doesn’t matter if you’re in a SOC3 facility, located 25 feet below street level in 8 foot thick concrete walls and your files are distributed in pieces across millions of drives throughout the data center. Nope. The drives must be encrypted at rest. Pass. Fail.

Sigh.


You seem to have a problem with the standard that says SOC3 facilities should have encrypted hard drives, not the auditor who is actually trying to enforce that standard. You take it up with SOC3, not the auditor. :-)

You're also assuming the auditors are familiar with all sorts of technology and security mumbo-jumbo. In my experience, they typically are not - their skillset is to "audit", not make sense of latest 10000 rounds of mybestcrypto.

Today they may be auditing a SOC3 facility, tomorrow it'll be a car manufacturing plant. The only "source of truth" is the standard they carry, and any deviation has to be noted. It is as simple as that!


> You take it up with SOC3, not the auditor.

But that's the problem. There is no viable mechanism for doing this. Even if you somehow actually succeeded in convincing them to change the checklist, it would be next year's checklist, or the one ten years from now, not the one you have to establish "compliance" with right now in order to get the bureaucratic sign off, and thereby implement the spurious requirement.


Two things that I am aware of: (a) All standards provide a "comment period". Along with contact details for those standard bodies, there is a viable mechanism to some extent. (b) Standards are broad, which makes the language a bit abstract, and this can be a problem. However, they also include "compensating controls", which are typically used as a wedge to avoid a compliance failure.

I have had great success with several auditors with a polite conversation trying to help them map their goals to our controls. Yes, you will always meet an auditor or two who won't accept anything but the written word of the standard. Like any other industry, there are smart and dumb auditors. :-)


> You seem to have a problem with the standard that says SOC3 facilities should have encrypted hard drives, not the auditor who is actually trying to enforce that standard.

Isn't that exactly what the article says? That auditors are looking for compliance, not security?


I do take issue with audits that do not require critical thought; you are correct. The audit itself could be (and in this case is) self contradictory, which can be dangerous.


"The files on my drive are encrypted using the UTF-8 algorithm".


"UTF-8 is not in the list of approved algorithms. You failed to meet your compliance requirement."


"We have changed the algorithm to the approved ISO-8859-1 algorithm"


> SOC3 facility, located 25 feet below street level in 8 foot thick concrete walls

One of the objectives for encryption at rest is shredding. The drives may not remain in the secure facility after the end of life.

> your files are distributed in pieces across millions of drives throughout the data center

Distributing data could actually secure it if individual pieces are meaningless.


Shredding happens on site but your point would stand otherwise.


I've had auditors actually surprised that I wanted to fix the issues they’d found; they were under the impression we were going to shelve the report.


Many people need to fail, including the author, for such a situation to arise.

The author failed to understand the mission of the business and how non-compliant technology posed a risk to it.

The auditors failed to understand the developer, and educate them on the compromises needed to remain compliant. They also failed to engage in a discussion around alternatives.

Everyone failed to actually care enough to solve this problem, but the author still found time for a snarky misinformed article.

The vast majority of grating security/compliance tales come from a similar place of ignorance, apathy and snark. The people are the problem, and the misperception that they don't need to own these requirements for the business and are just in a hurry to check a box (on all sides).


Agreed. I generally encounter these kinds of complaints from safety-compliance perspectives. At the core is a lack of alignment and collective ownership of the necessary final outcome.

My overriding impression has always been that at the high level, both safety inspectors and the people in the labs want the same thing: a safe working environment and the ability to get science done. There is always some tension there (scientists, especially young scientists, are sometimes willing to accept more risk to get science done, as it is their time, and hence lives, being gradually consumed by additional measures to improve safety), but there is general agreement that safe working conditions are a huge net benefit.

It is difficult for the workers in the labs to place themselves in the shoes of the safety inspectors -- what seems like paperwork is actually a surface check to see if you are organized. If you can't explain your safety procedures to someone versed in scientific safety, how can you possibly explain them to an untrained worker/student?

The one thing that can frustrate the entire process, which I suspect is at the root of most university troubles, is a lack of goal-alignment between departments within the organization. If safety inspectors don't feel ownership of getting quality science out the door and only want to reach internal safety targets and scientists are only interested in holding their small lab's accident rate to zero, rather than that of the much-larger university as a whole, nobody's goals will ever get met.

When things are going right, inspections are a chance to show off how awesome your systems are and an opportunity to improve.


> My overriding impression has always been that at the high level, both safety inspectors and the people in the labs want the same thing: a safe working environment and the ability to get science done.

The interests of workers, safety auditors and institutions only partly intersect - like a venn diagram.

All parties can (hopefully) agree they don't want anyone killed or seriously injured.

But only some parties are interested in shielding the institution from liability.

Only some parties are interested in bright-line rules that choose rule simplicity over rule accuracy.

Only some parties are interested in stopping the institution from skimping on safety equipment to save money.

Only some parties are interested in seeing workers respected as masters of their crafts, able to use their own judgement.

Only some parties benefit from producing the portion of compliance documentation that nobody will ever refer to.

And only some parties are interested in work getting done in a timely manner.


The people are the problem, and the misperception that they don't need to own these requirements for the business and are just in a hurry to check a box (on all sides).

Agreed. I think the framing of compliance in most companies helps reinforce this mindset. It comes from a place of compliance as "avoiding downside in the form of fines or gov't censure", not compliance as "a set of standards we abide by, that provides upsides to all of our customers by virtue of our following them".


I can confirm this. We had to install virus scanners on our self-driving car Linux boxes, which are disconnected from any networks... The fun part is that the AV scanner sometimes takes so much CPU time that the pedestrian detection algorithm fails and the car has an increased chance of hitting them.


This is insane, and should be considered criminally negligent.

Linux is not a high assurance RTOS. It does not have adequate reliability nor can it provide any guarantee of hard realtime, and thus it should not be found anywhere near a "pedestrian detection algorithm" that's supposed to protect a car from hitting pedestrians.

There's proper operating systems[0] for this sort of scenario.

[0]: https://sel4.systems/About/seL4-whitepaper.pdf


You're thinking about this in terms of deadlines rather than trade offs.

If you build a million cars, they're going to hit a certain number of pedestrians, statistically. Literally zero is the ideal but not necessarily achievable.

If you spend more computation on pedestrian detection, it will do better. If you have to spend computation on useless antivirus, it can't be spent on pedestrian detection, or some other thing that improves safety. And pedestrian detection is itself a trade off -- one algorithm might be more accurate but slower, and so give the vehicle less time to respond after detection. Using a RTOS doesn't save you -- if the CPU isn't fast enough to run both the algorithm and the antivirus then it could have to starve the antivirus of resources indefinitely, which might not be compliant. So then the presence of the antivirus requires you to use an algorithm which is faster but less accurate.

You could also use a faster processor, but that's still a trade off. It could increase the cost of the vehicle and cause some people to continue to use vehicles that are less expensive and less safe, leading to an overall cost in lives.

Any time you're making a trade off where one of the variables is human lives, any inefficiency that requires you to make the trade off in a worse way is potentially costing lives. And installing antivirus where it doesn't belong is an inefficiency.


I assume they're using something like https://en.wikipedia.org/wiki/RTLinux to make it a hard RT system


Clearly not, since the AV can take too much CPU time and prevent other tasks from running.


RTLinux is dead, and it never had a high assurance story to begin with.


Gotta call BS on OP here. Anything time sensitive like self-driving cars absolutely has to be built on a real-time operating system. If you’re in the U.S. there are Dept of Transportation requirements to even be allowed to test drive the thing on any road surface other than your own driveway.


OP might mean "When I was a student working on a self driving car student project, which we mostly tested in simulation, occasionally on private land with a lot of extra safety precautions, and never on public roads"


1) It's one of the biggest car manufacturers. 2) It's in the US. 3) It's on public roads (with a special testing permit and safety drivers).


That seems like a serious engineering ethics problem that needs to be escalated to the highest level possible and if that doesn’t work, then leaked to the media.


That's the moment where you have to sue, sabotage the auditors, enable politicians or go to the press. There has to be a line in the sand, and that's when clueless bureaucracy like that endangers life. When (not if) that car kills someone, it's not only on the auditor, it's also on you ("you" as in "the people complying").


Impressive how professional negligence is the "fun part" for you. If it ever goes wrong, I hope someone goes to jail for that. (yes, stupid requirements suck. but if they actually impact things that matter, "fun" is not the appropriate response)


If that story is true, then I hope that at least some of those involved in this will have substantial losses and the company will go out of business.

Before you kill or maim someone innocent.

If you must comply with this scanner, then buy 16 GB of RAM and an SSD (or more hardware, depending on the bottleneck) rather than plan to kill people.

Also, who built a self-driving car that doesn't operate as a real-time system?


Surely this is a troll.....


I highly suspect it is. Every audit I know of allows for mitigating controls. Having a system properly air-gapped would allow a system to be run without antivirus. I doubt many auditors would require antivirus on network switches.


If I may ask, says who?

I've done a little work with safety-critical systems, and that's certainly a new requirement to me both in theory and practice.


I couldn't have even imagined somebody coming up with that.

I don't think a company like that can really function.

My advice, leave before it implodes.


This sounds strange because safety systems are usually hard realtime. I can't imagine those folks tolerating something that can randomly decide to eat time slices as it pleases.


Not sure about this one. Viruses don't need a network or internet connection to spread. They used to spread just fine via floppy disks.

EDIT: Obviously the fact that the CPU can't handle virus scanning + pedestrian detection at the same time is shockingly bad. ... But a self driving car with a virus that could cause it to do potentially anything is even worse.

Why not just run the scans when the car is not in motion, or when charging?


In another timeline:

Auditor: This internet-connected device has no floppy drive. No antivirus needed.


Is it that unreasonable for them to want people to use FIPS approved algorithms? I mean all you're showing them is how fast the current implementations of the algorithms are, and obviously being too fast isn't great for password hashing, but your benchmark doesn't in itself actually prove the security of bcrypt as a hash algorithm so I don't know why you would expect that to convince them.

Maybe the real problem is that the FIPS standards are too conservative?


> Maybe the real problem is that the FIPS standards are too conservative?

Quoting a professional cryptographer: FIPS compliance means that your code/system is sufficiently mediocre.

Conservative would be one thing. Stubbornly out of date would be another.


> Conservative would be one thing. Stubbornly out of date would be another.

Indeed. Also often impractical.

A way to solve this is to support both a regular mode of operation and a FIPS mode of operation. I've worked on multiple enterprise products at different companies that take this approach. The full-on FIPS compliant mode is there to check all the boxes for customers who need that for their auditors. Even within the set of those customers, a majority don't actually run the product in FIPS mode because it's too limited.


SHA-2 isn’t appropriate as a password hashing algorithm, so something is wrong with the compliance team, and with the engineers for not correcting them that they need a KDF. Apparently NIST even calls out regular hash functions as being unsuitable for hashing passwords [1]. You’d need to use PBKDF2, although maybe this advice has been updated since? Maybe they were being recommended PBKDF2 because that too is more susceptible to GPU attacks? Really they should be using scrypt rather than bcrypt anyway.

All that being said, using SHA2 isn’t necessarily “wrong” if you can force your users to use longer passwords. My understanding (could be wrong - I’m not a security algorithms expert) is that all these “hard” hashing algorithms are trying to provide guarantees even if your user has poor security hygiene. If you can force your user to use a randomly generated 20-character password, then you can significantly reduce your server load and reduce latency by using a faster hashing algorithm.
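A rough sketch of that argument (the token length and hash choice below are my own illustrative assumptions, not advice):

    import hashlib
    import secrets

    # ~120 bits of entropy straight from the CSPRNG (about 20 URL-safe chars):
    # brute force is hopeless even against a raw, fast hash.
    credential = secrets.token_urlsafe(15)

    salt = secrets.token_bytes(16)
    stored = hashlib.sha512(salt + credential.encode()).hexdigest()

    # Human-chosen passwords break that entropy assumption, which is exactly
    # the gap the slow, tunable KDFs discussed in this thread exist to cover.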

[1] https://stackoverflow.com/questions/11624372/best-practice-f...


The real problem is that security decisions are being made by bureaucrats. Full stop.

Bureaucracy invariably requires compliance, and only compliance. Some problems can be cast in terms of compliance and thus solved in a bureaucracy, but security is not and never will be one of them.


Are you saying the auditors are the bureaucracy or the FIPS standards approvers are the bureaucracy? All the auditors are saying is that you have to use a FIPS approved algorithm. The FIPS people have real cryptography experts who research this stuff on a daily basis that also help set the security standards for the entire U.S. government. The author is coming along here saying they are using something else that’s not approved by FIPS (for good reason I might add) and they’re mad the auditors won’t let them simply because “everyone else is using bcrypt”.


> Are you saying the auditors are the bureaucracy or the FIPS standards approvers are the bureaucracy?

Both.

> All the auditors are saying is that you have to use a FIPS approved algorithm.

... which is a problem, because FIPS approved doesn’t imply secure.

Bcrypt is a better password hashing algorithm than any FIPS-approved algorithm at this time. Don’t agree with me? Read up on it. I’m not necessarily recommending it though - there are others that may be even better, depending on the circumstance.

> they are using something else that’s not approved by FIPS (for good reason I might add)

What reason? Is there an expert cryptographer’s opinion or any expert cracking reports you can reference? Just because a group of experts decided a certain list of algos were “approved” several years ago doesn’t mean that newer algos designed or popularized since then are less secure.

Authority alone doesn’t confer reality or truth.

And unlike compliance, security is a process, not a destination.


Bureaucrats track compliance. People who know security and bureaucracy (ideally) design the standards. Engineers are still responsible for building secure systems that happen to also check all the compliance boxes.

Compliance is mostly intended to prevent really hideous errors from falling through the cracks. Nobody believes that you can't build a compliant, insecure system, or that you can't build a secure, non-compliant system.


Agreed, this is the unfortunate truth at present.

Though by preventing hideous errors from falling through the cracks, compliance requirements also often prevent the best decisions from being made.


I would say it is somewhat of a misuse of the standard. That a cryptographic hash makes an ok password function is somewhat accidental.

If they had required PBKDF2 that would have made more sense. That at least confirms you used the pseudorandom function correctly.


The author’s assertions are misleading at best, outright false at worst. Their testing is inherently flawed, and they’re misinterpreting the output from hashcat. Although their choice of bcrypt is a good one, they clearly don’t understand how to actually evaluate different algorithms, and I commend the auditor for not allowing them to do so.

The author’s process doesn’t prove that bcrypt is more secure than whatever SHA2-based alternative was being proposed (from the example, seemingly sha512crypt). It simply proves that the number of rounds they chose for sha512crypt didn’t match the timing factor they chose for bcrypt. That’s just dumb.

I could just as easily provide a counter-example by stacking the odds in my favor. The time it takes to brute force a bcrypt or sha512crypt hash is configurable when generating the hash; I could just as easily choose options that appear to support sha512crypt being more secure.
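For instance, with the third-party passlib package you can skew the knobs the other way; the values below are chosen deliberately to make that point, not as recommendations:

    from passlib.hash import bcrypt, sha512_crypt

    password = "correct horse battery staple"

    # bcrypt at cost 4, the minimum the format allows: quick to attack.
    weak_bcrypt = bcrypt.using(rounds=4).hash(password)

    # sha512crypt at 500,000 rounds, far above the 5,000-round default: slow.
    strong_sha512crypt = sha512_crypt.using(rounds=500_000).hash(password)

    # Benchmark these with hashcat and sha512crypt "wins" -- which says nothing
    # about the algorithms, only about the parameters someone happened to pick.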

What matters here is that the company wanted to use an algorithm that wasn’t requested by their customer. Their customer had a detailed document explaining which algorithms they would prefer. Although bcrypt is generally considered top-notch security, vulnerabilities have certainly been found in various implementations over time.[0] This company’s customer—the US government—wanted something they had personally vetted and approved, which is understandable. Even if you could prove that one algorithm is slower than another, that doesn’t necessarily mean it’s more secure; it’s just more resistant to brute force attacks.

Furthermore, the author says “SHA2-based” without elaborating, causing several HN commenters to assume raw SHA2 was used here. However, the author’s hashcat example shows sha512crypt. That means it’s not raw SHA2; it’s been adapted to be made proper for password hashing, including salting and multiple rounds. It’s the same as calling bcrypt “Blowfish-based:” yes, it’s true, but it’s somewhat misleading if you completely omit any mention of bcrypt. Raw Blowfish should never be used for password storage; it isn’t designed for that, much like SHA2.

[0]: https://en.wikipedia.org/wiki/Bcrypt#Versioning_history


While I don't approve, I suspect the auditors were in a difficult position: allow bcrypt, and if it blows up down the road, suffer consequences; or insist on that which will never get them in trouble, even if it is demonstrably inferior.


"Nobody ever got fired for buying IBM."


In 1998, I literally got fired for buying IBM (actual IBM) over Apple. I don't think that me telling people this in the last 20 years has done much to counter this particular meme.


That sounds like a good blog post!


That really doesn't make any sense unless you were told to buy Apple but said "hey these IBM laptops are cheaper and faster! I'll get them and they'll love how much I saved them."


Sympathies! Now I'm curious, how did it happen? What equipment was it, that your employer preferred Apple over IBM, let alone had such a vehement preference as to actually fire you over it?


I would love to hear this story too. Sounds very interesting!


I’m facing this challenge right now with a regulator.

We have a financial system that uses managed cloud SQL hosted in one of the regions. The database is also in high availability mode and has daily backups scattered across various regions.

All this is being managed by the cloud provider we are using.

One of the regulators has asked us to maintain a “local backup” of the database. When I asked what the reason is, they just said “we require it”.

Since creating another copy of the database outside the managed environment increases risk, I told them “based on my assessment we are deliberately increasing the risk of a vulnerability; it’s my job to report that and yours to sign that you have acknowledged the increase in risk based on the assessment”.... No response from the regulator that this is now required.

I can relate to what these guys are going through. Mostly I find regulators just read an old book, and since they are usually regulators, not engineers, they use the rule book to make engineering decisions. While regulators have to follow a book and it’s their job, it’s also important for them to understand that every system is different, that they are not experts, and that the book they follow is not always up to date.

Sometimes the regulator’s job is also to say “let me check on that” and find another expert, or better yet consult with the person who wrote the book, to “update” and improve on the “standard” to ensure that the highest standard is being propagated to all the parties they manage.


Without a local backup you cannot be sure of the integrity of the data. In that sense the regulator is right. I would even question why the data is stored in the cloud at all.


Lol, you obviously didn’t read my comment. We have backups... scattered throughout multiple regions. They just want it on “local soil”. As in, it should not be in another country.

> why is it in the cloud?

Because we’re allowed to use cloud.


Nevertheless, you cannot completely trust a public cloud provider when it comes to data integrity.


Then what can you trust? Your own hardware? What if it goes bad? Make multiple copies? How many? And what are the implications of managing multiple local backups scattered everywhere on your own hardware? What are the costs involved? How many people should handle that? Who should hold the encryption key? How many keys? What happens if a key is exposed and we have to regenerate?

At the end of the day every system has risks. It’s about what is acceptable. It’s the engineer’s job to reduce risk. In my case I didn’t say what is right or wrong; I just said that since the increased risk comes from the regulator’s decision, they should acknowledge it and sign a piece of paper.

When you make decisions there are repercussions you have to accept. That’s all.


Okay step by step:

You can trust your own hardware more than someone else’s hardware. Everything can go bad, but having physical access increases the probability of recovery (since you care more). Usually one copy where you have easy physical access, and where you are sure that no one other than certain people can access the room (with all the required indicators and alarms in case of high temperature, etc.), plus another one within a governmentally approved area (usually an area with the lowest probability of natural disaster), should be enough. Costs? Data security cannot be subject to tradeoffs. Usually two people hold two physical keys to the backup machine; without one, the machine should not allow any access (in case one of them is threatened into giving up the data). You ought to keep the keys in a secure place.

It’s first and foremost the information security engineer’s responsibility to take care of information security.


> Costs? Data security cannot be subject to tradeoffs.

That's obviously ridiculous.

Should I hire an army to protect my server room?

Should I hire the entire NSA to do a 2 decade long audit on my code, hardware, employees, etc, before I launch the service?


I don’t understand what you are talking about. Honestly.


Or you can just pay for all that stuff to be managed. That's the whole point of the cloud. Everything you just listed is done by our cloud provider.


I've had luck showing that the cloud provider fulfills the 'local backup' requirement as part of their own compliance (provided they meet the same framework that you're trying to meet), and having us inherit the control from the cloud provider.


>“They want us to be compliant, not secure.”

No, they just want to do their job and go home to their family.

Arguing about every variance and the merits of crypto with the counterparty is not their job, especially when there is a central standard (FIPS) specifically set up to centralize this and avoid the exact thing the author's team did: every department freestyling it.


I think it's more apt to say they lack the expertise to second guess the established best practices. There's also the question of indemnity. In case of a breach, they need to prove they followed established best practice in writing.


These drones were compliant

https://www.cryptogon.com/?p=12669

I can't find the link, but I recall that the video feeds were unencrypted because there was no approved encryption system for the use, so the contractor was not allowed to just use commercially available solutions and left the links unencrypted.


Sounds like “approved” might be synonymous with “NSA hackable.”


People without a cryptographic background may not be aware that this is even worse than how it sounds.

SHA-256 is a cryptographic hash function. It's a good cryptographic hash function. But it's not a password hash function. They have different goals. Particularly a cryptographic hash is trying to be fast, while a password hash is trying to be slow.

You can use a cryptographic hash function as a password hash, it's better than nothing. But it's a really poor choice. If this story happened as it's told, it not only means that compliance is making wrong decisions, it also means the auditor didn't know what he was talking about.

There are FIPS-approved password hashes, even ones building upon SHA256 (PBKDF2-SHA2-256). It would've been less bad to propose to move from bcrypt to such a function. (Still not really justified, as bcrypt is a fine password hash function.)
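
For anyone who hasn't used one, here is a minimal sketch of a PBKDF2-HMAC-SHA-256 password hash using only Python's standard library. The 600,000 iteration count, 16-byte salt, and function names are my own illustrative choices, not anything from the article:

  import hashlib, hmac, os

  def hash_password(password: str, iterations: int = 600_000):
      # Random per-password salt; store it alongside the derived key.
      salt = os.urandom(16)
      dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
      return salt, dk

  def verify_password(password: str, salt: bytes, expected: bytes,
                      iterations: int = 600_000) -> bool:
      dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
      # Constant-time comparison to avoid timing side channels.
      return hmac.compare_digest(dk, expected)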


Imagine you have no idea of cryptography and someone says to you: "We use foocrypt. It's fast, it's safe." How do you know?


As an engineer I think all you can do is respectfully and publicly (within org) raise your concerns. If shit hits the fan 5 years later you have a life jacket and they are the ones looking like idiots.


We had to delay rolling out one of the more recent Mac OS releases (Catalina, maybe) because it used the more secure “encrypt the data and throw away the key” instead of “write 0 to the sector 5 times” secure delete thing. Microsoft did research on how bad password rotation is, but it’s still the rule. Regs that can’t keep up with the pace of the thing they’re regulating are as bad as none at all, I think.


NIST finally jumped on the “password complexity rules are stupid and password rotations are for suckers” bandwagon about 3 years ago.

Problem is PCI, HIPAA and others haven’t changed their rules to accommodate. However, auditors aren’t stupid. I’ve told mine, “here, look at the new NIST standard” and they say “okay, yea MFA is better”. As long as you can point to their technology standards agency (NIST, FIPS, DISA, etc) you should be good to go.


This has been my experience too. Typically, standard bodies move at a different pace, and as long as you use a reputable known standard, you're good to go.


I work in the Public Safety space where we are subject to a long list of CJIS guidelines. Some of the regulations, FIPS in particular, seem quite outdated and actually require systems to be less secure than they could be.


I sell B2B security software, which is mainly used for compliance in a niche market.

The software, if used correctly and integrated into processes, really can add a meaningful extra layer of security. However, lots of our customers just want to tick a compliance box that they can point assessors to. Kind of sad :(


When you’re protecting someone else’s data you have no incentive to keep it secure beyond the minimum to cover your ass from regulators and lawsuits.

It's not so much sad as misaligned incentives. Data breaches are common, companies aren't really punished for them, and the harm to the user is vague and doesn't typically materialize in any visible way, so there's little outrage outside our bubble.


There are always two sides to the coin. The auditors are just doing their job, telling you that the software you use is not compliant. What could have been done here is to demonstrate the safety of the algorithm to the organization behind FIPS, so that they can approve it.


* Someone had access to a delivery system that could be used to attack thousands of people's machines by rolling out a rogue release.

* This someone was forced to install a specific antivirus due to compliance with an industry certification.

This antivirus called home, and was from an obscure vendor. This someone wasn't able to change any of this, and also didn't have power to choose another vendor.

If you followed the rules by the book, you couldn't even introduce a third machine to authenticate things without this silly protection. So, what did they do?

Installed the antivirus on a virtual machine with no access to the host computer, and programmed it to be fired up for a couple of minutes a day, just to keep the stupid antivirus happy by calling home.


Compliance is security, or said differently, non-compliance is a security vulnerability.

In an isolated instance, it may seem obvious that bcrypt is superior to SHA-2 (it sure is to me). But managing security posture at scale, across a massive enterprise with disparate teams is tough. Rogue groups make bad security decisions all the time. Committees are formed, and policies are put into place to guard against this. Auditors are charged to enforce those policies. Changing policy itself presents risk (both organizational and personal/job-related risk). Therefore committees are conservative with adapting to evolving environments.

This is the Occam's Razor explanation for why Things Just Don't Make Sense around many corporate IT security policies.


I tried looking up if there are any previous examples of FIPS 140 that allow bcrypt. I can find 37 matches when searching for "bcrypt" here: https://web.archive.org/web/20141226152243/http://csrc.nist.... . Seems that BitLocker in Windows used BCrypt at one point. Author doesn't mention how old this is, but BCrypt seems to have been approved for FIPS 140-2 compliance since 2008. If the event described was after 2008, I would not hire those same auditors again...


There is a difference between FIPS validated and FIPS compliant. I may be mistaken, but from looking at the CMVP lists, it does not appear that bcrypt itself is a FIPS-validated algorithm.

Edit: List of approved security functions for FIPS 140-2 can be found here: https://csrc.nist.gov/csrc/media/publications/fips/140/2/fin...


Cryptographic question: Why don't you just SHA-2-hash the Bcrypt hashes?

  hash = sha2(bcrypt(plaintext))


You could hash the password with sha2 first and then with bcrypt, but not the other way around.

Hashing with sha2 will give you the same result every time. That's why it's appropriate to use as a checksumming algorithm: the results are repeatable. Bcrypt, on the other hand, does not generate the same result every time, by design, because it uses a randomly generated salt.

However, if you did hash with sha2 the first time, you may end up regretting that decision if sha2 collisions end up being possible and/or practical.
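
To make the “sha2 first, then bcrypt” ordering concrete, here is a rough sketch using the PyPI bcrypt package (an illustrative pattern, not the author's code). Pre-hashing with SHA-256 also sidesteps bcrypt's 72-byte input limit, as long as you hex-encode the digest so it cannot contain NUL bytes:

  import hashlib
  import bcrypt  # pip install bcrypt

  def hash_password(password: str) -> bytes:
      # Pre-hash with SHA-256, hex-encode to avoid NUL bytes, then bcrypt.
      prehash = hashlib.sha256(password.encode("utf-8")).hexdigest().encode("ascii")
      # The random salt and cost factor are embedded in the returned value.
      return bcrypt.hashpw(prehash, bcrypt.gensalt(rounds=12))

  def verify_password(password: str, stored: bytes) -> bool:
      prehash = hashlib.sha256(password.encode("utf-8")).hexdigest().encode("ascii")
      return bcrypt.checkpw(prehash, stored)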


bcrypt(plaintext) != bcrypt(plaintext)

The output of the function includes a random salt.

The verify function looks like

verify(plaintext, hash)

This is why the full bcrypt output (which embeds the salt) needs to be stored.
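
A quick way to see both points, again assuming the PyPI bcrypt package (the password here is just a placeholder):

  import bcrypt

  pw = b"correct horse battery staple"
  h1 = bcrypt.hashpw(pw, bcrypt.gensalt())
  h2 = bcrypt.hashpw(pw, bcrypt.gensalt())

  print(h1 == h2)                # False: each call embeds a fresh random salt
  print(bcrypt.checkpw(pw, h1))  # True: the salt is read back out of the stored hash
  print(bcrypt.checkpw(pw, h2))  # True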


That would not be possible, but one could use bcrypt to generate a salt to use with SHA2 as the last step.


Compliance is important when you are selling stuff to customers, as they often want compliance too. So a less secure but compliant solution is sometimes needed from a business perspective... which I think is likely better than just telling customers “trust us”.

The fault really lies with standards moving too slowly, though this might not be entirely negative, since researchers also need sufficient time to reach conclusions.


> “They want us to be compliant, not secure.”

Suppose the auditor looked at the output from the author's single test and said, "Oh, ok, you're good to go."

In this version of history, perhaps the author's organization saw a moderate benefit from the more resilient crypto. However, the set of all organizations audited would be significantly less secure, because the audit becomes nearly meaningless.


In a well-functioning system, the auditors ensure the agreed security policy is implemented; they shouldn't have any power over what the policy says.

As such, the author cannot (should not be able to!) change the policy by appealing to the auditors.

Explaining why you are non-compliant to the auditors is a wasted effort: you are talking to the wrong people.


Security people come from auditing and government backgrounds. They aren't about providing solutions or even keeping up with cracked algorithms, exploits, and hacking techniques.

They are almost solely about establishing a policy and enforcing it. In large IT organizations, security managers keep their jobs by doing things that upper management understands: policies and enforcement.

In reality, what will help is providing reference implementations of things like login flows, secured operating systems, secured web browsers, etc. In the age of AMIs and Docker images, providing key building blocks for secured systems would go a lot further than a lot of the insanity I've seen over the years.


Well, this article at least explains why Microsoft is touting their "highly secure" salted SHA password hashing for Azure and not using bcrypt, PBKDF2, or Argon2. They must be aiming for FIPS compliance.


I'm curious: If the regulators required SHA-2 hashes, and the author desired blowfish, why couldn't they implement blowfish+SHA-2? I.e. sha2(cost(blowfish(password, salt), cost_factor))


So roll your eyes and use PBKDF2 with SHA2.

Regulators have a job, and auditors are not intended to evaluate everyone's hand-rolled crypto (obviously a bit of an exaggeration, calling bcrypt hand-rolled...).


It's always difficult to assess these kinds of decisions without all the information, though. There might have been very good business reasons for this.

Maybe to qualify for the insurance cover for intrusion, there was a specific list of algorithms that the insurance company said had to be used.

Maybe it costs the insurance company so much to update its actuarial model for the policy that it's only worth doing every 10 years even if that means it's less secure.

I'm not saying that this is true for this case. But it could be.


Yes, software often has requirements from the people paying the bills, and when those people are from large bureaucracy, requirements are often shaped by bureaucratic rules that move much slower than understanding of the subject area.

It's even more fun when there is a chain of those involved, because then you can get bureaucratic rules that move slower than the external bureaucratic rules which they are serving, which in turn move slower than knowledge of the subject domain.


Box tickers are the bane of a free and competent society.


I empathize with the author, but definitely they are missing the Big Picture. Just because everyone’s using SolarWinds, does that make it safe to use? No. Now replace “SolarWinds” in that previous sentence with “bcrypt”. Author’s sole argument for security is that ‘everyone’s doing it’.

Auditors also can’t just allow any Tom, Dick, or Harry to write their own cryptography hashing algorithm and call it secure.


OP is biased, in the sense that there is no such thing as “X is better than Y, because a test proves that”. And it's not the regulator's responsibility to decide whether X is better than Y; there are rules, lists of approved algorithms, and whatnot. It's that simple. If one can violate the rules, virtually everyone can violate the rules. Whether X is better than Y can of course be debated.


This seems to blow past the issue where the rules are creating the precise problem (increased vulnerability) that they were expected to minimize.


Then the right action would be to fight for including bcrypt in the approved algorithms list.


> right action

If a fight is required, 'energy-draining', 'needless' or 'counterproductive' would be more appropriate adjectives than 'right'.


Here's a fun read for anyone interested in government compliance requirements. As a side note, many state government agencies also use this because they have your IRS tax returns.

https://www.irs.gov/pub/irs-pdf/p1075.pdf


I think some of this comes from good intentions. For example, to prevent someone from claiming it's secure when it really isn't: https://www.troyhunt.com/lessons-in-website-security-anti/


Isn't the issue here that the FIPS equivalent of bcrypt is PBKDF2, which uses SHA-512 the way bcrypt uses Blowfish? (FIPS does not directly certify PBKDF2, only the underlying SHA-512, but that is good enough for NIST.)


Couldn't you have satisfied the auditor's need for compliance as well as your own sense of security by simply composing SHA2 with bcrypt?

Hash(password) = SHA2(bcrypt(SHA2(password)))

A SHA-2-based hash with performance similar to bcrypt...


It makes me suspect that the government standard was deliberately insecure to make a more "thorough audit" easier.


Work for the govt, can confirm.


a) Consistency and standards are important

b) I'm not sure if such a test really proved anything. Crypto is hard, we don't like to do experiments, it may take time for larger orgs to accept new crypto.

c) The standard algs were almost assuredly 'good enough' for the job. By introducing new tech, you introduce risk, fragmentation etc..

In the bigger picture, the regulators were probably correct. They were keeping the system pragmatically 'more secure' even if not so much on a theoretical basis.

There are examples of bad bureaucracy, this isn't one of them I think.



