Hacker News

> Keep in mind that you're arguing against an existence disproof. The Microsoft stack, for example, is a pretty big target for attack, and has seen its share of security issues over the years.

> But developers don't need to make any code changes or redeploy anything to mitigate those security issues.

I don't believe it. Most security issues are not just an implementation bug in the framework but an API that is fundamentally insecure and cannot be used safely. Most likely those developers' programs are rife with security issues that will never be fixed.




Which is fine, as long as they are understood and mitigated against. If your security policy consists entirely of "keep software up to date", you don't have a security policy.


In practice trying to "understand and mitigate against" vulnerabilities inherent in older APIs is likely to be more costly and less effective than keeping software up to date.


If there is a problem in an older API, it's probably time to update. That's understanding and mitigation.

The discussion is about the difference between updates made for a valid reason and updates imposed by cloud providers; nobody is advocating sticking with old software versions.


> nobody advocates sticking with old software versions.

In my experience that's what any policy that doesn't include staying up to date actually boils down to in practice. Auditing old versions is never going to be a priority for anyone, and any reason not to upgrade today is an even better reason not to upgrade tomorrow, so "understanding and mitigation" tends to actually become "leave it alone and hope it doesn't break".


In practice you don't mitigate against specific vulnerabilities at all, you mitigate against the very concept of a vulnerability. It would be foolish to assume that any given piece of software is free from vulnerabilities just because it is up to date, so you ask yourself "what if this is compromised?" and work from the premise that it can and will be.


Sounds clever, but what does it actually translate to in practice? And does it work?


Let's say I have a firewall. If we assume someone can compromise the firewall, what does that mean for us? Can we detect that kind of activity? What additional barriers can we put between someone with that access and other things we care about? What kind of information can they gather from that foothold? Can we make that information less useful? etc.

You think about these things in layers. If X, then Y; if Y, then Z; and if X, Y, and Z all fail, do we just accept that some problems are more expensive to prevent than they're worth, or do we get some kind of insurance?
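The layered exercise described above can be sketched in code. This is a toy illustration only, not anyone's actual tooling; the layer names, fields, and the `walk_layers` helper are all hypothetical, invented here to show the "assume each layer is compromised, then ask what remains" reasoning.

```python
# Toy sketch of the layered "assume compromise" exercise.
# Every layer name and field here is hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class Layer:
    name: str
    detects_breach: bool            # do we have monitoring at this layer?
    exposes: list = field(default_factory=list)  # what an attacker gains here


def walk_layers(layers):
    """Assume each layer in turn is compromised; record what that implies."""
    findings = []
    for i, layer in enumerate(layers):
        findings.append({
            "compromised": layer.name,
            "attacker_gains": layer.exposes,
            "detectable": layer.detects_breach,
            "remaining_barriers": [l.name for l in layers[i + 1:]],
        })
    return findings


layers = [
    Layer("firewall", detects_breach=True, exposes=["internal network map"]),
    Layer("app server", detects_breach=False, exposes=["session tokens"]),
    Layer("database", detects_breach=True, exposes=["customer records"]),
]

for f in walk_layers(layers):
    print(f["compromised"], "->", "barriers left:", f["remaining_barriers"])
```

Running the walk makes the gaps explicit: a compromised app server here is undetectable and leaves only one barrier, which is exactly the kind of finding the "what if this is compromised?" question is meant to surface.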


I've found that kind of approach to be low security in practice, because it means you don't have a clear "security boundary". So the firewall is porous but that's considered ok because our applications are probably secure, and the applications have security holes but that's considered ok because the firewall is probably secure, and actually it turns out nothing is secure and everyone thought it was someone else's responsibility.


I think you're projecting. The whole point is reminding yourself that your firewall probably isn't as secure as you think it is, just like everything else in your network. This practice doesn't mean ignoring the simple things, it just means thinking about security holistically, and more importantly: in the context of actually getting crap done. Regardless, anyone who thinks keeping their stuff up to date is some kind of panacea is a fool.


Personal attacks are for those who know they've lost the argument.

Keeping stuff up to date is exactly the kind of "simple thing" that no amount of sophistry will replace; in practice it has a better cost/benefit ratio than any amount of "thinking holistically". Those who only keep their things up to date and do nothing else may be foolish, but those who don't keep their things up to date are even more foolish.



