I guess we have different philosophies then. My take is that software in production should not require maintenance to remain in production.
Imagine a world where you didn't need to spend a whole week every year, per project, just keeping your existing software alive. Imagine not having to put off development of the stuff you want to build to accommodate technical debt introduced by 3rd parties.
That's the reality in Windows-land, at least. And I seem to remember it being like that in the past on the Unix side too.
Your vision is only workable for software for which there are no security concerns. This might improve to the extent that the industry slowly moves away from utterly irresponsible technologies like memory-unsafe languages, brain-damaged parsing and templating approaches, and more or less the whole web stack. I wouldn't hold my breath, though. And even software that isn't cavalierly insecure will have security flaws, albeit at a lower rate.
Keep in mind that you're arguing against an existence disproof. The Microsoft stack, for example, is a pretty big target for attack, and has seen its share of security issues over the years.
But developers don't need to make any code changes or redeploy anything to mitigate those security issues. It all happens through patches on the server, 99% of which happen automatically via Windows Update.
So many open source hackers do not know the basic techniques for backwards compatibility (e.g. don't rename a function; just introduce a new one, leaving the old one available).
I'm spending very significant effort maintaining an OpenSSL wrapper because OpenSSL constantly removes and renames functions. I hoped to branch based on the version number, but they even renamed the function that returns the version number.
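For what it's worth, the usual workaround is compile-time branching on the OPENSSL_VERSION_NUMBER macro from <openssl/opensslv.h>. A sketch (0x10100000L corresponds to OpenSSL 1.1.0, where the rename happened):

    #include <openssl/opensslv.h>
    #include <openssl/crypto.h>

    /* Works across the rename: pick the right name at compile time. */
    static unsigned long wrapper_version_num(void)
    {
    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
        return OpenSSL_version_num();   /* 1.1.0 and later */
    #else
        return SSLeay();                /* the pre-1.1.0 name */
    #endif
    }

That only helps when you compile against the same headers you link against, of course; it does nothing for the dynamic-linking case.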
And that's only one example; a lot of people make mistakes like this, costing their users huge amounts of effort.
And then there's the popular semantic versioning myth: that you just need to bump the major version number when you change the API incompatibly, and that this saves your clients from trouble.
> So many open source hackers do not know the basic techniques for backwards compatibility (e.g. don't rename a function; just introduce a new one, leaving the old one available).
I'd dispute this, or at least I think it doesn't capture the whole picture. Microsoft makes money with backwards compatibility and can afford to spend significant effort on the ever-growing burden of remaining backwards-compatible indefinitely. Open source volunteers are working with much more limited resources, and I think it comes down much more to intentional tradeoffs between ease of maintenance and maintaining backwards compatibility.
If you have a low single-digit number of long-term contributors, maybe the biggest priority to keep your project moving at all is to avoid scaring off new contributors or burning out old contributors, and that might require making frequent breaking changes to get rid of unnecessary complexity asap. Characterizing that as "they don't know that you can just introduce a new function" doesn't seem like it yields instructive insights.
Yes, this is exactly the wrong reply I often hear when complaining about backwards compatibility.
The mistake here is that in 99% of cases backwards compatibility costs nothing: no effort, no complexity.
Faced with two choices of equal cost, the people breaking backwards compatibility simply make the wrong one.
> maybe the biggest priority to keep your project moving at all
When you rename the function SSLeay to OpenSSL_version_num, where are you moving? What does it gain your project?
Ok, if you like the new name so much, what prevents you from keeping the old symbol available?
    /* keep the old name available as an alias for the new function */
    unsigned long (*SSLeay)(void) = OpenSSL_version_num;
(Sorry for naming OpenSSL here, it's just one of many examples)
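If you also want the compiler to nudge callers toward the new name while keeping the old one working, the alias can be a real function carrying a deprecation attribute. A sketch, using the GCC/Clang deprecated extension, and assuming the header doesn't already define a compat macro for the old name:

    #include <openssl/crypto.h>   /* declares OpenSSL_version_num() */

    /* Old entry point kept alive, forwarding to the new one;
       callers get a warning instead of a build break. */
    __attribute__((deprecated("use OpenSSL_version_num instead")))
    unsigned long SSLeay(void)
    {
        return OpenSSL_version_num();
    }

Existing callers keep compiling and linking; they just get a warning pointing them at the new name.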
When developers do such things, they break other open source libraries, which in turn break others. It's a hugely destructive effect on the ecosystem. It will take many man-days of work for the dependent systems to recover. And it may take years for the maintainers to find those free days to spend on recovery, and some projects will never recover (e.g. no active maintainer).
By lifting a finger you can save humanity significant pain and effort. If you've decided to spend your effort on open source, preserving backwards compatibility by making the right choice in a trivial situation makes your contribution an order of magnitude bigger and more efficient.
So, I believe people don't know what they are doing when they introduce breaking changes.
I've seen developers introduce breaking changes, then find the projects depending on them and submit patches. So they really do have good intentions, and they spend more of their volunteer open source energy than necessary. And when the other project can't review and merge their patch (no maintainers), they get disappointed.
So please, just keep the old function name. It will be cheaper for you and for everyone.
> Microsoft makes money with backwards compatibility
That's a good way of putting it, and it gets to a key difference between open source and proprietary software.
In the open source world where a million eyes make all bugs shallow, developer hours are thought of as free. So if you change something it's no big deal because all the developers using your thing can simply change their code to accommodate it. It doesn't matter how many devs or how many hours, since the total cost all works out to zero.
In the proprietary world, devs value their time in dollars. The reason they're using your thing is because it's saving them time. They paid good money because that's what your thing does. Save time. Get them shipped. As a vendor, you're smart enough to realize that if you introduce a change that stops saving your customers time or, worse, costs them time or, god forbid, un-ships their product, they'll do their own mental math and drop you for somebody who understands what they're selling.
In the end, all we're talking about here is the end product of this disconnect in mindset.
Microsoft also isn't your average developer that imports libraries from strangers.
Every time I run an audit (which is monthly) I see at least a dozen advisories in NPM packages we use. Sure, some of them don't apply to our usage, and others can't really impact us, but occasionally there is one we should be concerned about.
We server admins can push buttons to upgrade, but that doesn't mean developer code will keep working.
Many developers live in this world where they think server admins will protect their app... but we're more likely to break things by forcing upgrades of your neglected packages.
> Keep in mind that you're arguing against an existence disproof. The Microsoft stack, for example, is a pretty big target for attack, and has seen its share of security issues over the years.
> But developers don't need to make any code changes or redeploy anything to mitigate those security issues.
I don't believe it. Most security issues are not just an implementation issue in the framework but an API that is fundamentally insecure and cannot be used safely. Most likely those developers' programs are rife with security issues that will never be fixed.
Which is fine, as long as they are understood and mitigated against. If your security policy consists entirely of "keep software up to date", you don't have a security policy.
In practice trying to "understand and mitigate against" vulnerabilities inherent in older APIs is likely to be more costly and less effective than keeping software up to date.
If there is a problem in an older API, it's probably time to update. That's understanding and mitigation.
The discussion is about the difference between updates made for a valid reason and updates imposed by cloud providers; nobody advocates sticking with old software versions.
> nobody advocates sticking with old software versions.
In my experience that's what any policy that doesn't include staying up to date actually boils down to in practice. Auditing old versions is never going to be a priority for anyone, and any reason not to upgrade today is an even better reason not to upgrade tomorrow, so "understanding and mitigation" tends to actually become "leave it alone and hope it doesn't break".
In practice you don't mitigate against specific vulnerabilities at all, you mitigate against the very concept of a vulnerability. It would be foolish to assume that any given piece of software is free from vulnerabilities just because it is up to date, so you ask yourself "what if this is compromised?" and work from the premise that it can and will be.
Let's say I have a firewall. If we assume someone can compromise the firewall, what does that mean for us? Can we detect that kind of activity? What additional barriers can we put between someone with that access and other things we care about? What kind of information can they gather from that foothold? Can we make that information less useful? etc.
You think about these things in layers. If X, then Y; if Y, then Z; and if X, Y, and Z all fail, do we just accept that some problems are more expensive to prevent than they're worth, or do we get some kind of insurance?
I've found that kind of approach to be low security in practice, because it means you don't have a clear "security boundary". So the firewall is porous but that's considered ok because our applications are probably secure, and the applications have security holes but that's considered ok because the firewall is probably secure, and actually it turns out nothing is secure and everyone thought it was someone else's responsibility.
I think you're projecting. The whole point is reminding yourself that your firewall probably isn't as secure as you think it is, just like everything else in your network. This practice doesn't mean ignoring the simple things, it just means thinking about security holistically, and more importantly: in the context of actually getting crap done. Regardless, anyone who thinks keeping their stuff up to date is some kind of panacea is a fool.
Personal attacks are for those who know they've lost the argument.
Keeping stuff up to date is exactly the kind of "simple thing" that no amount of sophistry will replace; in practice it has a better cost/benefit ratio than any amount of "thinking holistically". Those who only keep their things up to date and do nothing else may be foolish, but those who don't keep their things up to date are even more foolish.
> But developers don't need to make any code changes or redeploy anything to mitigate those security issues
Right, so all deployed ActiveX-based software magically became both secure and continued working as before after everyone installed the latest Windows patches?
Trivial patching only works for security issues caused by implementation defects, not design defects. If you have a design defect, your choice is typically either to break working apps or usage patterns, or to break your users' security. Microsoft has done both (e.g. ActiveX blocking vs. the continued availability of CSV injection), and both have negatively affected millions.
We're using the definition a few notches upthread: "Dear manager, for the next weeks/sprint the team needs X days to upgrade the software to version x.x.x otherwise it will stop working"
As opposed to:
2011: deploy website, turn on windows update
2011-2019: lead life as normal
2019: website is up and running, serving webpages, and not part of a botnet.
That's reality today, and if it helps to refer to it as "maintained", that's fine. The point is that it's preferable to the alternative.
I think that the parent commenter is referencing Node 4.3 being past EOL and therefore unmaintained software unfit for prod, unlike the MS stack, which is receiving patches.
I was referring to comments that MS is good at backwards compatibility and that “if you write an application, it will run forever”, and I pointed out that MS also breaks backwards compatibility when it comes to languages.
Installing security patches for a Ruby stack takes a full code-coverage test suite, days of planning, and even more time to update code for breaking changes.
Installing security patches for a Microsoft stack requires turning on Windows Update.
There's a BIG difference. Once you write your Microsoft-stack app, it's done. Microsoft apps written decades ago still work today with no code changes.
What if the new Node version fixes a bug/issue/CVE that doesn't concern the software?
Is it reasonable to postpone the upgrade?
Example: the software uses Python requests. A new version fixes CVE-2018-18074, which concerns the Authorization header, but you're sure you don't use that header. Is it reasonable to upgrade a little later?
Depends on how mature your security team/process is. Can you spend time tracking separately announced bugs and making a case-by-case decision for each CVE? How much would you trust that review? Do you review dependencies which may trigger the same issue?
Or is it going to take less time/effort to upgrade each time?
Or is the code so trivial you can immediately make the decision to skip that patch?
There's no perfect answer - you have to decide what's reasonable for your teams.
The cool thing about serverless infrastructure is that it does not really concern you. As long as you are on a maintained version of the underlying platform your provider will take care of the updates.
If your software runs on an unmaintained platform there won't be any security fixes, and that's why Amazon forces you to upgrade at some point.
You are looking to save yourself a week of time a year, and then three years later, for some reason or another, you will HAVE to upgrade, and good luck making that change when the world has moved past you.
You're describing traditional sysadmin vs. DevOps. DevOps means repeating the stress points so that they are no longer stressful, and automating as much as possible.
I like it way better than the classic "don't touch this, it's working and the last guy who knew how to fix it is gone."
You don't need maintenance to remain in production; you need maintenance to reduce the tech debt in the infrastructure you decided to use (code, frameworks, third-party libraries, security issues).
Even just vanilla languages get upgraded every X months/years etc.
Not maintaining the code is just a bad gift you are giving to your (or someone's) future.
I have been part of upgrades of perfectly working software, written in an older (almost 4) version of Java, that were needed to add new features; it took a hell of a long time, and I never saw it working in the end.
I don't think it's a safe choice to "let it be" when it comes to software.
Imagine a world where new exploits and hacks didn't come along every day and compromise the systems your app sits on because you didn't keep up with patches and upgrades...
That also describes my (very small in scope) PHP and Javascript things. They all still work, and I love that to bits. Admittedly, the price of that is probably keeping things simple, but if I needed to update it all the time just to keep it from sinking under its own weight, or from the ground shifting beneath it, that would be no fun for me.