Let’s Get Excited About Maintenance (nytimes.com)
234 points by nothinggoesaway on July 24, 2017 | 113 comments



I mean, I think the best way to start with this is to use a word other than "maintenance." That's not really the sexiest word if you want people to get behind it. Furthermore, it suggests that the work is just keeping something as good as it was at the beginning: filling in holes, giving it a coat of paint every now and then.

What it really should be called is "refinement." The innovation ends up being incredibly crude but it gets the job done. How can we build on that, make it better and less coarse than it was? How can we make it more efficient?


The fact that you need a sexy word is indicative of the problem.


The fact that you need a sexy word is indicative of _human nature_


Words don't become sexy by themselves, they become sexy because they're labels for sexy things.


If it were something utterly intrinsic to Human Nature, there would be no opposition to it.


By that definition there is no true human nature.


Bingo


Reproduction is intrinsic to all life -- yes, including us humans -- and yet there are people who oppose it.


I'm not sure that can be said with certainty, perhaps it is cultural.


Oh, come on. Remarketing words is buzz, and we really shouldn't encourage it. This is why we have that ridiculous title bubble as well: Senior Site Reliability Engineer. Sure, whatever. /sysadmin, who's been doing sysadmin, devops, backend dev, and SRE stuff, and sometimes does pretty CSS, flies away/


Continuous Improvement is the lean manufacturing term for it.


Continuous improvement is more like the kind of maintenance you do to software than the one you do to infrastructure.

I don't think the name would matter in any way. It's more a problem of what voters value, or what politicians think that voters value.


"Quality of life" improvements would be better. People can motivate themselves if it improves their quality of life. We do daily maintenance, life itself is mostly maintenance, but you can make it worthwhile, and sometimes fun, by focusing that on improving your quality of life and time.


Operational innovation is maybe the term you are looking for.


Sustainability Engineering?


Let's all go remaster our local park!


"Reliability & Recovery Engineering"


My choice is 'survival'. Less easy for business-types to dismiss.


Call it long term support logistics and operations.


Or maybe just "Support"


But refinement doesn't seem accurate, as it implies that there are changes being made, while this is not necessarily the case.


(tangentially related to this article)

I have recently come to realize that, at least in my world, source code older than five years is basically doomed. Developers simply refuse to work on it.

The code that makes it to five years is extraordinary as most of it "dies" before reaching the eighteen month mark.

As a result, I have recently been shifting my view to favor replaceability over maintainability whenever possible. I'm not totally sure how to achieve it, though. Most current trends seem to be towards increasing baggage (Docker, for example).

Data lives on and on and on, however. Data is king. :)


I used to work on Windows and would pretty often edit source files that were written over 20 years ago, sometimes by the guy four levels above me in the chain of command, back when he was an individual contributor. I actually preferred the C and older C++ components, with no exceptions and manual memory and error handling, to the "modern" C++ components.

I recommend Raymond Chen's blog, The Old New Thing, for some cool stories about how and why ancient Windows/Office components were written a certain way, and what the effects down the line have been. Among other things. https://blogs.msdn.microsoft.com/oldnewthing/

What you're saying is probably more true in the web dev world.


Heck, I work on a web product, and I regularly work on code a decade+ old. None of the developers who wrote that code are around anymore, except for the head honcho. It always surprises me when I go to use git blame on code that old (to get a better understanding of its purpose) and find that what I'm looking at is actually an svn commit.

Fun times. It shouldn't surprise me, since the code from that time uses Hungarian notation.


> What you're saying is probably more true in the web dev world.

Any world in which OOP is the primary paradigm. Then again, I've seen truly horrid procedural code as well. Do you have any specific ideas on why the code you're working with seems to be the exception to the rule? Or is it pretty much all in that blog you linked?


A few things come to mind:

- Windows is (mostly) a platform, not an end application. So you need...

- Backwards compatibility. It is a pretty hard requirement, as one of the major selling point of Windows is that your existing stuff will keep working. Programs targeting 16-bit Windows from the early 90s still work on Windows 10 (though they recently removed the 16-bit subsystem from 64-bit Windows). I believe it's the same with Office document formats. You already have the code that handles all of the old APIs and functionality, so...

- If it ain't broke don't fix it. The old code has been thoroughly battle tested. Obscure edge cases have been addressed. Bugs have been fixed over the course of many releases. There is no chance that your rewrite will avoid regressions, and you'll almost certainly introduce new bugs too. It's better to make necessary changes to the old code than to try to start from scratch. And...

- It's a lot cheaper too.

Joel Spolsky has a great blog post about rewrites: https://www.joelonsoftware.com/2000/04/06/things-you-should-...


I have seen or worked on multiple still-maintained projects way older than 5 years, with parts of the old code still there. Most of them were in Java or .NET.


Five-year-old Java code is really no problem as long as it uses Maven as its build tool (and doesn't use long-deprecated code generation tools).

This is the main difference I see between maintaining old Java code and, e.g., old Delphi code:

In Maven/Java I can run a couple of commands to build it, open it in any of my three favourite IDEs (NetBeans, IntelliJ and Eclipse, in that order), and it just works.

In other languages there is often some dependency hunt involved before you can even start. Bonus for dependencies that must be installed using .exe installers, in a specific order (hint: if you don't get it right - good luck with cleaning the registry before trying another installation order.)

I might have been very unlucky in my encounters with legacy code in other languages but so far every encounter makes me love the Java ecosystem more.

(Although I'll happily admit that part of it might be related to the fact that Java developers seem to avoid using non-OSS code to a much larger degree than anyone else - except maybe PHP and JS devs?)


This is a product of short tenures: software developers switch jobs every 18 months on average, so they don't get to see the long-term effects of their decisions. This leads to decisions with big payoffs in the short term that come back to bite you in the long term. Some examples:

1. Pervasive, maximal frameworks that favor terse magic over clear explicitness. At first this allows you to spin up a website in a few lines of code, but years down the line you're spending all your time reverse-engineering a workaround for a bug in Ember 0.2 so you can implement the same functionality in Ember 17.3 when the controller that it referred to is no longer in the codebase and the naming convention for tying together models and views has changed twice.

2. 0.x versioned libraries which solve the problem you have today, but cause 100 problems tomorrow when the maintainer deletes the repo.

3. Global state, which lets you bypass having to pass data through to components, until later when you want to have more than one component per page. (A rough sketch of that trade-off follows below.)
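
As a rough, hypothetical TypeScript sketch of that third point (the names are made up, not from any real codebase): the module-level variable is convenient right up until the page needs two independent components.

  // Convenient at first: one module-level variable every widget reads.
  let currentFilter = "all";

  function renderTodoList(): string {
    // Every TodoList on the page now shows the same filter, so
    // "two lists with different filters" forces a refactor later.
    return `<ul data-filter="${currentFilter}"></ul>`;
  }

  // More typing up front, but each instance stays independent and
  // putting two on one page is trivial.
  function renderTodoListExplicit(filter: string): string {
    return `<ul data-filter="${filter}"></ul>`;
  }

  console.log(renderTodoList());
  console.log(renderTodoListExplicit("active"));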


Global state is actually a good model, see https://awelonblue.wordpress.com/2012/10/21/local-state-is-p....

The issue is more one of fragmentation; figuring out which file(s) / method(s) are responsible for some item of state can get to be impossible without a powerful debugger, and constantly changing frameworks / libraries don't help.


Whatever this article is talking about, it is not what most developers mean when they say "local state".

For a start, one does not persist "local state", and this is a clear give-away:

> state cannot be cheaply recomputed or regenerated like other live values

The author seems to be arguing against a global state encapsulated into a tree of disjoint accessors. And his solution:

> our programs become stateless logics manipulating a stateful substrate

Is verbatim the FP-way of avoiding working on global state.


You haven't worked in a large company. :-) Much of the code behind your phone, the train system, chain restaurant ordering and any paper bill you get is likely 10+ years old. These companies undertake 5-year programs to replace ancient systems, and the 5-year programs frequently fail, and run late when they don't.


Yup, been through a couple of those. What tends to happen is the very senior engineers get architecture-itis and believe they'll actually be able to create this wonderful, flexible architecture that will withstand any future requirement change. Millions of dollars later, it's tossed in the trash and the old system is still selling, but customer retention becomes such a high focus at that point, to stem the tide of exodus, that all the developers do now is put out fires.


Hell, look at the discoveryd/mDNSResponder disaster that happened with Apple a few OS releases back for a publicly visible instance of this.


> I have recently come to realize that, at least in my world, source code older than five years is basically doomed. Developers simply refuse to work on it.

What world are you living in? I work for one of the big 5, and 75% of the software stack features I use have been in continuous development for over 5 years.


Yea, ditto. I just fixed a bug in some other team's 5-year-old code because it was breaking some of my team's stuff. Beyond some ungainly test code involving counting byte offsets, it wasn't that bad. The fix involved dropping in some other 5-year-old code via a library, and voila, it worked.

I get some of what GP is saying: some older stuff gets abandoned and not improved much, while other stuff gets modified so much that it's sorta like that "at what point does a ship become a new ship through gradual replacement of all its timber?" question.


You have obviously not been a developer for very long. I commonly use code that is 20 years old. Of course that code is well maintained C and C++ code.

This is one reason why not acquiring technical debt, or correcting it, is so important for the long-term health of a company.

Now, I agree that data is king, but if you have to rewrite your code base every 5 years you are doomed.


I've been writing code for over 20 years now, longer in the industry than in academia now. The trend toward using the latest, shiny new thing seems to me to have accelerated, too. I would like to go back to a job where I work on code that includes decades old C and C++. Unfortunately those jobs tend to pay significantly less than others I can get.


I have used my share of shiny new things, I have created many shiny new things. But you can not build a society based on shiny new things. At a certain point the churn rate gets too high.

A friend of mine is working for an internet security company that seems to have an enlightened attitude towards software: everything is test-driven and the majority is C++ and Java. This code will be around in 20 years and will bring in millions of dollars of revenue with minimal maintenance.


I don't disagree. I'm just saying the market doesn't currently seem to value that.


Apparently developers compare their sizes too.


Totally with you. I've been thinking of it as rewrite-friendliness, and have some ideas for encouraging it: http://akkartik.name/about

Even if people don't believe that all code over five years old is doomed, writing code to be easy to rewrite still seems like a useful idea. You're preserving optionality, you're making it easier to try out multiple designs, and you'll have an easier time recruiting collaborators because they'll have more fun projects to try, and so they'll be more likely to progress 'down the funnel' from using a project, to hacking on it, to contributing to it, all the way to taking over its running.


I love your website. My experience (web app for heavily regulated industry):

- Customers don't want major versions to be continuously deployed. Each deployment involves validation and training. They want the software to be left alone, perhaps with the occasional bug fix.

- The business needs new features to stay competitive.

- So you're left actually needing to host multiple versions. All the major versions the customers are on, and want to stay on, and one latest-and-greatest version for winning new business.

- Never force a new feature on a user. It becomes obvious that you're building things that most of your customers don't want. This means don't create a major version and automatically deploy it.

- Moving data between systems is hard. Plan to never move data between systems, or major versions of the same system. Users prefer to see their data where they left it, looking exactly how they left it. A major version should be empty. If you constrain a new version of your software based on old data, sooner or later you're stuck, either because of schema or because of volume.

- So if you have a big new feature, you save it for the next rewrite.

- Once you've decided that at some point in the future you're going to do a rewrite, things fall into place nicely...

- You avoid the situation where your dev team is mostly useless apart from the few guys who originally wrote the system.

- You avoid the endless accumulation of data.

- You avoid being commanded by the product team.

- You avoid being hated by your users, and your clients.

- You avoid spending huge amounts of time and money on things you didn't really need to do.

- You avoid the end of your business, which is inevitable given ~10 years of 'current best practice'.

- The part that I don't know yet is... developers have been avoiding rewrites for so long, we haven't really got good at it. We need to do it, do it often, learn how to do it well, and share that experience.


1-4 jibe with my experience. That's why we built a SaaS cloud that allowed each customer (company, not individual users) to run a specific version of the platform, rather than the full multi-tenant model where everyone is on the same track. The latter may work for peripheral applications, but for core business apps it's a non-starter.

Unfortunately, it often felt that we were having to write everything from scratch; you can barely find anyone talking about this model, or any tools designed to handle it. Everyone seems to be either fully multi-tenant or manually launching individual servers (which is not feasible for low price SaaS).


We're in the same boat as you, and our SaaS model involves custom deployment versions for each client. We're looking at keeping the databases unique per client while handling the core requests in a multi-tenant environment, but it's difficult to plan ahead for all the different things our customers ask for.

I often wish I worked at one of the SaaS product companies that target small-business.


I've considered that approach, but that means keeping the code fully retrocompatible, and that kinda leads to madness :D

Plus, if you have different processes per client, you can provide better security, particularly against leaking customer data between clients. In our case, we just used different Linux and Postgres users for each client, but even just that meant that a few security bugs discovered in the application couldn't be used to read/write other customers' data.


You would be surprised how long-lived some code is. I used to work on a codebase that was started during the 70s, and I'm currently working on a codebase written around 1995-1998 and maintained until today.

There is a vast amount of code used daily that is decades old, especially in systems that are not user-facing.


I don't share the same sentiment. I routinely see active code being supported that is 10 to 20 years old. Granted, it's no joy to work on, but it does get maintained.

I'm curious what kind of environment you work in where most stuff stops being used after 18 months...


I believe the right way to go is to focus on library code. If the long lived code is to prove exceptional it has to do so by being grabbed and bolted on to the new thing over and over, and that tends to favor an approach of libraries that assume very little, don't have many dependencies themselves, and opt for simple/robust API over being efficient. The API user can always recode the API for efficiency in their use case while as a library vendor your ability to guess at hotspots is limited at best, and as a consumer one is always looking for a library that can be used easily and disposed of quickly. In effect, "design by placeholder".


I don't argue your point here, but I'm seeing more and more developers who cut their teeth in the age of libraries and are terrified to touch library code.

There's an assumption out there for some that library code is flawless, and there's an imposter syndrome type aversion to touching it, with the developer fearing the code was made in a certain way for a certain reason and that they're not skilled enough to work on it.

This is made worse by the few brave developers that will dive in getting all the anger if they make a mistake, at a massive scale if the library is popular.


I suspect that the MS and Java tech stacks might factor into this a bit.

I can't speak for other languages, but in Go, the ability to navigate to the definition of any documented symbol via GoDoc has been most interesting for learning about how the sausage is made in certain areas. I don't have a comprehensive knowledge of all of Go yet, but I've learned that environment variables, for example, are backed by a `map[string]string`.

Perhaps something like that might help with the imposter syndrome about it?


I work on a closed source project using closed source libraries with some restricted source access. In general, if I find a bug in that library, it's less hassle for me to report it to them, and move on, than report it, fix it, and deal with the upgrade path later. It's unfortunate, but if I fixed every bug I found in the third party libraries, I'd never get any of my own work done..


This tends to be mainstream opinion today: figure out the timeless essence at different scales, decompose it into abstractions, freeze their interfaces so that people can start relying on them, and so on. But it doesn't seem to have helped for forty years of trying to do it. The world changes too quickly, we aren't quite as good at designing libraries as we think, the world is filled with historical accidents in interfaces. With hindsight, it seems clear they were prematurely frozen.

Rather than dismiss these observations as isolated cases of people not practicing good behaviors, I tend to see them as evidence that we should be creating libraries far more conservatively, freezing interfaces far later in the life cycle, perhaps even decades later.

If I'm right, we are also overusing industrial notions of assembly lines and division of labor. Libraries with non-trivial functionality take a long time to get right, and in the meantime they are produced more like guilds of craftsmen than factories. (Even if the products themselves permit factory-like operation at scale.) In that initial bake-in period we are ill-served by conventional metaphors of software components, building blocks, etc. We should be dealing more in vertically-integrated self-contained systems rather than plug-and-play libraries. More OpenBSD, less `gem install`.

I've been thinking about this for at least five years, ever since http://akkartik.name/post/libraries. More: http://akkartik.name/prose


Libraries do take a while to get right, but I think we can only get them right by making them as libraries. All good libraries I've seen started out as part of a vertically-integrated piece of software, but if there's currently an immature library for x, using that immature library generally puts us further along the path to getting a mature library for x than writing your own vertically-integrated implementation of x.

I'd focus on making it easier to migrate between libraries, easier to improve interfaces, and so on. (I'd argue to a certain extent that's already happened, and that's part of why we're using more and smaller libraries).


Getting more users for x certainly helps mature it. However, making it a library also tends to freeze it. So ideally we'd have ways to encourage people to use something without guaranteeing its interface. I think this is a social problem; we need more libraries that have signs on them saying, "alpha software, compatibility not guaranteed," and we need greater awareness among developers that this is a good thing, that being willing to switch interfaces every once in a while results in a better eco-system in the long term.


Agreed. But I think the idea of listing and cutting down your dependencies (which is what I understood your post to be suggesting) is contrary to that; rather depending on a huge number of tiny libraries makes it easier for libraries to evolve more flexibly.


In general I don't see the connection between the size or number of dependencies and how easy they are to evolve. Truly tiny libraries like left-pad have trivial implementations. Why not just inline them into your project? Then they're no longer libraries, just functions. You get the abstraction benefits, but you don't need to enter into a counter-party relationship with the author.
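
For left-pad specifically, here is roughly what "inline it as a function" could look like in TypeScript. This is a sketch, not the npm package's actual source; the point is only that a dependency this small can live in your own utils file.

  // Pad `str` on the left with `ch` until it is at least `len` characters long.
  function leftPad(str: string, len: number, ch: string = " "): string {
    let padded = str;
    while (padded.length < len) {
      padded = ch + padded;
    }
    return padded;
  }

  console.log(leftPad("42", 5, "0")); // "00042"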

At scales above the absolutely trivial I think libraries evolve more flexibly based on social rather than technical considerations. It's about not pissing people off when you change the interface. Even something as complex as Go was able to make incompatible changes pre-1.0.


My sense is that smaller libraries are more able to evolve at their own pace - e.g. the separation of django-rest from django core, or languages moving more things out of their standard library. Pre-1.0 is the easy part; we'll see how Go does in 5 or 10 years once it has a base of programs to maintain compatibility with - I predict it will find it harder to evolve and its library will fall behind.


I agree. It's especially pungent in JavaScript, in which the average 'JavaScript developer' is playing some perverse game of library pokemon.


Definitely stealing "JavaScript Pokemon." Seriously, why are people still adopting more libraries to perpetuate this insanity?


Further thought: this is a similar problem to creating too many standards. https://xkcd.com/927/


>Most current trends seem to be towards increasing baggage. (docker)

I'd disagree. Docker, for example, makes it easier to create infrastructure as code, which in turn makes it easier to replace pieces of that infrastructure. Same with microservices, which allow you to eventually replace isolated pieces of your infrastructure. The more modular and isolated pieces of code and infrastructure are, the easier they are to replace piecemeal.


Disagree. Simplicity in unnecessary abstractions is still convoluted.

Although, I concede Docker is the bee's knees for local dev.


Docker is a step backwards in terms of how transparent/code-ey your infrastructure is - a docker container is very opaque and you easily end up duplicating things between multiple containers in slightly different ways, whereas e.g. puppet was much more friendly to DRY.


I'm guessing the smallest possible chunk of code is a cloud function.

Easy to rewrite, replace, and deploy.


>not totally sure how to achieve it [replaceability]

One of the most crucial keys is: make your code greppable. For example, don't treat OOP classes as a license to use generic method names ("add", "set", "close", etc.), or it'll be difficult to weed those classes out.
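
A small, made-up TypeScript example of the difference (class and method names are hypothetical): grepping for the generic names below matches half the codebase, while the specific ones surface exactly the call sites you need to change.

  // Hard to weed out later: searching for ".add(" or ".close(" is hopeless.
  class Cart {
    add(item: string): void { /* ... */ }
    close(): void { /* ... */ }
  }

  // Easy to find and replace: each name means only one thing in the codebase.
  class ShoppingCart {
    addLineItem(item: string): void { /* ... */ }
    closeCheckoutSession(): void { /* ... */ }
  }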


Xamuel, where did your old blog go? The one where you talked about Esperanto as a gateway language to get people over their fear of learning a new language? I try to cite it regularly but it’s offline and Wayback Machine is giving me lip. (Sorry to barge in on this thread, I hope it’s the most reliable way to get to you.)


Depends. If you have a statically typed language, you can usually just delete the class and fix all the compile errors. With a dynamically typed language, your point stands and I very much agree.


Replaceability is just system-level maintainability.


This is exactly why I roll my eyes when devs passionately argue in favor of "consistent code style" and spend hours of their lives manually linting their code. What's the point? It will be gone next year.


Did you tell the developers of 10 years ago that? I deal with their surviving shit daily, in mission-critical software, even. Gone in a year? Hah! I wish!

Not just naming conventions, or assignments in if-clauses, but files where the first indentation level is three spaces and the rest are all four...

P.S.: And because it's that old, the change-history of a given line can be very important, so it's too late to mass-reformat everything. So you either make the file uglier by mixing "standards", or you have to hand-tweak all your changes to match.


Manual linting is silly, but having a linter installed really saves a lot of pain. Git diffs should not include unrelated linting changes.


Automated formatting like gofmt is great, and I'm a huge proponent of it. But code is so temporary that it makes no sense to spend man-hours making it look pretty.


One-off script, sure, but for core product code I disagree. It's temporary in the grand scheme of things, yes, but how many times will it be read before it is gone? Taking a couple of extra minutes to clean things up and standardize style pays dividends in readability, and in traceability via git-blame, etc. Same goes for extended commit messages that explain the approach taken.

It's not a question of sinking tons of hours into this stuff, if you're using good tools it just becomes the way you work. Could you gain 5% efficiency by just blasting out the ugliest code you can? Yes maybe, but if it's actively developed code you could easily cost future developers much more.


Having a company wide git hook for clang-format-diff [1] really solves a lot of problems. No unrelated linting changes (unless people don't know how to use their editor, but that gets caught in code review and people learn) and consistency is kept with very little man-hours spent. Proper tooling applied pragmatically is nice.

[1] http://clang.llvm.org/docs/ClangFormat.html#script-for-patch...



How is that relevant?


The parent to my original post is an example of exigology.

Paraphrasing: "My code will be gone next year, so I don't care about code style"


The SMBC comic mocks the people for not being self-aware enough to realize that they are causing their own problem by being apathetic.

> "I'm not good at this because I don't try, so I won't try"

> "Girls don't like me because I have bad hygiene, so why should I bother with hygiene?"

etc.

This would only apply to what I wrote if my code was disappearing because it's not formatted to pass a linter. You and I both know that this isn't the reason code gets deleted. It gets deleted because it gets refactored, or the feature/product it's implementing gets killed.

Furthermore, code quality is not the same thing as code style.


> code quality is not the same thing as code style.

I did misparaphrase you. My apologies for that. Fixed.

But as for the other argument, I think the comic's examples and your case represent feedback loops in which two factors are both cause and consequence of each other.

I believe you that you started not caring about code style after you noticed your code tends not to last. Similarly, life was short already before bacon existed. Would you agree that your code is more likely to disappear now that you don't care about style?


No, I don't think it matters. Obviously there's a line you have to draw, like tabs vs spaces, but I don't care if you place your commas differently from me. Code being so ugly it's hard to read is a problem, but arbitrary fashion is not worth the time.


"replace-ability vs maintainability" They are the same thing...


Congratulations, you are one short step away from realizing that replaceability of their own software is the last thing any company wants.

There are many ways into the walled gardens, but none out.


This article posits that you can either maintain or innovate. This is not at all true. We can, and should, innovate while maintaining, in order to improve our maintenance.

We can use automation to gather data we've never had before. We can use this data to help prioritize maintenance tasks, and get them done faster with less interruption to service.


The problem seems to be that we've invested far too much federal money into projects that have to be maintained by local sources of funding. Big federal grants for development that will not produce enough tax revenue to offset the externalities and infrastructure costs that are required to maintain that development are just albatrosses around states and cities' necks.

https://www.strongtowns.org/journal/2017/1/10/poor-neighborh...


Strong Towns is great! There's another concept I was introduced to via Strong Towns that I think is important: infrastructure is not an asset. It is a liability. Yet, most cities account for them like they would an asset.

https://www.strongtowns.org/journal/2014/8/19/is-a-street-an...


In Philadelphia, I passed under the Columbia Bridge by foot, and nearly fell over when I looked up. An entire section of the bridge has spalled and a huge gap of reinforcing steel rods is rusted and exposed. This bridge will collapse. I found no evidence that there is any plan to reinforce or replace it. Luckily it "only" carries CSX freight. Anyone passing under it should be extremely wary.

https://goo.gl/maps/uFkLJoKU1DB2 https://goo.gl/maps/767CYu5Mwd62 (it actually looks worse than this up close)

I'd be interested to find out what the track record is of maintenance of infrastructure by private vs public entities.


I always think this is the big strength and weakness of the US at the same time. This country is more willing than other countries to abandon old things and move on to the next thing. But right now it seems to be falling into the trap of a lot of pseudo innovation while the foundation is crumbling. Not sure what the best way is to move forward.


Gerrymandering severely curtails political incentives for infrastructure investment. Not saying it's the only issue, but it's a clear case of undermining the best outcomes from representational democracy.


Sometimes you need to stop maintaining things. Towns rise and fall because the economy changes. If there's no economic activity bringing money into a town, it should eventually disappear. You need the political courage to stop wasting money on propping it up. Some bridges, pipes, roads, and trains shouldn't be repaired or replaced. They need to be closed if dangerous, and allowed to disappear into history. If holdouts want to keep living in their dead town, good for them, but don't make the rest of us pay for it.


So you're saying that NYC's infrastructure (the city in the article's opening paragraph) is crumbling because there's no economic activity bringing money into the city?


No, I'm speaking in general.


Stewart Brand, "The romance of maintenance is there is no romance of maintenance". This was basically his point in "How Buildings Learn" (and really the Long Now too)


While maintenance is certainly undervalued, I don't think that means innovation is overrated.


I think there is a lot of fake innovation happening these days. I bet if the programmers of the 70s had had the same amount of computing resources available as we have now they would have built great systems even with the tools available then. Other than the cloud and more computing and network power I don't think software development has really moved forward much.


Here's a nice infographic: https://medium.freecodecamp.org/the-biggest-codebases-in-his.... Unix 1.0 (1971) has a third of the number of lines of code as the average iPhone app. I know LOC is a very poor proxy for complexity, but I have a hard time believing that 70s engineers could replicate the functionality of Google's 2 billion LOC with tools like ed.


If a system is small enough for a single programmer to hold in their head, you can remove a lot of layers and be maybe 10x more productive. Those '70s systems were all built like that. But eventually you reach the limit where a system is too big for that, and you have to take the hit and build it collaboratively; it's also the only way to get systems that are maintainable in the long term.


> Other than the cloud and more computing and network power I don't think software development has really moved forward much.

So a person from the 70s who was instantly transported to today would feel like the only difference is that instead of having our own servers we ship things to Heroku? Please...


Have you seen the Alto demos? Quake would be new to them, maybe the browser. As for shipping to Heroku, IBM was renting mainframe timeshares in the 1960s.


I think they could get up to speed pretty quickly, yes. The ideas were all there.


We stepped out of the plane in New York and soon my Iranian wife called her mother. "Mom, the roads are about as bad as in Tehran!". Recently, we were in Tehran again: they had invested heavily in the maintenance of their infrastructure, making the roads better than in many American cities.


It's hard, too. My current job has a large existing code base (Perl/Python/shell scripts and Java), and keeping it up to date is a large portion of my time.

The thing is I get very little credit for fixing something that is broken, but creating something new generates accolades and the illusion of productivity...


That's the problem of being a maintainer. You point at something and say "that's still working", and no-one knows if it's because nothing has happened anyway, or because you ended up fixing some fatal error in it.

New features are also pretty easy to measure in terms of throw money in, get features out. Maintenance... how hard do you go? How much is too much? Do you just need a light check-in? Do you need full reviews? Or is that just wasting money for no reason?


I agree with others on the thread that "maintenance or innovation" is a false dichotomy. In particular, I think we need innovative maintenance. For example, this article about new ways to fix potholes: https://www.economist.com/news/science-and-technology/217003...


Interesting article and comments. One commenter points out that roads break down because of overloading (heavy trucks). Another points out that, in colder countries, ice is a major problem for roads.


Freakonomics did a great story on maintenance last October. http://freakonomics.com/podcast/in-praise-of-maintenance/


>The American Society of Civil Engineers considers 17 percent of American dams to be “high hazard potential,” including the one outside Oroville, Calif., that nearly collapsed in February.

Is this exaggeration? Would any reasonable person say that the Oroville dam nearly collapsed?


  Would any reasonable person say that the Oroville
  dam nearly collapsed?
According to Wikipedia [1], "Erosion at the base of the weir—which was expected—progressed much faster than anticipated. The headward erosion of the emergency spillway threatened to undermine and collapse the concrete weir [...] Fearing a collapse, the Butte County Sheriff's Office issued an evacuation order of the Oroville area. [...] Engineers worried that [...] damage to the main spillway could grow uphill to the point that it endangered the main spillway gates, leaving no safe way to release water. [...] By February 13, 188,000 people in the vicinity were reported evacuated. About 23,000 National Guardsmen were ordered to be ready for 'immediate deployment if the dam spillway should fail' to help with evacuation and relief efforts."

That sounds a lot like a near-collapse to me.

Of course, you could argue the authorities acted with a surfeit of caution - perhaps there was only ever a one-in-a-thousand chance of the dam collapsing, and the threshold for 'nearly collapsed' should be a one-in-five chance of collapsing. However, I think most reasonable people would say that a one-in-a-thousand chance of killing 188,000 people living below the dam is several orders of magnitude too high.

[1] https://en.wikipedia.org/w/index.php?title=Oroville_Dam_cris...


I think a large part of this is how we hire people. It looks much sexier on your resume (and prepares you better for interview-type questions) to have experience building a fancy machine learning pipeline from scratch than maintaining a 100K+ line legacy C++ code base (say, more than 15 years old).

Not only this, but I've lost count of the number of young guys I see come into my company (especially the machine learning guys), spend a year exploring some new technology, then jump ship to a different company when it comes time to integrate it with the existing legacy code base.


I'm a big fan of this topic. Something we seem to forget is that sometimes new is not necessarily better.


HN had another article about this recently:

https://news.ycombinator.com/item?id=14293775


"Americans have an impoverished and immature conception of technology, one that fetishizes innovation as a kind of art and demeans upkeep as mere drudgery."

Pretty much hits the nail on the head, although I want to note this does not exclusively apply to Americans (though maybe a bit more than to Europeans; technology is often a lot cheaper in the US: it is not uncommon for the dollar price of an item to be lower than the euro price, even though a euro is worth quite a bit more).

It is especially bad for phones. Many people I know who are (almost annoyingly) aware of the environment buy a new phone every year.

I am guilty myself too. My current phone is about two years old. It has a full HD screen, a luxury that I don't even need on my laptop (about half of the time, I work on a 1440x900 thinkpad). And, to be honest, it is getting quite sluggish. When I open dropbox or tinder, I experience a delay up to 10 seconds, which is quite ridiculous (it especially bothers me that software seems to get slower, more bloated, and more abstract and complicated). People's first reaction to hearing this is "Just buy a new phone, man". I shouldn't be, but I am indeed considering this.


How many times are they going to write this article?




