IMHO boring tech is great because it lets you focus on the actual tech of your product. I run a SaaS app and I'd like to think we do a lot of cutting-edge things in various places, as it relates to the actual product. For things that are "behind the scenes" like databases, backend frameworks, etc., I prefer to keep all that stuff as boring and as stable as possible. Working solo on a project, my time is very limited, so I would much rather build interesting new features for my product than touch crap that my customers don't care about. Because at the end of the day my customers don't know and don't care that I use Node vs Deno or Bun, or that I use NPM instead of pnpm, or that I'm not on the latest version of node or Postgres. They don't know any of that, but they do know how well my app works and what features it has.
Exactly - This is known to graybeards and mostly ignored by the youths.
We've seen the wheel re-invented many times and would prefer to work on something other than the wheel again. Stuff like solving user problems and making money.
Meanwhile you have the coworker who uses some new but soon to be deprecated language/framework on every project, leaving a field of unsupportable debris in their wake..
I experienced this a few years ago. I was hired with a team of consultants (I know.. I know..) because the system in question would have data randomly disappear from its databases every few weeks, and it would take the operations team a month to notice, by which point the issue was harder to rectify.
The engineering team in question had proposed to rewrite an old Java app with AWS Amplify and replace their Postgres database with DynamoDB. The whole thing was then duct-taped together with Lambdas and spread across multiple regions. They had not bothered writing a single test, never mind having any kind of build and deployment pipeline.. they didn't even have basic error reporting.
After doing a deeper dive, we discovered that engineers didn't have a local environment but only had access to staging; however, they could easily connect to the production database from local by just updating environment variables. It turned out that one of them had been debugging something on production from his machine and had forgotten to switch it back.. he had a background cleanup job running at intervals which was wiping data..
It was a complete nightmare: the schemaless nature of Dynamo made it harder to understand what was happening, and the React UI would crash every 15 minutes due to an infinite loop.
The operations team had learned to open the Chrome console and clear local storage manually before the window froze..
Being charitable, in order for one of those new technologies to become mature and boring it requires guys like that to actually use it for things. So while it might be misguided we thank them for their sacrifice of getting caught on every sharp edge so they might be dulled (or at least documented) later.
And if the new tech is beneficial and adds enough value that it makes it worth replacing the old tech, then by all means go for it.
However, I can't count how many companies I've seen decide to get into "the cloud" only to do lift-and-shifts, and they are now running their stacks in slower and more expensive ways.
Over the last decade I worked for a fintech that did analytics for the investment banking industry, and between 2016 and 2020 the number of people who were shocked we weren't trying to shove blockchain in somewhere was surreal.
Oh yeah, SOMEONE has to be first.
It just doesn't have to be YOU!
If you want to be successful in your career, when you are put in charge of a big new project, on a tight timeline, with high management visibility, etc., you dig into your existing tried-and-true toolkit to get the job done. There are so many other variables; why needlessly add more risk no one asked for?
But yes, I'm glad there are maniacs out there.. I just don't want to work with them.
Granted, I remember Symfony was very good, even in versions 1.x. It truly made working with PHP not suck as much as it did back in the day. I don't remember whether there was ever a product out of which it was born.
> you dig into your existing tried & true toolkit to get the job done
Haha. Yes. But when the c-suite is made up of top level management pushed out of the s&p 500, they always assume it’s their tried and true toolkit from another company. Believe me, it’s never the hammer the current engineering staff is holding. I’m slow clapping so hard for the business school graduates right now…
I've seen projects akin to "Excel sucks because it's written in X, I am very smart, we will rewrite it to run in the browser only with this new framework I read about, backed by microservices in Kotlin/Rust/some new fad, running serverless on AWS (using this new thing that's just out of beta that I saw at re:Invent)".
And we need to stop all new feature work on Excel since it's legacy, so give me 80% of the dev team to do the above. Oh, and by the way, they don't know the alphabet soup of stuff I decided to use, so we will also start firing them as well, as I need to hire for these special skills.
Eventually you need to admit COBOL is dead and rewrite. Eventually some library/framework is dead and you need to rewrite. However there are lots of options, and often your best is in place rewrite just small parts at a time, slowly getting rid of the legacy code over a couple decades - there is no real hurry here.
Eventually styles will change and you will have to redo the UI. This will happen much more often than the above. Your program may look very different but if you have a good architecture this is a superficial change. It may still be expensive, but none of your core logic changes. Normally you keep the old and new UI running side by side (depending on the type of program may be different builds, other times it is just a front end) until you trust the new one. (depending on details it may be an all at once switch or one screen/widget at a time)
Just because it is popular doesn't mean it isn't dead. Or that it shouldn't be. COBOL was really innovative in its day, but many of those innovations proved to be bad ideas. However, switching to something else is very hard and expensive, so it continues on.
Well, I'm not sure I'd say Windows is a good example of that anymore; in fact, I was going to use it to argue the very point: as they rewrite the UI in new frameworks, we've lost a lot of features (not even talking about speed and reliability).
I can, off the top of my head, name at least half a dozen of "those guys" and describe in detail the wreckage they left behind. Including one that was a major contributor to an 80%-of-the-billings client finding an alternative agency resulting in almost 100 people losing their jobs. :sigh:
Your phrasing implies that the Greenfield rewrite is always a mistake.
Depending on the project, it might actually be the only sane option if you're still required to make significant changes to the application and features have to be continuously added - and the project has already become so full of technical debt that even minor changes, such as mapping an additional attribute, can take days.
As an easily explained example: I remember an Angular frontend I had to maintain a while ago.
The full scope of the application was to display a dynamic form with multiple inputs, interdependent on the selected choices (so if question 1 had answer A, questions 2 and 3 had to be answered, etc.).
While I wouldn't call such behavior completely trivial, on a difficulty scale it was definitely towards the easy end - but the actual code was so poorly written that any tiny change led to regressions.
It was quite silly that we weren't allowed to invest the ~2 weeks to recreate it.
Another example that comes to mind is a backend rewrite of a multi-million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over two years in total, it went relatively well.
But yeah, lots of Greenfield rewrites end in disaster. I give you that!
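The proxy-through approach described above (often called the strangler-fig pattern) is easy to sketch. This is a minimal, illustrative dispatcher, not the actual Java setup from that project; the route names are made up:

```python
# Routes the new backend has reimplemented so far (hypothetical names).
MIGRATED = {"/orders", "/users"}

def dispatch(path):
    """Send migrated routes to the new service; proxy the rest to legacy."""
    if path in MIGRATED:
        return ("new", path)
    # Transparent pass-through: the old API keeps serving everything else,
    # so the system stays fully functional during the whole migration.
    return ("legacy", path)

print(dispatch("/orders"))   # ('new', '/orders')
print(dispatch("/reports"))  # ('legacy', '/reports')
```

The key property is that the cutover happens one route at a time: adding an entry to the migrated set is the entire deployment step, and rolling back is just removing it.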
My company spent $billion and several years in a rewrite of a core product a few years ago.
I'm convinced in hindsight that we could have just refactored in place and been just as well off. Sure, there would be some code that is still the ugly mess that made us jump to the big rewrite in the first place. However, we would have had working code to ship all along. More importantly, we fixed a lot of problems in the rewrite - but we introduced other problems we didn't anticipate at the same time, and fixing them means either another rewrite or an in-place refactor. The in-place refactor gives one advantage: if whatever new hotness we choose doesn't work in the real world, we discover that before it becomes a difficult-to-change architecture decision.
A few years after a greenfield rewrite, the codebase is going to go back to the same level of quality it was before the rewrite, because the level of code quality is a function of the competency of the team, not the tech stack.
The only time it really makes sense to do a rewrite, is when either a new architecture/technology is going to be used that will impact team competency, or the team's competency has improved significantly but is being held back by the legacy application.
In both of those situations though, you could and should absolutely cut the application into pieces, and rewrite in place, in digestible, testable chunks.
The only time it makes sense to do a wholesale greenfield rewrite, is political. For example, maybe you have a one-time funding opportunity, with a hard deadline (rewrite before acquisition or ipo, etc).
I think it is safe to say we have improved as a company between when the code was started and the rewrite. And we have improved a lot since then.
We have also improved a lot as an industry. The rewrite was started in C++98 because C++11 was still a couple years away. Today C++23 has a lot of nice things, but some of the core APIs still are C++98 (though less every year) because that is all we had the ability to use. And of course rust wasn't an option back then, but if we were to start today would get a serious look.
Once we did a major rewrite from Perl/C++/CORBA into Java, during the glory days of Java application servers: three years of development that eventually went nowhere, or maybe it did, who knows now.
In hindsight, cleaning up that Perl and C++ code, even where both languages stand today, would have been a much better outcome, than everything else that was produced out of that rewrite.
But hey, we all got to improve our CVs during that rewrite and got assigned better roles at the end, so who cares. /s
> Another example that comes to mind is a backend rewrite of a multi million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
Their next example was exactly what you asked for: a two-year rewrite.
Bonus points from me because they didn't wait for the whole rewrite to be done, and instead started using the new project by replacing only parts of the original one.
If you are not architected around the bridge, it is really hard to add it later. Probably the biggest advantage of microservices is that they have built-in bridges. Monoliths often have no obvious way to break things up; every time you think you want to, you discover some other headache.
Totally agree, but how often is the young coworker using the hip new language in their position because they were told learning it was the key to getting a job in a competitive market and it is what they have to fall back on?
Somewhat similarly, I feel like the boring/mature infra often gets ripped up in favor of something hip and new by a CIO who wants a career checkmark that they "modernized" everything. Then they move on to the next company and forget the consequences of breaking what was stable.
Modernization can mean replacing 50 year old technology with "20 year old" technology. I stuck in the quotes because something that is perceived to be that old is probably under active development, and modern while remaining boring.
There are different 50-year-old technologies. Fortran is older than that, but still going strong in some niches because for a few niches it works well enough (for some numeric operations it compiles to the fastest code, and that matters). COBOL is older than that too, but everyone is replacing it because nobody wants to know it - even those who do know it prefer something else (or so I'm told - I don't work there). C is more than 50 years old, but it works well enough for a lot of people and it is still getting better. C++ is nearly 50, and getting a lot of useful features. Rust is today's hotness, but it may continue on or just be another fad - ask me in 30 years.
I've seen many fads over my lifetime and I expect to see many more. Some fads I regret their death, while others I'm glad we saw the light and don't use that anymore.
I've actually seen a ton of this from mid-career people who should know better but don't (or don't care). New grads aren't leading greenfield multi-year projects.
Speaking as a mid-career who does know better, the thing is that I still have to compete in said competitive market along with everyone else and lots of recruiters/hiring managers/linkedin influencers/etc... don't seem to know better either.
I still remember the day I saw a posting for a job that required 10 years of Java, 5 Years of C#. At the time Java was about 8 years old, and C# about 3. I'll bet they hired someone anyway and nobody clued them into how impossible their requirement was.
Yeah, Resume-Driven Development is unfortunately such a common methodology and companies indirectly push for this by punishing candidates who admit to doing maintenance work and bug fixing instead of rearchitecting everything every year.
> We’ve seen (…) and would prefer (…) making money.
You think “the youths” don’t care about making money? That’s got to be in the top two reasons why people get into coding for at least the last decade. It’s also one of the top two reasons everything is shit, too many people only care about a quick buck.
Three times now I've argued strenuously against using MongoDB and Elasticache as the primary data store in an app, as they will inevitably turn into relational databases.
They did.
For Mongo going to postgres went great, but for complicated reasons we're stuck with Elasticache forever on our main product.
The only reason to ever use a non-relational db is for scalability and performance reasons. Joins and transactions are hard to do correctly and efficiently in a distributed system. So “NoSQL” solutions can be a good fit if your data is too big to fit on a single host and you can get by without joins and transactions.
(This is a massive oversimplification, but still used rule of thumb.)
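The "get by without joins" part of that rule of thumb usually means denormalizing: embedding related data in one record so a single key lookup replaces a join. A toy sketch of the trade-off, not tied to any particular database (the table and field names are invented for illustration):

```python
# Relational shape: two "tables" joined at query time.
users = {1: {"name": "Ada"}}
orders = [{"user_id": 1, "item": "keyboard"}]

def orders_with_names_join():
    # The join is trivial on one host, but expensive when users and
    # orders live on different shards of a distributed system.
    return [{"name": users[o["user_id"]]["name"], "item": o["item"]}
            for o in orders]

# Document shape: denormalized, so one key lookup answers the query.
# The cost is duplicated data that must be kept in sync on writes.
user_docs = {1: {"name": "Ada", "orders": [{"item": "keyboard"}]}}

def orders_with_names_doc(user_id):
    doc = user_docs[user_id]
    return [{"name": doc["name"], "item": o["item"]} for o in doc["orders"]]

print(orders_with_names_join() == orders_with_names_doc(1))  # True
```

Both return the same answer; the document shape just moves the join work from read time to write time, which is exactly why it scales horizontally and exactly why it hurts once your access patterns change.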
And most companies vastly overestimate their data, and believe it to be "big", when it could be trivially handled decades ago by server-grade hardware.
> The only reason to ever use a non-relational db is for scalability and performance reasons.
And, most importantly, lack of market availability. Nobody is going to sell you a relational database nowadays, and rolling your own is... daring. Postgres was the last serious kick at a relational database, but even it eventually gave into becoming tablational when it abandoned QUEL and adopted SQL.
But tablational databases are arguably better for most use-cases, not to mention easier to understand for those who don't come from a math background. There is good reason tablational databases are the default choice.
Yes. The only wheel-reinventing I agree with is for learning purposes. If one were to say "rewrite SQLite", be my guest; it's a really good way to learn. But proposing it as a replacement is... bold. There might still be something to it, but that usually doesn't stem from a simple greenfield project.
It's often ignored because not all people get to work on interesting new features or care about the customers. This is all cool and dandy if you're a founder or working a job where the actual product is interesting, but most jobs are boring, with no actual incentive or even a way to care about the customer, and then making it at least technically interesting in some aspect is a way to not go insane.
Just playing the devil's advocate here. I would prefer using boring technology that gets the boring work done as quickly and easily as possible anyways, but that's because I have more fun doing other things than working.
I think it is important to point out that boring tech is not Java 8 or .NET 2.0 that some companies are stuck with.
There is still lots of interesting stuff if your company uses the newest Java or the newest .NET, but both are "boring" in a good way: mostly stable, with incremental changes and steady progress.
Heck, Angular with its latest improvements in v17 and v18 is quite interesting - but it counts as totally boring and stable tech. The migration to signals and standalone components is a bit of a hassle, but still rather easy.
Hot new stuff is wish fulfillment for younger devs. They know it'll take years to catch up with what the greybeards know about Linux or databases, if ever. But if someone throws in a curveball, they reason, everyone is on the same page: nobody can be much more of an expert on two-year-old tech than they are.
What they don’t understand is that after 10 or 20 years of this drama you start to realize how much of this new shit is a repetition of the shit the “old shit” replaced, but with new jargon. Similar Shit, Different Day. Progress is not a ladder in IT, it’s a spiral. The scenery is never exactly the same but it swings through the old neighborhood all the fucking time.
At one point you need to admit Linux knowledge is unnecessary, beyond the basic cd/cp/chmod. Docker replaces it. Should I choose init.d or systemd? Neither, just launch a new container.
I’ve spent too much time helping figure out why their container is broken to agree with this. It’s true that I use more general Unix knowledge than Linux knowledge, but that still counts for a lot.
I would only agree it can be a Bus Number skill rather than an everyone skill. My point is some day there will be a new OS the kids will embrace because everything is new to everyone and it’s their chance to shine.
When I worked at AWS as a support engineer I was unfortunately dumped into the containers team (EKS, ECS, Fargate, etc) despite being a "greybeard" in background.
A customer wrote in trying to figure out why his Fargate application kept crashing. The app would hit 100% CPU usage and then eventually start failing health checks before getting bounced (rebooted).
I relayed this back to the customer who insisted the app shouldn't be spiking in CPU usage and wanted to know why. Of course being a Fargate workload there's minimal ways to attach debugging to it. You can't just spawn htop on Fargate!
Doing due diligence I fired off an email to the team that managed the infrastructure. They curtly replied
"it failed healthchecks and got bounced"
"Okay but why"
"It hit 100% CPU"
"Okay but why?"
"It failed healthchecks and got bounced"
At no point were they either willing to interrogate or even consider the lower layers of the stack. The very existence of everything below the containerized app was seemingly irrelevant to them.
After going back and forth about this for nearly a month and a half with the customer I asked my boss to add me to the "Linux" support Slack channel. Reasoning that there's got to be other greybeards in there who (frankly) knew what they were doing better than these kids.
After writing a multi-paragraph explanation of all my findings along with the customer, moments before I hit SEND, I got an email:
The customer's app was not releasing threads properly, causing the system to reach thread exhaustion and begin heavy context switching, a CPU-intensive process that eventually took so long that the health-check probes would breach their timeout, declare the app down, and restart it.
Saying that "Linux knowledge is unnecessary" is, to put it bluntly, ignorant to the point of clownishness. Having a holistic understanding of how a system operates is invaluable.
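The failure mode above is easy to reproduce in miniature. This sketch is illustrative only (not the customer's actual code): spawning an unbounded thread per request is the leak, while a fixed-size pool caps concurrency so thread exhaustion can't happen regardless of request rate:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

results = []
lock = threading.Lock()

def work(request_id):
    # Stand-in for real request handling.
    with lock:
        results.append(request_id * 2)

# Leaky pattern: a new thread per request, never bounded or joined.
# Under sustained load the thread count grows until the scheduler
# spends its time context switching instead of doing useful work.
def handle_leaky(requests):
    for r in requests:
        threading.Thread(target=work, args=(r,), daemon=True).start()

# Bounded pattern: a fixed-size pool caps live threads at max_workers,
# so the process can never reach thread exhaustion.
def handle_bounded(requests, max_workers=8):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for r in requests:
            pool.submit(work, r)
    # Leaving the `with` block waits for all submitted tasks to finish.

handle_bounded(range(100))
print(len(results))  # 100
```

In the real incident the symptom showed up two layers above the bug: the health check only sees "slow responses", the orchestrator only sees "failed health check", and neither layer tells you the cause is thread exhaustion underneath.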
> What they don’t understand is that after 10 or 20 years of this drama you start to realize how much of this new shit is a repetition of the shit the “old shit” replaced
That happens, sure, but all too often these same greybeards will fail to recognize when there is a truly novel thing. So do give new tech an honest chance before writing it off as reinventing the wheel.
Hyperscaler computer racks are reinvented supercomputers, and Cloud Software is quite frequently recreating features available on them in the 90's. Change My Mind.
> Because at the end of the day my customers don't know and don't care that I use Node vs Deno or Bun, or that I use NPM instead of pnpm, or that I'm not on the latest version of node or Postgres.
They don't know. And they shouldn't!
I don't have any idea what kind of tooling was used to create the drill I bought, just that it drills holes and is at the right price. Same for any other number of products and services.
The person who made the drill cares deeply, as they should. But consumers don't.
I'm starting to think there should be a term like the "HN effect", where engineers limit-test into absurdities like "show HN: I built a rust backend with an elixir front-end that's actually running doom in pdf hosted on a distributed server composed of a network of the microchips in a Saudi camel herd, get fucked nerds". It lends itself to the notion that the tools matter as much as the outcome (because oh boy do we love building tools, product be damned). I mean, building tools is fun compared to sitting down in front of a client and trying to explain to someone who can't use a mouse why their top-of-the-line 4 GB machine from the '80s isn't working.
> absurdities like "show HN: I built a rust backend with an elixir front-end that's actually running doom in pdf hosted on a distributed server composed of a network of the microchips in a Saudi camel herd, get fucked nerds"
You leaned too far into the joke and it undermined the point. There are roughly three kinds of Show HN:
1. “I made this product to solve this need I perceived. I hope it does well and makes me money. Come check it out.”
2. “I had this insane idea which is funny just from the description and made it as an experiment. Let me share what was interesting about it. Oh, and here’s the code.”
3. “I made this AI NFT Data Harvesting scam and think everyone is too stupid to notice this isn’t a real product filled with fake reviews. Come click this link from this account with 80% flagged comments.”
What you described seems closer to 2 than anything else. There’s nothing wrong with those. They stimulate curiosity, which is HN’s goal.
When I was younger and into electric guitar, I gushed over gear so much both at stores and on the internet that I forgot to actually practice playing my instruments.
In some cases people can even admit it. I know people who collect guitars. I know people who trade guitars. Both of the above can generally pick out a tune or two, but will admit they are not good.
I lust over a lot of gear (not guitar gear, but gear), but I keep telling myself that my goal is to play, and so I focus more on what I already have than the next gadget even though sometimes it really is my bad instrument that is the problem. Sadly, despite the above, I'm not any better than the other two groups (even when my equipment is a problem, better wouldn't help much since I'm still a large part of the problem)
Your drill was probably designed by an engineer under time and cost pressure, and then assembled in a factory by someone who absolutely did not care because they're thinking about going back to the sweatshop with slightly worse conditions but slightly higher pay.
No, but when your drill breaks there's still plenty of potential for it to physically injure you in different ways. As mooreds said, the end user doesn't care how you made the tool, they care that it won't go catastrophically wrong on them. How you accomplished that is not relevant.
Because software is living, often online, has various types of your and others' personal data, and usually runs on devices that you use for other things as well. The attack surface is massive, and more often than not, accessible. (Your drill might have a way to override the safety switch by soldering stuff to it, but you need physical access for that, and the reward is minimal; if you get into Equifax' publicly accessible servers, you get lots of important and lucrative data).
You might not care that your bank's website uses some obsolete Java framework with security holes, but you will care if they use it to drain your bank account. You might not care that your recipe generator app uses an old version of Jenkins for releases and doesn't do code signing, but you will if someone hacks them and releases a fake version that installs a keylogger on your machine, or exploits a bug in their backend to ship you malware that scrapes your Pictures folder with your nudes and important documents.
And on and on and on. A piece of software not updated for years, or a phone with no security updates, or with horrible legacy like Jenkins, is a security risk you as a user have to take at least some interest in. Yeah, you won't pick your bank by their tech stack, but when you see an atrocious website with UX out of the 1950s, absurd password length restrictions, broken domains/certificates/etc., you can be pretty sure their tech stack is a disaster, and maybe, you can try to avoid them.
I call it "Stackoverflowability". For every question you might have about mature tech, there'll be a list of similar relevant questions on SO that you can consult. The length of the list (and thus the probability of you finding your answer) closely correlates with the maturity of the tech.
I feel you on this. I've been planning out my next side project, and I've picked a Django-based project that handles a good amount of what I need. I know I could build it all from scratch, but why would I invest a mountain of hours for almost no benefit? I'd rather spend more time focusing on the key things that make my idea special, and not reinvent the wheel code-wise.
Being out of date on dependencies isn't the same thing as using mature tech. I sure hope you're updating on a regular schedule, for the bug and security fixes at the very least.
Updates generate more bugs and more security problems than they fix, so that's not really an argument. If you've made your system sensibly, it will be immune to any security problems.
First, I want to see numbers to back that up. New versions bring new bugs that aren't known yet, by anyone. Old versions have older, more widely known bugs. From a stability POV, sticking with the old may be good: you know how to work around the old issues. From a security POV, that's probably bad: every script kiddie has a Burp Suite plugin to exploit it.
Second, there ain't no such thing as an immune system. You can asymptotically approach zero, like you can approach the speed of light, but it would require infinite resources to reach either.
Of course you can have an immune system, where things are not exposed or connected to where they could be vulnerable. For example, there is nothing that a "script kiddie" could write in the comment field of Hacker News that would be able to take control of your computer.
I don't buy into the "cyber security" arguments, and frankly I consider it a grift to keep hackers employed by playing on the fears of people. The same thing as "anti-virus" software, which never really worked in real life and isn't widely used anymore.
There have been image-library exploits where uploading an image to a site that processes it gives access. The only solution was to update the library.
Or how about Heartbleed, where the OpenSSL library had a bug? OpenSSL sits on the external web server, and the attack could leak the server's private keys. Perfect for impersonating the server. The solution was to update the OpenSSL library.
There have been browser zero-days. Hacker News sanitizes input so a user can't compromise anything, but Hacker News itself could carry out an attack.
Consider a fairly normal web site that will send an e-mail from a customer form to the owner, with customer orders. That form is not connected to any private information or any money, at most you will get a spam order if the form is "hacked". Big deal.
Just between us, you do understand the point of an illustrative example, right? In this case, the person above me said you could have an immune system. I don't believe that's really true anymore. We've moved past it.
Keeping your dependencies up-to-date (at least updating known vulnerabilities) is very different than anti-virus software and the other check-list-oriented "security" industry.
The first is just blatantly irresponsible and dumb "advice", while I do agree that most of the "you need to tick this box in order to get the contract" kind of "security" software is just malware, and often worse than what they supposedly cure.
We're using a web host with an operating system and web server that are "obsolete" and haven't received any updates in a few years. There are no contact points where that server could access any of our machines - no more likely than it accessing yours. It serves hyper-fast web pages and receives customer orders. There's nothing sensitive there. If the server hall burned down or got hit by a tactical nuke, it would take 10 minutes to get everything up on another server from backups.
For most businesses, credit card processing is outsourced to Stripe or similar services, and the security for that is their responsibility. Customer data is only stored on local machines with encryption. So it's very possible to architect solutions that aren't vulnerable. Unless you want to go into very unlikely scenarios.
In the worst case scenario, an attacker can send in one nonsense customer order that gets deleted by staff when they see it. This happens about twice per year. Customer orders are not stored anywhere on the server.
So you can't even fathom a scenario where an order is fulfilled without the payment going through, causing a huge amount of losses? Or leaking private data which is a huge deal in a post-GDPR world?
If you separate ordering, invoicing, and delivery, it is impossible for that to happen.
As for leaking private data, now you're in the territory of hackers having access to read RAM. Which I guess is a possibility, but not something that every business in the world needs to concern itself with.
If you call your local auto dealer and say you want to buy all their cars, don't you think they have some process stopping them from just sending all their cars to your address? A hacker could make that call, you know...
In my experience these new stacks, new frameworks, and fancy new libraries haven't added much in terms of user experience. I think React has, but with React comes a cacophony of ick.
Just this week, npm rotated some keys for the first time in a decade [0] and broke all of the sites I had deployed on multiple providers using pnpm. I had three sites on DigitalOcean which I had to switch from pnpm to npm to fix.
The entire ecosystem is the definition of not-boring.
> IMHO boring tech is great because it lets you focus on the actual tech of your product.
Well obviously you need to catch up with the times. If the 'CTO' (who has at least 5 years of experience) of my shiny new SaaS can't do a regular blog post about how clever he is solving this week's obtuse problem in version 0.3.75beta01 of this MemeDrvrUltra thing he found last month and bet the company on, is it even really a SaaS startup? And he'd be denied that multipart year end expose (crossposted to every social platform on the planet using AIoftheWeek v.五.九 to dress it up to fit) about HOW FUCKING HEROIC his team was staying up for 8 days straight migrating from MemeDrvrUltra to SuperMegaMemeblaster ("Closed Private Beta FTW! We're special") and how it almost worked because MemeDrvrUltra was at least a couple of years old and clearly not what the VCs were talking up at the last speed pitch angel event (tho version 0.4.22.beta03 did close a bunch of our tickets (what's EWONTFIX mean again?) and changed its mascot to some funny looking frog). If all you did in life is be old and lazy and boring and pick "what works" or "what's well supported" or "stuff that doesn't get me an outage call once a week at 2:45am", what's the point of even living? Really, what sort of loser wants to work at a company like that?
I see your point here, but I want to present a counterpoint to this line of reasoning, from my personal experience. I've been in a lot of situations where someone simply wants their way--in this case they want the organization to choose their personal software preference--and so they call it the "boring" choice.
By calling it boring, they characterize their _preference_ as the majority-accepted, mature, obvious decision, and anything else is merely software engineers chasing shiny objects. The truth is almost always more nuanced. Both solutions have pros and cons. They trade off different values that resonate more or less with different people.
So, please be careful with the "it's boring and therefore obviously better" argument. Don't let it be a way to summarily dismiss someone else's preferences in favor of your own--without a deeper discussion of trade-offs. Otherwise it's no better than any other condescending attempt to win an argument without having to present real arguments (e.g. "I'm in charge, so we are doing it this way"; "No one ever got fired choosing IBM/Microsoft/..").
> be careful with the "it's boring and therefore obviously better" argument
In the same way, I would say, be wary of any argument that boils down to "it's newer so it's obviously better". I used to make such arguments myself at the beginning of my career, but now I see that preferring everything new is a bad strategy.
Comparing software objectively is hard (if not impossible), and there is a place for personal preferences too. But if, after a discussion of pros and cons, it's not obvious which of two options is significantly better, I would be inclined to choose the older, more established technology.
Demagogy is MUCH better than real arguments. Most people you work with will have trouble even understanding the latter. They don't have time, they're overloaded, their kids are waiting at home. Give them the warm feeling that they're doing the right thing. Make them feel smart, experienced, elite. That's how you get support, not by appealing to (ugh) reason.
I think people are just tired of arguing with people who think you need, say, an entire cluster orchestration system, and a renderer inside your renderer just to serve up some web pages. It's a lot easier to just tell people to use "boring technology". The ones who get it, get it.
I would only add "stable" to what "boring" tech is. Notably, though, not "stable" as in "doesn't crash." But more as in "doesn't change."
I think you typically see this with older, established things. But there is nothing guaranteeing it. And, indeed, it is often the result of specific action by the stewards of a technology.
This can often be couched in terms of backwards compatibility. Which is a clear action one can pursue to get stability. However, it can also be seen in greatly limiting scope. As an example, we love talking about scale, but that doesn't mean you have to design and build for scale you will never see.
Not all boring tech doesn't change. Rails has been around for 20 years now and actually sees a great deal of change. There are many new and better ways to solve problems in the framework with each release.
I think it’s kind of moot to argue over the definition, because in practice “boring” just means that someone likes something. Similar to other terms like “best practices,” the term is now vapid. It might have at one time meant something, but is now just synonymous with “I like this.”
If you like something it’s a “best practice.” If you don’t, it’s an “anti-pattern.” If you are used to something and like it, it’s “boring.” If you are not used to something and expect you won’t like it, it’s a “shiny object.”
IME, these sorts of terms are not helpful when discussing tech. They gloss over all the details. IMO it's better to recognize the various tradeoffs all languages and tools make and discuss those specifically, as these sorts of labels are almost always used as an excuse not to do that.
Apologies on missing this yesterday. I would add that saying it is "boring" with regards to "stability" means I also trust that what I learned about it last year is largely still relevant today. May not be cutting edge relevant, but is unlikely to bite me by being flat out wrong.
"Boring" works in this regard because you are saying there is not a lot of activity behind the scenes on it. Most of the work is spent doing the "boring" parts of the job for most of us. Documentation and testing.
Rails isn't boring by any definition, it's full of surprises, metaprogramming magic and DSLs and a culture that hates code comments. Plus a community that keeps changing best practices every other year.
Which doesn't mean it's bad or anything. But "boring" shouldn't be redefined to "something I like" or "something I make money with".
The eternal discussion isn't about old vs new or boring vs exciting. Mature is mature regardless of age.
A system that breaks when updating dependencies, introduces unexpected behaviour through obscure defaults, or forces you to navigate layers of abstraction isn't mature (looking at you, Spring and the Java ecosystem); it's old and unstable.
Stability, predictability, and well-designed simplicity define maturity, not age alone.
Is Python mature and boring? With toolchain issues and headaches of all kinds... Newer languages like Go or Rust, e.g., solve all these toolchain issues and make things truly boring in the best way possible.
Go and Rust are only "boring" if you vendor all your 0.x.y-versioned dependencies (or, even worse, dependencies where the "version" is just the latest git commit to trunk) and carefully vet every single update for breakage.
The nice thing about Go (compared to NPM) is that a lot of those libraries are just nicer APIs for the standard libraries and not some core tech you need. You can go for the 0.x version with the absolute assurance that you can fork it or vendor it with minimal cost in support time.
(Genuine question.) I only occasionally write Python, but I just use venv and install from a requirements file. What toolchain challenges are out there for Python?
For a large enough project, the dependency conflicts can get extremely frustrating, especially when it's time to update them. You may need to upgrade a dependency for security reasons cough cough requests cough cough, but some other dependency that calls it has pinned another version (range).
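A toy sketch of the kind of contradiction the resolver hits (the version numbers and bounds here are made up for illustration, and the comparison logic is a simplified stand-in for pip's real specifier handling):

```python
def parse(v):
    return tuple(int(x) for x in v.split("."))

def satisfies(version, lo=None, hi=None):
    # Toy stand-in for pip's specifier logic: version must be
    # >= lo and < hi whenever those bounds are given.
    v = parse(version)
    if lo is not None and v < parse(lo):
        return False
    if hi is not None and v >= parse(hi):
        return False
    return True

# Your app needs requests>=2.31.0 for a security fix, but a legacy
# dependency pins requests<2.20.0. No release can satisfy both
# constraints at once, so the resolver has to give up:
candidates = ["2.19.1", "2.25.1", "2.31.0"]
ok = [v for v in candidates if satisfies(v, lo="2.31.0", hi="2.20.0")]
print(ok)  # [] -- the empty set is exactly the conflict pip reports
```

Real resolvers explore the whole dependency graph before concluding this, which is why the error often takes minutes to appear and names packages you've never heard of.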
Dependency conflicts become an issue for large projects in any language. It's less of a problem when the language's runtime is feature-rich, since libraries will be less likely to use a third-party HTTP client. You can choose libraries with fewer dependencies, but that only gets you so far. At some point, you can put the libraries in your monorepo, but upgrades come with a large cost.
Yeah that is a nightmare. But isn’t that a problem on all package systems except more dynamic runtimes like NPM which can load many copies of the same library?
It's a problem all languages have, but some are better at sorting it out. The way NPM does it solves one issue, but causes others.
The big issue, IMHO, is that when you're dealing with interpreted languages it's very hard to lock down issues before runtime. With compiled or statically typed languages you tend to know a lot sooner where issues lie.
I've had to update requests to deal with certificate issues (to support more modern ciphers/hashes etc) but I won't know until runtime if it even works.
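A minimal sketch of the failure mode being described, with a deliberate typo standing in for any incompatible-API surprise (the function name is invented for illustration):

```python
import json

def parse_event(payload):
    # Typo: should be json.loads. A compiler or static type checker
    # would flag this before shipping; plain Python defines the
    # function without complaint.
    return json.lodas(payload)

# Importing the module and defining the function both succeed...
try:
    parse_event('{"ok": true}')  # ...the bug only surfaces when this runs
except AttributeError as exc:
    print("caught at runtime only:", exc)
```

The same dynamic applies to a dependency upgrade that renames or removes an attribute: nothing fails at install or import time, only when the affected code path actually executes.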
Everywhere I've worked, I've had a few cases where we updated some dependencies on machine A (e.g. a developer's MacBook), everything ran fine, we did the same updates on machine B (e.g. an Ubuntu EC2 instance), and everything broke. This is especially the case with the numpy/scipy/pandas/etc. ecosystem. In one case this took days to fix, which is insane. I haven't had that experience with any other language.
It's worth noting that all of these involved anaconda, which was the recommended way to install numeric libraries at the time. Other package managers might be better.
Rust is the opposite of mature in practice. A language can be mature but if the style of devs that chose to write in it aren't, on average, then it doesn't matter.
Sometimes old versus new affects this. For example, the Rust language improves so fast that I've literally had a 3-month-old rustc be unable to compile a Rust program (an SDR FFT thing) because it used a feature added since. As I continued to encounter Rust projects this happened a few more times. Then I decided to stop trying to compile Rust projects.
Right now the dev culture is mostly bleeding-edge types who always use the latest version and target it. As Rust becomes more generally popular and the first-adopter bleeding-edge types make up proportionally less of the userbase I expect this will happen less. Bash still gets new features added all the time too; it's just that the type of developers who chose to write in Bash care about their code being able to work on most machines. Even ones years (gasp!) out of date.
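One mitigation worth noting: rustup reads a `rust-toolchain.toml` checked into the repo, so a project can pin the exact compiler it was developed against instead of implicitly requiring whatever is newest (the version below is just an example):

```toml
# rust-toolchain.toml -- committed to the repo; rustup installs and
# uses exactly this compiler for anyone building the project.
[toolchain]
channel = "1.70.0"
```

Cargo's `rust-version` field in `Cargo.toml` is the complementary half: it declares the minimum supported rustc, so an older toolchain fails fast with a clear message rather than a mystery syntax error. Neither helps, of course, if the project's authors never bothered to set them.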
Yes, I don't curl|sh my rustc like the docs recommend, or otherwise install random arbitrary compilers from outside of my OS repositories. I have an OS install with system libraries and programs and I want to use that.
I don't have to set up a custom install of a language for every single application for any other language (although Python in the machine learning domain is getting there). This is an abnormality which complicates software maintenance and leads to problems. It should be avoided if possible. And setting up a container for every application is also not a solution. It's a symptom. Like a fever is a symptom of infection, containers are a symptom of development future shock.
To be clear, I'm talking about in the context of a human person and a desktop computer. Not employed work at a business.
You can install `rustup` using your system package manager if you really want to. You could also `curl | manually-verify-script | sh`. But if you don't stick to recommended install procedure then of course you are stepping out of the "boring" path.
> I don't have to set up a custom install of a language for every single application for any other language
Which languages do you use? I find that using a version manager saves a lot of headaches for every language that I use, and is very much a normality. Otherwise I run into issues if I need different versions for different projects. The fact that Rust has a first-party version manager is a blessing.
rustup is still outside of repos even if the download method isn't silly and insecure. For some random applications that's fine, but for a compiler and toolchain? No. If I wanted a rolling distro I'd use a rolling distro. Rust culture only being compatible with rolling is not a good thing for many use cases.
>Which languages do you use?
C, C++, Perl, Bash. A program written in Perl + inline C today will compile and run on system perl + gcc from 2005. And a Perl + inline C program written in 2005 will compile and run just fine on a system perl + distro gcc today. And pure Perl is completely time/version portable from the late 90s to now and back. No need for containerization or application-specific installs of a language at all. System perl just works everywhere, every time.
There are C++xx-isms and non-C89-isms in some programs written in these languages. But at least those only happen a couple of times a decade, and because of the wide popularity and conservative dev culture, their use lags much further behind introduction than in Rust or Python.
With the signals we have on GitHub, for example, I think it’s hard to tell the difference between a mature project and a dead one. Regardless of what a commit is for, it’s a sign that someone is watching and maintaining, and any novel issue will likely be quickly addressed. I think this means new stuff will always have an advantage there.
The age of last commit on trunk is a useless metric in isolation. The fact that it is the most prominent number on the front page of a repository is a shame.
A metric I think is useful is responsiveness to issues. If there haven't been any recent commits and if I open the issue tracker and the maintainer hasn't even acknowledged issues that have been opened in the last 6 months, then I assume the project is no longer maintained.
I would love to see a metric of the number (or proportion) of issues closed even though users are trying to get assistance. No idea how to make that machine-calculable though.
Even better, a metric for refused PRs (maybe including PR size somehow), tracking where users cared enough to try to contribute and the owner just refused to accept the changes. Easily gamed though.
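For what it's worth, the responsiveness idea above is straightforward to make machine-calculable if you can pull issue timestamps from the forge's API. A hypothetical sketch (the field names, the 180-day window, and the sample data are all my invention, not any real API's):

```python
from datetime import datetime, timedelta

def responsiveness(issues, now, window_days=180):
    # Fraction of recently opened issues that got any maintainer reply.
    # Each issue is a dict with hypothetical keys:
    #   "opened": datetime, "first_maintainer_reply": datetime or None
    cutoff = now - timedelta(days=window_days)
    recent = [i for i in issues if i["opened"] >= cutoff]
    if not recent:
        return None  # no signal: could be stable-and-done, could be dead
    replied = sum(1 for i in recent if i["first_maintainer_reply"] is not None)
    return replied / len(recent)

now = datetime(2025, 1, 1)
issues = [
    {"opened": now - timedelta(days=30),
     "first_maintainer_reply": now - timedelta(days=29)},
    {"opened": now - timedelta(days=90),
     "first_maintainer_reply": None},
]
print(responsiveness(issues, now))  # 0.5
```

The `None` case is the interesting one: zero recent issues is exactly the ambiguous "mature or dead?" signal the thread is about, so the metric deliberately refuses to guess there.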
I wish we could just let software "die" aka be stable without constant updates. For software that doesn't have a significant attack surface (security) it'd be amazing. But because of the bitrot of constantly changing underlying APIs and platforms, oftentimes if you find some Python script that hasn't been updated for a few years it'll already be broken in some horrible ways due to dependencies changing and no longer being compatible with current versions of certain libraries.
Think of how much time is wasted because so much software has been written but not maintained, and can't be used because of how libraries have "evolved" since then.
If you’re just glancing, then sure… and maybe if it’s only glancing-level important then that’s fine.
In my experience there are almost always some issues (either legit bugs or misguided support requests), PRs for docs and features, and updates to libraries for vulnerabilities or deprecated code. Worth looking at stuff that isn’t open also. Even if it’s a 10-year-old project and the last commit is from 6 months ago fixing a seemingly minor bug, if there are more than a few stars or forks and there aren’t ignored issues and PRs sitting there, I’d consider it trustworthy.
And if it does look abandoned but was active at some point, it’s always a good idea to take a stroll through the forks.
This one was actually archived last year, so it's clearly unmaintained, but so far it seems to be working great. No successor fork, though. I think in this case I will use it, especially because it's in test code and not the product.
Yeah they're definitely out there. Looks like there aren't any active forks either. Bummer. I had another really good example of one that was super popular but I can't remember what it was.
I don't have anything of note publicly out there, but if I did and it was still a going concern despite having had no updates for quite some time (so “finished and stable” rather than “dead and not to be relied upon unless you want to maintain it yourself”), I'd chuck a commit in every now and then (annually? every six months?) if only to update the readme.md to say that as of <current date> I didn't consider the project any less supported than it was at the point of the previous update.
in open source, there are no 'dead projects'. maybe just abandoned
but if you find a mature/dead project there's a chance it's just old and stable.
what I'm getting at is that in an unmaintained project any stranger should be able to figure out how to address any novel issues. it's the point of open source; I don't understand why this is failing. maybe because understanding an anonymous code base is hard work?
In the limit, there are also many ways to debug and patch issues in closed-source projects (something I've done once or twice myself), unless they're run through an obfuscator or something. But there's always a dividing line where "reimplementing the functionality you need yourself" is far easier than "trudging through an old codebase that's tough to even get building".
Yeah, sometimes people on HN make this assumption in the other direction, insisting that such-and-such a project is surely very mature (with a very quiet userbase) and not dead, on account of it not having any recent updates. But sometimes projects are truly just dead and buried. E.g., the million old frameworks and libraries that haven't been touched since 2005 and haven't had a functioning website since 2015. Bonus points if the only downloads were hosted on a SourceForge clone that also no longer exists outside the Internet Archive.
It's sometimes possible for a project to have 0 bugs to fix and 0 in-scope improvements (for performance, compatibility, etc.) left to make, but only if its scope is extremely limited. Even Knuth still gets bug reports for his age-old TeX.
I found it useful to read through the open and closed issues and estimate what caused the described problems. Look at the usecases that do not work, are they common or exotic? Does the program fail safely or does it fall back to unwanted behavior? Does a certain configuration cause the program to run amok and wipe your disk?
If you get many "wtf" moments while doing this, the project is not mature.
Boring is great as long as you aren't looking for a job. There is a significant risk that you are slowly removing yourself from the job market if you stick to boring tech. Your next employer often doesn't give a sh.t that you provided great business value. Most of them want shiny new. When I read the ads of my current employer I don't think I would get hired.
Python is a very old tech (v1 in 1991), but you won't have a hard time finding a job because the popularity of the language has been kept up first with the web, then with data analysis and now with AI.
Also, the more something has been out there, the more legacy there is to maintain. There is no shortage of PHP or Java jobs. Sure they are not sexy, but you'll have work.
Can you give examples of what you consider to be “boring” versus “shiny new” tech? IME job postings are dominated by mature, mainstream tech like Python, Java, JS/TS, and so on.
I've used Copilot to give you an example of an average front-end web development recruitment message. From personal experience, this is how they are in 2025. At the end of the day, the work itself will consist of applying small visual customisations to an existing CMS.
"Dive headfirst into the realms of React.js, Vue.js, and Angular for mind-blowing interfaces. Unleash the raw power of Svelte and Next.js for lightning-fast performance. Style with Tailwind CSS and code with TypeScript for ultra-modern, maintainable projects. Integrate GraphQL and WebAssembly for next-level data handling and execution. Build Progressive Web Apps (PWA) and leverage Server-Side Rendering (SSR) for out-of-this-world user experiences. Embrace the Jamstack architecture and Micro Frontends for infinitely scalable, modular applications. Focus on Component-Driven Development and Headless CMS for ultimate flexibility. Create Single Page Applications (SPA) with Responsive Design and CSS-in-JS for seamless adaptability. Master State Management with Redux, MobX, or Zustand. Supercharge your workflow with Automated Testing (Jest, Cypress) and CI/CD Pipelines. Prioritize Web Performance Optimization, Accessibility (a11y), and User Experience (UX) for top-tier applications. Implement Design Systems, Code Splitting, and Lazy Loading for hyper-efficient, user-friendly experiences. Join the vanguard of front-end development and shatter the boundaries of what’s possible!"
Here's some requirement sections from real frontend job postings on weworkremotely.com. Not putting them all here, but I clicked through a dozen and these seem representative.
Except for the mentions of next.js and one mention of AI assistants, these job postings could be from 2020 or even 2015.
Senior Frontend Developer at SimplyAnalytics
- 8+ years of professional software development experience on large, structured code bases using vanilla JavaScript (this is not a React, Angular, Node.js, or full-stack position)
- Strong UI development skills (CSS & HTML)
- Open to learning new technologies
- Self-starter who gets things done
- Attention to detail
---
Frontend Developer at Nutrient
- Good knowledge of web technologies (e.g., HTML, CSS, React.js, Next.js, JavaScript/jQuery, HTTP, REST, PHP, Cookies, DOM).
- Familiarity with UI frameworks (e.g., Bootstrap, Tailwind CSS).
- Familiarity and regular use of AI assisted IDEs like Cursor, Windsurf, Co-Pilot, etc.
- Manage and prioritize multiple concurrent projects, meeting deadlines in a fast-paced environment.
- Have good communication skills and enjoy working with a passionate team and experience working on a globally distributed team.
- Have a well-rounded approach to problem solving, and understand the difference between when to apply a fix and when to refactor to remove a specific class of bugs.
- Experience integrating with various Marketing technologies and tools/APIs (e.g., Hubspot, Google Analytics, Salesforce, etc).
---
Senior Frontend Software Engineer at Hopper
- Senior-level experience & familiarity with React
- The ability to effectively drive towards a solution in a thoughtful and creative manner
- The ability to work autonomously, iterate on solutions, and manage different contexts
- Dealt with ambiguity and can balance building out multiple features at once without jeopardizing the quality of the code
---
Senior React Developer at SKYCATCHFIRE
- Expert-level React development
- Strong background in TypeScript and Next.js
- Using AI assistance tools to develop better software
- Experience building and maintaining large-scale applications
- Clear written communicator that prefers emails to meetings
- Portfolio showing systematic, well-structured work
Good employers definitely care about business value, and IME great candidates are excellent at highlighting business value in their resumes relative to the tech they used. E.g. "Maintained and improved various UI forms, migrating to react-hook-form" vs "Experimented and optimized booking form, increasing conversion by 7%". Bit of a trite example but along those lines. I agree tech still matters though.
I'm slowly removing myself from the job market because I'm going to retire. Go work yourself to death at an exciting startup that will fail before your options vest if you want, but some of us would rather enjoy our little cabins in the woods for a few decades.
Nobody ever got a promotion, or hired, by using boring tech.
I have a feeling that I don't get many replies to job applications because the vast majority of work I've done is "boring", and the majority of open source code I've written is shell scripts. It all works fantastic, and has zero bugs and maintenance cost, but it's not sexy. Intellectual elitism has also defined my role ("DevOps Engineer" is literally just a "Sysadmin in the cloud", but we can't say that because we're supposed to be embarrassed to administrate systems); I'm fairly confident if my resume was more "Go and Rust" than "Python and Shell", I'd get hired in a heartbeat.
I guess it depends. Like, you really need to know Gradle to be effective in a non-trivial Java setup.
But Gradle is a beast (over 1000 pages of doc), and it could slow you down.
On the other hand the Go tool chain is superb, but you need to reinvent a few things (until recently Go didn’t have generics) and the libraries out there are just a fraction of what Java offers.
Using Gradle is already the first mistake. Had it not been for Google and their partnership with the Gradle folks for Android after dropping Eclipse, it would have fizzled alongside Grails by now.
Boring build tools on Java land would be Ant/Ivy, Maven.
He is just grumpy at Gradle, he does it in every possible thread that even mentions Gradle's name.
Which is especially sad because he fails to even understand what Gradle does (see his comparison with Ant, which is just completely inaccurate). Gradle has a proper build graph it operates on (actually better than Maven at this: there is never a need to do a clean install in Gradle, but you often have to do one with Maven to get the proper output).
Not at all, it is a tool designed by those that haven't learnt the Ant lessons that are the reason Maven came to be in the first place.
Also, they are very keen on breaking the DSL language every couple of releases, and before the Kotlin-based DSL, they had to come up with a background daemon to cache Groovy execution, due to its performance, or lack thereof.
I am sympathetic. Declarative programming is _conceptually_ pure. I find in practice it always makes the core hard things it targets easy, and all other trivial matters a torture exercise (see XSLT). So I can see why you'd reject a goofball imperative programming language like Ant made from XML. I also have had 'sharp edges' experiences with the Gradle Daemon, which struggled to deliver on the build acceleration promises in any case, so I share your frustration there as well.
Gradle and Maven have one rare superpower, and maybe you could lump Ivy in there, which is to navigate and cache Maven repos to build dynamic classpaths. Java is like a 10-ton elephant hurtling downhill on one foot on a squeaky old rollerskate. Somehow it manages to stay up, but it's always a near-run thing whether it crashes. I appreciate that Gradle continues to make incremental advances that keep the elephant up. You complain that Gradle keeps breaking the DSL, but they have to navigate new Java, Groovy, and Maven releases that have relevant feature developments. It's a 16-year-old program; it's going to either change or die. I look at JavaExec and consider the power of all it does expressed in such a simple way. Just that one feature makes Gradle tough to beat and worthwhile: https://docs.gradle.org/current/dsl/org.gradle.api.tasks.Jav...
Here are the recent-ish feature adds that I found super impactful and that improved the way I worked with Gradle and the overall quality of my builds:
To me, the fact that Gradle as an organization has been able to keep doing updates over years and make a niche business out of that product has been laudable. I think the way they positioned themselves as thought leaders during the whole Log4Shell fracas was really excellent, and I appreciate that paying them for support mostly means you get their best performance tools and training. I'm skilled enough after a lifetime with the tool that I can do an ok job at performance, but I'm sure things would improve if I could pay.
Bazel is a huge step up complexity wise, and as far as I can tell, it would not be useful for "watcher mode" style builds the way gradle is. If anyone from Gradle, Inc. reads this:
* I like the idea of hermetic builds and reproducible, cacheable builds touted by Bazel, but I also want my ability to use the watcher mode to script hot redeployments and manage local test server lifecycles like I can do in Gradle. Could Gradle be morphed or have a constrained mode somehow to be able to do both those things? EDIT: OMG, maybe you are already headed this direction! https://blog.gradle.org/declarative-gradle-first-eap
* It seems to me that software supply chain in the Java ecosystem could use a package management expert like you. I would love it if Gradle had its own, private for-pay repository that contained a trusted subset of artifacts that you certify and watch new versions and do independent code quality checks and 1st party code reviews (AI-assist ooh la la!) If we can keep our builds to your trusted set, we could have a lot of confidence that our software supply chain is safe. Maybe you could charge customers to add more artifacts to the repo that you approve of quality-wise and then validate and serve going forward.
I don't like the way public maven repos are centralized that we blindly trust and give our telemetry to and would love to point my builds at a private repo.
The claim you make in your first paragraph does not follow from the second paragraph. While a professional / artist / skilled passionate amateur will indeed be limited far less than a random snapshot taker, as they can overcome many limitations in creative ways, it’s not true that the gear doesn’t matter and isn’t limiting.
If it doesn’t, then why do professionals use gear that’s often worth tens or hundreds of thousands of dollars? Try photographing wildlife or soccer with a smartphone from 2010. Are you sure it’s not limiting you? ;)
In photography both things matter: skill AND tools.
Also, not all photographers are alike and not all photography is art. Event / documentation photography is also professional photography. It requires repeatability and acceptable results, not great results once in a while. That’s why professionals use expensive, sophisticated gear with autofocus, eye tracking, high-speed drive etc. They cannot afford to get only 1 great shot out of 1000. They need all (or most) shots good enough.
Btw: Many amateurs who do photography for fun are often way more artistically skilled than professionals so I would not look down on them. It applies not just to photography but probably any discipline. The original meaning of the word „amateur” is actually very positive. Being paid for something does not mean you’re good at it. I’ve seen plenty of terrible quality work from professionals (in photography, in computer programming, in electronics design/rework).
My father was a professional photographer. He was also a professional asshole so his response was always to respond with the shittiest worst lens he could think of and see if they bought it.
Java, .NET/C#, C++, server-side frameworks like Spring and ASP.NET, vanilla JS when the option is up to me, WinForms/WPF/Swing/Qt, Visual Studio, Eclipse/Netbeans.
The Python 2-3 transition and some developments after made it definitely not so boring for me, but hopefully it'll be more stable in the coming decades ;).
"Cannot live without" is a strong wording, but software that I use a lot and that's mature/stable in my experience: shell (zsh, bash, sh), GNU utils, vim, nmap, xfce, git, ssh, mpv, Xorg, curl, and lots of little old CLI tools.
Emacs. 20 years and all that time I wished I could get rid of it, but every other editor/IDE I've tried lacks a feature (or ten).
Debian. Moves at its own pace, flexible yet stable enough to cope with my idea of a smart move.
Go. I've been playing with it since before 1.0, threw it into production at my first occasion, and all those years it continues to deliver.
Django. Identical story.
...StarCraft (1, then 2). Technically not "tech", but in a competitive setting it remains strategically, tactically, and mechanically demanding; it reflects that path of mastery, "teach yourself programming in ten years". It shaped my spirit and attitude, but also my humility.
In the context of boring/stable, StarCraft 1 is a particularly interesting example. Most live-service competitive games rely on regular balance patches to "keep the meta from going stale", which has some parallels to the shiny-affinity of modern tech stacks. StarCraft 1, on the other hand, has not had any changes to its balance in 20 years, yet the meta is still constantly evolving, because it is emergent from player skill development instead of prescribed by ham-fisted developer oversight.
I think that's the biggest difference between SC 1&2, although the former is still being nudged a bit through map design.
SC2 in its first decade has been about always keeping the game fresh - new units, new spells, adventurous changes, crazy maps. As of 2020, the changes not only stopped, but the game was caught for a few years in a pretty crappy state: nigh unbeatable PvT cheeses, 40min-long PvZs where Zerg is clearly winning but can't close the game, meanwhile no Protoss in top 10 GM or major tourney semifinals. The worst of it has now been fixed, but the changes are only slight tweaks and nudges, now again reminding me of Brood War. It's still an excellent game though.
I would call those anything but boring and mature. They have had a lot of big changes over the last decade and there's no sign of that slowing down. They've been around for a long time, yes, but today they bear only a superficial resemblance to the standards of 25 years ago. JS in particular has changed radically just in the past 5-10 years alone. Search for "how to do X in JS" and you will find that most of the SO questions/answers you get are outdated to the point of being basically wrong. (At least, that has been my experience learning JS in earnest over the last year.)
True, they are not done and keep moving. But what you did with them 20 years ago still works today. And what you do with them today will likely still work in 20 years. Exceptions obviously exist, though.
- IE9 which finally allowed you to use modern (at the time) web features (like flexbox) without having to support IE specific hacks.
- ES6 which added a lot of syntax changes to JS to make it much nicer to use (and pretty much killed Coffeescript).
- Popularization of type-checking with Typescript and Flow around 2020 which is almost standard these days.
And of course the frameworks evolved a lot as well, but that was mostly project-specific not so much the platform. Someone doing React doesn't care about Angular2 release.
A lot more than that! Typescript is a passing fad! ;)
The Esc key used to stop animated gifs and cancel AJAX calls, it was like a 'stop the world, lemme get off!' button.
Canvas tag (with desynchronized context), Gamepad API, and Web Audio API made the browser into a full-blown operating environment supportive of game development.
CSS3 - grids, aspect-ratio, media queries, oh my!
Web Workers, ASM.js, and WebAssembly -- what even is web development anymore?!?
Of all of those only CSS3 is actually a big deal for most projects (and only a subset of the new CSS3 features). But yeah the new APIs are great and more power is better. Native video/audio playback/streaming was huge as well, but only for a certain class of applications.
I was just highlighting the stuff that really made a huge difference for everyone. Even if you don't use typescript your deps probably do and your IDE can show type hints.
I use Typescript, but I've found comprehensive JSDoc comments do much of the same thing in the right IDE (JetBrains) without paying any of the build time. So, when I have the choice I just use ES2017 (whichever version that has async/await)
Typescript only ever paid off in terms of capabilities for me when my dependencies went all-in with runtime type information - it felt a lot like Java development, marshalling and unmarshalling JSON to objects. But by then, my build times were turning into molasses.
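For anyone curious what the JSDoc-instead-of-Typescript approach above looks like in practice, here is a minimal sketch. The type names and function are made up for illustration; the point is that editors (and `tsc --checkJS`, if you want it) read the annotations with no build step at all.

```javascript
/**
 * @typedef {Object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * Format a one-line summary for a user.
 * The IDE can flag `summarize(42)` or `user.nmae` from these annotations alone.
 * @param {User} user
 * @returns {string}
 */
function summarize(user) {
  return `${user.name} (${user.age})`;
}

console.log(summarize({ name: "Ada", age: 36 })); // "Ada (36)"
```

Since the comments are erased at parse time, this is plain ES2017 that runs as-is, which is exactly why there is no build-time cost to pay.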
I work largely in the JS ecosystem so for me it would be Express. Everyone says the Express project is dead because they hardly ever update it but IMHO the real reason they never update it is it's been practically perfected.
Express has so many different uses: I've used it on very large-scale backend projects for Fortune 500s, shoved it inside Lambda functions, used it to host email templates, and built dev tooling with it to overcome shitty APIs at work that always go down.
In all those times I don't think I've ever run into an issue that wasn't already solved.
I remember the main reason people started moving away from Express was that the way routing worked in Express created a performance problem that severely handicapped the throughput of services. I don't know if it ever got addressed (I think it required breaking changes). But it was this article from 2014 that triggered the move away from Express:
But the alternatives like Restify and Fastify are very similar to Express in developer experience, so it was not a huge deal to move away from it. One could think that these new frameworks are just a new major version of Express that had a lot of breaking changes.
The blog post you've linked doesn't justify what you've said about it at all.
In the netflix blog post they're complaining about increasing latency over time because they have a function that *reloads all express routes in-memory* that didn't properly remove all the previous routes, so the routes array got bigger and bigger. That's not a fundamental problem with express[1], that's an obscure (ab)use case implemented wrong. Hardly a damning indictment of express.
> This turned out to be caused by a periodic (10/hour) function in our code. The main purpose of this was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array. Unfortunately, it was also inadvertently adding a static route handler with the same path each time it ran.
[1]: Admittedly an array is not the "best" data structure for routing, but that absolutely wasn't the performance issue they were having. Below a couple thousand routes it barely matters.
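To make the failure mode concrete, here is a plain-JS sketch (not Express itself; all names are hypothetical) of what the Netflix post describes: routes kept in a linearly-scanned array, plus a periodic refresh that clears dynamic handlers but re-adds a static one every cycle, so the array grows without bound.

```javascript
const routes = [];

function addRoute(path, handler) {
  routes.push({ path, handler });
}

function refreshRoutes(dynamicRoutes) {
  // Bug: only the dynamic routes are cleared; the static healthcheck route
  // is re-added on every refresh, so duplicates accumulate forever.
  for (let i = routes.length - 1; i >= 0; i--) {
    if (routes[i].dynamic) routes.splice(i, 1);
  }
  addRoute("/healthcheck", () => "ok"); // duplicated on every refresh
  for (const r of dynamicRoutes) routes.push({ ...r, dynamic: true });
}

function dispatch(path) {
  // Linear scan: first match wins, but every stale duplicate before the
  // match still gets visited, so latency creeps up as the array grows.
  for (const r of routes) {
    if (r.path === path) return r.handler();
  }
  return null;
}

for (let i = 0; i < 100; i++) {
  refreshRoutes([{ path: "/api", handler: () => "api" }]);
}
console.log(routes.length); // 101: one hundred healthchecks plus one /api route
```

Which backs up the point above: the slowdown was the leak plus the linear scan of the ever-growing array, not anything fundamental to Express at a sane number of routes.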
I haven't been tracking. Is Express 5 GA yet? Any project that stalls out the way Express seemed to makes me think it's dead.
Any NodeJS web projects I run are on Express 4 and _still_ use express-async-router, helmet, and a hodgepodge of other add-ons to make it passable.
It is highly productive and useful, but I would never serve to the internet with NodeJS, much less with Express 4. Only internal apps. The scary NPM ecosystem of abandonware and resume fodder is not okay!
Dial Calipers... From my point of view, the PC and the microprocessor resulted in a giant economy that pursues technology production that won't necessarily improve user satisfaction.
Using that perspective I think digital circuits which can be created in any controllable media is what I'm pursuing. I'm trying to make large circuits that represent the physical environment and the logic of the technical tasks that need to be done. My thinking is that next wave technology improvement will come from large environmental circuits that represent the human use of the physical environment.
I see the pursuit of smaller and smaller circuits as only one method of technology development, best left to sophisticated global production. I think there will be a cottage industry based on local or "last mile" implementation of more esoteric but easy-to-understand systems.
The general purpose computer has resulted in something overly complicated and inefficient for most regular people's actual computing needs.
If the benefits of the newer tech outweigh its risks you use it. The challenge is weighing the evidence. Newer technologies will tout their advantages, but the disadvantages are not advertised as loudly, and are often uncovered after making the investment. "Boring" or "exciting" is the wrong framing.
Boring tech is really better described as an extremely late adopter strategy.
Maybe it makes sense for your business, maybe it doesn't. If you don't want to be on the forefront of technology, then don't. I don't think launching every startup with the late adopter strategy will necessarily result in better business performance. Those that find a better way to do things will win.
I agree with this article, but with one caveat. Don't be the guy who becomes the designated INSERT_OLD_TECH_HERE guy in the office while the rest of the team works with the cool modern tech that has a lot of jobs in the market. Don't pigeonhole your career by becoming an expert in legacy tech like COBOL, RPGLE, or FORTRAN. You'll find yourself in a position where you get fired and can't find another job, because there aren't many jobs for your skills and because companies only want to hire people with years of experience in the tech they're using, be it Javascript, Java, Go, etc.
And these legacy techs don't pay that much in general. For every consultant making bank fixing COBOL bugs there are probably dozens or more COBOL developers making less than a Javascript developer, so you'll find yourself making less money than a kid with 3 years of experience in web development. And when you go out in the market trying to switch jobs, you'll have a hard time finding a new one, or the salaries will suck. Don't be stupid and become the fall guy keeping the legacy debt going while everybody else in the company is learning the in-demand cool stuff and padding their resumes with hireable skills. You will regret it.
This is a great point. I think there's a balance between "OOH, shiny new thing" and "kids these days". It always pays to play with new tech or otherwise learn about it. Both for you the developer (who avoids getting pigeonholed) and for the business (because there may be new capabilities or cost savings).
But introducing it willy nilly into production applications because you want to gain experience with it is bad. Bad for the business, at least. For you it might be resume driven development.
That's why I always advocate for time and space for developers to play with new things on the company's dime. Some good options:
- conferences
- hackfests
- spikes
After some investigation, you can layer in new tech where it makes sense, which makes for an even more compelling story on the resume.
Of course, you also have to have business buy-in that this is a worthwhile use of time. R&D and investing have a lot longer history than the craft of software engineering, so that's the approach I'd take.
I hope part of the goal of this article is to fight back against these trends. If the decision makers can gain more confidence in choosing boring/mature tech rather than trendy tech, there will be more demand for it. Many times people choose the trendy tech, not because it is the right tool, but because of the reasons you mention.
I fully agree with this, but the problem with this idea is that if we only ever choose to adopt the most boring mature technology, it doesn't give any room for better to ever exist. Be selective and judicious, but also open-minded and willing to re-evaluate. Sometimes the exciting things are exciting for very valid reasons. I'm okay with hype cycles, even if they are annoying to me as someone who constantly has to justify why I'm not adopting them, as long as they lead to maturation of innovative ideas.
> Sometimes the exciting things are exciting for very valid reasons
Very true! I would also add: Sometimes boring things are boring for reasons that actually make them poor solutions.
For example, some systems require tedious, mind-numbing configuration. This is boring, but also a good reason to NOT use something. If it takes hours and hours of manual tuning, by someone with special training, then the solution is incomplete and at least needs some better defaults and/or documentation. It might be a poor option.
Another example is a system that does not lend itself to automation. Requiring manual interaction is certainly boring, but also does not scale well. It is a valid reason to disqualify a solution.
Boring can often be a smell--an intuition that a solution is not fully solving the problem.
Most of my career was building dev tools, test infrastructure, etc. The cardinal rule there was that it has to be boring. Practically, this meant no surprises: almost invisible, taken for granted, just facilitating things without getting in your way.
Similar to electricity, air, water, etc. Very few people really notice these things or talk about them. Till they stop working, in which case it's all people can think and speak of.
bold of you to assume i notified anyone about the holidays i scheduled three days prior knowing there was a board meeting that would trigger a crunch sprint
What is old becomes new, and what is new becomes old. You think you're creating some cool new "shitz", but someone else has already done it. You're just reinventing the wheel with some extra "shitz" tacked on
> Using tech that has been subjected to all those people hours of use means you’re less likely to run into edge cases, unexpected behaviour, or attributes and features that lack documentation or community knowledge.
Often this is part of the value proposition with commercial offerings - Obtaining access to solutions that work for people with much bigger problems than you.
This can differ subtly from the raw scale of deployment. For example, SQL Server and Oracle may not be as widely deployed as something like SQLite, but in the cases where they are deployed the demands tend to be much more severe.
I think a good example of boring vs old is wireguard.
Wireguard is a pretty simple vpn to setup, has a very predictable and stable behaviour which makes it boring, but it's fairly new compared to other vpn software.
Today I successfully ran `pip install -r requirements.txt` in a Python project I cloned from GitHub; I couldn't believe my own eyes. Usually it's at least half an hour of trying to figure out how to install dependencies.
In the JavaScript ecosystem, installing packages works, but the packages get deprecated within a few months. React and many other frontend frameworks completely change their philosophy and the recommended way of writing code so often that you need to rewrite your app every 1-2 years or be left behind with deprecated packages.
If it's still like the RoR I used, it can never be considered "boring", just old.
It was a framework where code you dump in "conventional" locations is autoloaded everywhere.
With DSLs that interpret the method name itself as an expression, reflected on in `method_missing` implementations you inherit from base classes.
Where state is shared between instance objects by way of reflection.
Where source diving is the documentation in many third party packages.
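For readers outside the Ruby world, the `method_missing` DSL style described above has a rough JavaScript analogue in `Proxy`. This is a hypothetical sketch, not Rails: a finder object where method names like `findByName` don't exist anywhere in the code, but are parsed as expressions at call time.

```javascript
// Hypothetical illustration of "interpret the method name as an expression":
// no findByX method is ever defined; the Proxy's get trap parses the name.
function makeFinder(records) {
  return new Proxy({}, {
    get(_target, prop) {
      const match = /^findBy([A-Z]\w*)$/.exec(String(prop));
      if (!match) return undefined;
      // "Name" -> "name": derive the record field from the method name.
      const field = match[1][0].toLowerCase() + match[1].slice(1);
      return (value) => records.find((r) => r[field] === value) ?? null;
    },
  });
}

const users = makeFinder([
  { name: "Ada", role: "admin" },
  { name: "Grace", role: "dev" },
]);

console.log(users.findByName("Ada").role); // "admin"
console.log(users.findByRole("dev").name); // "Grace"
```

It neatly shows both the appeal (terse, readable call sites) and the complaint above: you can't grep for `findByName`, your editor can't jump to it, and source diving the trap is the only real documentation.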
No, these were reasons why "rockstar" became a pejorative in the programming community for a while.
In general, having a product disconnected from the hype cycles has distinct advantages. The lower rate of code churn means fewer patch cycles that spike labor costs, and the number of unknown exploits/errors usually decreases with time.
If I recall, this was one reason GNU Octave integrated the Forth+Fortran code used at NASA, and kept the reasoned algorithmic assumptions consistent with legacy calculations.
All legacy code has a caveat that depends on if a team cared about workmanship. =3
> The number of people I’ve talked with who’ve replaced complicated K8s clusters with a few VMs and seen massive improvements in reliability, cost, and uptime would make some people at the Orange Peanut Gallery more than a little perturbed, for example.
If I'm being honest, I'm a bit tired of seeing this trope. The details _always_ matter, and when it's written like this, it comes across as universally true. Ironically, Kubernetes seems to pass the test of "boring" by the author's standards, considering how long it's been around and how many hundreds of thousands of clusters have been running on it over the years.
Do people with really bad architecture skills use k8s to design overly complex clusters? Absolutely. Does that represent the entirety of the k8s community? Not even close. Kubernetes is dangerous because of how far down the rabbit hole you can go. People who try to be Google on day one with k8s are generating so much unnecessary negative PR.
Kubernetes is like if someone snorted a bunch of HTTP RFCs and then decided to half-ass an operating system in an all-night bender. And then a (distributed!) propaganda machine convinced every middle manager ever to use it.
There are two ways to look at it. It's all been purely built up via propaganda, or there is actual merit in the tool. Ironically, it's a meta-game because propaganda supports each of those views.
But personally, I think it's silly to say Kubernetes has more of a propaganda issue than any other <insert big name framework>. There are "purists" like any other tool. However, I don't let them bother me so much that I just throw the baby out with the bathwater.
Boring tech is boring because it is reliable. You don't need to fight constantly with it. You can forget about it, and focus on your work. Maybe reliable equals mature.
Last week an issue came up where we have to upgrade from Postgres 12 to 16 because of EOL concerns. I said to another engineer, “Postgres is pretty boring so I bet it’ll be quite easy/low risk to do the upgrade.”
We got into a meta discussion about how we still have to approach it methodically, but the nice thing about boring, mature software is having a good gut feel that it probably won’t dip into the risk budget much at all.
Speaking of EOL-driven upgrades, the absurdly short "LTS" that most tech stacks have makes them decidedly not boring. Even Microsoft has fallen for this absurdity with 36-month "long" term support intervals for .NET versions. Are there any language stacks with an LTS that is at least five years (ideally much more) that aren't spelled in all caps?
Code is a liability and dependencies/vendors/libraries are liabilities I have even less control over. I hate when an internal release has to be "about" upgrading dependencies. I get it, I do. But I hate the idea of "we're going to focus on a thing that doesn't actually yield any tangible advancement of our goals."
Not that I'm unappreciative of what projects like Postgres, Django, etc. have given me. Like a good physician, I appreciate you and everything you do, but we'd both be happier seeing each other as rarely as possible.
I say that sometimes to people and they look at me weird. When you work on bigger projects with a lot of people, it is harder to argue why you shouldn't write the code than to just write it.
Like, for example, I had to code a build step that updated some assets and took about 5 seconds to run. That operation was done maybe once a month by other developers. During review another person asked why I didn't parallelize the process and cache already-processed files, and I was just like: it would add 200+ lines of extra code and error handling, and it's not that I mind writing it, I just don't think it's worth the overhead of understanding this code and troubleshooting any possible bugs in the extra optimization.
And it is harder to argue this kind of thing back and forth than it is to just do it. And now there are 200 extra lines of code that would take anyone else besides me at least an hour to grasp before they can make changes.
The same applies when discussing why you shouldn't add a dependency. If anything, that is harder, because you need to justify the extra time of not using the dependency.
Which is one of the most unappealing things about the marketing material for LLM services like Copilot. The issue was never the speed of writing code; more often than not, you're contemplating whether you should write it at all. And if you need to, how much of it should you write, and how do you make the eventual rewrite easy?
If you're experienced enough, you either know the rough way to code a task or realize you need to take time to investigate the problem space. I don't think I ever ask myself what I should do to write more code with less effort. All the improvements I've made were targeted precisely at the thing I wanted to edit.
Java, it isn't all caps, and has three years support, plus 2 extended support, in the case of Oracle, other JVM vendors offer even longer times, e.g. from Azul, https://www.azul.com/products/azul-support-roadmap/
Most compilers of ISO languages, some of them were all caps named, others not.
Postgres upgrades were actually annoying the last time I did one: I had to explicitly import data from a previous version into the new one, instead of the software just automatically detecting that the data was a version behind and upgrading the format.
This would probably not have been as big of a headache if it hadn't been running in a container, deployed as part of a separate project, which meant CloudNativePG (which probably handles this for you) was not an option.
If you're not using any weird extensions, you'll be fine. If your database is large with indexes and you're not doing it via a pg_dump/pg_restore, make sure to run a REINDEX, as PG13 introduced index deduplication. That saved us terabytes (though cleaning index bloat probably played a role, too).
Major-version OSS database upgrades? I would probably never consider this an easy/low-risk thing; anytime you have planner changes, it's something where having a baseline is really important to understand the impact on your workload.
It's doubly bad with postgres because the statistics get wiped after running pg_upgrade. They do tell you to run ANALYZE afterwards but that's yet more downtime.
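For reference, a minimal sketch of the post-upgrade steps mentioned above, assuming a `pg_upgrade`-style major upgrade; the database name is a placeholder and your setup may need different steps:

```sql
-- pg_upgrade leaves planner statistics empty; rebuild them first.
-- (The staged CLI alternative is: vacuumdb --all --analyze-in-stages)
ANALYZE;

-- Optionally rebuild B-tree indexes to pick up PG13+ deduplication.
REINDEX DATABASE your_db;  -- 'your_db' is a placeholder
```

Until the statistics are rebuilt, the planner is flying blind, which is the extra-downtime problem the comment above is pointing at.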
That's not great, do the bucket counts change or something between versions? It seems like statistics would be a thing that ... should not change while you are not looking!
Boring is also a desirable property of cryptography.
Boring cryptography is obviously secure.
The guiding principle for whether something is boring or not is the Principle of Least Astonishment. If I can, say, send you a ciphertext that was encrypted with an authenticated mode, and then decrypt it to two valid plaintexts using two different keys, this is astonishing even if the impact of it is negligible.
This is why, for example, airline systems at airports (not the self-check-in terminals, but what the agent runs behind the desk at check-in) and Costco's in-store product lookup software still run on AS/400 or some equally ancient non-GUI platform.
I think that people would be fine with working on boring tech if companies paid them more to make up for it. If they don't, then why should they be surprised that potential employees don't want to work on something that they find boring?
I think boring tech that is still getting active development, like the Linux kernel, is mature, not old. Tech that doesn't allow you to change anything because you're walking on eggshells is boring and old for sure.
Java. It's everywhere, and thanks to the JVM it won't fall victim to the "written for that exact hardware" problem like previous-generation stacks did. It's also very performant and safe, has an insanely large hiring pool (more than the population of my country), one of the biggest ecosystems (the other two in the top 3 are JS and Python, and I would argue that JS is a very frontend-oriented ecosystem and not as mature/stable for backend tasks, while Python is big in the data science direction; for regular software, Java's may be the widest), and very good tooling (observability, live debugging).
(Though curiously many of those systems still run in actual virtual machines emulating the whole mainframe and whatnot)
It represents a good fit to a local maximum somewhere. It doesn't mean whatever it's doing can't be done better, but it does contain a lot of information about its problem domain encoded into it.
One of the most satisfying statements I get to write occasionally is "Component X is now considered functionally complete and so feature Y will not be added to it."
Even framing the conversation this way is quite odd. What's wrong with being old? Since when did software expire? All the best software is old. And, frankly, all software is boring. So this leaves us with no way to distinguish quality software from the rest.
I highly recommend in the future starting from what you actually value about software. Oldness and boringness are not the reasons, and if they are, they are extremely bad reasons.
Eh, the opposite of being bored is being excited/engaged, and the opposite of being surprised is being predictable. I don't disagree that predictable is probably what we want for a lot of what we see around us (no one wants unpredictable traffic lights), but I think we are lying to ourselves if we deny that for at least some subset of our work life we want cool and shiny, so long as we stay within the bounds of business objectives.
Clojure was not "popular new" to be "boring" in a way, but it is boring in the sense that it works in a predictable manner, I use it for backend and data analysis, just works.
or it's partially unmaintained, a security nightmare, and not compatible with a lot of stuff you might need to be compatible with, too
boring tech is nice if it can get your job done, is compatible with modern security standards, and allows fast, reliable development
sadly that isn't always the case
especially security standards have shifted a lot in the last 10+ years, partially due to attacks getting more advanced, partially due to more insight into what works and what doesn't
deployment environment and pipelines have shifted a ton, too, but here most "old" approaches continue to work just fine
data privacy laws, including but not limited to GDPR, bring additional challenges wrt. logging, statistics and data storage
regulations in many places also require increased due diligence from IT companies in all kinds of ways, bringing new challenges to the software life cycle, dependency management, and location of deployment. Points like the 4-eye principle, immutable audit logs, and a reasonable standard of both dynamic and static vulnerability scanning/code analysis can, depending on your country and kind of business, be required by law.
If your boring tech can handle all that just fine, perfect use it.
But if you just use it blindly without checking whether it's still up to the task, it can easily be a very costly mistake, as costly as blindly using the newly hyped, widespread tech.