How boring should your team's codebases be (meadsteve.dev)
186 points by meadsteve on Oct 15, 2022 | 225 comments



There's never a simple answer to this.

Here are some of the things we encounter:

* A bored developer is an unhappy developer. Unhappy developers leave. Developers that leave take a swathe of domain knowledge with them.

* Your good developers are often the ones who like to tinker with frameworks, patterns and complexity. Note: good developers don't force this down people's throats, but they're always thinking about what they can apply in the future. That's not to say they can't be perfectly fine working on boring code. But they often get bored with it. They can be 5x as productive as your average developer when working on the boring code, but you're just ticking down a clock in a lot of cases.

* Complex code can be a hindrance to onboarding new developers. Boring code can be a hindrance to onboarding new features.

* You often end up in a situation where you're reinventing the wheel and you're spending increasing amounts of development time on keeping the wheel round. At some point you've got to consider a ready-made solution to your problem or consider hiring more people to deal with it.

* Technical leaders have a fine balance between keeping developers happy, keeping development velocity high and keeping onboarding speed high.

Creating a company off the back of a flavour-of-the-month tech stack isn't a good idea. But I can't see how any large software company can scale without having a bit of spice somewhere.


> A bored developer is an unhappy developer

A boring codebase doesn't make a bored developer, on the contrary it frees developers up to think about important stuff and deliver value to the business. Just as I want my language to be boring so I can focus on interesting stuff, I also want my tech stack to be boring - the interesting bits should be in the value added, not the stuff under that.

> Your good developers are often the ones who like to tinker with frameworks, patterns and complexity.

In my experience, good developers recognise that complexity is always the enemy of good code; at best it is a necessary evil. Developers enamoured of complexity or novelty are not good developers but usually beginners who think that new x must be better than older y.

> Boring code can be a hindrance to onboarding new features.

Not sure what you meant here, but IME boring code makes it significantly easier to onboard new developers and to develop new features - boring code meaning code that is easy to understand, does what it purports to, and has minimal layers of complexity and minimal architectural busywork.

There is certainly a place for novelty, particularly in a green-field development, but the tech stack is a tool in service of the developer/business, and should never be seen as the end goal or product.


I agree 100% with your first comment. The best developers care about the impact their work has above all else. Sure, they might prefer Ruby, or Golang, or whatever, but at the end of the day the results are what they are after.

However, somehow we’ve created a world where the tools and techniques matter more than the output. I have no idea why this is, but I see it every day where engineers want to refactor code and try every new trick or tool they can.

Keeping track of what is going on is generally good. Trying to apply every hot new thing to a production codebase is a recipe for disaster!


It's also wrong to say that tools don't matter. If you, say, try to shove async and multiprocessing Python into a system because Python is boring and tools don't matter, you are going to get absolutely fucked in terms of maintainability. Maybe you can afford that (and many startups can), but there's going to be a lot of survivorship bias sourced from non-technical things like "founder had connections" that are not reproducible at another startup.


This.

In my experience, people that insist on never adding new things can only afford to do that because they push all of the flashy problems their "boring" options create onto somebody else. And that somebody else tends to be severely underrated because they lose all their time fixing the crazy problems that come from ignoring tool selection.

But also, people that insist on novelty all the time can usually only afford to do that because they abandon their code as soon as the complexity of joining all the new pieces together starts to appear. Usually leaving it for somebody else to deal with.

As a rule, developers that always choose X, for almost any X, create many more problems than they solve. But they do make great strawmen to fight against in HN comments. As soon as you see someone equating those developers with the technique itself, you know the comment is worthless.


I think it’s a combination of 4 things:

1. Everyone is trying to make a name for themselves,

2. everyone is trying to maximize their income,

3. most people want to focus on practicing the mechanics of their profession rather than exercise real understanding, and

4. most people don't understand the global problem space they're in.


  > A boring codebase doesn't make a bored developer, on the contrary it frees developers up to think about important stuff and deliver value to the business. Just as I want my language to be boring so I can focus on interesting stuff, I also want my tech stack to be boring - the interesting bits should be in the value added, not the stuff under that.

This. I don't want to work with niche languages anymore, or with artisanal, hand-crafted in-house libraries/frameworks.

I want to use boring shit like the JVM or the CLR (.NET) and the libraries with the most answers on Stack Overflow, so that things "just work", issues are debuggable, the authoring experience in my IDE is solid, and I can use what few braincells I have to focus on building features + solving problems.

Niche languages and wacky libraries are for my weekend projects + personal fun-time.


I agree, but would use Rust instead of Java. Imo Java is a horrendous language that’s basically legacy at this point.


The JVM is very different from Java. Many Java devs use another JVM language such as Scala.

Rust is the future but not mainstream and still early. Can’t wait.


To be fair, they didn't say Java, they said JVM. That allows Kotlin, Scala, whatever.

Java is missing so much and has so many design errors (which are acknowledged as errors), I agree that it should be avoided.

I think Rust is a better language, but you can't exactly incrementally convert your Java codebase into Rust very easily. If you stay in the JVM, things like that are much easier.


Care to explain why? I get that there are some horrible Java projects and libraries that have really ballooned in complexity over the years but I still see Java itself as one of the better languages. Especially for corporate work.


>A boring codebase doesn't make a bored developer, on the contrary it frees developers up to think about important stuff and deliver value to the business. Just as I want my language to be boring so I can focus on interesting stuff, I also want my tech stack to be boring - the interesting bits should be in the value added, not the stuff under that.

That only holds true as long as the developer finds the end result interesting, rather than the puzzle of software development along the way. A lot of people are developing very boring software. It's hard to stay passionate about that sort of thing, which is why developers take it out on the code. It's hard to stay passionate even when you think the product is interesting but the journey is a slog. You will get burnt out.

I like programming in boring languages. I code Go for a living, and I think it's a relatively boring language. But I love Go.

>In my experience, good developers recognise that complexity is always the enemy of good code; at best it is a necessary evil. Developers enamoured of complexity or novelty are not good developers but usually beginners who think that new x must be better than older y.

Yes, complexity is evil. But it's a necessary evil, like you say. At some point you have to face the music. We write the same code 3 times, we extract it to a function. We use it in 3 separate places, we extract it to a package... and so on. This applies to every step along the way, from development to deploying to production. Someone is going to get fed up with repetitive, tedious work and they're going to write a tool or abstraction or something to fix the issue. You can't get away from complexity, but you can limit its scope.
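
To make that extraction step concrete, here's a minimal Go sketch (the function and handler names are made up for illustration, not taken from anyone's codebase): the same validation checks used to live in three handlers, so after the third copy they get pulled into one shared helper.

    package main

    import (
        "fmt"
        "strings"
    )

    // validateUsername is the extracted helper: the same checks used to be
    // copy-pasted into three handlers, so after the third repeat they were
    // pulled into one function (the "rule of three").
    func validateUsername(name string) error {
        name = strings.TrimSpace(name)
        if name == "" {
            return fmt.Errorf("username must not be empty")
        }
        if len(name) > 32 {
            return fmt.Errorf("username must be at most 32 characters")
        }
        return nil
    }

    // Three call sites that previously each held their own copy of the checks.
    func createUser(name string) error { return validateUsername(name) }
    func renameUser(name string) error { return validateUsername(name) }
    func importUser(name string) error { return validateUsername(name) }

    func main() {
        for _, n := range []string{"alice", "", "bob"} {
            fmt.Println(n, createUser(n), renameUser(n), importUser(n))
        }
    }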

Good developers aren't suggesting frameworks because they're newer, or refactoring because the code looks nicer, or anything like that. But a good developer is always tinkering and experimenting. If they think there's an opportunity to improve the developer experience at the cost of complexity, they'll try it and see what it looks like. Often nothing comes of it, but sometimes they find a nice middle ground. That's the sort of developer you want: one who is always thinking and not hacking out code just to reach a deadline.

>Not sure what you meant here, but IME boring code makes it significantly easier to onboard new developers and to develop new features - boring code meaning code that is easy to understand, does what it purports to, and has minimal layers of complexity and minimal architectural busywork.

"Good" boring code bases in my experience are flexible but tedious. You couldn't write half the software you write without someone sprinkling some of that necessary complexity into the mix. There's no room for unnecessary complexity. But if doing things in your code base is tedious, boring and eats up dev time, then there's an argument to be made that some refactoring is warranted. It'll introduce complexity, but the trade of it faster development and happier developers.

I work on a "boring" code base. I don't mind it, but I can see the duct tape holding it together. Adding new features is easy but time consuming. We said from the very start that we will not write complex code. And we stuck to that and we saw the benefits of conciously deciding that. Development velocity initially was rapid, but now it has tapered off. Getting buy-in for refactoring is a huge effort because everyone argues "complexity is evil", but I'm seeing my velocity drop off a cliff. What was once fun because it was simple, is now boring and tedious because it's too simple. We sidelined some basic patterns and complexity for that boring-ness. That's why there's a fine balance between what is simple vs complex.

How do you solve that issue? Do you stick to your guns and say "Yeah it's a slog and you're bored, but it is how it is"? Or do you work with the developers that are working on this code base 8 hours a day and reach a middle ground at the cost of your ideal "simple is better" approach?


> If they think there's an opportunity to improve the developer experience at the cost of complexity, they'll try it and see what it looks like

IME this fails most of the time. Perception of complexity varies across the team. A proactive developer pushing for a new shiny thing is unable to properly recognize the complexity of his solution, because he is the one most familiar with it. Familiarity changes the perception of complexity.

A lot of developers have a catastrophic combination of being non-confrontational and having impostor syndrome, so when the new shiny thing gets forced upon them they are unable to push back. They don't like it, but they think it's their fault, so they keep silent.

> "Good" boring code bases in my experience are flexible but tedious

It is true, but I have taught myself to love that tediousness. I treat it as enlightenment. I write way less code, but ship way more value.

It's really sad that most engineers are excited by solving their own problems, not by solving customer problems. Talk about misaligned incentives.

We even rank companies based on how much fun they allow us to have, not by how much impact they are allowing us to have. I've heard numerous stories about folks at Google not being able to ship to production for months, but still praising their tech stack. To me that's just insane.

> Getting buy-in for refactoring is a huge effort because everyone argues "complexity is evil"

That's strange. In my world most of the refactorings are aimed to reduce complexity, not increase it.


If you have never had to work with complexity-guzzling junior developers who cannot help but chase the new sparkle, consider yourself lucky to be working on a stable codebase that is readable. It sounds to me like the team has turned down some of your ideas that you feel would make the codebase "better".


> But a good developer is always tinkering and experimenting.

I strongly disagree. Developers build stuff, that's the job, not tinkering. Sure, everyone loves to tinker (most people love it far too much), and sometimes that is productive in the end, but it's the sort of thing you do in your spare time or quiet moments, not something you should see as central to the job - central to the job is turning ideas into reality, delivering.

> But if doing things in your code base is tedious, boring and eats up dev time

This is not what people mean by 'boring' software; they mean software they don't have to think about much, and when they do think about it, they understand it instantly. Boring in a good way - as simple as it can be, hiding complexity with the right abstractions.

I'm afraid I don't know the sort of code-base you're talking about, but if velocity slows down, it's a sign of problems. If you're talking about a boring mess, you're probably talking about a code base I would call interesting (in a bad way). Boring is not a mess, boring is stuff that just works, with few hidden gotchas and abstractions and a simple call tree.


> Technical leaders have a fine balance between keeping developers happy, keeping development velocity high and keeping onboarding speed high.

Very much in agreement that the best developers love experimentation and tinkering. I think about this kind of balancing act quite a bit, and I personally like very small teams of folks who are not afraid of a lot of spice, as long as the tools either have a decent community or are not impossible to hack.


> Your good developers are often the ones who like to tinker with frameworks, patterns and complexity. Note: good developers don't force this down people's throats, but they're always thinking about what they can apply in the future. That's not to say they can't be perfectly fine working on boring code. But they often get bored with it. They can be 5x as productive as your average developer when working on the boring code, but you're just ticking down a clock in a lot of cases.

In my experience that depends.

But the tinkering kind is often satisfied when they are able to tinker on their own code. Even (or especially!) if they're allowed to do it during working hours. But allowing engineers to literally hone their craft on the clock is something that is becoming rarer and rarer, unfortunately.

But I agree that a developer that refuses to admit failure of their experiments and wants to force their experiments on others is a problem, of course.

On the other hand, there's more to this job than coding, and a lot of people interested in "learning" will leave as soon as they find out there's nothing more about the problem-domain to learn.


Why does tinkering on the clock matter? Why not just hire people for 30hrs/week if that's what they want?

It's not the 1970s where you can't afford a computer at home.


A developer can be a programmer or a software engineer (my definitions).

Programmers tend to be interested in cool new paradigms, tools, libraries, etc...

Software engineers tend to be interested in delivering robust and correct features. Consequently they are more conservative.

When companies hire, they should be clear about what sort of developer they want. A programmer in a software engineering role will damage the code base and eventually leave in frustration.


> Programmers tend to be interested in cool new paradigms, tools, libraries, etc...

Yeah well guess what, a company does not care one iota about what its employees' personal interests are, or what they think is cool. And it shouldn't.


This is nonsense. A company that doesn't care one iota about how its employees feel about their job ends up with high turnover, which dramatically increases costs in the long run. Good companies strike a balance between the needs of the employees, the needs of customers, and the bottom line. Any company that fails to strike that balance will lose in the long term.


The definition of a company and its goals and motivations is clear, and it has no consideration for employee amusement. That's what the Christmas party and company picnic are for, and they happen outside of working hours.

If you push your own amusement as a priority at work, at the expense of the goal of the company, you literally make the working environment dysfunctional.


This philosophy only works if you live in one of three simplified universes: either your employees are fully bought in to the company's goals, or you can always pay them enough money for them to keep working for you without any intrinsic motivation, or your employees really are interchangeable cogs and retention doesn't matter.

None of these simplified models reflect the real world. No employee is fully bought into the company's goals. Odds are you don't actually have enough money to beat all other offers. And the cost of turning employees into interchangeable cogs is that you need a lot more of them than if you're willing to let them be individuals.

The result is that while the company has its own goals, those goals are best served by making sure that the employees are at least happy enough that turnover is kept low. And a big part of keeping creatives happy (not just in software) is letting them try new things and experiment.

(This is aside from the tangible benefits that your organization gets from allowing people to be creative, which I think is not negligible.)


Boring codebases should be generated, not maintained.

Codebases become boring because bad tools require a lot of repetition and meaningless boilerplate (which also encourages mistakes). Experience with bad tools is much less valuable because they can’t amplify my time and effort. If a job did not allow me to maintain valuable and marketable skills, they would have to compensate me a lot for creating a résumé gap and making future job searches harder.

Incidentally I also found this to be sort of a drawback with Google. Experience with completely proprietary platforms also has little value outside the one company where they’re available.


That makes no sense, it's just a question of role-candidate fit. Why would a company not care?


If the boring code is working, it's easier from a time and reliability perspective to go play and experiment in another repo.

R&D or having a “lab” repo for developer experiments is a separate lane until it’s good enough for production. It’s a wonderful way to highlight what possibilities and capabilities are being explored.

Or, you can use the latest and risk having to constantly update it and let it take away time from experimenting.


> Your good developers are often the ones who like to tinker with frameworks, patterns and complexity.

This reminds me of Spolsky's "Smart, but doesn't get things done" anti-pattern.

Smart doesn't mean being a good, productive team member.


> A bored developer is an unhappy developer. Unhappy developers leave.

Software development is the only profession where people expect (and demand) to have fun at work. The official line in this society is that fun is what your free time is for, and work is for getting things done and making money. Why is this different for software development?

Why do people think they can "play with new technologies" at work, which is wasting not only company money, but other people's time as well? It's so incredibly unprofessional, maybe it's time for software developers to grow up and start acting like adults in the workplace?


> Why do people think they can "play with new technologies" at work

The same reason software developer salaries are so high. Supply and demand. Companies will do back flips to attract and keep software engineering talent.

> which is wasting not only company money

Some of the best cost saving solutions I've come up with in my career have been thanks to "playing" with new technology. It's the opposite of wasting company money.

> maybe it's time for software developers to grow up and start acting like adults in the workplace

There is nothing childish about loving your work, having fun while doing it, and bringing that attitude to work. It's infectious and great for company morale. Maybe it's time for the negative nancies to grow up, seek therapy, and start acting positive at work and having some fun.


> There is nothing childish about loving your work, having fun while doing it

Expecting to love your work and have fun doing it, is incredibly naive, yes.

> bringing that attitude to work. It's infectious and great for company morale.

Exactly the opposite, relationships become incredibly strained with the rest of the company and there is a lot of frustration when dealing with the software department. Developers are disliked in the company because they are selfish unprofessional time wasters.

The users are just dumbfounded when the developers present their 8-month rewrite of a perfectly working system, in a trendy React framework, now with only 80% of the features and a slower web page. It's really embarrassing.


> Expecting to love your work and have fun doing it, is incredibly naive, yes.

Then I guess I've been naive for over three decades. And getting paid great for it too. I suppose I'll never learn this important lesson considering I'm close to retirement.

> Exactly the opposite, relationships become incredibly strained with the rest of the company and there is a lot of frustration when dealing with the software department. Developers are disliked in the company because they are selfish unprofessional time wasters.

I could imagine that happening with you. There are always a few envious types in any company who think the entire company feels the same way they do. These same folks are envious that the sales people get huge bonuses for "spending all their time on the golf course". Considering I have work relationships going back decades it's safe to say there are plenty of people in most companies who aren't like you.

> The users are just dumbfounded when the developers present their 8-month rewrite of a perfectly working system, in a trendy React framework, now with only 80% of the features and a slower web page.

Of course they would be. That doesn't sound like fun at all. That sounds like a project that didn't take user requirements into consideration. Fun and poor requirements definition sometimes come together, but you are confusing correlation with causation. Loads of overly serious software engineers get the requirements wrong too.

I hope things get better for you and you learn to have fun while still being highly professional. It's a great club to belong to.


There are a lot of reasons to rewrite an existing system. Maybe the existing system is built in a language that is impossible to hire for in the area you’re in. Maybe the test coverage is awful and the code makes it very difficult to write tests. Your tone is frankly pretty condescending and the only reasonable answer to when it makes sense to do a rewrite is “it depends”.


Dunno where you're from, but where I'm from, it is pretty much the opposite world. You're a red flag if you don't see yourself as having fun. Yes, this includes most webdev shops where one does the equivalent of shoveling virtual manure. (No, I don't agree with this, either.)

>Why do people think they can "play with new technologies" at work

Because software dev is a step above code monkeying and people value being able to grow, which requires experimentation.

It's unprofessional when there are higher priorities or you can't make a potential business case.


> The official line in this society is that fun is what your free time is for, and work is for getting things done and making money. Why is this different for software development?

Two reasons. 1) Because devs generally have other employment options. 2) Management has no idea how to judge what devs do.

1 is important, because devs can often go work somewhere that lets them work on fun technologies. This is usually at a smaller company and often comes with lower pay, but not so low that anyone would consider it suffering (still easily into the 6 figures).

2 means that the manager only sees the following options: Tell the dev they can't play with fun tech, risk them leaving, and spend many thousands of dollars on recruiting a replacement (who may also leave). OR Tell the dev they can play with the fun tech, and nothing immediately breaks (and the manager can go back to worrying about their day-to-day problems).

Taking the second option is not necessarily an irrational choice for the manager. Especially if that manager is using an "up or out" strategy (their plan is to be promoted or leave before the long term effects of these decisions can bite them).

Such is the realpolitik of the business world.


> Why do people think they can "play with new technologies" at work, which is wasting not only company money, but other people's time as well.

Because that is where new ideas, new approaches, innovation and knowledge creation happen? In software, like many other knowledge-work professions, you can't just put in a few years at university and then stay on top of your game by simply showing up every day for 40 years. The daily challenges are too diverse and change too quickly. Continuing education is a significant part of staying in the game, mixed with daily application of this knowledge.

Think about it this way - if you're a high-value developer, you've already put in thousands of hours of R&D and practiced with numerous technologies. Much of it on your own time, much on a previous employer's time. This knowledge is the basis of your current skillset. It's why companies pay you a salary. It's how you adapt to novel challenges in daily work for which you, by definition, require a larger body of knowledge to draw from. You know best how to build that body of knowledge.

Some companies are apparently more than willing to reap the benefits of R&D that was done on someone else's watch. But when it comes time to allow R&D on their time, suddenly it's "wasted" and derided as "play". The only thing that's wasted here is the developer's talent. When the "adults in the workplace" behave like this, making arbitrary engineering decisions about what is legitimate work vs play time - despite lacking the engineering context or credentials to make such a bold determination - that reeks of unprofessional behavior.

It would be like me micromanaging a lawyer's document list in preparation for their case. Or me telling a doctor which medical journals they should/should not be keeping up with. I'm not qualified! So I shut up and let them work, and put my trust in their deep knowledge and hard-gained experience. That's professional.


> Software development is the only profession where people expect (and demand) to have fun at work.

Humans in general demand fun at work whenever they have the leverage to do so. That so many don't have fun is not an example to be emulated, it's a problem to be solved. People are more effective when they enjoy what they do. And software is far from the only profession where people are able to pick jobs because they enjoy them.

A few examples from my own immediate experience:

* In the last year our company hired a full-time designer whose express reason for switching jobs was that our project was more fun.

* My dad is a producer for a radio show. He got that job by twice switching away from jobs that he'd stopped enjoying and he now loves what he does.

* My brother-in-law went into law because he knew he'd enjoy it, and he has a blast every day. He's always telling a story about some crazy new legal edge case he ran into at work.

* My wife is a musician and music teacher and absolutely loves it. You should see her when she has her instrument out.

Instead of trying to rob the fun from the creative professions, we should be trying to find ways to help more people enjoy their work. Even aside from the obvious humanitarian benefits, this would hugely improve our economic output compared to the status quo, where so many people are basically cogs in a machine.


>Why do people think they can "play with new technologies" at work, which is wasting not only company money, but other people's time as well?

This has been a thing for as long as humans have been working. It's because through experimentation you may discover something that increases productivity.

The guy who strapped a combustion engine to a field plow was probably playing with new technologies at work. And it no doubt cost a lot of money. But now the world is better off because of it.

Yeah, engines are more complex than horses. But productivity skyrocketed. These trade-offs are made every single day, every single year for as long as we've existed.


You must have only ever done minimum-wage unqualified work to be saying that.

Everyone wants a job that they love, and that requires having fun at work. People leave jobs, any kind of jobs, because they're bored and not having fun.


Do you really think that in other fields people accept that their doctors, lawyers, finance people, etc. should "have fun" and steer their decisions away from straightforward solutions because of it? They don't. Only in software.

I had a heated argument with a colleague who was fighting to use MongoDB for a new project, even though there was a company-wide decision by upper management to NOT use MongoDB in the company. And his only justification was that "he took this job to play around with new technology". If I had the power to, I would have fired him on the spot, because he had clearly misunderstood the very basics of the employment contract he had signed.


This isn't true. Developers like tinkering with new frameworks. Doctors like experimenting with new treatments or procedures or drugs, often very immature and untested. Lawyers will seek opportunities to flex a new legal strategy or argument in order to make a name for themselves or to impress a judge, even if it may not be the safest strategy for the client. Finance people do just about anything to avoid simple, straightforward, time-tested investment strategies.


> Do you really think that in other fields people accept that their doctors, lawyers, finance people, etc. should "have fun" and steer their decisions away from straightforward solutions because of it? They don't. Only in software.

Two things:

1. Both my parents work in law, and both have changed jobs, taken on new challenges, or focused on a specific area because they were interested in it and wanted to have "fun" at work. Eminent doctors will also frequently filter their patients to focus on cases they deem interesting.

2. You are saying that companies actively support non-optimal solutions just so that software engineers can have fun. It's not the case at all. Maybe bad engineers pushed bad technical choices because they wanted to have fun, and the company went along with it because the bosses didn't know better, but that's driven by the bad engineer, not by the company.


Humans didn’t evolve to sit at a desk all day. We can adapt to it, but some sort of feeling of adventure helps motivate. Interest in the work engages more of the brain and can make one more productive.

I’ve cultivated the ability to become excited about picking apart grody system failures. That never gets old, and unlike adding a hot new framework to the stack, the cumulative effect is always to make things work better.


Just as one tiny counterpoint, the company I work for [0] builds a database written in Zig, not C. We implemented our own consensus using Viewstamped Replication [1], not Raft. And we built our own storage engine on LSM trees rather than use RocksDB [2].

A lot of this we built ourselves so we can do FoundationDB-style deterministic testing [3] of the entire system, which would not be possible with off-the-shelf libraries because they are not deterministic.

Another goal was to not allocate memory after startup, a goal that most third-party libraries would disrupt.
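
As a rough illustration of that static-allocation idea (a generic sketch in Go, not TigerBeetle's actual Zig code; the names and sizes are made up), every buffer is sized once at startup and slots are recycled from a fixed pool, so the hot path never touches the allocator:

    package main

    import (
        "errors"
        "fmt"
    )

    const (
        maxMessages = 4   // hypothetical capacity chosen at startup
        messageSize = 256 // fixed size per message buffer
    )

    // Pool owns all message memory; it is allocated once at startup and
    // slots are recycled instead of allocating per request.
    type Pool struct {
        buffers [maxMessages][messageSize]byte
        free    []int // indices of currently unused buffers
    }

    func NewPool() *Pool {
        p := &Pool{}
        for i := 0; i < maxMessages; i++ {
            p.free = append(p.free, i)
        }
        return p
    }

    // Acquire returns a preallocated buffer, or an error when the pool is
    // exhausted; it never grows, which keeps allocation out of the hot path.
    func (p *Pool) Acquire() ([]byte, int, error) {
        if len(p.free) == 0 {
            return nil, 0, errors.New("pool exhausted")
        }
        idx := p.free[len(p.free)-1]
        p.free = p.free[:len(p.free)-1]
        return p.buffers[idx][:], idx, nil
    }

    // Release returns a buffer slot to the pool for reuse.
    func (p *Pool) Release(idx int) {
        p.free = append(p.free, idx)
    }

    func main() {
        pool := NewPool()
        buf, idx, err := pool.Acquire()
        if err != nil {
            panic(err)
        }
        copy(buf, "hello")
        fmt.Println(string(buf[:5]), "using slot", idx)
        pool.Release(idx)
    }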

Sometimes the boring options do not further the technical goals of your project. :)

[0] https://github.com/tigerbeetledb/tigerbeetle

[1] https://pmg.csail.mit.edu/vr/liskov12vr-abstract.html

[2] http://rocksdb.org/

[3] https://apple.github.io/foundationdb/testing.html


Agreed, though not strictly a counterpoint. I do enjoy novelty (and benefit from it). I just want the right amount of it.


That is not a counterpoint. If you said your choices did not have any cost or risk compared to the “boring” choice, that would be a counterpoint.


Ah, I think this is actually a very interesting example. My understanding is that TigerBeetle started its development under the company Coil, and was eventually spun out to its own company. I think this post may almost be speaking directly to Coil's decision to fund an in-house, specialized database project written in Zig.

More power to you guys for actually getting to work on such a fun project, but I wonder if that was a poor decision on Coil's part, as reflected by their eventual decision to jettison that part of their tech?


Every time this topic comes up, it reminds me of a web app I wrote back around 2007 that was deployed to over 2000 locations. I deliberately used "boring" technologies. The entire front-end used under 100 lines of JavaScript. The backend was simply SQL Server, and the queries were written in SQL instead of some ORM. The output was just HTML. No special tooling was used, no "minification" or "tree shaking", or any such thing. Just hit the build button and "copy to deploy".

For about a decade I used to turn up to that customer annually for "maintenance", which primarily involved importing some CSVs that changed every year, and also updating the logo images and icons to match any rebrands.

In that time the system had two million users, went through 4 OS upgrades, 3 database upgrades, and went through the 32-bit to 64-bit upgrade also. The underlying runtime had 3 or 4 major updates, depending on how you count it.

Zero outages, no problems, only the occasional performance regression that could be fixed by poking the database statistics to get things back in their groove.

The problem was...

You see, all of the above was a problem, because it didn't keep me employed. I was not the "hero" for saving the day. Entire teams of people weren't involved. There was no visibility at the senior management level. Nobody got scared, or had to throw money at it, or hire consultants to review it.

So it had to go.

It was replaced by a system that cost about 500x as much (a 9-digit sum), got rolled back for failing to meet requirements, and then got additional funding and was eventually forced upon its hapless users.

That, apparently, was doing things "properly". That got everybody involved. Everyone got their beak wet. All the way up to government ministers and their lobbyists. Multiple consultancies were engaged. Reports. Audits. Reviews.

This is why we can't have simplicity: because it doesn't scale.


Isn't the moral of the story that you didn't charge for it correctly?

If you had charged a fixed annual maintenance fee, then you would have felt very clever having made a ton of money without having to do anything.

Administrators would have also felt good that they had you as insurance of sorts, because the way things stand you make no money and can disappear at any moment.


I think they would have still been uneasy that he was a SPOF (single point of failure), even if he had charged a lot more. And there's also a cognitive disconnect if a one-man deliverable costs above a certain threshold.


The SPOF fallacy is always management's favorite way to waste money. At my current shop we had a system maintained by 1 guy, as maybe 10-20% of his job/time. I can guarantee you he is not paid even $500K.

This of course was deemed bad & risky, so we had to engage a set of vendors to deliver a replacement. 2 years, a dozen subcontractors, and a 7-figure annual bill later... and they still haven't replaced 10-20% of this SPOF. Literally spent $5M so far against at most $100k of this guy's salary. There are still no signs that the SPOF can give up the responsibility even next year.

No one has been fired over this. In fact the decider has been promoted.


Both of these issues are often solved by incorporation.

A one-vendor deliverable can cost whatever, and corporations can certainly agree to keep supporting something for a certain time.

Doesn't actually matter if the corporation has one member, and no coherent plan for what happens to the contracts if that person steps in front of the wrong bus. It makes the relationship legible in the way that the contracting party is comfortable with.


It's called a business continuity plan, and TBH most companies drop the ball in that regard.


What does SPoF even mean?

I could probably step into this system and be effective in it w/i a single month, probably less.

Is that a SPoF, or is the multimillion-dollar system that no one understands well its own SPoF?


I've seen this at startups. Management starts off informal, and promotes people that have good visibility. In the early stage, these are people that quickly churn through feature lists and deliver solid code.

Later, drunk on its own success, management continues to promote based on visibility. In a mid-sized company, the most visible engineers are the ones that manage to ship broken code, then respond to nail-biting, business-continuity-ending pages at 2AM.

At this point, the engineers that built the product and keep the lights on vest out, are passed over for promotion, and then leave.

In the next phase, the company's product stops delivering on its core competency, but hopefully it has monopoly leverage, so whatever.

Finally, the big company has a come-to-Jesus moment and tries to course correct. This step is fraught with peril. It rarely works, and instead usually leads to a revolving door of process-fiddling / agile-promoting execs. This happens because execs that could solve the problem necessarily realize the root issue is middle and upper management; organizational antibodies pick up on this and isolate such threats to the status quo.

The only execs that succeed at this point are the ones that somehow delegate 100% to low level managers and ICs while giving their peers the impression they are micromanaging and making massive organizational realignments or something.


It is also possible that this project and you being the only one handling it were analyzed as a risk.

It doesn't justify how the handover/rewrite happened, but why it did is somewhat understandable, if not plain reasonable.

Typically, employees with walled gardens (willingly or not) are a massive liability to their company.


That's not what he said; it sounds more like anybody (reasonable) could have handled it alone, or a small team for redundancy. Boring, standardized, battle-tested tech is the opposite of creating a walled garden, and he states (understandably) that this was the problem, for him and the product itself. Very sad story, but seeing what lasagna-code, over-and-under-engineered-at-the-same-time madnesses are put up today, more than completely believable...


Nowhere in the grandparent's post says that it was a "walled garden", or even that it was closed source. The fact that only one person was needed doesn't mean there's only one person available. OP even said he worked for a company in a reply. The rationalisation automatically assumes that the grandparent is either incompetent or lying by omission, which is very uncharitable.

Even if all those problems were true, if it was really analysed as risky, the proper thing to do is to bring in one or two more engineers, perform audits, ask for the full source if it's not available. Ask for documentation. Heck, OP said it's not minified: try to reverse engineer it, if need be. Perhaps it's not even necessary!

There's absolutely no need to bring a 9-digit-sum team to replace a working system made by one person, even if this is common practice in our industry. Not before all other rational avenues are pursued if there are problems.

What also pisses me off is that what happened on the other side might have been caused by companies like the ones I worked for. For a long time I worked for consultancies, and it was routine for sales to "translate" our feature lists into procurement rules (sorry, I don't know the term in English) and give that to companies and government so we would be the only ones able to answer.

And the worst part is that software engineers go along with this tune because they so enjoy overengineering everything from scratch.


I didn't say it was a walled garden. But management has its own ways and quirks; I said it was possible that the situation was seen by management as a walled garden.

Seen as.


And I already answered that in my second paragraph.

Taking the nuclear option after merely "seeing [something] as" risky without exhausting the much-cheaper remaining options is not "somewhat understandable, if not plain reasonable". And it's not "ways and quirks": it's incompetence at best or corruption at worst.

This kind of situation might be common, but it is not understandable nor reasonable.


Seems like you are the one being uncharitable.

For better or worse there are tons of both reasonable and unreasonable factors as to why a large company would replace a part time developer's side project with something that costs 9 figures.

You don't know those reasons, the person you replied to doesn't know those reasons, and in fact the OP probably doesn't even know those reasons (they "used to turn up to that customer annually for maintenance").


>"Seen as"

Not understanding that it can be simply and cheaply fixed by training a second person is gross incompetence. Those single-cell morons should've been fired instead.


What details must be missing for 100 lines of JS and a SQL backend, with or without an ORM, to be a walled garden?


>"It is also possible that this project and you being the only one handling it were analyzed as a risk"

If that was indeed the case then hire a "backup" person. Way cheaper.


Absolutely.

But would you be surprised by such a decision?

Here's a proposal: as organizations grow, the size of a solution becomes increasingly proportional to the size of the organization rather than to the size of the problem.


This is definitely one narrative, but there are lots of other reasons that could contribute to a change. A few:

1. Your system was likely more complex than you describe here: what's generating HTML with dynamic data between the database and the client? Did your "100 lines of JS" have any dependencies?

2. Maybe your company wasn't charging for this simplicity and peace of mind correctly. Companies would pay for SaaS-style products that looked a lot like subscriptions even back in 2007; we just didn't call them that.

3. It sounds like this was run on-prem. That's expensive (and scary) for a lot of companies if supporting software is not part of their core skill set.

4. We're not solving the same problems today as 2007; much of that low-hanging fruit has been picked. I'm guessing your original system was internal facing at your clients; everyone wants to integrate into much broader client-facing workflows now.

5. If you were only doing annual updates not much was changing. That's awesome but implies a pretty static problem domain.

There are countless more motivations, and the baseline has shifted dramatically. I'm not saying the reason you present is wrong or not the primary one, but it's dangerous to attribute malicious intent when there are lots of "simpler" reasons as well.

>> This is why we can't have simplicity: because it doesn't scale.

I'm not sure your example leads to this conclusion. Simplicity is a set of abstractions. When we expand the domain broadly enough they start to leak. This is related to, but not the same thing as scaling.


I do feel like the necessary complexity of SQL maintenance and dependency patching was swept under the rug here. But then again, maybe the client completely firewalled development and operations.


I have personally seen this within organizations. Certain leaders can appreciate the simple approach, but others can't, because it doesn't increase headcount. For other leadership types it is drama and increasing headcount that drive their careers.

Also you the developer will get scant recognition for finding the simplest solution. That’s the kind of thing that doesn’t get appreciation up the management chain (usually.) It should but it doesn’t.

The perception is not: wow this will save us millions over time by allowing us to do more with less. The perception is: so this guy did this project that turned out to be simple.


It's the same problem as preventing versus curing: the latter is much more expensive but much more flashy. In most companies the owners are the only ones who would care about doing things as efficiently as possible, but those are often also the most removed from the line work. They get all their information filtered through middle managers who are competing with each other for their next promotion, so unspectacular news often doesn't make the cut for being passed upwards.

The main exception I've found so far is in making tools/systems for myself, since then it is easy to convince the owner about the benefits of simplicity and easy maintenance :)


100 lines of JavaScript and a backend consisting of stored procedures that output HTML does not sound like a ‘boring’ technology choice. It sounds very exciting - anyone approaching this codebase to do things to it will likely have lots of very interesting questions about how this system handles lots of things.

The solution might have good answers to all of those questions! It is perfectly possible to build a well-engineered system using those technologies!

But on almost every level the answers to those questions are going to be surprising.

Whereas the same thing built using webpack and a Ruby on Rails to Postgres backend will be much more legible.

‘Boring’ technology does not mean no technology.


I understand the annoyance of it being replaced for a more complex more expensive system, but I would also like to know: What was the reasoning provided and what did decision makers truly believe about the whole thing?


> What was the reasoning provided

Reasoning? Hah-ha. Haaaaa...

It was an open government tender process, for which neither I nor the company I worked for was eligible, despite the tender being "open". You see, a decade-long pedigree of actually having implemented the software used for this purpose did not qualify us to replace it with a v2.0.

There are rules, you see? They have to be followed! Or else.

Or else bad things might happen, like money being wasted.

The fact that the end-result of this process was that a 9-digit sum was spent on something I spat out in my spare time in under a year -- and was used for a decade -- was of no relevance.

> what did decision makers truly believe about the whole thing?

Their concerns started only when the whole thing blew up and started making headlines. Then nothing happened to them personally, so their concerns evaporated along with the taxpayer funds they had wasted.


I think most readers, including myself, empathize with you and understand the frustration and absurdity. But you are also telling just one side of the story (yours), and I imagine that the v2.0 specs had certain requirements and features, possibly required by legislation, that needed to be followed and implemented. When you say, dismissively, "There are rules, you see? They have to be followed!", that's when I, and likely others, start to wonder if you are really providing the full story, or if you actually even understand the differences between your simple app and the updated version.


Nah, this is the full story 99.9% of the time. I worked for government and this happened all the time - nobody ever got fired for choosing a 500x more expensive IBM general solution over something customized for the stakeholders that has 0 issues, a million users and 0 incidents. I have personally had many such products, being on the government side once, and on the private-vendor side after that.

One example - I created a Help Desk system for the public finances of an entire country using Redmine and other FOSS tools. The cost was 0, the time to implement it was a single year of not-so-focused work, and it served hundreds of thousands of people. Then IBM took over with its service desk, implementing it for years and costing infinity. They could get into the tender, I could not, since my team and I are a small company. The funny thing is that the stakeholder subteam abandoned it and returned to my solution (with 0 maintenance since I left the company).

This is typical. You need to know how government works to understand it. I understand it, but do not approve of it. I am also not frustrated about it; it's just how this world works currently, in the majority of countries as far as I know.


The same experience in public healthcare.

I saw the project pay for a few villas for the people involved at the top, and it was discontinued due to missed deadlines and missing functionality.

No one was fired, sued, or even scolded for wasting millions in taxpayer money.

All you can do is laugh. This world is a joke.


> Nah, this is the full story 99.9% of the time. I worked for government and this happened all the time

Same experience, also in the private sector.

> This is typical. You need to know how government works to understand it. I understand it, but do not approve of it. I am also not frustrated about it; it's just how this world works currently, in the majority of countries as far as I know.

Yep, I mean the issues with unnecessary jobs and inflated projects and budgets are not exactly news. I think it's just part of society's struggle to adapt to a post-scarcity economy without shortening working hours. It's not really surprising that it also affects software.


No this is really how stupidly it works. Government software consulting is insane.

The licensing/certification stuff basically creates monopolies.

My spouse worked at a digital agency a decade ago that, it turned out, was basically a near-monopoly provider of certain types of software for the local government.

The thing was, none of the work was actually done by them. It was all subbed out to 3rd party dev shops who couldn't qualify themselves for the required licensing. Further, they subbed out all the dev offshore.

So the government was both overpaying for offshore devs, and thinking they were spending money locally because the intermediary happened to be local.

They could have gotten the same work for 40-50% cheaper just by skipping the front company, or spent the same and hired the actual local devs they thought they were getting.


You're either preaching to people who agree with your perspective or talking to a well-worn HN persona where all management is incompetent nincompoops and the world would be a better place if only devs had unilateral powers in all areas, including those where they have no experience or even visibility. You are being quite charitable to place the majority in the former category. See follow-on comments (both current and soon-to-come) for supporting evidence.


Just to agree with the OP, I've just gone through a government tender process to buy a piece of software for my organisation. The number of people who could bid on the tender was incredibly limited. We've ended up with a 'solution' where the best and cheapest company was excluded from bidding. Mainly because they struggled with our byzantine tendering process, which gives us 'best value' according to our procurement team. It's not the only broadly failed IT system we have that has gone through these processes, so it's not a one-off either.

We're currently busy throwing away solid pieces of open source software that have worked well for years in favour of enterprise garbage.


Government software contracts are never meant to succeed. They are meant to burn as much cash as possible. Everyone I know who has worked in Arlington has the same story. Huge headcounts. Billable hours. Literal coked out VPs on yachts.


This is too simplistic a view of the state of affairs.

If that was so, countries would not work at all. There is always a service that needs to absolutely work, or your government is fucked, and lots of its people with it. For those projects you absolutely need to hire those that will provide the desired outcome without failure. Most of the services are not so crucial, and in those you can have such failures without much of a problem; it even seems "good" sometimes, as you must employ a number of people to fix service mistakes constantly.


If you don't see that the world is slowly falling apart due to corruption and incompetence, it's hard to even discuss.

It has not fallen apart "yet" because there are really hard-working people with duty in their hearts.


Your only mistake, then, was that unlike a consultancy-based solution, not enough people were able to take credit for it. I know it sounds counterintuitive, but to sell an idea, it's best to make every buyer think it was theirs all along; only then will it stay in place.


In the US, especially with federal money, this would be ample justification for a congressional inquiry and a potential fraud, waste, and abuse claim.

The usual outcome of the investigation is uncovering a bunch of people just saying that they were doing what they were told to do, and no one taking the common sense approach of looking at the current vendor. It might push one or two incompetent middle managers into retirement.

That said, it may get fixed for the next round of bids. It may lead to long-term change depending on which congresspeople were involved.


> congressional inquiry

The only result of this would be millions more dollars "investigating" version 1, led by the bureaucrats who made the decision to build a v.2, including paying an army of consultants to find every possible flaw and non-compliant feature, in order to justify their decisions. The horns will really come out then... v.1 did not achieve 100% accessibility according to OSHA, cookies had the potential to leak data, the JS packages underneath were not vetted and compliant, no guard against denial-of-service, the list of possibilities is endless... point being, when you force gov't officials to find a flaw in something because their job is being questioned, they essentially have unlimited resources to find that flaw and justify their own existence.


Oh, now it makes complete sense.

Yes, government tender has rules. And if the decision makers don't follow the rules, they can suffer all kinds of consequences, including personal bankruptcy and jail time. Obviously they wouldn't bend the rules just because it would save government money and lead to a better outcome.


You did get to keep your integrity as an engineer and have a good story to tell.

That is worth a lot.


> This is why we can't have simplicity: because it doesn't scale.

No, it's because it doesn't sell.


Agreed. Half of people want simple, maintainable solutions. The other half want complex solutions that people can glom onto, like how politics works.


I call this the tragedy of software development.

Well-written, minimal, robust systems are not appreciated or welcomed. Heck, most of the time people don't know they exist.

In our industry the heroes are the firefighters and the firefighters are usually the pyromaniacs.


> You see, all of the above was a problem, because it didn't keep me employed. I was not the "hero" for saving the day. Entire teams of people weren't involved. There was no visibility at the senior management level. Nobody got scared, or had to throw money at it, or hire consultants to review it.

there are a few other instances of that:

- an old article about Michelin (the French tire manufacturer) quoting some scientist of theirs: "We can make a million-hour tire... but what would we sell?"

- recently people said their Rust code caused too much downtime for coders because it was too stable too early

flip side of the same issue:

- very often people game their work to secure benefits: stash duties for later so you can appear busy or overwhelmed (and claim a promotion because you have so much to do)

The global system doesn't reward true optimization; it allocates people to useless tasks, at best for lower risk, when smart people doing things solidly and fast could be using their talents on other problems.


Yea government work is full of that.

Kickbacks galore: my brother works in ed-tech. He said one state rewrites its public school report card system every two years like clockwork because of that.


You know, the crazy thing is that moderately sized private companies I've worked at (500-5000 people) have plenty of that kind of BS.

I have seen mediocre software or service vendors sold as a panacea to my last two shops: skip the POC phase and just get purchased off the back of someone very, very senior (not in IT) being buddies with the founders. No users or technical people were asking for it... it just gets rammed down from on high until a reasonable enough niche is found to put it into PROD.

It ends up being a solution in search of a problem, with a couple years finding where to use it, a couple years finding it inadequate, and then a couple years removing it again.

Great money if you can get it.


It sounds like your project ended up working out well (ignoring the replacement). But one thing that would be hard for me when starting a project like this: How do you know that over time it won't grow into something terribly unmaintainable? You don't have an ORM, but then perhaps over time you re-implement most of the functionality of an ORM, and now new people need to learn that. Of course, you can start without one and bring one in when it is needed. But in my experience that's hard to actually do, because feature N + 1 needs to be implemented now and there's no time to migrate everything over to the ORM that would have been nice to have to make feature N + 1 easy to implement.

I'm just using ORM as an example, of course.

Anyhoo, I think there are probably some other dimensions than "boring". Seems like you used "less" tech, but I'd say in the Java world Spring and Hibernate are boring, or at least "popular", in the sense that you can hire devs anywhere with some experience.


By devoting time to code maintenance and refactoring in between features N and N+1 (or at least N+M). The code doesn't just magically go from 5-10 SQL queries to being completely unmaintainable without an ORM overnight. When and if it grows into that, you'll see it coming.

That doesn't work, of course, if you're not considered to be "working" unless you're hacking on a new feature right now that'll be deployable by the end of the week, but it seems like OP was allowed to develop in a sane way.


> The code doesn't just magically go from 5-10 SQL queries to being completely unmaintainable without an ORM overnight.

I'm not sure in which world an ORM helps with maintainability...


You either start with or without an ORM, depending on your assessment of whether the project is gonna need one.

If you start without one, you still have to partition your code well enough so that retrofitting one doesn't cause a huge mess. Basically keep your "raw SQL queries" in a centralised place (file or folder), rather than strewn across controllers/views/services. And you should do exactly the same if you use an ORM. Isolate the fuck out of it and make it "easily removable" or "easily replaceable".

Also keep the "effects" of your ORM or your non-ORM away from those other parts too: your controllers, views and services should be totally agnostic to whatever you're using in the data layer. When you add subtle coupling you lose the possibility of changing it, but it also makes your project less maintainable.
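
For illustration, a minimal sketch of that kind of isolation might look like this (Go with database/sql; the package, table and field names are hypothetical, and placeholder syntax depends on the driver):

    // queries/users.go - the only place in the app that contains SQL.
    // Controllers and services call these methods and never see SQL
    // or ORM details, so either can be swapped out later.
    package queries

    import (
        "context"
        "database/sql"
    )

    type User struct {
        ID    int64
        Email string
    }

    type Store struct{ db *sql.DB }

    func NewStore(db *sql.DB) *Store { return &Store{db: db} }

    // UserByEmail is one named query; callers never build SQL themselves.
    func (s *Store) UserByEmail(ctx context.Context, email string) (*User, error) {
        const q = `SELECT id, email FROM users WHERE email = $1`
        var u User
        if err := s.db.QueryRowContext(ctx, q, email).Scan(&u.ID, &u.Email); err != nil {
            return nil, err
        }
        return &u, nil
    }

Retrofitting an ORM later would then only touch this package, not the callers.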

This is easier said than done: in dynamic languages or with structural typing like TypeScript it's very easy: it's all objects anyway, so ORM or no ORM it's the same. In stricter languages like Java it might lead to lots of intermediate data structures, which are verbose and cause problems in themselves. Or the middle ground: use primitives (lists and maps) rather than classes and objects, although ORMs like Hibernate will make things difficult for you, since they're not too flexible about how they're used and their types tend to "creep" all over your project.

-

Most unmaintainable projects don't become unmaintainable because people "forgot to prepare". They become unmaintainable because people assumed everything is permanent, so there's no penalty to using everything everywhere. So there are "traces" of the ORM in the controllers and views, the serialisation library and serialisation code are called in models and services as a "quick hack", the authorisation library is called from everywhere because why not. You quickly lose the ability to easily reason about code.

The same applies to other areas. I could write a treatise on how game developers love sticking keyboard-reading code absolutely everywhere in the codebase.


> How do you know that over time it won't grow into something terribly unmaintainable?

Like pretty much all Angular projects?


I think you may be biased towards the reasons to replace it, given that it was your creation, and we're not hearing the whole story.

It may well be that the system was difficult to maintain _because_ it used a bespoke framework of vanilla JS and handcrafted SQL Server queries.

Or that they wanted to improve the workflow of importing CSVs, and build modern features around it, which would be a mountain of work.

Or that the company outgrew it and it was difficult to scale.

Or, you might be right, and it was politically and financially driven. But then the technology choice wouldn't have mattered, and you could've chosen a more complex stack just as well.

I appreciate the sentiment of trying to keep things simple and not jumping on the bandwagon of the latest trends, but sometimes choosing a popular framework is not a bad idea. Particularly in corporate environments where the project is not owned by a single person, churn is high, and new developers are expected to eventually take over maintenance.


One day of maintenance per year is not "difficult", that's basically the point!

I didn't use or create any JS frameworks, which is a part of why the maintenance was easy!

The customer was a government department, and their scale changed only with population. That is: slowly.

> sometimes choosing a popular framework is not a bad idea.

Ironically, the replacement product used popular but out-of-date technologies such as Enterprise JavaBeans. They overused OO paradigms to a hilarious degree, and needed something like 2000x the server capacity to host their application.

Keep in mind that the data, userbase, requirements, etc... are all identical. This is a like-for-like replacement.

They needed an entire team of people just to babysit the infrastructure, which now took a decent chunk of a data center. My app could have handled the production workload while running on my laptop.


Alright, it sounds like a typical government gig then.

Then a lesson in this case is to use big, corporate-approved technologies, regardless of how inefficient, costly and complex they might be.


Well, but everyone can read a couple hundred lines of JS, and SQL doesn't really change unless you use some obscure RDBMS.

Same with HTML.

I'm siding with the parent. It's not a technological problem, it's a societal one.

Sadly job security involves making everyone's life difficult.


>> deployed to over 2000 locations

Were there 2000 independent systems / SQL Server instances running, or just one? 2K separate deployments to manage (with 1K users each) does sound a little scary. Of course, perhaps that is not what is going on at all.


Which is actually kinda funny, because some of the "complex" technology the OP is railing against allows us today to manage thousands of databases both easily and efficiently... IF the systems are built with a more current approach. This is why I try to understand ALL of the motivations for disruptive change and not immediately assume incompetence and self-interest bordering on criminal.


Sometimes complexity is conflated with lack of familiarity. Instead of using the term "complexity", we should state what the actual problem is.


Option one:

Write a CloudFormation / Terraform template that involves O(1) machines and deploy 2000 identical copies.

Option two:

Write a template that deploys O(N = 2000) interdependent services across roughly 3-10x as many machines, and deploy one copy.

From what I can tell, you are arguing for option 2. It is strictly worse than option one. In addition to being more complex, it has a few nines less reliability, and costs 3-10x more for the hardware. The dev and CI hardware budgets are going to be 10x more because you can't test it on one machine, and it has bugs that only manifest at scale.

Source: I do this for a living, and have been on both sides of this fence. Option 1 typically has 5-6 nines (measured in chance a given customer sees a 10 second outage), option 2 never gets past 3-4 nines (measured in at least N% of customers are not seeing an outage).

The modern vs old technology debate has nothing to do with this tradeoff. If you want, you can build option 2 with EJB + CORBA on an IBM mainframe, and option 1 with rust and json on an exokernel FAAS.


I'd argue for Option 3, which is to try to understand the workloads placed on the original system and then design the new system based on this. I think having 2K independent database servers would not normally be optimal for 2M users, but it is possible.


If the old system is exceeding uptime SLAs, meeting all business needs, and coming in under the budget for such an investigation (it sounds like the total operations budget was less than 10% of one engineer's time), then why bother?


I don’t know the situation, not touching it may have been optimal. I’m suggesting that if it was going to get re-written, I would at least study the basic parameters of the problem by reviewing the workload of the current system.


I wasn’t clear in my description, unfortunately.

This was a multi-tenant centrally hosted application. There were 2000 sites served, each with kiosk PCs and some associated special-purpose hardware.

The actual application code ran in just four virtual machines in two data centres.

No templates, no Terraform, no microservices, etc…

Just vanilla ASP.NET on IIS with SQL Server as the back end.

The efficiency stemmed from having a single consolidated schema for all tenants with a tenant ID as a prefix to every primary key.

Shared tables (reference data) simply didn’t have a prefix.
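
For illustration only, a rough sketch of that consolidated layout (hypothetical table and column names, SQL Server-flavoured types, Go on the query side):

    // A sketch of the single-schema multi-tenant idea described above:
    // every tenant-owned table is keyed by (tenant_id, id), shared
    // reference data has no prefix. Names here are made up.
    package tenancy

    import (
        "context"
        "database/sql"
    )

    const schema = `
    CREATE TABLE visits (
        tenant_id  INT       NOT NULL,  -- prefix on every tenant-owned key
        visit_id   BIGINT    NOT NULL,
        kiosk_id   INT       NOT NULL,
        visited_at DATETIME2 NOT NULL,
        PRIMARY KEY (tenant_id, visit_id)
    );

    CREATE TABLE reference_codes (      -- shared reference data: stored once, no prefix
        code        VARCHAR(20)   PRIMARY KEY,
        description NVARCHAR(200) NOT NULL
    );`

    // Every query is scoped by the tenant prefix, so all tenants share
    // one database and one copy of the reference tables.
    func VisitCount(ctx context.Context, db *sql.DB, tenantID int) (int64, error) {
        var n int64
        err := db.QueryRowContext(ctx,
            `SELECT COUNT(*) FROM visits WHERE tenant_id = @p1`, // placeholder syntax depends on the driver
            tenantID).Scan(&n)
        return n, err
    }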

The vendor product that replaced this was not multi-tenant in this sense. They deployed a database-per-tenant, and lots of application servers. Not one per tenant, but something like one per ten, so two hundred large virtual machines running twenty instances of their app.

Multiply the above for HA and non-production. The end result was something like a thousand virtual machines that took several racks of tin to host.

Management of the new system took serious automation, template disk image builds, etc…

The repetition of the reference data bloated the database from 50GB to terabytes.

It “worked” but it was very expensive, slow, and difficult to maintain. It took them several years to upgrade the database engine, for example.

That task for my version was a single after-hours change. Backup or rollback was about an hour, simply because the data volume was so much lower.

The simplicity in my solution stemmed from a type of mechanical sympathy. I tailored the app to the customer’s specific style of multi-tenant central hosting, which made it very efficient.


Both approaches are valid for multi-tenancy, with their own pros and cons.


Of course, it is hard to say without knowing more about it, but it seems that jiggawatts' solution is closer to optimal than the second one. The 50GB database could fit on a USB drive, after all, and we know empirically that a single SQL Server database was able to handle the requests, since the old system worked.

Also, the fact that a consulting company was able to turn a part time gig for one person into a $100M+ project at the taxpayer's expense is very frustrating.


Typical technical forum, thinking they know the best solution based upon a one-paragraph description.


That’s all we have presumably.


2000 SQL Server licenses sounds terrifying.


Both the old and new systems were using licensing based on processor cores, not VMs or instances.

If I remember correctly, my version had something like 8 + 8 cores in an active/passive configuration where the passive node is free. There was also a single dev/test server also with 8 cores, but that's free too.

The replacement used a few hundred cores shared by the various instances and environments. If I remember correctly, they had something like 10-20 databases per virtual machine, and then about 5 virtual machines per physical host. The cores in the physical host were licensed, not the logical layers on top. (I can't remember the exact ratios, but the approach is the point, not the numbers.)

The "modern" cloud approach of having dedicated VMs for a single thing is actually terribly inefficient, and that approach would have bloated out the above to thousands of VMs instead of "merely" a few hundred.

The correct architecture for something like this -- these days -- might be to use Kubernetes. This provides the required high availability and instancing, while efficiently bin-packing and deduplicating the storage.

Still, you can't Helm-chart your way out of an inefficient application codebase.

Again, for comparison, my version could run on a laptop and had about half a dozen components, not thousands.


I think you mean: we can't have simplicity, because it doesn't sell!


Figuring out how to reward simplicity, reliability, and maintainability feels like one of the most important unsolved social/human/economic issues in the software industry.

Seems there's only incentive to simplify at small companies where the employees feel they can save their own time or increase the value of their equity by delivering value to customers more efficiently. At large companies employees work 40-hour weeks regardless of their output and they're trying to impress a performance review committee, not customers.


Stop billing hourly. Instead bill for the value you provide. How you provide it should be immaterial, and so you can do it as cheaply and efficiently as possible while still reaping heaps of money as long as it creates value.

That's how all other markets work. Billing hourly is the death of progress.


I honestly think you did the right thing--your conclusion is more a cynical take than a truthful claim. You clearly had organizational problems, and that's beyond the scope of your code.


We don't have simplicity because people are inept. I would trust you to write a small CRUD PHP database app without any security issues; however, the next goon that comes along jams in a bunch of $id = $_GET['id']; INSERT INTO ... WHERE id = $id; and you have a major security issue.

Frameworks exist so you don't roll your own stupidity into a bigger problem.


How secure was the app compared to the new one?


Fantastically.

The new one had "hand rolled cryptography", which should make you twitch uncontrollably if you know anything about security.

The new application had, among other failings, hard-coded (unchangeable!) RSA keys used for communication channels. As in, all customers shared the same keys. I can't remember the exact specifics, but I swear at some point there was something like encrypted JSON in XML. Or was it encrypted XML in JSON? Does it matter which?

The old app that I wrote would happily take JavaScript or SQL snippets as inputs to any text field and do The Right Thing.

You don't want to know what happened to the new app when it was tested with malicious inputs.

The testing team were told "not to go too hard on it", because that would "derail the project".


Horror stories like this are always fun to read.

I've become a fan of avoiding ORMs and APIs between the front end and back end for websites. Want a page that shows a dashboard of xyz? Write the right query that fetches exactly what you want, render the HTML, and return it.

Super simple, and abstractions are kept to a minimum. No SQL->ORM->API->frontend, each with their own twist on how they model the world. A splash of JS (perhaps via HTMX or Alpine), and this can take you a long way.
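
A minimal sketch of that flow, assuming Go, a hypothetical orders table, and html/template (which escapes interpolated values by default):

    package main

    import (
        "database/sql"
        "html/template"
        "net/http"
    )

    // The page template lives next to the handler.
    var dashboardTmpl = template.Must(template.New("dash").Parse(
        `<h1>Latest orders</h1>
    <ul>{{range .}}<li>{{.Customer}}: {{.Total}}</li>{{end}}</ul>`))

    type orderRow struct {
        Customer string
        Total    string
    }

    // dashboardHandler runs exactly the query the page needs,
    // renders HTML, and returns it. No ORM, no JSON API in between.
    func dashboardHandler(db *sql.DB) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            rows, err := db.QueryContext(r.Context(),
                `SELECT customer, total FROM orders ORDER BY created_at DESC LIMIT 10`)
            if err != nil {
                http.Error(w, "query failed", http.StatusInternalServerError)
                return
            }
            defer rows.Close()

            var recent []orderRow
            for rows.Next() {
                var o orderRow
                if err := rows.Scan(&o.Customer, &o.Total); err != nil {
                    http.Error(w, "scan failed", http.StatusInternalServerError)
                    return
                }
                recent = append(recent, o)
            }
            dashboardTmpl.Execute(w, recent)
        }
    }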


A few years ago I started a dashboard project that was mostly raw SQL.

I then saw the team wanting to convert it to ActiveRecord, which they started. But lots of queries had to use AREL (Rails' "low level SQL AST abstraction"), since they weren't really possible or just too difficult to do in ActiveRecord.

But AREL is so incredibly unreadable that every single AREL query often had its equivalent in plain SQL above it, as documentation, so new people could understand what the hell it was doing.

In the end some junior was unhappy with the inconsistent documentation and petitioned that every query, simple or complex, AREL or ActiveRecord, had to be documented using SQL above the AREL/AR code.

Then they discovered that documenting using Heredocs rather than "language comments" enabled SQL syntax highlighting in their editors.

After that we had both: heredocs with the cute SQL and some unreadable AREL+AR monstrosity right below it.

I still laugh about this situation when I remember it.


Another great horror story, thanks for sharing :)


> The old app that I wrote would happily take JavaScript or SQL snippets as inputs to any text field and do The Right Thing.

Confused here. Where were input validation checks in your implementation? How did you guard against SQL injection, etc?


Presumably they just used whatever standard mechanisms their SQL driver provided (such as parameterised queries). A user inputs text in a comment box, you insert it into the database using such a mechanism, and it's safe.

And if you're using, for example, Go's templating library, then it automatically escapes everything in HTML templates unless you explicitly override this default behaviour.


Well if it was only 100 lines of plain JS then how would one guard against reflection attacks? I.e. submitting HTML (like script tags) then getting that to render when others view the tainted data.


Because in this way of building sites, the user-submitted data is escaped before it reaches the browser. E.g.: https://go.dev/play/p/MmNSxU5QfAb (hit run to see the output).

The JS wouldn't need to do any escaping, because it's not trusted to handle any unescaped data. It's operating on the already-escaped html template.


They certainly weren't using Go, or as stated, any framework. Also no mention of any type of web server; not sure what magical code was creating dynamic HTML from the database. Where was the business logic? Stored Procedures? No mention of more dynamic functions... No integrations... Sure sounds like a desktop browser-only app while the majority of the world today wants some mobile functions from almost every system.

There is a lot of missing information, which is understandable but also conveniently supports a very unflattering narrative while simultaneously promoting the OP's awesomeness.


I think you're reading them far too strictly. I don't think they literally meant they were using nothing beyond JUST the SQL Server and then somehow getting HTML out of that, with 100 lines of JS on top. Unless I misread, I don't see anything that implies they weren't using something like PHP or ASP, for example.


To me this is the hilarious part.

Q: "how do they use the workarounds needed to secure the more complex approaches?"

A: "those security concerns don't exist in the approach, no workaround needed. That's part of the simplicity".

It just represents a fundamental misunderstanding, but it's not their fault, they've never seen anything else. Like someone using a JWT instead of a session cookie.


>Like someone using a JWT instead of a session cookie

Those aren’t mutually exclusive, you know.


Just put the queries in procedures with parameters. Only store the procedure calls in your backend, disable arbitrary queries completely in your database permissions.
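
For illustration, a sketch of that locked-down shape (SQL Server-flavoured DDL, hypothetical names, Go on the application side):

    // Setup run by a privileged account: the application login can only
    // EXECUTE the procedure, never touch tables or run ad-hoc SQL.
    package main

    import (
        "context"
        "database/sql"
    )

    const setupSQL = `
    CREATE PROCEDURE dbo.GetUserByEmail @Email NVARCHAR(200) AS
        SELECT id, email FROM dbo.users WHERE email = @Email;

    DENY SELECT, INSERT, UPDATE, DELETE ON dbo.users TO app_user;
    GRANT EXECUTE ON dbo.GetUserByEmail TO app_user;`

    // The backend only ever issues parameterised procedure calls.
    func userIDByEmail(ctx context.Context, db *sql.DB, email string) (int64, error) {
        var id int64
        var e string
        err := db.QueryRowContext(ctx,
            `EXEC dbo.GetUserByEmail @p1`, // placeholder syntax depends on the driver
            email).Scan(&id, &e)
        return id, err
    }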


> The old app that I wrote would happily take JavaScript or SQL snippets as inputs to any text field and do The Right Thing.

I can't be the only one here who is both skeptical and a little turned off by someone who says "You can stick any user input into a database query and you'll be fine", with a condescending pat on my head.

Your comments continue to be incredibly one-sided and biased. The summary is "My work was perfect and the new system a steaming pile". Perhaps this contributed to your replacement.


Do you understand what SQL Injection is?

Fundamentally it's mixing data and executable code such that the DBMS cannot properly distinguish between the two and can inadvertently treat data as executable code.

Parameterized queries very explicitly tell the DBMS "this is executable code, and this over here is data". Nothing anyone puts in the data will ever be mistaken for executable code by the DBMS. THIS IS SAFE.
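
A tiny illustration of that code/data split, assuming Go's database/sql and a hypothetical users table (placeholder syntax varies by driver):

    package main

    import (
        "context"
        "database/sql"
    )

    // Unsafe: user input is spliced into the SQL text itself, so the DBMS
    // may parse it as code. This is the classic injection mistake.
    func findUnsafe(ctx context.Context, db *sql.DB, name string) (*sql.Rows, error) {
        return db.QueryContext(ctx,
            "SELECT id FROM users WHERE name = '"+name+"'")
    }

    // Safe: the SQL text is fixed; the input travels separately as a
    // parameter, so the DBMS can only ever treat it as data.
    func findSafe(ctx context.Context, db *sql.DB, name string) (*sql.Rows, error) {
        return db.QueryContext(ctx,
            "SELECT id FROM users WHERE name = $1", name)
    }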


It is only safe for the SQL server. An injection attack could still be targeting a cache (to poison it with e.g. a malicious script), the browser (to steal data via XSS/CSRF) or the user (show an error message telling them to contact malicious number).


What the person said

> "You can stick any user input into a database query and you'll be fine"

Besides which, pretend SQL Server is a glorified cache, the result is the same.


> I can't be the only one here who is both skeptical and a little turned off by someone who says "You can stick any user input into a database query and you'll be fine", with a condescending pat on my head.

Like how Google has worked the past 2 decades? OP said snippets then you gloriously paraphrased it into a completely different statement.


That's what parameterised queries are for. SQL injection isn't hard to avoid.


“ was it encrypted XML in JSON? Does it matter which?”

I’m sure there were meetings where it was discussed at length and the stupidest idea prevailed, because other peoples’ failures are more useful than shared successes in such an environment. And probably for “security reasons.”


This comment was better than the post!


It feels like there is an article like this every other week. They reflect the same generic view, which is broadly true yet I think not very useful as advice. In a highly creative field like software, competitive advantage often outweighs comparative disadvantage. In other words it might very well be the case that a company that takes a chance on something unusual with a higher opportunity cost will outcompete competitors. How to make the "right" choice of unusual is where the interesting questions lie, but that is highly context-dependent and can't easily be generalised into a blog post.


Whether your code is declarative/imperative or whether it has all the hot new packages as featured on HN has very little to do with making software products "competitive" or having a "comparative advantage". It's always going to be about whether the product solves the customer's problem more conveniently than the competition does.


...and whether it will continue to do so.

Maintenance over long time is hard, requires experience, architectural choices, risk analysis, balancing tradeoffs, and obviously a disciplined team.


Yes, but whether it will continue to do so, once again, has little if anything to do with the tech stack.

I think you’re getting at the point about how tech debt can bog down product development and the progress of a business, and I fully agree that it should be avoided, but tech debt avoidance is hardly related to the tech stack or programming style as much as it is to architectural decisions. And, when you’re looking at a longer shelf life for your code, then the article becomes justified in making your codebases as boring as possible, because then you’d like to employ only the patterns that have truly stood the test of time, instead of polluting your code with novel approaches.


> I think you’re getting at the point about how tech debt can bog down product development and the progress of a business, and I fully agree that it should be avoided, but tech debt avoidance is hardly related to the tech stack or programming style as much as it is to architectural decisions.

I think all the people who are adopting Rust, who wouldn't have touched C++, would disagree with you.


Boring isn't necessarily old. Not-boring isn't necessarily novel either.

Design Patterns are from at least 1994, when GoF wrote them down. Ideas like DDD are some 20 years old. As is e.g. Event Sourcing.

Actually, by your reasoning, Event Sourcing would be the preferred architecture for everything, since the oldest written human sources were event sourced (grain bookkeeping).

As always, what is best fitting "depends". Battle tested is one, but never the only one, parameter to decide with.


> Maintenance over long time is hard, requires experience, architectural choices, risk analysis, balancing tradeoffs, and obviously a disciplined team.

Exactly. That's the reason Facebook gave for adopting Reason (OCaml).


> In a highly creative field like software

Not in 2022. Half a century ago, yeah, it was a highly creative field, but these days almost everything has been done. (Even ML is decades old, the hardware caught up.) And this is a good thing!

We have been doing this long enough that it should be "boring" now, in the same way that constructing buildings or bridges is "boring", eh?

> In other words it might very well be the case that a company that takes a chance on something unusual with a higher opportunity cost will outcompete competitors.

Sure, but that's a decision that should be made cautiously, with good reasons to think it will work. In other words, innovation should be "boring" too, at this stage in the game.


Reminds me of Sendgrid. At the beginning, they were faced with using an off-the-shelf MTA (mail transfer agent) or writing their own. As a core competency, writing their own turned out to be a major competitive advantage.


This is a very good point. There are definitely "risky novel" choices that could make your company a success. But I've also seen many teams drowning in a soup of random tech choices.

I'd love to be able to write some more specific advice on this topic but mostly I just want people to be mindful of the impacts of their choices and actively choose risk rather than having it sneak up on them.


My first time as a team lead I saw value in little side experiments on non-core parts. Now I think that's only OK if there is time and budget to roll them back if they prove to be a bad fit. Otherwise they accrete and become a drag on velocity.


I disagree with the fundamental premise put forward here.

Software should be written to meet specific needs.

And those needs should be defined.

And it’s likely a software project will have many tens of needs defined.

And amongst all those needs it will become clear what technologies fit the needs.

A blanket statement like “choose boring technology” only fits projects where the project needs result in that outcome.

Saying “projects should be built with boring technologies” is the equivalent of saying “projects should be built to NASA launch spec reliability”. That MIGHT be true, but having a predefined idea of what the requirements are puts the cart before the horse.

Requirements come first, then after that come statements about how things will be done.

My guess is that systematic definition of requirements will result in very very few projects “built using boring technology”.


My experience is that requirements are almost orthogonal to what stack will be chosen. The CTO going to a conference, the tech lead being bored, a founder having a friend who knows "shit", developers padding their resumes, and so on impact the choice way more than actual requirements. 99.999% of projects can be done using any mature language + PostgreSQL or a PostgreSQL-compatible NewSQL variant.


If I were to rewrite my blog post to fit your second sentence, I would title it: "Please consider onboarding new staff as a need for your software". This is just another consideration when architecting. I absolutely will use cutting-edge tools if they are needed. But we just need to think about it a little.


Indeed the availability of developers who know the tech stack is a key requirement that should be defined and assigned an importance level.

And if you’re working in a business building a modern web application, then it’s extremely hard to imagine the stakeholders being happy with software developed using the absolute lowest-common-denominator tools, techniques and libraries, as advocated by the “boring software” movement.

Competitive advantage and the expectations of customers lead to the need to use modern tools, techniques and libraries.

And in fact developers deserve these things too, because they tend to make things easier, more powerful and more reliable.

If the stakeholder who is paying for the project says “can we have an animated user education intro”, and you say “no, because we use boring technologies and our developers might not understand how to use the animation APIs”, then I think your job would be at risk pretty quickly.


Cannot agree more.

Some cases need a very boring CRUD setup, yet others will require distributed events passing around CRDTs.

I've worked on terrible projects built with boring tech that was perfect for simple CRUD, but where the project wasn't anything CRUD at all and instead parsed huge datasets of time series, combined with events and changesets. The 'boring tech' was such a bad fit that the project ground to a halt, cost enormous budgets just to keep running, and was impossible to move forward.


Couldn’t disagree more. I think you’re missing the point. You can use boring technology (Golang) and simple patterns (no generics) and build something that works, and is readable, and maintainable, and secure as a consequence of this. I’ve seen this again and again. Then you can expand your code as business needs change, but making your code flexible from the start is in the same basket as over engineering and premature optimization.


Your example and conclusion aren’t a good match IMO. Choosing a restrictive language is not something you can easily pivot from later when you need more flexibility. It’s a huge commitment, so making the right choice for the future is not premature.


On the other hand, with an expressive language you have to force your employees to use a subset of the language, which usually doesn’t go well.


What frustrates me about the software industry is the many-failed-lineages problem. There are many mountains of code that get the job done at various companies but aren't shared. The lineages of these in-house frameworks or effective solutions to problems kind of don't go anywhere; they simply end. The lineage ends and doesn't cross-pollinate. So lessons aren't shared. We are in an era of explosive growth where there is a new framework or new library or new technology introduced constantly.

I learn from whitepapers and reading English descriptions of people's problems. I find this easier than reading someone else's code. Code is extremely powerful but it has a very high maintenance cost and change costs. Just getting up to speed on the Postgresql codebase or the Linux kernel or any other system tool is hard work and an investment.

You kind of need to work 40 hours a week to get familiar with a codebase to be effective in it. But I can get familiar with a problem outside the context of a codebase by reading a whitepaper and writing some code. Reading a whitepaper feels like a more effective use of time than reading someone's codebase. You need to jump around a lot of code to understand the codebase.

I think the effort for the ideal solution is underestimated. People abstract to their own understanding. And that abstraction might not be intuitive to other minds.

I would prefer to work on a codebase that solves problems effectively than a codebase that is novel.


> I would prefer to work on a codebase that solves problems effectively than a codebase that is novel.

I have a design for a very very boring code base that involves no frameworks or external libs. It works fantastic and has been used by some of the most productive teams I've managed. Those teams can manage "more services than we have people on the team by a scale of 2-4x" (quote from one of the engineers on one of those teams).

But it's crazy boring. There are no frameworks to download, no dependencies to update, no big brand pages. It's just following some very simple rules, very strictly, about where to put different kinds of complexity.

It's a hard sell in any programming ecosystem because it has no brand power.

I've debated creating a "no framework" spec around this. I even have a tool that can enforce the standards in CI. I just don't know if anyone would actually care.


I still consider "no framework" as a novel choice (most of the time): https://blog.meadsteve.dev/team-work/2022/10/13/how-boring-s...


I agree with you in general but I disagree in the specific case.

I think using frameworks encourages developers to follow some standards, in much the same way everyone was very excited about microservices as a concept that could encourage decoupled software. Somehow we end up with a mess in both, and it's exactly for the point in your link: "There’s a good chance you’ll end up creating an “ad hoc and informally-specified implementation of a framework"". I think this is ultimately true for any element of software design that is not specified and enforced, and over time, with enough complexity added to some areas, it creates the mess.

So the real "feature" is formal specification and enforcing the spec. In other words: rules and constraints for designing software. An architecture. You get a kind of "off the shelf" architecture with any framework you pick up and use, and this of course segues into the age-old story about outgrowing frameworks and frameworks not being good at some specific problem, the codebase slowly getting too complicated for what we do, etc. The "spec" the team is using (wittingly or not) doesn't get updated.

When you have two devs implement the same concept in 2 different ways, it's a hidden disagreement about the project. It should get discussed but often it doesn't.


I like this argument, because using a framework, or a library, implies that it might be established and that there are already a lot of resources, docs, and Stack Overflow questions on it. Not using a framework most often means no docs.


Is it open source or only available to clients?


I think I grossly oversold this thing because there's a lot of comments here asking for something.

I don't really have this concept written down anywhere like a number of other ideas I have. But, I guess the short version is, if I had to make an elevator pitch or something: No framework is a configuration (maybe "distro" in the linux sense) of concepts (maybe "packages" in the software sense). A concept is either something you might use a framework or library for (and usually it exists somewhere), or it is something you would want a linter to find, and it might even be something that you want to ensure was done correctly at code review. I think this last one is the most accurate idea of what a "concept" is.

Over time I have accumulated a small, informal set of "packages" that can be implemented without a helper library in nearly the same amount of code as if you were to use that library anyway. The important part is that the running software doesn't depend on the third-party code; instead, the developers depend on a rule book, and anything that violates the rules should be treated the same as calling a third-party package's API method that doesn't exist. In other words: the dependency remains entirely in concept-space, not disk space.

This link below is not "no framework", but it is something I wrote where you can see the result of "no framework thinking". The concepts are stolen from people who are probably smarter than me, have decades of experience and have written books on these topics. The only difference is that instead of turning it into a library to depend on, it's turned into rules for humans (which I guess is also what the book authors originally did anyway). I combined them and made them into a "distro" and I called it "modular provider architecture" (not very engaging or entertaining, but it does what's on the label).

- https://github.com/Incognito/python-architecture-linter/tree...

That text document is meant to be an example of how developers should write an application. By the way, it has a demo application here which does basically nothing:

- https://github.com/Incognito/python-architecture-linter-demo...

It might be hard to see here because it's a pretty silly example, but I managed a small/growing team of 3-5 developers who created over 15 different services following this pattern. They did end up using libraries to do things like send data to/from Kafka or a DB, but the Modular Provider Architecture's rules were always there.

Oh, by the way, that repo I linked to, https://github.com/Incognito/python-architecture-linter/ ... this is a proof of concept for a linter that could implement the "no framework" concept. It is a dev dependency of your project, meaning you have no production framework as a dependency. It is a tool that lets you configure "rules" for your project in the style of any linter you already know of. It's like a linter from hyperspace: you can "lint" rules like... if a file is 3 levels deep, and depended on by methods anywhere in the project with the word "bob" in the method name, but those methods don't have if-statements, and also the afferent coupling of the module itself is less than 0.5... fail CI with an explanation why. It also has a feature for you to commit an exemption list.

I used this in my teams once I started managing multiple large teams, and I could do things like generate entire reports across all projects of these really complex metrics that most linters and tools aren't really set up for.

That code is in these files; sorry for the total mess, I was just hacking around and didn't really think of a nice way to structure the definition API. My main goal was proving the concept.

- https://github.com/Incognito/python-architecture-linter/blob...

- https://github.com/Incognito/python-architecture-linter/blob...

====

So, to summarise:

- "Modular Provider Architecture" is a "Distro" of "No Framework". Others can exist.

- "Modular Provider Architecture" is composed of "concept packages". Many of those packages have actual software libraries as alternatives. Others do not because they are rules about how to write code.

- Really complex ideas in "Modular Provider Architecture" can be enforced via CI.

I think the "beautiful" thing (if you think software can also be art) is that there is a clear and obvious structure to the project, but the "framework" is entirely ephemeral, just like types in TypeScript.

I guess the really important thing here is: enforce your architecture's rules and pick them very carefully in concert, not in isolation.

====

By the way. That's one idea I have for removing dependencies on frameworks. I also have a really crazy idea for eliminating the need for most software by replacing semver, but it takes me over 2 hours to explain it (I have really bad video recordings where I try).


Feels a lot like a framework to me, albeit more by convention and enforced by linters instead of skeletal code. And a fairly rigid one at that.

Has this been tested against a more traditional architecture to solve a similar business problem? And if so what were its cons?


> Feels a lot like a framework to me, albeit more by convention and enforced by linters instead of skeletal code

100% Yes, and the linters are just an experiment I had to see what was possible in terms of "packaging a no-framework framework for distribution".

> And a fairly rigid one at that.

The same problem exists when using a framework. You use the framework's rigid rules and tools or you don't.

> Has this been tested against a more traditional architecture to solve a similar business problem? And if so what were its cons?

I don't know what you consider to be "this" because there's multiple concepts at play in my previous post: (i) "no framework", (ii) the specific "Modular Provider Architecture" ruleset, and (iii) python architecture linter.

What are you looking for in an "against" case? What is a traditional architecture?


You should share the design principles!


And make a generic framework /s


This sounds interesting to me!


I second that. I really like the idea.


Would love to see this!


For mildly interesting code that you can realistically write yourself, there isn't a strict need to share code and cross-pollinate. Every little shop has a certain budget to try out new things, play around, and learn. If we come up with more or less the same things, that's fine.

I'm much more fond of sharing ideas than sharing code. IMO developers' understanding is at least as important as the economics of crafting code.

Of course you can't rewrite Postgres for your CRUD project. It's fine to use third-party projects, but there might be less need than one would assume. A lot of projects don't need a full-blown database, and I'd find it great if there was more practical research on how to design minimal storage systems, for example. I'd bet that the majority of all apps that depend on Postgres would easily fit in RAM, forever. That could mean one could get rid of SQL, or could apply other simplifications.


> I'd bet that the majority of all apps that depend on Postgres would easily fit in RAM, forever. That could mean one could get rid of SQL, or could apply other simplifications.

My guess is you're correct, yet RAM-stored DBs cannot guarantee persistence when things go wrong. Or at least they may introduce data loss if changes aren't fsync'd to durable storage or replicated via 2PC.
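
As a toy illustration of that trade-off, an in-memory store can get basic durability by appending each write to a log and fsync'ing before acknowledging (a minimal sketch, not a database; names are made up):

    package kv

    import (
        "bufio"
        "encoding/json"
        "os"
        "sync"
    )

    type entry struct {
        Key   string `json:"k"`
        Value string `json:"v"`
    }

    type Store struct {
        mu   sync.Mutex
        data map[string]string
        log  *os.File
    }

    // Open replays the append-only log into memory and keeps it open for writes.
    func Open(path string) (*Store, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0o644)
        if err != nil {
            return nil, err
        }
        s := &Store{data: map[string]string{}, log: f}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            var e entry
            if json.Unmarshal(sc.Bytes(), &e) == nil {
                s.data[e.Key] = e.Value
            }
        }
        return s, sc.Err()
    }

    // Set appends the record to the log and fsyncs it before the
    // in-memory state changes, so a crash can be replayed from disk.
    func (s *Store) Set(key, value string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        line, err := json.Marshal(entry{Key: key, Value: value})
        if err != nil {
            return err
        }
        if _, err := s.log.Write(append(line, '\n')); err != nil {
            return err
        }
        if err := s.log.Sync(); err != nil {
            return err
        }
        s.data[key] = value
        return nil
    }

    // Get reads straight from RAM.
    func (s *Store) Get(key string) (string, bool) {
        s.mu.Lock()
        defer s.mu.Unlock()
        v, ok := s.data[key]
        return v, ok
    }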


It already simplifies some things if data is available directly from RAM. And many stores are read-only or read-mostly.


I can't help but think that FOSS is the key to sharing the lineages.

Every single place I've worked, people just want to make code work, not make it readable.


Perhaps a controversial take: it’s difficult to write boring code in the common boring languages because they are not powerful enough to express the problem domain without accidental complexity.


I agree with this. The biggest thing that’s missing for me is sealed sum types. We’re getting there but progress is slow.


Does Scala have that?


Scala has sealed traits, which may only be implemented in the same file, and pattern matching must be exhaustive. At a glance, Java 17 has something similar (JEP 409 sealed classes).


It depends what you are trying to write. But I’ve seen people complain about the lack of generics in Golang, and I’ve also seen really great cryptography code in Golang (TLS implementations, etc.)

I wouldn’t write a compiler in Golang though.


Boring and battle-tested, not antiquated, but also not some fancy new overhyped thing.

I've worked in a project with a very fancy tech stack: fancy language, fancy data stores, fancy API style. Hiring was hard (not many people know $fancyLang, most people were internal transfers working with something boring like Java), $fancyDataStore1 had weird failure conditions that made it difficult to scale, $fancyDataStore2 had even weirder failure conditions, and did I mention a custom framework for it all?

On the other hand, some level of fancy is still good for everyone: functional programming patterns can make all codebases better and more correct, async and things like fastapi in Python, perhaps Kotlin in the JVM world, the new .NET and ASP.NET Core in C#/.NET land (then again, who would want to write Web Forms in 2022?). But for datastores, relational databases are always the way to go.


> But for datastores, relational databases are always the way to go

That seems like a pretty controversial statement


The codebase should be as boring as the developer is experienced and the project is complex.

In other words: a total beginner will find any codebase interesting/exciting and if the project is just complex enough, it will benefit from some techniques/technologies that are not familiar to most people and therefore not boring.

Yeah, you can absolutely overengineer, but it's not like every codebase would be better off if it were "boring".

Also, we make progress. Before, static type systems had a benefit but also made code much more verbose and were sometimes very annoying. We've improved here, but it means that someone has to be the first one to use a new language with helpful features. Is Rust boring? No, but that doesn't mean it's not the best choice for some teams and projects.


This (and a few other similar articles) make me feel that there ought to be more emphasis on recognizing "good novelty" vs "bad novelty" rather than "more novelty" vs "less novelty".

The word "budget" frames the question wrongly, I think.

If there are 4 new technologies on a project and the team were insistent that they each solved a lot of pain I'd be less averse than I would to 1 new technology that they adopted because it was "cool", "made by google" and "everybody [cool] is using it".


> If there are 4 new technologies on a project and the team were insistent that they each solved a lot of pain I'd be less averse than I would to 1 new technology that they adopted because it was "cool", "made by google" and "everybody [cool] is using it".

Programmers always insist that the new flavor of X solves a ton of pain. Occasionally they are right, but more often than not programmers adopting four new technologies- if they ship at all - deliver what Rich Hickey called “a knit house” in one of his talks: a system that solves a problem, sometimes in a pretty way, but is never able to be extended or grow. Sometimes the actual software is fine enough, but the choices are so novel that it ends up being maintained by one or two people who get it, and growing the team is a nightmare. Either way, it’s a knit house.


Good write up.

Sometimes novelty comes from outside the team. For example, cross-department re-platforming efforts, vendor changes, and long running migration projects. This can all lead to inconsistency, which needs to be managed so it doesn't get out of hand.

This should be factored into a 'novelty budget' too, in my opinion. A large department may have a slice of applications considered legacy-legacy, others just legacy, and some starting to move to the new stuff.

I think this is more likely to happen in companies which have been around longer - there's more accrued tech debt to deal with, there are more applications to migrate, and projects to add new functionality don't go away.

Another drawback to novelty is when keeping up to date with security vulnerabilities. If you have a common stack, then the problem is less granular.

Google have their monorepo, which (I believe) means their applications are all built against the same dependency tree, so it's simpler to keep track of upgrades.

Novelty is essential though to keep up with the innovation around us. The question is how to find the right blend of novelty (high benefit, low risk) and stability (easier to manage, maintain, and onboard engineers).


The article mentions Architecture Decision Records (ADRs), which can be included as a folder in the git repo for the project, as a means of documenting the historical decisions that led to the project's current structure. Some of that seems overly complex or formalized (i.e. reinventing UML and all the problems that came with it), but having that history in some kind of consistent format would likely help overcome any novelty issues and help people grasp what's going on. The simplest format discussed seems the best for most cases:

https://github.com/joelparkerhenderson/architecture-decision...


Where do you see the reinvention of UML within ADR? I wouldn't consider ADRs to be that; I'd say ADRs are meant for important decisions with important consequences, not a documentation of every single design choice and every single part of the project, as UML proponents would do.


Some of the ADR discussions seem to revolve around trying to create a formalized 'use anywhere' ADR model, which does look suspiciously like UML. The general concept (a historical record of the development of the architecture) sounds good, but a one-size-fits-all approach sounds like a bad idea. ADR approaches should probably be heavily customized to fit each project/codebase.

Some people might say, "but if we had standardized ADRs we could efficiently compare different codebases" but that's going back to UML.


Here’s a question for you:

“To what extent should developers use advanced programming language features?”

I worked on a project once that had lots of sophisticated code in it. One of the senior developers objected however when I suggested we use typescript decorators. He said other developers might not understand it.

Consider what happens when the question is phrased differently:

“To what extent should developers use unfamiliar programming language features?”

Well, all language features are initially unfamiliar.

It’s a matter of opinion, but I think developers should use whatever language features they want, and all developers working on the project should be expected to actively learn the unfamiliar techniques that are implemented in the code. It’s extremely unhelpful to say that language features are “too advanced” or “too unfamiliar” to use.


Some features are more naturally intuitive than others. Consider an 'if' statement and a monad. How much talent is available that's capable of understanding and using the latter effectively? How much time would be needed by a new hire to come up to speed? If you need intermittent help from contractors then will they be able to hit the ground running?

Over time the bar does rise, and I hope that trend continues. Yet it's undeniable that there are limits like human lifespan, and how much people and companies are willing to invest in education and training before doing the actual work for which they're compensated.


Most companies are willing to invest just about nothing into education


Working on a not-yet-launched Web app, where the backend and frontend both use CQRS, and hoops are jumped through in the name of "purity", a more boring codebase would be appreciated. Velocity would be higher and new hires onboarded in less time.


But the two main questions are:

Will this boring alternative you crave continue to yield that higher velocity, or will it grind to a halt? Boring doesn't have to mean big-ball-of-mud, but the architectures you mention exist exactly to keep big balls of mud from growing.

And second: does 'boring' really fit the requirements? There's only so much you can store in a simple RDBMS; it will e.g. lack many intermediate states that your event stream now does store. Maybe your use case requires all this data?


> Will this boring alternative you crave continue to yield that higher velocity, or will it grind to a halt?

So far there are zero clients and multiple pivots. The boring alternative needs only to yield higher velocity to be worthwhile, otherwise funding runs out before market fit and the current architecture achieves zero.

> And second: does 'boring' really fit the requirements? There's only so much you can store in a simple RDBMS; it will e.g. lack many intermediate states that your event stream now does store. Maybe your use case requires all this data?

Everything is stored in a popular RDBMS. CQRS makes logging all data changes easier, but that is the only time intermediate states are used. If we did event sourcing I'd be keener on the architecture.


I have come to learn that there's no answer to this "problem".

Even if you stick with a tried-and-true framework, eventually that framework ages, and then developers start complaining that it is arcane. For example, how many HN readers here know WebObjects?

But the flip side is equally problematic. How many times do we have to change frontend frameworks? Just to render some divs on a web application?

Somewhere in the middle seemed to be the right answer. Just build the web application with new-ish technology like Go or Rust or Elixir, and then sprinkle some JS like HTMX for interactivity.


Another aspect of "boring" to note is that ideally the implementation is boring, as in not surprising. I'm currently working on a codebase that's a pretty standard frontend stack, but omg is it not boring. I wish it was. Instead, it's exciting as in "this function claims to remove items based on a filter, but actually it does the opposite."

I might go so far as to argue that being boring (ie predictable and consistent) in your implementation is more important than being boring in your tech stack.


95-98% should be boring.

The other 2% buys you the room to be boring - keep the meta-programming and macros in a small well defined area that does well the bits that boring programming cannot touch.


From the perspective of a coder on a team, I think that strategically selecting which novel things to learn is a good thing career-wise. Fun aspect aside, one gets paid to learn new stuff, which can later result in better future opportunities. As an example, as a junior dev back in 2015 or so, I was told to use one of the very first versions of Swift to develop a production-ready mobile app. Was it fun? Totally! Did I learn a lot? Absolutely! Did we almost trash the project and have a massive delay because of the early bugs, problems, refactoring with new updates, and because we were still learning? Hell yeah! The final payoff was pretty good for developers, but not for the final business objective, which was getting a product out ASAP. Our users didn't care about what tech we used.

Following from the above, I think it's important to distinguish two different aspects of the novelty - one is completely new tech ('global' - new to everyone) and the second is tech that's new to the team that is about to use it (let's call it 'local').

Global: That's the hole we got into using the first version of Swift. The team got stuck with problems that no one has ever had yet and we had to invest our time to solve them. The biggest risk was ending up with a completely new problem and no guarantee that anyone on your team will be able to solve it. There is no Github issue for it and no helpful stranger on Stack Overflow. It's pretty bad if it happens in a critical area. From a project perspective, I'd say it's better to wait for someone else's time and money budget to iron those problems out first if you want to increase your chances of having a production-ready product within a predictable timeframe.

Local: This comes with a different set of risks. If you're working within a larger organization, how many other developers know this newly introduced tech? If the team that has introduced it quits, will there be anyone able to pick it up? If your project is on a critical deadline, do you have enough time to let developers get up to speed? I'd think about this decision from the perspective of investment.

Ironically, I have also experienced the reverse - some companies use tech so old that developers have to learn it as something completely novel. The only dev who maintained the codebase might be retiring, and the options are to rewrite or hire a person with a spark for archeology.


I think something that has served me well with these decisions is only considering novel tech if it does a better job of solving a problem we have right now. If the problem can't be measured yet, it's not worth working on yet. Start with something that is as boring as possible, it'll become obvious pretty quickly where the pain points are, and consider doing something novel for those.


Timely—I had a conversation about this with a friend a day or two ago. There is a definite trade-off between novelty and productivity. IMO the best trade-off is putting as much of your novelty as possible into the parts of your domain logic that differentiate you from competitors. You want your complexity and learning curve tied up in the “magic” parts of your software, not the mundane pieces.


Unless your software's value is actually a matter of CS research, it has to serve the bottom line of the business functions it's built to facilitate. Sometimes things like hyperscale, super intense data redundancy, or provable functional correctness are a consequence of those needs, but they are not the prime directive. Almost all bad choices seem to stem from that misalignment.


Edit: I rewrote this as a blog post of my own, where I expand upon some of the suggestions, might be more readable: https://blog.kronis.dev/articles/how-boring-or-novel-should-...

Overall, I'm tempted to agree, whilst keeping in mind that you sometimes definitely need a little bit of novelty, which the author brings up at the end of the article as well. Here are a few bits of my personal experience, where some novelty helped greatly in these past few years.

Personally, I'd say that something like 12 Factor apps is a good, supposedly new approach (using mechanisms and practices that have been around for a while) that lets you look at software written in different languages pretty much the same from the outside. For example, you use environment variables for configuration, write logs to STDOUT, and don't cripple your own horizontal scalability by always reaching for local storage (e.g. when S3 might be better suited) or local application memory (e.g. when Redis might be a good idea). It's nice to have those sorts of suggestions in one place, and all of a sudden you escape XML or Tomcat setup hell and can look at Python apps and Java apps similarly from the outside: https://12factor.net/
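
As a minimal sketch of the config part in Python (the variable names here are made up):

    import logging
    import os
    import sys

    # 12-factor style: configuration comes from the environment, so the same
    # artifact runs unchanged on a laptop, a VM or in a container.
    DATABASE_URL = os.environ["DATABASE_URL"]  # fail fast if it's missing
    REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
    LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

    # Logs go to STDOUT so whatever runs the app (systemd, Docker, Kubernetes)
    # can collect them without any app-specific log file handling.
    logging.basicConfig(stream=sys.stdout, level=LOG_LEVEL)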

Similarly, adopting containers has been a good solution, both because it achieves what people historically didn't bother with even when they had the opportunity of using systemd, and because all of a sudden your applications are like phone apps: packaged in a consistent format that can be launched on any server that you need. And you get health checks, resource limits, automatic restarts, bind mounts, port mapping and internal DNS, all of which you will never build yourself in environments where the DevOps knowledge or resources (time) are not there.

Note: Kubernetes might be too complex for some setups, as HN loves to point out; something like Nomad or even Docker Swarm still exists and works: https://docs.docker.com/engine/swarm/ (just linking this in particular, because it's just a small step up from Docker Compose, the pinnacle of simplicity)

Speaking of which, infrastructure as code is great! Using something like Ansible is definitely a novel thing to do at first, but cutting off my team's write access to the servers and making them use GitOps with code review for configuration changes has been a solid idea. No more wondering why some random configuration exists, or why it was changed N years ago; now you can just look at Git. No more fat-fingering bad changes, and even if you did, there's also code review. No more Knight Capital-style risks of partial deploys, and if something like that were to happen, you'd get a CI notification about what's wrong. Just describe what you need on the server and let those hundreds of actions execute every morning (or after every commit/merge) automatically, ensuring a mostly consistent state - and way more lazily than learning Nix/Guix: https://www.ansible.com/

Furthermore, adopting the "ingress pattern" where all of your apps are in some internal overlay network, but talk to the outside world through instances of Apache/Nginx/Caddy/Traefik is brilliant! No more wondering about how to set up SSL certificates in each of the different application runtimes or even framework versions. No more worrying about setting up rate limits for each application individually, no more worrying about context paths for how things are deployed - you can configure all of that in your web server, even if you don't use a Kubernetes Ingress controller.

Oh, and forget something like jQuery from the old days, especially when you'd integrate with numerous low quality plugins that wouldn't even work that well half of the time. Just use something like Vue with PrimeVue/PrimeFlex or any other premade component library, with Pinia for state management. You avoid the trouble of using React with Redux (though React Query is nice) or the complexity of Angular, while still getting the benefits of writing small, mostly self-contained application components. No more thousand line JavaScript controllers, no more messing around with global state, or god forbid using something like AngularJS. And with JSX, it actually ends up feeling more convenient to write front end code, in addition to Vue getting hooks right, better than React IMO: https://vuejs.org/

But the actual applications? Most of the time, they should be more boring. Using Java? Just go for Spring Boot or Dropwizard; something like Quarkus or Vert.X is nice, but not quite ready yet. Using Node? Look at Express.js, it does everything you need. Python? Django or Flask. PHP? Laravel or Symfony. Ruby? Rails. Every time I've seen someone go for a non-standard solution or actually write their own framework, it's been an utter dumpster fire. Good luck debugging some uncommented code that doesn't have enough tests when there's a production outage and the few code comments that you might stumble upon are in Lithuanian or something.
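
For the Python case, "boring" really can be this small - a rough sketch of a standard Flask service (the endpoint and port are, of course, just examples):

    # A deliberately boring Flask service: no home-grown framework, nothing to
    # reverse-engineer during a production outage.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)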

Databases? As a rule of thumb, go for PostgreSQL or MariaDB/MySQL. If you need file/object storage, use something S3-compatible. If you need key-value storage, use Redis. If you need document storage, either store JSON in your RDBMS or cautiously go for MongoDB. In each of these spaces, there are one or two established options, and using anything outside of those should only be done when you have person-years to throw at every problem, i.e. the expertise and the $$$ to innovate.
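
On the "store JSON in your RDBMS" point, a small sketch of what that can look like with PostgreSQL and psycopg2 (the table and connection details are made up):

    import psycopg2
    from psycopg2.extras import Json

    # A JSONB column in the relational database you already run often covers
    # the "document storage" need without adding another system to the stack.
    conn = psycopg2.connect("postgresql://app:secret@localhost/app")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, payload jsonb)")
        cur.execute("INSERT INTO events (payload) VALUES (%s)",
                    (Json({"type": "signup", "plan": "free"}),))
        cur.execute("SELECT payload->>'plan' FROM events WHERE payload->>'type' = %s",
                    ("signup",))
        print(cur.fetchall())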

In most circumstances, just use what has worked for other people well, as long as their circumstances are similar to yours. (a bit long winded, but just felt like writing a bit today)


I've encountered code written in the 12-factor style of using environment variables for configuration, and in that particular case there was no validation nor documentation of the configuration options. Is this typical?

For onboarding new members, I would have thought it preferable to have a JSON configuration, where both documentation and validation of configuration options are provided by a JSON Schema file.
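
For what it's worth, a rough sketch of that idea in Python using the jsonschema library (the file name and options are made up):

    import json
    from jsonschema import validate

    # The schema doubles as documentation: each option gets a type, constraints
    # and a human-readable description.
    schema = {
        "type": "object",
        "properties": {
            "database_url": {"type": "string", "description": "Postgres connection string"},
            "worker_count": {"type": "integer", "minimum": 1,
                             "description": "Number of background workers"},
        },
        "required": ["database_url"],
        "additionalProperties": False,
    }

    with open("config.json") as f:
        config = json.load(f)

    validate(instance=config, schema=schema)  # raises a readable ValidationError on bad config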


> I've encountered code written in the 12-factor style of using environment variables for configuration, and in that particular case there was no validation nor documentation of the configuration options. Is this typical?

This just feels like bad development and isn't unlike being given a random .properties/.ini file with no explanations of what the values mean. Sounds like someone didn't do their job, or the processes to encourage (require) them to do so weren't in place.

> For onboarding new members, I would have thought it preferable to have a JSON configuration, where both documentation and validation of configuration options are provided by a JSON Schema file.

You know, this can work, but then you need your applications to be able to read that file, and feeding it in through your container management solution (which many use for a variety of reasons) wouldn't be as easy. Even without containers, you'd still need to watch out that you don't end up with 20 JSON files, all of which might need to be changed for a new environment.

Honestly, something like JSON5 https://json5.org/ is pretty cool because of the added comments, but otherwise JSON is a bit cumbersome to use. That said, some web servers like Caddy have gone for accepting JSON configuration as well, which lends itself nicely to automation, so it's still a valid approach: https://caddyserver.com/docs/json/


> I've encountered code written in the 12-factor style of using environment variables for configuration, and in that particular case there was no validation nor documentation of the configuration options. Is this typical?

I think it comes down to how your team values the code they write.

You can have a .env.example file committed to version control which explains every option in as much or as little detail as you'd like. For my own personal projects, I tend to document this file like this: https://github.com/nickjj/docker-flask-example/blob/main/.en....


Yeah I think this is why I liked "novelty budget" as a term. To me it implies a limit, but it also implies something which you should spend. Doing something a little bit different can be immensely valuable as you've highlighted. Also everything was new at one time.


> Also everything was new at one time.

Hah, this is a good point, but in my eyes lots of things that were new... never really grew up and were just deprecated and died.

For example, if someone based their setup on IronFunctions, they might have run into a bit of a painful situation, seeing as the project has been largely abandoned: https://github.com/iron-io/functions

Same for a database solution like Clusterpoint, the support for which just ended and you were left to migrate away to something else: https://github.com/clusterpoint

Ergo, I'd say that it's good to let others suffer the consequences (in a manner of speaking) of being trend setters and making wild bets on new and risky products, and to just reap the benefits of their efforts later yourself, when things are safer. If a project has survived for a reasonably long time, it's a good indicator that it'll probably keep surviving in the future as well (there was a name for this, sadly I can't recall what it was).


> don't cripple your own horizontal scalability by always reaching for local storage (e.g. when S3 might be better suited) or local application memory (e.g. when Redis might be a good idea).

Is reaching for local storage/memory crippling or not?

Where does 12 Factor talk about it?


I'd say that the closest 12 Factor concept is "Backing services": https://12factor.net/backing-services

Whenever there is something that might need to service more than one request, reach for attached external resources. You don't want to store business state (e.g. something like the current status of an auction or its bids) in application memory, unless you're okay with your app being a singleton application: one that can only ever have a single instance running concurrently, with all of the risks that it brings.
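
To illustrate with the auction example, a rough Python sketch using redis-py (the key layout and function names are made up):

    import os
    import redis

    # Shared state lives in Redis rather than process memory, so any number of
    # app instances (or a restarted one) see the same auction status.
    r = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
                             decode_responses=True)

    def set_auction_status(auction_id: str, status: str) -> None:
        r.set(f"auction:{auction_id}:status", status)

    def get_auction_status(auction_id: str):
        # Returns None if the auction is unknown; no instance-local caching involved.
        return r.get(f"auction:{auction_id}:status")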

Similarly, if your application generates reports, generally it's good to put them somewhere like S3 and perhaps persist the metadata about this, instead of just spewing them in your local file system, because at a certain scale there are issues related to the filesystem approach (e.g. max number of files in a folder, inode limits), though admittedly something like ActiveStorage in Ruby at least makes an honest attempt at solving it for most folks.
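
A small sketch of that pattern with boto3 (the bucket name and key layout are made up; the metadata row would go in the database):

    import boto3

    s3 = boto3.client("s3")  # region/credentials come from the environment

    def store_report(report_id: str, pdf_bytes: bytes) -> str:
        # Object storage instead of the local filesystem: no per-host files,
        # no inode or files-per-directory limits, and every instance can read it.
        key = f"reports/{report_id}.pdf"
        s3.put_object(Bucket="example-reports", Key=key, Body=pdf_bytes,
                      ContentType="application/pdf")
        return key  # persist this key (plus report metadata) in the database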


If anyone wants to write boring tech full time (PHP 5, CSS 2, HTML 4, and regular JavaScript), let me know. I’m hiring!


PHP 5 would not be boring and safe. You'd have some interesting challenges writing in an unsupported language version.


There are no challenges. The code I wrote in 2000 still works in 2020 and will work in 2030. That’s why it’s boring tech - it just works without thinking or deleting node_modules directory every day.


> it just works without thinking or deleting node_modules directory every day

That kind of hyperbole isn't useful. If you're deleting node_modules directory every day, you're doing something wrong. Perhaps due to lack of experience. You're claiming 22 years of experience in PHP. So of course that familiar workflow is going to work better for you personally.


My favorite interview question is - how often do you delete node_modules directory. I find that the most experienced candidates will answer “every day”, which is the correct answer. I can catch people lying quickly if they say they don’t delete node_modules.


That's a horrible heuristic. All you are doing is selecting for engineers who have failed to fix the problem like you have failed. Misery loves company, eh?

I have decades of experience, mostly work in JS, am often in the top 5% on Stack Overflow and very rarely need to delete node_modules. Neither do most of the people on my team. Some of our junior devs reach for that sledgehammer out of frustration. Which is fine. I reached for the sledgehammer a lot when I was first learning PHP too.


The excitement comes when security issues are discovered in no longer maintained code


There are no security issues.


That's a laughable claim.

https://www.cvedetails.com/vulnerability-list.php?vendor_id=...

And just 10 days ago:

Code security company SonarSource today published details on a severe vulnerability impacting Packagist, which could have been abused to mount supply chain attacks targeting the PHP community.

https://www.securityweek.com/critical-packagist-vulnerabilit...

And this source says PHP is the 2nd most vulnerable server-side language in the world.

https://www.thewebmonkeyonline.com/php-security-issues-you-n...


Lulz


related: "choose boring technology" https://news.ycombinator.com/item?id=20323246



