Hacker News
Software Complexity Is Killing Us (simplethread.com)
398 points by Tenhundfeld on Jan 29, 2018 | 287 comments



To me, this was the money quote:

> You might say, “But event sourcing is so elegant! Having a SPA on top of microservices is so clean!” Sure, it can be, but not when you’re the person writing all ten microservices. It is that kind of additional complexity that is often so unnecessary.

It's very easy to be wasteful when:

1. It's not your time that's being wasted

2. It's not your money that's being wasted

One of the best learning experiences I had was building and launching an entire application API, purely by myself, in my free time while juggling a full-time job. It really helps you accurately evaluate the cost-benefit tradeoffs of various flavor-of-the-month technologies/methodologies when you're the one who has to shoulder every one of those costs.

I've had far too many team-leads whose assessments of these cost-benefit tradeoffs are intensely skewed by the fact that it's other people who have to work on those things, and it's other people's money that is being burnt while working on these things. We think these are technical debates, but really, it's an organizational problem. Under the right organizational structure and incentives, people are exceedingly bright at optimizing for their personal success. The principal-agent problem is a crippling handicap that hobbles every medium/large business today. The day we find better ways to solve this problem is going to be a transformational day in the history of our civilization.

https://en.wikipedia.org/wiki/Principal–agent_problem


If you're building something you plan on throwing away, or won't be around to maintain, it makes perfect sense to throw things together quickly. If you're building something that will be a foundation for many future projects, however, it's worthwhile to put more thought, and thus more time, into building it.

> It's not your time that's being wasted

More frequently than I would like, I find myself putting a band-aid fix on something that I should have spent more time thinking through up-front. In this case it's future-me whose time I wasted by not taking my time initially.


In a startup scenario, business changes can and will come in that destroy previous assumptions. Being stuck with a 1/4-finished cathedral when you now need to build a truck is both wasteful and inefficient.


Being sloppy and being fast are not the same thing.

One can be fast by limiting complexity, yet provide a disciplined implementation.

In an uncertain scenario, such as you described, I would say it never pays to be sloppy, but it often pays not to do anything more than is absolutely necessary for the time being.


Ah, the million-dollar statement. Of course you can make something completely right, that "race car" project, but then someone says "but I want a bus", so you transform it into a bus as fast as possible, and somewhere halfway through someone comes along and says "why isn't it flying?"


I've never worked anywhere in 18 years where the goals of a project weren't at least limited to the correct 'vehicle class'. Implementation details sure can change but that's why "fast and disciplined" is good - you learn to implement the software in a manner where you understand the changes needed based on concrete principles rather than a mad dash of copy-and-pasting.

By 'disciplined' I don't mean that the program fulfills the dogmatic mandates of some system development philosophy - just that the software is written on solid principles that at least the author understands.


Just being curious, have you been working on the same platforms/systems for all of these 18 years?


Both what you are pointing out and what the other commenter is pointing out are critical, contrasting failure modes that must be continually avoided. You are both correct.

The balance between the two will depend enormously on the circumstances, and is difficult to manage.


Couldn't agree more. The bandwagon effect is also worth mentioning. Tech stacks and tools these days are branded almost like consumer products, and the outcome is poorly designed systems.


The net present value to a developer of dooming their current employer's current project by choosing an unsuitable technology which they could put on their resume to negotiate future salary (at a position they'll be moving to in, on average, months not years) is going to swamp betting on something boring but stable, completing the project, staying put and crossing their fingers for raises that outpace inflation.


The root problem here is our hiring and interviewing system.

Imagine if interviewing rewarded "look I know all this stuff about writing simple functions in Stack X" less than "yes, the project shipped, and beat its deadline, because we decided to use library Y that the devs already knew even with the acknowledged pain points, instead of bringing a new technology in just for this project. We have some less-mission critical areas where we decided to experiment with new tech, but needed something predictable here."

Evaluating a senior dev based on junior-dev-esque implementation tasks instead of "does this person know how to avoid over-complicating our job?" is ludicrous. As an industry we interview to find "clever" people rather than "wise" people.


The problem in my experience working in "enterprise" shops is that, more often than not, most of the employee hierarchy is in on the scam - managers know developers are padding hours, developers know middle managers are peddling snake oil up the chain, and senior managers are painting a picture of success to their superiors to get their big fat bonus. And very often, almost all of it is an illusion, and when everyone's done and the final product sucks for end users, everyone responsible has long ago declared it to be a smashing success and moved on, and the poor support developers have Yet Another Over-engineered Mess on their hands. Rinse, repeat.


The purest capitalistic endeavours are drug deals, theft and scams.

The more refined variants, which find legal loopholes and apply them with leverage, are still the same approach.

Look at those jailbirds under the sky: they do not sow, but our Lord the Ford supplies them better than those who work nonetheless.

The shareholders of a company know this deep down as well, so one serial scam series later, the common pensioner who had to use bad software all his life can retire knowing he ripped off some poor schmuck to have this.


Outside of web development, you can get a job fine without using this year's framework. However, you still have to leave after a year to get more than inflation.


> you still have to leave after a year to get more than inflation

This hits home. I've been in software development for quite some time (10 years), I've been with 4 employers so far (full time), and pretty much all my significant raises have come as a result of me looking for a better-paying position and then leaving. I may have been unlucky (I also have not worked for a really large company yet), but I feel like the "career development" opportunities within software engineering companies are way under the level they should be.


I think this has been the way of the tech industry, and particularly SV, for a long time now. My uncle, now passed away, started at Atari in the early 80s and hopped companies every three years until his final job with Nvidia in a fairly senior position. He stayed with Nvidia until his death a little over a year ago.

Each time he switched it up he grew his salary exponentially.


Exponential growth, every three years, since the 80s... sounds a bit hyperbolic, doesn't it? Just how much did he make at Nvidia??


Exponential growth means just constant growth rate. Doesn't mean it's a big growth rate. 2% a year will double the starting salary in... 35 years.

;).


No it doesn't. It means an exponent is present.

https://en.wikipedia.org/wiki/Exponential_growth


Constant growth rate gives rise to a quantity that is an exponential function of time [not to be confused with a constant rate of increase, which gives rise to a linear function of time].

As that Wikipedia article says, the continuous-time equation for exponential growth, x(t) = x(0) e^(kt), arises as the solution to the ODE x'(t) = kx, where k is the constant growth rate.

Similarly, in discrete-time, exponential growth follows the equation x_t = x_0 (1+r)^t, where r is the constant growth rate.
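
A minimal TypeScript sketch of the arithmetic, with made-up numbers (an illustration only, not anyone's actual salary), showing that a constant 2% growth rate is an exponential process and doubles the starting value in roughly 35 years:

    // Assumed numbers for illustration only.
    // Discrete-time exponential growth: x_t = x_0 * (1 + r)^t
    function compound(start: number, rate: number, years: number): number {
      return start * Math.pow(1 + rate, years);
    }

    const start = 100_000;                               // hypothetical starting salary
    const after35 = compound(start, 0.02, 35);           // ~199,989 -- roughly double
    const doublingTime = Math.log(2) / Math.log(1.02);   // ~35.0 years

    console.log(after35.toFixed(0), doublingTime.toFixed(1));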


I stand corrected.


The formula for constant growth over time, for example, a 2% increase every year, would be starting salary * 1.02 ^ t where t is the number of years. So there is an exponent present, and constant growth is an exponential process.


Maybe Nvidia was his second job?


That is pretty much country specific.

Been with the same employer for several years, jumping between web and native projects in all those years, across multiple clients.

Inflation was never a problem.


THAT RIGHT THERE.

The newer development trends haven't focused on experience. They've focused on resume driven development.


It's not a tech issue, it's a business issue. Why give people bigger raises than they may otherwise ask for? That's just wasting money. It's the same logic for why you only get a good deal when you threaten to leave your mobile phone provider. They don't want to proactively reduce your rate or make it too easy to get a discount otherwise they might be doing it unnecessarily.

The squeaky wheel gets the oil.

With work, it's also about an employee's best alternative, and also the psychology of "well, you used to work for x, so we'll only pay you a small increase since that's obviously what you'll work for". It's as much the psychological aspect as anything else since it's difficult to decide the "price" of something without any reference (i.e. a new employee).


My experience in Australia is this is OK. Very few companies for example would expect you to have React experience. If you have any web framework experience (or just know JS well) you are OK.

Usually on the back end they are fussy about your main stack, e.g. C# -> Ruby would be hard. Java -> C# more possible. However within that, which frameworks you have used, e.g. Akka, NHibernate etc. doesn't matter.

NoSQL, or any particular DB technology is rarely expected.

My experience is companies are fairly conservative with their tech choices here. A 4-yr-old framework is typical! Usually there will be 1 or 2 cutting-edge things being tried out, but without uprooting everything.


Therefore it's crucial to have a nice work culture. When employers only think of themselves, employees copy that. On the other hand the people doing the actual coding must also be able to learn something new, that's just how the technology world works. If the employers don't help with that, the employees are doomed eventually.


Exactly, frameworks and tools are resume-driven; everyone is pushing new ones as a means to sell themselves to companies, do conference talks, set up their consultancy, ....

And when they move along to the new shiny thingy, those tech stacks die unless they achieved momentum beyond the usual adoption curve.


Amen. I think the effects can be significantly lessened though under various circumstances:

- highly technical boss (understands that things might backfire)

- flat hierarchies (responsibility isn't delegated through layers...)

- reasonable work culture with nice people

But yeah, as an employee this is a question of luck, landing in the right place.


I think you're on the right track when you're saying that this isn't a technical problem. That's the most important thing to understand about the development of overwrought, top-heavy system architectures.

Trying to fix this with more code is a fool's errand. As soon as you get something reasonable and good, someone will come along to distort and abuse it to serve their non-technical interests.

IMO a few of the main reasons we see this plague of listless over-engineering are:

a) many people don't actually know what they're doing, and they're trying to hide behind ever-shifting technical targets to obscure this fact. Sometimes this is pretty blatant, but this happens in a more subtle form with surprising frequency. Many people maintain only a very superficial interest and understanding in what's happening under the covers, so they hear promises of magic hand-waving and instantly jump to them.

and b) people are forced by the principal to coop themselves up in an office for 8 hours per day, and they need to invent things to occupy their time. You could say that this is the agent not respecting the principal's investment, but it really goes both ways. Our employment model does not allow for "done", because as soon as this happens, the contributor is declared redundant and the revenue stream disappears. It's fun to brainstorm models that may not have that issue.

A relatively flat/reasonable technical apparatus is needed to reason coherently about a system at all. A lot of these microservice shops are complete nightmares, and they use "microservices" as cover for "nothing is maintainable, just delete it and start over when we need to make an important change".

People primarily want to use the latest stacks and make complex system architectures to gratify their vanity, in various ways: to keep their resume relevant, to be treated like a guru, and to keep up with the joneses. These are problems that stretch way outside of tech and affect everything humans do.

Have other technical fields found ways of handling this sanely? What is stopping civil engineers from changing to a different concrete formulation every two weeks? Is it that there is an objective physical component that can be tested, whereas software lives primarily as a manifestation of a mental problem-solving structure? Those are the answers that our field needs to look for, IMO.


> and b) people are forced by the principal to coop themselves up in an office for 8 hours per day, and they need to invent things to occupy their time. You could say that this is the agent not respecting the principal's investment, but it really goes both ways. Our employment model does not allow for "done", because as soon as this happens, the contributor is declared redundant and the revenue stream disappears. It's fun to brainstorm models that may not have that issue.

I think you might be overstating this part. I haven't been in many positions where if I said "hey, I finished that thing you asked me to do, and it took five fewer people and six fewer months than we thought" the response would be "ok, looks like we can fire you all now" instead of "great! get started on the next from the list of 100 things we think might be useful for the business!" + some good results at performance review time.

My usual frustration is the opposite: I can't knock out my tasks as quickly as I could because I'm waiting on other teams to figure out how to add an unexpected new feature to their overengineered buzzwordy codebase.

Maybe this is a high-growth vs low-growth environment type of thing, but even in more stable businesses, seems like the maintenance backlog never really empties. Just gotta make sure the bosses understand that you wouldn't fire the workers every time the shop temporarily clears - you're still going to need them for the next jobs that come in.


I think that a lot of the motive in this direction is unconscious. It's less "Oh boy, they'll realize we aren't doing anything if we don't say we need to rewrite something", and more, "This will be an interesting way to spend my time and keep myself busy for a while." This is the same impulse, just vocalized differently.

If the incentive structure was changed, devs may rather think "I need to make this as tight, small, and maintainable as possible, so that I can spend the rest of my time on the beach." We see this a lot more on side projects specifically because the dev knows and respects that the available resources are constrained, as the parent poster mentions.

In any case, people usually won't allow it to reach the inflection point where they're visibly non-productive and the boss is cognizant of the option to "clear the shop".

There are always tweaks, defects and flaws that can be made up and justified. The question is whether that work is really contributing useful value. Once a project reaches a certain level of maturity, this becomes increasingly dubious.


That's your team's problem. Optimizing single developers for "getting their part over" is not contributing. Software delivery is not like taking tests in school. It's team delivery over multiple releases. The only wins are group delivery.


>nothing is maintainable, just delete it and start over when we need to make an important change

Good! That's why we have microservices, classes, or interchangeable parts in any engineering discipline: rather than laboriously extricating a deeply integrated, tightly coupled component (or worse, give up and throw away the whole system), we get to rip out one specific component, throw it away, and replace it with something that conforms to the same interface while keeping the rest. If you can routinely delete and replace specific services, your microservices architecture is working beautifully. Just as, when you can routinely overhaul specific classes, your OO design is working well.

In the physical world, maintenance and repair hardly ever consist of coaxing the same matter into working state. It's extracting old parts, throwing them away, and putting in new ones.


Microservices are never deleted. They are only abandoned and left to rot after the initial developers move on to their second system (https://en.wikipedia.org/wiki/Second-system_effect). Then you have apps that are stuck in migration purgatory, where they haven't fully switched over to the new microservice because the business doesn't want to spend that effort, but they do want developers to focus on using the new microservice and new API for new app work. The hope is that app will eventually be 100% on the new microservice, but after 3 years you start to realize that is never happening and by that time the company has an entirely new staff working on a much newer microservice that promises to fix all the problems with the prior microservice.

That microservice kool-aid is strong and deadly.

> replace it with something that conforms to the same interface

ha. hahaha. Oh my. I'm in the process of switching over a REST service to a GraphQL service. Because apparently people wanted all the pain of SOAP, but "could I get that in a JSON flavor, please?" Five years from now, REST will be making a comeback under a different name. You heard it here first.


REST failed at marketing itself and its primary document is a doctoral thesis, hardly something that an average developer is going to grok, understand and implement.

Hypermedia is a more appropriate term, and emphasizes the actual substance rather than style.


> In the physical world, maintenance and repair hardly ever consist of coaxing the same matter into working state. It's extracting old parts, throwing them away, and putting in new ones.

The difference is what you consider "the same matter". Since software exists in the realm of the mind, you have to come to an explicit consensus; there is no natural or physical barrier.

A mechanic will allow dirty oil to drain, but they won't rip out your oil reservoir and redo all your hoses every time you want to get your oil replaced. That'd be very expensive and wasteful. It is not good to use parts that are so cheap that they tear whenever someone pokes or nudges them. That'd make it so no one could move around without getting lots of oil all over the place.

Basic maintenance should be possible. In almost all cases, you should not export every function as its own independent microservice. You should not have to track a request across 60 internal services when you're trying to debug. Yet, this is what we've gotten from naive microservice implementers all too often.

In a microservice architecture, many of the tools that would otherwise validate and verify a successful refactoring are unavailable, because dependent code is exported and accessed over HTTP instead of language-level constructs that the compiler or other static analysis tools could check.
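
To make that concrete, here is a hedged TypeScript sketch (the types, function names and URL are invented for illustration): the in-process call is checked by the compiler, while the call across the HTTP boundary is only asserted.

    // Invented example; not from any real codebase.
    interface Invoice { id: string; total: number; }

    // In-process dependency: the compiler verifies the contract.
    function getInvoice(id: string): Invoice {
      return { id, total: 0 }; // stub
    }
    const inv = getInvoice("inv-1");
    // inv.totl would be a compile-time error.

    // The same dependency behind a (hypothetical) microservice: the contract
    // only exists as an unchecked cast, so a renamed field fails at runtime.
    async function getInvoiceOverHttp(id: string): Promise<Invoice> {
      const res = await fetch(`https://billing.internal/invoices/${id}`);
      return (await res.json()) as Invoice;
    }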

In any architecture, it is rare for a programmer to be able to gut a dependency without causing collateral damage somewhere down the chain.

Somehow, this policy of "throw out everything related to SERVICE_X and start over whenever we need to make a change" doesn't seem very practical...


The idea of non-architectural changes to software is a dangerous myth. The "architecture" should reflect the problem, and when the problem changes so does the design. You almost never want to replace a component without changing the interface.


Caring about those decisions even when it's not your time or money is what I think "craftsmanship" is.


What you say is all true, but it's also a local maximum that we're all caught in. I think there's a critical threshold at which we could achieve accelerating good with computing instead of today's computing doing a lot of harm. What the path to that threshold is, is an interesting conversation to have.


A few decades of observed reality suggest the path involves reversing the arrow of time. Software fucks up everything it touches in one way or another, with few meaningful gains outside of simple convenience.


At the risk of going off on a tangent, your pessimism is unwarranted.

Consider, first of all, that we're having this discussion on a message board with participants from all over the world. That's a meaningful benefit of software.

I read both the original article and this discussion with a screen reader, because I'm visually impaired. I can also use a screen reader to read ebooks. Sure, I have enough sight that I could read regular print with a simple magnifier. But without software, my totally blind friends would have to wait for the things they want to read to be transcribed into braille, recorded, or read to them in person. This, to me, is a benefit of software that goes way beyond convenience.

I could go on, but this blog post by Bertrand Meyer responds to general pessimism about software better than I could: https://bertrandmeyer.com/2013/03/12/apocalypse-no-part-1/


Spend some time contemplating all of the ways that the modern surveillance state oppresses minorities, all of the drivers of income inequality (both domestically and globally), how few people actually benefit meaningfully from automation, and get back to me about how my "pessimism" is unwarranted. FFS, software has lowered the barrier to warfare so far we're bombing somewhere between five and ten (I can't even keep up) countries currently, and it isn't even generating column inches in the local paper.


Modern states deliver a drastically higher standard of living, less oppression, and minuscule existential risk (i.e. total war, wholesale slaughter) compared to historical norms.

Read some history and get back to us on how the modern world is oppressive, violent, or impoverished compared to some bygone golden age.

It’s true that if you go back far enough we were all equal... because we were all peasant farmers.


I've listed observable deterioration in societal conditions attributable directly to software, and thus changes that have taken place over roughly the last 50 years. Are you asserting that mass surveillance (for example) hasn't become a largely inescapable societal norm, or that software doesn't provide the methods? How are "historical norms" from before the advent of software in any way relevant to the discussion? To humor your strawman, please reconcile the phrase "minuscule existential risk" with the myriad existential threats to species survival that the modern age presents and that did not exist before the advent of computers.


I'm pretty sure the state was oppressing minorities and driving income inequality just fine without software.


If you believe that, you're going to be sad when someone shows you a graph of income inequality over time. That oppression pre-dates software is not in question. Are you attempting to assert that the modern surveillance and carceral state either doesn't benefit from software or isn't an example of institutionalized oppression? You are, of course, aware that the US has the highest number (by count and % of population) of incarcerated citizens of any country on the planet, and that minorities are disproportionately targeted, right?


The surveillance also works in reverse. Black Lives Matter wouldn't be there if the masses weren't surveilling the suppressors.

A lot of technology benefits are not visible today - but imagine any nasty event, and notice how much of the implicit knowledge of a society would survive today, encoded in videos, automated tutorials and Wikipedia articles.

And it even makes a kind of sense. Investing in "leaps" forward can fail in a thousand ways; investing in measures that prevent society from falling down completely and allow fast recoveries - well, that is something that can become useful a hundred times over.

It's like a die where one in n sides gives you a chance to leap forward, and 0.5*n of all sides carry a chance that you must step permanently back. Removing those sides of the die from the scenario tree is statistically OP, though not really cool to watch.

The only thing currently not OP is how companies' research is directed towards local optima. There are no big attempts to lift whole areas of endeavour in feasibility.

Actually, if I had the whole picture, I would guess that humanity is currently on some sort of golden path.


> A lot of technology benefits are not visible today - but imagine any nasty event, and notice how much of the implicit knowledge of a society would survive today, encoded in videos, automated tutorials and Wikipedia articles.

Let's be fair here. If a nasty event caused us to lose power grids globally for a year, we'd lose 99% of that knowledge (preceded by mass starvation of a high-double-digit percentage of the Western population). Digital data is fragile. Hard drives have lifetimes measured in years. Even optical and flash drives have practical lifetimes (in actual use) of under two decades. The whole thing works only because data is not at rest for long - it gets copied over all the time, as our economy pumps out hard drive after hard drive after hard drive (hell, the same thing applies to CPUs, RAM and displays these days). Everything is so tightly interwoven that it starts to resemble a living being - you can't pause it.

That's, BTW, the reason I strongly believe our global priority everywhere should be the preservation and stabilization of the technological civilization we have. By itself, it's not rebootable - if it fails now, if we get kicked back to pre-industrial-revolution levels (through e.g. a 3rd World War, or environmental collapse, or both), we'll be stuck in them for millennia to come, waiting for Earth to replenish all the easily accessible high-density energy sources that are all gone now.

We are on a golden path, but it's a tightrope - we need to focus on balancing ourselves and getting to the other end, instead of getting distracted by random trivia and amplifying every single thing that is not exactly right for someone.


The web didn't. But your point about arrows is well taken. Today, the gravity arrow points toward the money. But I don't think that's some immutable law of nature; I think it's a local maximum (a very deep one).


> the web didn't

Oh, the web is likely the single biggest fuckup (maybe second only to C) of the software world. We've gained a lot, true, but we've also broken a lot of things too.


Goes to show that "worse is better", given that these two perhaps badly-designed systems are the software world's biggest successes.


Yeah. "Worse is better" is just philosophy built around understanding of feedback loops (indirectly, through evolutionary processes). It's pretty much fact of the system that is human society. It's true, and it's the default way things work. It should be used, however, as a reminder that we have brains capable of overriding that default, if we can be bothered to use them.


Couldn't agree more.

On the mobile app I am working on, the hope is to move from dozens of Activities to a single one, all MVI-based.

That's nice (even though it will be a nightmare to debug and optimize), but it will take tons of engineering work for absolutely no product return.

My current plan is to get the hell out of there before that.


What a brilliantly terse formulation of a complex of topics that has had me pondering for quite a while now! In some manner, "mulling it over casually on-and-off" for 2 decades.

> Under the right organizational structure and incentives, people are exceedingly bright at optimizing for their personal success.

Optimizing organisations for converging towards such structure-and-incentives would be the money-shot, wouldn't it.. the only set-up coming to my mind proven to work (for team sizes greater than 1) are the cliche garage up-start and "small gang of well-aligned-in-skill-levels-and-obsession, ~90% of the work/design 100% under own control, product-builders". In other words, how can every small tech team feel like "early-days id Software" --- or rather, the perception of it that prevailed.

> The principal-agent problem is a crippling handicap that hobbles every medium/large business today. The day we find better ways to solve this problem, is going to be a transformational day in the history of our civilization

Indeed. "Medium/large" is worth repeating. Small shops need to get to medium to be able to financially and mentally afford to become some-degree-of-"wasteful". The question then becomes, why is this dynamic so predictably playing out every time that they indeed will? Is it human nature? A case of "everyone who continually succeeds in meeting their next goal eventually rises to their level of incompetence"? Is it that as soon as the adrenaline-rush of ensuring "the bacon" has actually worked, we naturally relax and become playful? With the only way to express 'playful' in a "serious setting" being to endlessly bikeshed and adopt new toys to play with, I mean "promising/proven tech to adopt and migrate/pivot to"?

Remember early-Google's (maybe they still have it) "20% of the week is play-time (at least for developers)". No (or at least reduced) need to channel the urge into counterproductive role-plays of long-winded 'architectural/tech/stack/paradigm debates'. No matter how you cut it, they must have gained heaps from this arrangement not just in product prototypes like Gmail, Maps etc but probably also simply in reduced time-wasting back then. I do suspect for 99+%-software companies, the most fruitful model might turn out to be like this but with 80% devoted to the "play approach" and only 20% of the week to revisiting the concern's concerns that inform and subtly (re)direct in what problem areas to "play more". It's just a bit too radical-seeming to invest a-million-plus in, unless it's one's own millions (and possibly reserved for structures that don't primarily aim every second of every day to "exit" from their own construction).. I suspect the remaining 20% "serious" puts that aspect in the right adrenaline-driven ultra-effective no-BS 'hungry hunting mode'/context and would remain crucial so that "play" doesn't drift off into 'massaging sandcastley no-one ever asked for'..


Blah bullshit blah.

The reason software is so complex now is that our expectations as users are orders of magnitude higher than they were "in the good old days". I'm old enough to remember the good old days, so I can speak with a little authority here.

The Pareto principle may not be exactly accurate, but it describes many situations quite well. And in software, it fits very well. 80% of the requirements take about 20% of the code (and complexity). It's the edge and special cases that add the bulk of the work.

When a program just did one useful thing, we humans would do our human thing and integrate several different programs with our manual effort. If an edge case appeared, we didn't even identify it as an edge case - we're built for handling edge cases! We just made the minor _human_ judgement and fixed the data and pushed it into the next program.

Software is complex because we're trying to make it replace more human activity. And beyond the core functionality, human activity is all about applying human judgement. If anyone is familiar with people training people on a particular task, particularly the quaint concept of apprenticeship, it involves teaching and showing and mentoring until a student has enough knowledge AND wisdom to get the job done.

So to boil it down, modern software is primarily complex because it attempts to replicate human judgement and wisdom.

That software complexity is not killing us. It may be a pain in the ass, but if anyone can remember what it was like before we had these tools, I would dare to say that it's still a whole lot better than before computers.


I disagree. Users, in reality, don't care about all the crap that stakeholders and some developers care about. They don't care about Material design, fancy animations, beautiful buttons, "elegant" code, a ton of niche features, mobile apps (no, I'm not kidding), your annoying push notifications, your fancy menu bars that you also fix to the top of the screen for whatever reason, your autoplay videos, your little "delightful" asides that interrupt the content, etc. They want something that works without much effort or worry that the software will fail, and that doesn't necessarily have to do with how fancy or feature-rich an app is.

Craigslist and the Quartz app are just a few examples of what I'm talking about. Nobody except maybe some stakeholders ever asked for those things to be more fancy than they are. They are simple and they work. The end.

Unfortunately, we are in an era where everyone wants to be like Google, or at least thinks they need to. I do not believe that the rest of the world would be worse off if they stopped spending time making "production ready" software that visually competes with the Big 5.

Software becomes complex because we allow it to become complex. Just because you can add another feature doesn't mean that you get more value out of a finite amount of paid developer time. Companies often seem blind to the continual support costs that go up each time the solution is to add a new feature or software package for a small segment of the customer base or the company itself.

Whoever is in charge should acquiesce less often. But everyone wants to be a wizard.


> They don't care about Material design, fancy animations, beautiful buttons

If they don't care about those things, they aren't doing a very good job of voting with their wallets. It's much easier to sell a product that is pretty.

> "elegant" code

That isn't for users' benefit. The people paying to maintain the software do care how much it costs to maintain the software. Elegant code is an attempt to make it cheaper to maintain.

> a ton of niche features

People buying software do care about that, and they are often a distinct group from users.

> mobile apps, your annoying push notifications, your fancy menu bars that you also fix to the top of the screen for whatever reason, your autoplay videos, your little "delightful" asides that interrupt the content

None of those are for users' benefit. Those are for the benefit of customers (advertisers).

So perhaps saying that users expect more isn't telling the whole story. However stakeholders certainly expect more.


> they aren't doing a very good job of voting with their wallets

I'm tired of repeating this, but here it is:

Voting with your wallet. Is. Bullshit. Consumers don't pick from the space of all possible products, they choose from what's available on the market. Especially in software, it's producers who collectively determine trends. If they all fall into a fad, customers have no way of saying no.


Exactly right. Just like producers have decided that all upcoming editors, chat apps etc. have to use Electron. They all talk as if "overwhelmingly, users have spoken that they want bloated, slow text editors with a million plugins for next-gen development."

IMO they are exactly like the military-industrial complex, where increasingly complicated software will need newer and more complicated hardware because "customers are demanding it".


Your argument would be true if in any given software market there were only around 3 or 4 developers and products.

This is very far from the current state of affairs, where for almost every possible kind of software there are a lot of different developers with vastly different products, designs and ideologies. They may not be as well known, but they exist in their niche, and you can definitely vote with your wallet for the one that is to your liking.

Extreme example - mobile todo apps. I think one in ten professional developers has released one to the app store just as part of their portfolio. And users still choose apps with mainstream design, look and feel. If that's not voting with their wallets, what is?


Sorry, stuck in my own perspective. My clients are typically within a company. It is internal corporate expectations that are out of line with reality. External users, in my limited experience, are much more flexible and un-demanding.


I think either you are not understanding me, or I'm completely misunderstanding you. What you're saying seems orthogonal to my arguments...


I'm disagreeing with your point that users are expecting more of software. Their expectations have changed in some ways, but not really in any that makes software significantly more complex.

To shorten my last post, it's poor development practices and a lack of foresight from upper management that make software too complicated. You could make an interface that looks like it's from the mid-2000s (garbage by today's standards) and people would still use it if it works well and is reliable.

Look at HN. Do we need more from it than it already provides? If it was up to the average upper management, the backend would need constant retooling for every feature request from different departments, 12 forms of analytics, newsletter signup forms in between posts, job alerts, a mobile app with a nag message on mobile pages, single sign on from Facebook and Google, share buttons, a waffle iron, etc. None of which the actual users and even the developers ever wanted.

EDIT: Some people might point to Slack as an exception to my argument. Indeed, Slack looked very nice compared to the competition when it first came out. But what made it a killer app was that it took the best of the IRC experience and made it a universal experience between devices and browsers. Plus it can use a company's SSO. When it first came out, it was also not very complicated. I remember when it didn't have reactions or threads, but it was a breath of fresh air after having come from using things like HipChat and Gchat, both of which were pretty dreadful.


If HN were such a perfect experience, then I and many others wouldn't use an app to consume and interact with it. I come here for the conversation and curation, but the UX on mobile is awful. It's far too easy to make mistakes. I can't even upvote properly without a very high probability of downvoting instead (at least we can cancel the downvotes now). Just because you don't care about the design doesn't mean the design isn't useful or even necessary for an application's success.

I agree with your point about nagging, popups, and analytics, but I don't think the existence of those annoyances are an argument against the OP.


I think users are expecting more, but not necessarily in the ways you are imagining. It's about how much of the process users expect to be automated vs how much needs intervention. I'll redirect to my other comment:

https://news.ycombinator.com/item?id=16261510

"Software is really about codifying process -- pun completely intended. What happens is some slick new software comes along that implements that happy 80% path on the process, and does it quickly and efficiently. Then business complains about the 20% falling into the gap, and starts adding features to reduce that, thereby slowing down the processing. This keeps going, until someone has an idea and implements slick new software that implements that happy 80% path on the process, and does it quickly and efficiently..."


> "I think users are expecting more, but not necessarily in the ways you are imagining. It's about how much of the process users expect to be automated vs how much needs intervention."

To a limited extent, yes, but if we're being honest here, the main "additional demands" that most users have for most new pieces of software are "simpler" and "prettier", neither of which automatically requires additional complexity under the hood.


"Simpler" from the user's perspective does require additional complexity under the hood.

It means that the "simpler" software has to be aware of all kinds of edge cases and solve them automagically, because the user expects a larger degree of automation. Software seems "simpler" to the user because it's working on a higher level of abstraction (and levels of abstraction tend to be "leaky") and by reducing the domain of details that the user needs to be aware of. In order to do that properly, the "simpler" software now needs code to take care of all those details as they are effectively removed from the user's awareness, while the earlier software could leave some of those edge cases up to the user.


> ""Simpler" from the user's perspective does require additional complexity under the hood."

No, it doesn't, or at least it doesn't if you write the application in a language that is expressive enough to cater for the edge cases without bloating the code base.

Consider a language like Red, which allows you to write GUI applications in a few lines of code. Must be a large runtime behind the scenes enabling this, right? It's clear that this is not the case as the runtime, compiler and libraries fit within 2MB. How is it possible for Red to achieve so much with so little? I'll leave that for you to discover for yourself.


The answer can't always be "a sufficiently-expressive language", same as the answer can't always be "a sufficiently-smart compiler". It looks like Red is a minimal programming language, with a purpose-built DSL ("dialect") for UI programming, which handles the "standardized" part. The final result of this line of thought would seem to be that everyone should use Lisp with layers of DSLs for whatever their target functionality is. Which works great, as long as those DSLs are complete, standardized, and mostly not leaky. In the spirit of your comment, I'll let you discover for yourself why that doesn't seem to happen.

EDIT: As an aside, the Java 1.1 JVM was only about 15MB.


> "Which works great, as long as those DSLs are complete, standardized, and mostly not leaky. In the spirit of your comment, I'll let you discover for yourself why that doesn't seem to happen."

Two reasons:

1. Computing is a relatively young field, and not all of the best practices have made it to the mainstream.

2. The quickest way to develop is to bolt additional code onto the side of a project rather than considering how the overall requirements for the software application may be better served by designing reusable helper functions and refactoring.

Also, I think you've dismissed Red a little too quickly. The GUI DSL is not the only factor that keeps the codebase compact. I'm using the word "compact" rather than "small" as I recognise that Red is still in its infancy, but with the word "compact" I'm trying to emphasise that Red does a lot with not a lot of code.

> "EDIT: As an aside, the Java 1.1 JVM was only about 15MB."

How big was the JCL (Java Class Library) at the same time? Also, 15MB for a VM on its own is not that impressive. Even Oracle have managed to make small (feature complete) VMs, for example:

https://blog.plan99.net/graal-truffle-134d8f28fb69

"The resulting runtimes have peak performance competitive with the best hand-tuned language-specific compilers on the market. For example, the TruffleJS engine which implements JavaScript is competitive with V8 in benchmarks."

"To give a feel for how easy it is to write these engines, TruffleJS is only about 80,000 lines of code compared to about 1.7 million for V8."

http://nirvdrum.com/2017/02/15/truffleruby-on-the-substrate-...

"The Substrate VM addresses those deficiencies by producing a static binary of a Truffle language runtime. It performs an extensive static analysis of the runtime, noting which classes and methods are used and stripping away the ones that are not. The AOT compiler then performs some up-front optimizations such as trivial method inlining and constructs the metadata necessary for Graal to perform its runtime optimizations. The final output is a version of the Truffle language interpreter fully compiled to native machine code; i.e., there is no Java bytecode anywhere. As an added benefit, the binary size is considerably smaller than the JVM’s because all the unused classes and methods are excluded."


> How big was the JCL (Java Class Library) at the same time?

The quoted number was for the entire JRE installer:

http://www.oracle.com/technetwork/java/javasebusiness/downlo...


Faster very frequently is one of those demands, though--be it lower latency, quicker response to failure, whatever. And "fast" is hard, and "fast" very frequently does require the development of tools, and tooling, that understands and respects that need. Which is a big reason why, when I need a user interface, I write code in React: because I'm not waiting on page blanks or writing my own partial page composition stuff. And it's why I mostly use k8s and containers when working on devops-related tooling and architecture: because it's faster and better at reacting to failure cases than something I can build myself. (This wasn't always the case, but it's gotten there now.)

Users also, for the most part, demand more correct. Not always, and not in all applications, but the tolerance for jank is going down over time. And this is a big reason why I use React and Redux: because React maps better to my understanding of how code works (namely, that extracting state to the imperative outer shell of the functional core leads to generally more correct and maintainable code) and Redux maps better to my understanding of how data works (namely, that specific and replayable transforms create manageable state). It's also a reason why I use containers: to turn the runtime environment into an "API" that is evaluated and tested in isolation from the "API consumer" that lives in the container.
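
As a toy illustration of "specific and replayable transforms" (my own sketch in the Redux style, not the parent's actual code): state is only ever produced by folding a log of actions through a pure reducer, so any past state can be reproduced by replaying a prefix of the log.

    // Illustrative sketch only.
    type Action = { type: "add"; item: string } | { type: "clear" };

    // Pure reducer: (previous state, action) -> next state, no mutation.
    function reducer(state: string[], action: Action): string[] {
      switch (action.type) {
        case "add":   return [...state, action.item];
        case "clear": return [];
        default:      return state;
      }
    }

    const log: Action[] = [
      { type: "add", item: "a" },
      { type: "add", item: "b" },
      { type: "clear" },
    ];

    const finalState = log.reduce(reducer, [] as string[]);             // []
    const afterTwo   = log.slice(0, 2).reduce(reducer, [] as string[]); // ["a", "b"]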

We are progressively moving towards software with these demands, too, and tools are evolving to respond to them. The panic around complexity is, IME, hyperbolic; a failure to choose tools with wisdom is endemic, it's not new.


At the previous 3 places I have worked, it has been a constant battle to get refactoring done so that the system is more maintainable. Some crappy feature was always considered more important (metrics showed a year later that the "important" features hadn't brought in any more revenue).


Compare the web browser you probably used to type your reply into to earlier generations of 'interface with people on the Internet' software. It does so much more.

I run Linux as my desktop, and I've been using Linux on a daily basis since the mid 90s. The amount of work that it does for its user now compared to then isn't even comparable.

How fast did this page download? Pretty fast probably. That speed happens, in no small part, because of enormously complex software running many devices between your computer and the source. Yes, hardware has gotten a lot faster, but software has gotten enormously more complex to not only keep up with the hardware, but also add and facilitate huge optimizations on top of that.

I agree that the amount of 'useless' or low-quality software is also staggering. That doesn't really surprise me; it is what it is.

And I also agree that there is a huge amount of synthetic, un-needed and artificial complexity around these days, with more being made all the time.

But I think those categories come pretty naturally, by default, from the total mass of software being created.

I guess there's a good chance that the percentage of 'good' vs. 'bad' software has always been roughly the same.


> I run Linux as my desktop, and I've been using Linux on a daily basis since the mid 90s. The amount of work that it does for its user now compared to then isn't even comparable.

And here I sit, yelling at the screen any time any of those automagical daemons and signal buses break.

Because they make fixing the problem once and for all a royal pain!

And I moved away from Windows to get away from that crap.

It used to be we had software A read or write a file (optionally one in /dev), and output some results. Maybe those results were piped to another program that drew something within X11.

But now there is daemon A talking to daemon B over signal bus C, which again talks to program D over either bus C or E, or maybe even Z, which then may go on to draw something (likely just an eternally spinning widget, because something got lost in the game of telephone).


> And I moved away from Windows to get away from that crap.

This is the dilemma of The Linux Desktop (pick Ubuntu, GNOME, systemd, or all of them). Things like internationalization, plug and play, network configuration, are all important to provide. Trying to provide them to the demographic of people that use Microsoft Windows ends up with solutions that resemble those of Microsoft Windows. Meanwhile power users and developers get alienated and end up implementing those things in a far more efficient and efficient to use way. Running Emacs as my X11 window manager gives me far better internationalization capabilities than GNOME. xmodmap is a better way to configure the keyboard than the GUI tools in GNOME and LXDE (no more worries about losing configuration on updates, for one). Linux web browser video playback and font antialiasing has been such a great feature that I now do half of my browsing with w3m (through emacs-w3m) with the X11 misc-fixed fonts. It is faster to connect to wireless networks using OpenBSD's ifconfig than it is to use nm-applet. The list just goes on - who even needs desktop notifications? Do all projects that try to be "user friendly," like GNOME and Ubuntu and systemd, end up being power user unfriendly?


> Do all projects that try to be "user friendly," ... end up being power user unfriendly?

Interesting question. Can you think of any examples of projects that started (roughly speaking) simple, power-user friendly but not end-user friendly that grew to become complex, power AND end-user friendly?


> what made it a killer app was that it took the best of the IRC experience and made it a universal experience between devices and browsers

A post ago you were saying users didn't care about mobile apps.

> Plus it can use a company's SSO

So can IRC, so can HipChat, that wasn't the issue.

> When it first came out, it was also not very complicated. I remember when it didn't have reactions or threads, but it was a breath of fresh air after having come from using things like HipChat and Gchat, both of which were pretty dreadful.

They were dreadful in the UX department. Slack and HipChat in particular are indistinguishable at the feature level.


> They don't care about Material design, fancy animations, beautiful buttons ...

The majority of those are not technically complex. While they require a lot of graphic design time, they are not that difficult to code. Software is not complex because of a top bar or Material design or autoplay video.


Author here... we talk a lot at our company about essential complexity and incidental/accidental complexity. Software is complex, absolutely. It is complex because business and reality are complex. I think that is inescapable. What we can try to escape though is the incidental complexity that we add to our systems while we are following trends or chasing down shiny new tools.


You get an outcome appropriate for the risk assumed. The capitalist business world values speed of development over reduction of risk.

That's why at many companies, whatever tools and languages allow you to get an MVP out the door before a competitor are not merely accepted but are often celebrated and lauded.

When risk really matters, such as the commercial flight industry, the pace moves much slower. It takes comparatively forever for new features to reach the market because the cost of mistakes is so significant (human life). If business behaved the way regulated commercial flight behaved, we wouldn't be having this conversation.

Pointing to open source or expressive languages as a source of complexity is, obviously to me, a red herring. The failure is our lack of rigorous engineering practices - those practices we gave up in the interest of beating our competitors to market.

You don't even have to look very far back in time to recall when software was expected to be correct upon delivery. Nowadays, particularly with online patching, we are totally accustomed to bugs and frequent fixes. Only a decade ago this was unheard of. The cost of reprinting manuals, burning new CDs (or gasp stacks of floppies) and shipping those out to all customers was so prohibitive that it was worth delaying release until all important features were in place and a full series of tests had been performed.

It's our behavior and expectations that got us where we are, not the tools. And probably there's no going back. This is the age of instant, so we'll do our best to adapt. Ironically, the human who used to have to BE the integration between his/her software tools is now the monitor making sure that the automated integrations are managed properly when an unexpected situation arises. And now we've built new layers of logging tools to help us spot these situations. But we're still stuck managing the processes...


> That's why at many companies, whatever tools and languages allow you to get an MVP out the door before a competitor are not merely accepted but are often celebrated and lauded.

The linked article has some points I agree strongly with, but I disagree in its approach to tools. An MVP that's out the door as soon as possible is unlikely to have the sort of incidental complexity tech debt that kills projects. It can't, because the focus was on shipping.

A quick prototype for an MVP is great, get stuff out the door as soon as possible. If your app needs some javascript, write it however lets you get it out the door the quickest - if that means importing someone else's big framework, fine.

I think the secret is more akin to keep _your own code_ as minimal as possible. Your infrastructure and stack should respect the M part of MVP just as much as the product should.

Cause I rarely see true MVPs. I see "ok in order to scale to a million users in the first year, we're going to need x, y, and z on top of...". And then you're not just importing someone else's JS code and writing your business logic on top. Now you're building several systems that have to work together correctly even as they get modified independently in the future, and so in the process you're encoding a bunch of assumptions about your business that the MVP hasn't actually proved out yet! And that's tech debt of the deadly sort.

We're far away from having tools that will prevent a dev from building something overly complicated if they really really want to. Instead, we have to prevent it by always asking the hard questions "do we really need this right now" and not being afraid to look like the fun police.


That's the catch. The MVP becomes the future product. We basically start with a "what can you get me by date X" codebase, and then we're forever in the "ok, what can you get me by Y" cycle. It's a long time before the builders get a real opportunity to go back and do things right.

If the MVP is successful, that success almost guarantees a continuation of the risky practices. If it's not successful, it probably means the jig is up and the code goes into /dev/null.

I don't have a lot of big team experience. But in my limited experience working with other devs, I've seen a mixture of the builders and the doers. But what I've seen in almost every case (except for the too-much-cash-flow .com cases, which are largely a thing of the past... perhaps replaced with VC funded frenzy behavior) is management that doesn't want to spend any more time on a feature than is "absolutely" necessary. And they have comparisons to offer - other companies that did the thing within a certain timeframe (unrealistic or MVPish).

It's true though, the tools we use aren't really project development tools; they're still mostly get-it-done tools. We probably would be well served with commercial (funded, supported) components and frameworks that devs work with. At least some of the obvious mistakes wouldn't be repeated.

But then we return to the realities of the modern business world. Project budgets are compared to competitors, and there's always some competitor that has outsourced a project for 1/5 of what we suggest it should take, or they've bought a COTS kit and "tailored" it for their needs. Basically, the management expectations have become quite unrealistic. Ultimately, it's easier to change jobs than to effect a change of understanding and behavior in a company. In a techie way, it's a race to the bottom.

The alternative is to build a side project that isn't total garbage. If enough of us do that, we reshape the expectations of users.


I see you've studied Rich Hickey's talks. Good :)

While I generally agree with you - do remember that Rich talks about "information-driven situated programs" - there are plenty of occasions when you don't do that. Like, when you're building a business. It could be a fatal flaw to think that your early prototype is a "situated program" - it is, in fact, just a quick hack meant to validate a concept. You start by not knowing enough about the market & your customers - so you shouldn't optimize for simplicity, you should optimize for time-to-value. New shiny tools might be good for this purpose (they're often shallow, but also they often deliver shiny&meaningful value fast).


I heard this interesting tidbit a while ago that fits your sentiments perfectly. Can't remember who to credit it to, unfortunately.

Software is really about codifying process -- pun completely intended. What happens is some slick new software comes along that implements that happy 80% path on the process, and does it quickly and efficiently. Then business complains about the 20% falling into the gap, and starts adding features to reduce that, thereby slowing down the processing. This keeps going, until someone has an idea and implements slick new software that implements that happy 80% path on the process, and does it quickly and efficiently...


Or people learn and see that they can make a happy path of 90%. Or the business removes an implemented unhappy path of 5% that fails 50% of the time. Or the business introduces a new unhappy path to keep up with the times, or times just change and the happy path now covers only 50% of the cases.

You're completely correct. Software needs are both easy and hard, and they change over time.


> The reason software is so complex now is that our expectations as users is orders of magnitude higher than it was "in the good old days". I'm old enough to remember the good old days, so I can speak with a little authority here.

Eh, maybe I've just turned into an old curmudgeon, but I very rarely fire up a "new" application and think "wow, this is so much better than the old days".

At least in terms of design, I find that most applications held up as some "gold standard" - iOS, Slack, Discord - are incredibly intrusive and obnoxious (not to mention unstable).

Applications used to be a lot simpler and less opinionated. They did one thing, and they did it very well.

I suspect it's a case of too many cooks in the kitchen, too much money and teams that are too large.


Yet I don't have driver issues, codec issues, bluescreens anymore. I don't need to fail compiling a kernel. I can watch a movie without downloading it, I can store a high quality picture on a floppy disc, maybe two.

Computer entertainment has changed: the internet works (no more unsupported browsers), gaming works (rarely any hard system requirements like specific video cards). Business works too (no more emailing spreadsheets around; accounting software works, shared calendars exist).

Applications did one thing, they didn't always do it well.


Yes, operating systems got a lot better. Arguably apps got worse.

Things that are now routine for apps that used to be considered unacceptable:

- Don't start if you're offline

- Limited or crap drag/drop.

- No app icon or start menu integration.

- File type associations, forget about it.

- Generic object linking/embedding, gone.

- Scriptability, gone.

A well written Windows 95 app would work offline, integrate with the Explorer, have an icon, might be able to embed itself into Word documents, and probably exported an API via DCOM. Modern apps? Uh, well.... they do similar things but at the same speed at best, quite possibly slower. They routinely barf if they can't reach the internet (e.g. any web app). Every resource dimension is 10x larger. Quite likely, I can't actually save stuff to a file. They crash less, but experience other kinds of flakiness more. And they probably find scrolling through any kind of non-trivial dataset a monumental challenge. Slack in particular is terribly flaky.


hear hear.


I agree completely, and would add further that more lay people than ever are using computers to do ever more quotidian but critical tasks, involving commerce, personal information, their digital memories, love lives, etc. Not only do individual programs do more, but more of life itself is done through computers, and by people who are not specially trained to operate each system. This is also why fears of devs writing themselves out of a job are laughable. The easier to use and more powerful we make our software for public consumption, the more opportunities to optimize our lives the public demands. It's hard work, but the future is bright.


I think you nailed it. Software today is expected to handle all cases, and there are not supposed to be any built-in manual workarounds. Eliminating the need for manual workarounds takes 80% of the work.

Simple example: the company I work for sells both physical and digital products. They used to have separate systems to handle each case and everything worked great. We are now building a new system to handle both at the same time. The issue is that, due to US tax codes, a different tax rate is used for a physically shipped product vs. a digitally delivered one. The new system has to account for this, and no one ever considered it in the design or planning, because each old system only ever had to deal with one tax rate per order.

If, in the rare case, a customer purchases both types of products in a single order, both tax rates have to be accounted for. A simple workaround to save a bunch of work is to limit the system to only digital or only physical products per order. This is not an optimal solution, so complexity increases by a good deal, especially once refunds, reporting, etc. are factored in.
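Roughly the kind of thing the new system now has to do, as a minimal sketch (hypothetical names and placeholder rates, nothing like our real tax tables):

    // Hypothetical sketch: tax resolved per line item instead of per order.
    type ProductKind = "physical" | "digital";

    interface LineItem {
      kind: ProductKind;
      priceCents: number;
    }

    // Placeholder rates; real rates depend on jurisdiction, nexus, product class, etc.
    const TAX_RATES: Record<ProductKind, number> = {
      physical: 0.08,
      digital: 0.06,
    };

    // Tax is totalled per kind so refunds and reporting can use the same breakdown.
    function taxByKind(items: LineItem[]): Record<ProductKind, number> {
      const totals: Record<ProductKind, number> = { physical: 0, digital: 0 };
      for (const item of items) {
        totals[item.kind] += Math.round(item.priceCents * TAX_RATES[item.kind]);
      }
      return totals;
    }

Even that toy version shows where the complexity creeps in: every refund, report and invoice now has to carry the per-kind breakdown instead of a single rate.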

This is not necessarily a bad thing but it is the reason we have such large code bases now, because we need to handle these edge cases without manual human intervention.

Our systems have to be both a jack and master of all trades.


Sure, I agree that things have changed. Despite this, outright dismissal of the premise that complexity is hurting us is unjustified and actually moves the conversation in an unproductive direction.


I guess my problem with that claim is that, like, there are process issues, but software is mostly complex because the problem it's solving is complex, and "simplifying" past a certain point means actually not addressing the problem at hand.


The author's terms are not really well-defined. From the text of the article (which, admittedly, I only skimmed), it appears that he is talking about complexity for its own sake, in the form of trying to force a project to use microservices or all sorts of mumbo-jumbo that is not actually needed to solve the problem at hand. In essence, the argument is against over-engineering, and that is a problem that causes us a lot of grief.


Well, alright, but without a way to measure it, one man's overengineering is another's appropriate design.


I mean, yeah, there is ultimately a judgment call involved. The problem is that many people make judgment calls that are clearly biased toward complexity, with vague non-technical justifications like "This will be useful when we get 10x bigger", "<Last-years-buzzword> is outdated technology", and "This is based on a whitepaper from Google." Unless the project is explicitly intended for experimentation, these are not good justifications!

If you're using something like Docker or Kubernetes, good justifications are like "Docker replaces a bunch of scripts we used to have to use to deploy our compiled binaries over a fleet of servers" and "Kubernetes really helped us measure and manage the shared hosting environment that runs our lightweight services".

How often do you hear these justifications, or others that refer to actual improvements in the technical processes? Usually it's just "containers are really cool d00d" or "We spend way too much on AWS, and since the concept of not using AWS is unfathomable to us, we are collapsing everything down to 8 instances that run Kubernetes and spending tens of thousands of dollars on the engineering overhaul to support that instead of just provisioning cheaper servers."


I see. Yes, that's a fair point that I think maybe I've gotten a little numb to because the place I work at kind of has the opposite bias (i.e., "we got burned in the past so now you have to fight tooth and nail to be allowed to introduce tools that would make work on the project significantly more productive").


Yeah, there absolutely has to be a balance. The important part is basing the arguments on sound technical and business justifications. "No, we can't move off of Perl because it's what I learned in college in 1996" is not really any better -- but then, Perl isn't really so bad. ;)


In my (perhaps myopic) experience, over-engineering is a rare situation. And statistically we can assume most "engineers" fall in the 40-80% behavior range. The 1%ers write entirely inline Python scripts, and the 99%ers write Java EE structures.

The 40-80% try to find some balance between "now" and "the space of potential futures".

Point being, most developers aren't maniacs nor idiots.


Not my experience. Overengineering is rampant. I think I've seen more webapps that were using an internationalisation framework than not, despite not one of the apps I've worked on ever having been used in a non-English market. The whole ESB market seems to not have any actual use case. Every time I've worked with a "software architect" they've made the project worse...

Most developers, myself included, are ignorant and do things without quite understanding the reasons for them. Given how much easier it is socially to make the case for "we should follow this best practice" than "we should do the cowboy thing here", overengineering is the natural way of things. All we can do is fight to keep it down to a manageable level.


Yeah, but don't worry, there's some consultant you can pay to give a class that will totally make all these determinations objective somehow.


> and the 99%ers write Java EE structures.

I.e. overengineering.


I'd say you're either pretty new, or you've been pretty lucky for this to have been your experience.

It's not that other developers are "maniacs or idiots". The issue is much finer-grained than that. Without strong technical leadership and responsible system architecture, it is easy for any project to go afield, no matter how talented the individual developers are in isolation.

Independently, most professional developers who've had a job for a while and jumped across a few different companies will be able to write acceptably reasonable code for the type of problems they're used to seeing. Even if they're using the cutting-edgiest, most fragile tooling, it will basically be fine within the context of a small demo and/or side project. Like you said, most of them are not maniacs, nor are they idiots, and they can easily manage something that is entirely under their sole control.

Without strong technical authorities and coordination in a team setting, however, you get these independent units trying to plug together, only to find that the couplers are fundamentally incompatible. Lots of individual developers are doing things that are reasonable in isolation and within the constraints of their specific timelines and concerns (some of these concerns are personal but still work-related, e.g., looking smart to peers), but which require huge effort to fix and make cross-compatible.

This is not just a matter of matching an API, though that may be part of it. The processes, incentives, and overall mindsets have to fit together well. Dev X needs to be able to read Dev Y's code from last week and understand what they were trying to do.

It's harder to read code than to write it, since writing is just an attempt to brain dump a structure you already hold, whereas reading is attempting to load a structure of someone else's conception -- a totally foreign construct -- based only on the slow, imperfect dump they were able to get on the day they wrote the code. You'll sometimes hear talk of catharsis in writing code, because it's a pleasure to express this mental model in a convenient reproducible context. You basically never hear the same of reading code.

Now, if you get Dev Y and Dev X on the same wavelength, then this isn't too big of an obstacle, and you have a sane, productive, happy group that is working under strong technical leadership to make cohesive, reasonable choices in the context of the broader team and company needs.

But if you can't synchronize the wavelengths, Dev Y moans about how stupid Dev X is the whole time, how it's better to use Pet Thing Z instead of Pet Thing A, so anything he touches gets its PTA ripped out for a PTZ, and how dev W should be moved over to replace dev X, since they're much more cooperative.

This continues for months and years. Stack enough of this together and it all just starts to blur. By the time you get to the wide scale view of the company's whole ecosystem, you're ready to trade in your keyboard for a mop and become a janitor.

tl;dr "a person is smart; people are dumb, panicky, dangerous animals and you know it".


My experience is generally the opposite - software is usually complex because people lack the skills, time and/or inclination to make it less so. The problems themselves are generally simple.

The world is chock full of developers who want to reinvent the scroll bar, badly, in JavaScript and seriously, genuinely believe this is both a good idea and a wise use of their company's precious development time and money.

It often looks akin to a bunch of civil engineers building a large rail bridge with no up-front design by making short-sighted decisions every two weeks as to which bit to add next, eventually sticking enough random bits of steel on to get a bicycle safely over it, before declaring it "finished".


It is often both. The intrinsic problems aren’t always sexy. We create other problems and focus on those so we can stay engaged.

So we have parts of the system that try to abstract away core business concerns and some big in-house framework that is poorly written and that you can’t put on your resume.


I have to this day never heard the words "single page app" uttered from anyone but developers. I'm pretty sure we are doing it to ourselves at this point.


I regularly hear "faster" or "snappier" or "more responsive." Which was my original impetus to learn React: that full page loads and page blanks weren't cutting it anymore.

If you have a better answer than single-page application tooling for this that is not "well, print a huge document, people will totally scroll to find things and wait for postbacks instead of expecting a navigable drill-down interface that responds to changes as they are made," I would genuinely be interested in hearing about it.


I've worked on more than one SPA and it's always slower than a traditional site except in very narrowly defined ways. YMMV, and some people love it.


I don't intend to jump on you, but this claim strikes me as super suspect. There are very few situations outside of the pathological case where the process of querying for data, rendering server-side HTML, and performing a full page reload is going to be faster than local-side processing of a (much smaller) web response based on that same data query followed by a partial DOM rebuild.

And that's not even taking into account perceptive effects; if you sit in a UX lab, having a page flash during rendering makes people think it feels slower. (I have done this test--adding a hook to the end of a web request to hide spinners and draw a white "flash" over a SPA--and the subjective "snappiness" score dropped from an average of 8 to an average of 5.) Nor is it taking into account slow connections and degraded functionality during interruptions, but I'll cut you some slack on that one just because so few people pay attention to it in the first place.

I'd love for you to explain more, but I'm gonna register ahead of time that I kind of expect a "doctor, it hurts when I do this" explanation.
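To make the comparison concrete, this is the shape of thing I mean (a browser-side sketch; the endpoint and element id are made up for illustration):

    // Sketch of the "partial update" path: fetch a small JSON payload and
    // patch one region of the page instead of reloading the whole document.
    interface OrderSummary {
      id: string;
      total: string;
    }

    async function refreshOrders(page: number): Promise<void> {
      const res = await fetch(`/api/orders?page=${page}`); // small JSON response
      const orders: OrderSummary[] = await res.json();

      const list = document.getElementById("order-list");
      if (!list) return;

      // Partial DOM rebuild; real code should escape these values.
      list.innerHTML = orders.map((o) => `<li>${o.id}: ${o.total}</li>`).join("");

      // The full-reload alternative: new HTML, full parse, page blank.
      // window.location.href = `/orders?page=${page}`;
    }

Same data either way; the difference is whether the browser throws away and re-parses the whole document just to show it.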


I was a tech lead on a SPA for almost a whole year. You need a whole lot of JavaScript to make an SPA happen. That hurts performance in both the transport and execution stage. Our experience rewriting a high-traffic Ruby app to full-stack React caused TTFB to double.

If the process of loading a new HTML page takes less than 300 ms or even 500 ms, it's negligible in terms of UX. It's easy to achieve this on a traditional site, especially if you use caching.

I may be cynical about it due to my own experiences, but I do not like SPAs anymore. You pay for a highly contextually specific increase in perceived performance with a lot of problems in unexpected places. Happy to go on and on about it if you like.
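For the caching part, I mean nothing fancy; a minimal sketch using Node's built-in http module (illustrative only, not what we actually shipped):

    // Minimal sketch: a traditional multi-page site where each HTML response
    // carries cache headers, so repeat navigations stay comfortably under ~300 ms.
    import * as http from "http";

    const server = http.createServer((req, res) => {
      res.setHeader("Content-Type", "text/html; charset=utf-8");
      // Let the browser (and any CDN in front) reuse the page for five minutes.
      res.setHeader("Cache-Control", "public, max-age=300");
      res.end(`<html><body><h1>Orders</h1><p>${req.url}</p></body></html>`);
    });

    server.listen(3000);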


You don't need "a whole lot of JavaScript", though. React and Redux are under 100k minified. So, yanno, a Retina-doubled JPEG or so. Your own JavaScript need not be particularly large--we're talking text, here, and even text that gets gzipped over the wire. If you're dealing with 2G connections, I can see an argument. If you're not, repeating it for a page blank on every action strikes me as, er, significantly worse.

And, FWIW, I have been party to metrics and user experience that shows that 200ms of latency can double your bounce rate. That 200ms of latency is not going to be coming from downloading react.min.js.

You may be right in some niches, but overall what you describe does not square for applications I build nor applications that my clients build, and I've been doing this pretty broadly for awhile.


The cost is not transmitting the JS over the wire, but executing it. 100k of JS is very CPU-heavy to parse and run.

SSR does help though. But then there's this whole rehydration...


As I said in a prior comment, YMMV and some people love it. What you're saying right now does not ring true for me. But I'm not the kind of person who likes to pressure developers on how they perform their craft. I was just sharing my experience since you seemed to be in doubt about whether or not I was making it up. I'm not.


> React and Redux are under 100k minified.
And the usable content on the single page is way less than 1KB (I am talking web apps, not content-heavy document pages). With markup you may go to 2-3KB.


Of course. But you're paying the round-trip on every page request, you're page-blanking every time--it just feels worse. For some things (where you expect documents--a news site or whatever) that's not the worst thing in the world, but the modern web is increasingly less document-based and more application-based.

And if initial payload is truly that much of a concern, Preact + Redux is under 5KB minified/gzipped.


Agreed. As a user (and developer) it's always so obvious when an app uses a JS framework stuffed full of AJAX calls because it always takes a few moments for the elements on the page to start to appear. It always just makes me think that the app has crashed.


I don't understand what you are saying. Can you possibly give a concrete example of software trying to replicate human judgement and wisdom and how that's unnecessary complexity?


The parent is not saying that replicating human judgement is unnecessary complexity. Quite the opposite: our expectation that software replicate human judgement is what makes it complicated, and as a result of that complexity we have "better" software.

Concrete example: Photo organizing program

Old style simple, non-complicated: Photos exist as files on a single computer that are manually placed into albums by people, manually shared in curated batches by printing or emailing.

New style, complicated, replicates human judgement: Photos exist in the cloud, shared across devices. AI software uses date, location, and face & object detection to create automated albums, reject the photos people aren't going to look at, etc.
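Even a trivial slice of the new style shows the jump; here's a hypothetical sketch that only groups by day, ignoring faces, objects, sync and sharing:

    // Hypothetical slice of the "new style": auto-group photos into albums by day.
    // The real feature (faces, objects, cloud sync, sharing) is far more work.
    interface Photo {
      path: string;
      takenAt: Date;
    }

    function autoAlbumsByDay(photos: Photo[]): Map<string, Photo[]> {
      const albums = new Map<string, Photo[]>();
      for (const photo of photos) {
        const day = photo.takenAt.toISOString().slice(0, 10); // e.g. "2018-01-29"
        const album = albums.get(day) ?? [];
        album.push(photo);
        albums.set(day, album);
      }
      return albums;
    }

The old style needed none of this; the person dragging files into folders was the album logic.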


Exactly this. Software is complex because users have complex needs, or more specifically: users have varying needs and the union of those needs is complex.

Also: quality and dev cost are always factored in and complexity wins. As a developer I always argue for the simple, clean, maintainable, performant solution - but the business need of the customer is such that they’d nearly always rather have the slow, expensive, buggy and bloated solution (that is - given the choice at fixed cost). Even after pressuring customers on this - the answer in my experience is always “yes we accept poor performance, poor reliability so long as the features are there. Make it complex, we’ll pay for it with money and we’ll pay for it with accepting poor quality. Go for 10 features working 80% rather than 3 features working perfectly”.

And the reason they want this? It's because their existing apps (from the "good old days") might be fast and so simple they're superficially bug free, but they just don't solve the problem. They have to do manual work instead and it's costing them.

So in the end we are making apps with complexity so high that making them properly (i.e. with the kind of thought and care that goes into e.g. a compiler) would take at least ten times as long. But customers would reject that product either on cost or lack of features, or both.


Couldn't agree more.

Not only do we have old businesses being entirely automated; automation itself is also opening up completely new markets.

Software complexity is exploding probably at the same rate it's absorbing the complexity that used to be scattered around companies in a less formalized, less cost-efficient way, across people and processes, w/ all the challenges that this implies (communication, correctness, monitoring, maintenance, etc).


The company I'm at now has been building the "most simple", "low-code" solutions they can find for 15 years.

We're constantly talking about the Pareto principle: "80% of software can be written in 20% of the time." And we optimize for building that 20% of the code in the simplest, fastest way possible.

The problem is the other 80% of the code, the part that accounts for 20% of the features, ends up being necessary before full adoption of the software takes place. 80% working simply isn't good enough. But since we've optimized for that easy 20% using low-code solutions, the other 80% of our code ends up being twice as painful. We end up deciding it's too expensive to continue and attempt to force use of the 80%-finished software as-is. Historically, this hasn't turned out well for us, and users end up compensating via spreadsheets or even paper.

It takes a couple of years for this whole process to play out. The project is often declared a success since 80% of it got done and deployed so quickly. But there are always those last nagging bits that, upon further inspection, have led to the app/feature/whatever being abandoned later on.

> It is that kind of thought process and default overhead that is leading companies to conclude that software development is just too expensive.

It is too expensive for them. If you're building a house and get sticker-shock at the cost of building the foundation, maybe you're in too deep.

Please don't skimp on the foundation.


Hundreds of comments before someone addresses this part of the article, shameful. The other aspect to 'low code' solutions is that, as software developers, we have to chase the next hot thing for employability and career development. No one wants to be stuck knowing only ColdFusion when that goes away. I make the argument that web frameworks like Rails and Django are the 4GLs of today. 4GLs are roughly analogous to 'low code', I believe.


This is right. While an ideal software business would be written in 20 lines of brilliant, stupid-simple genius code which competitors can't figure out on their own, customers will always need the small details which make the system click for them. Optimising for the least maintenance burden can't be taken to the extreme.


I don't get this article; the author identifies the issue immediately:

> in the process, we often lose sight of the business problems being solved

And then goes off on tangents as to why that is happening, finally arriving at what amounts to "developers don't like to use visual UI tools".

The original cause identified is right, people lose sight of business goals, but why? Because they have poor project management and they are in the trenches all day long. They want to do right by the code base, so they abstract things which don't need to be abstracted. The failure here is one of trade off analysis and again losing sight of what is important, shipping a usable product which meets business goals.


I mean overall I feel like this article is going in an unconvincing circle because it is trying to solve a problem that doesn't have a solution. "We want software to be faster to market than ever, have higher availability than ever, have more complex feature sets than ever, but also have higher quality and accommodate constantly-changing requirements. Also it needs to be secured to an extent that would have not necessarily been necessary pre-Internet. Oh and we'd like to keep down wages as much as possible as well."


Author here... I agree that developers often lose sight of the business problem being solved, but I think just stopping there is an over-simplification. It is very possible to build an application that serves the business problem adequately, but is grotesquely over-engineered or solves the problem in a ridiculous way. The world is built on software like that. My point was that we are introducing a lot of needless complexity to systems, all in the name of following trends or chasing the new shiny. And we are hurting ourselves by doing it.


> they abstract things which don't need to be abstracted

I'm trying to remember the last time I encountered a code base more than a year old and thought "the problem here is too much attempt at abstraction and elegance".

Broadly, my experience is approximately the opposite: over-abstraction happens to meet amorphous and unpredictable use cases, while any longstanding code base is full of un-designed hacks that arise from a constant short-term approach.


The irony is palpable here... The website loads slowly, elements move around the page during the first load, then the fonts blink around and go from system font to some custom font. It loads data from 15 different domains, and four of those are only for tracking.


True, but hypocrisy does not have much bearing on the validity of the arguments.

2 + 2 is 4 regardless of whether it's Hitler or Feynman claiming it.


That's not entirely true and your analogy is not comparable. Person A makes a claim that X is good and everyone should be doing it. Person A does the opposite of X. That doesn't prove one way or the other the truth of the arguments but it does call into question what the goal is with the arguments. There could be many reasons for such a discrepancy but a common one is manipulation.

As an example, if a cryptocurrency executive says their coin is fantastic and everyone should invest in it, but sells their entire holding, does one not question the goals of such a statement? Note: this example is also not directly applicable to this article (at least I hope it isn't), but it's closer than your example was.


I will concede that it is not completely analogous. But you are talking about something else, namely the value of the intent behind a claim.

As I argued, intent has no bearing on the truth value of the claim, but it can – as you point out – be correlated with the truth value. I stand by my point that dismissing an argument based on intent is fallacious. You have to deal honestly with an argument – regardless of the messenger – to assess its truth value.


I don't disagree but I also don't see where that was suggested. The OP just pointed out the radical difference between what was preached and what was practiced. This could be seen as a dismissal but it could just as easily be seen as a call to action or just pointing out something amusing.


Sorry, I did not mean to straw man you. Re-reading your comment I realize that you didn't disagree, but were rather pointing out something else.


I'd argue that it is different 2s and different 4s.


Author here... yeah, we are loading a lot of crap, aren't we? We will do an audit and make sure we aren't loading a bunch of unnecessary stuff, but the whole site should be served up through a CDN, sorry it is loading slow for you.


The page triggers 50 HTTP queries and is 3.85MB, so a CDN is only going to do so much.

* the base page is light but has very high latency (250ms)

* there's an almost-empty CSS (style.css) next to one which seems a bit overwrought (screen.css, 140KB is a lot for a CSS file), oh plus normalize from cloudflare despite it apparently already being included in screen.css

* you're loading 9 different JS files from your CDN, plus some emoji crap from not your CDN

* typekit accounts for 1 JS query followed by fetching 12 different font files, at 15~30K each

* 14 different queries to "driftt.com", whatever the fuck that is, though I expect it's some sort of analytics/tracking crap

* tracking & analytics & ancillary service queries up the ass: hubspot (4), facebook (4), google analytics (2), drift.com (2), others (5)

* the Lato CSS from Google Fonts but apparently not the font itself; on the other hand you're loading "larsseit.otf" from your CDN, maybe pick just one?


It also works perfectly fine with all 3rd-party requests blocked (via uMatrix) for me. So other than the CSS none of what you mentioned is necessary to get a readable article.


How does something like that even happen? Did someone copy and paste lots and lots of stuff together to end up with this abomination?


50 is quite low. I was auditing a big UK ecommerce site at the weekend and the homepage had over 170 elements.


> 50 is quite low

50 could be "quite low" if most of the requests were for "content" images. TFA has under half a dozen images; as I've outlined in my comment, most of the requests are for ancillary garbage.

> I was auditing a big UK ecommerce site at the weekend and the homepage had over 170 elements

See above, 170 is pretty high but assuming at least one image request per product[0] plus a few more for buttons & the like, having a large number of requests would be understandable for an ecommerce site.

ecommerce sites would also be a place where:

1. analytics is very, very understandable

2. the site would use a pretty generic system which could hinder specific optimisations

3. assuming the user will browse around, the site could preload various stuff, sacrificing some upfront performance for a nicer "inside" experience

All in all, I'd be much more understanding of that than 50 request on a blog post, even more so a blog post on software complexity. Incidentally, the amazon.com home page generates ~250 requests.

[0] because spriting for web pages remains a pain in the ass, doubly so for JPEG



I'm really turned off by the huge header fad.


And yet it works perfectly fine without JavaScript, and my adblocker didn't even need to block a single request.

But the content is meh, I couldn't finish reading it and found your comment more interesting than the whole article.


it's a subtle live demo ;-)


I also think it should be noted that compromise between designers & developers & business domain experts (seemingly now called 'product managers') could vastly simplify all this.

The more empowered a developer is to override a designer/biz person's preference[1], the better it is for all.

Way too often, I see developers just accept what the product manager/designer specifies with zero pushback when it strays outside what could be considered the 'norm' for whatever tech stack is in play.

[1] I use the word preference because that is all it is - their preference. They don't have a crystal ball.

Sidenote: I'm also seeing a rise in UX suddenly becoming owned by the UI & product management team, which is bizarre. You have designers (whether from print backgrounds or whatever) making poor (& I mean piss poor) UX decisions because they don't have the exposure/understanding of web platforms. Case in point: if your UX person has a calendar widget to enter a date of birth, they should no longer be allowed to do UX.

/rant


Without going into who is inherently better at design choices, the problem often seems to be with the process: UX/UI designer sits down and builds some mocks, then hands them to the developer to implement. The developer sees that the mocks as designed go against the grain of their chosen framework/platform, and they have the choice of either implementing them in a hacky way, or pushing back and finding out if they were intentional choices or merely weak preferences.

They’re almost always better off spending the time doing the latter, or even better, working with the designer during the design process to both understand which decisions the designer actually cares about as well as communicate the platform constraints/preferences to the designer.


What you're describing there isn't a problem with designers doing the design, it's a problem with bad designers doing the design. And it goes double for programmers who are bad designers overriding the actual designers and domain experts on UI matters.


> it's a problem with bad designers doing the design

No, it's not.

A designer making that mistake isn't a signal s/he is a bad designer. I've worked with UI designers who range from good to excellent when it comes to design.

Thus, my overarching point which was that UX ≠ UI

and that UX shouldn't be co-opted by UI teams.


IME, the UI/UX distinction is almost entirely artificial buzzwording anyway. People either define UX so broadly that it loses any useful meaning, or sufficiently narrowly that it's just stuff good UI people were doing long before anyone started buzzwording it.

Similarly, you seem to be using a particularly narrow definition of "designer" here, as if the only people designing web UIs come from print backgrounds and the only thing covered by "design" is aesthetics.

If you have a team where you have someone from that mostly unrelated background doing your UI design, sure, they're obviously not going to make great decisions. But even then, there's still no reason to assume your programmers will do any better, unless those programmers happen to also have genuine UI design skills (which they certainly could, but it's mostly orthogonal to their role as programmers).


UI: User Interface - the layout & visual look & feel of the page/components. UX: User Experience - the interactions of these pages/components by the user (incl. expected behaviour etc.)

> you seem to be using a particularly narrow definition of "designer" here

My definition of a UI designer is somebody who designs UI (which encompasses those from print backgrounds, as a lot of them probably were at some stage in their past)

> but it's mostly orthogonal to their role as programmers

No, it's not, unless your programmers have been relegated to code monkeys. In years gone by (pre web dominance, when native desktop applications ruled), developers were the designers, the user experience people, etc., which led to the checkboxes-everywhere UI.

Good UI needs someone with good design aesthetic. This is where UI designers shine.

In contrast to other design fields (most notably Industrial Design, where designers have pretty much always owned (& thought about) design & interaction as a whole), that is not something which UI designers have done. It's only in recent years that I've seen UI teams claim UX in the form of renaming themselves as "UI/UX team". My point is that UX has not historically been something they've put much thought into, and as a result, they are not qualified to just own UX.


UI: User Interface - the layout & visual look & feel of the page/components. UX: User Experience - the interactions of these pages/components by the user (incl. expected behaviour etc.)

But UI has always been about more than mere aesthetics. A UI is literally how the user interacts with the system, the interface between them. That subject has always encompassed usability and accessibility issues, information architecture, planning sequences of interactions, and so on. What you've described is exactly the illusory distinction I was talking about.

I'm not sure there's much to be gained from pursuing this line of discussion any further. Obviously you can redefine terms to suit your argument and thus make your argument true in that context, but then it doesn't really address the original point of debate, which was whether developers should be overriding designers and domain experts in matters of design. If you relegate your designers and domain experts to the equivalents of those code monkeys you mentioned while elevating your developers to developers who are also designers and domain experts, obviously the latter are going to make better calls on most UI issues as well, but then you have to ask why you had separate designers and domain experts involved at all. I don't think that's a normal distribution of skills and responsibilities within a product team, though.


> which was whether developers should be overriding designers and domain experts in matters of design

in matters of UX...


Are you perhaps thinking of the term GUI? UI is not generally narrowed to refer to only visual appearance and layout. The mouse is a UI tool, as is the keyboard.


I've pursued this definitional distinction at a previous company, and got nothing but eye rolls. In modern software design, UI == GUI and design == visual design. That's just how people use those words.


Yes but visual design always includes interaction and behavior. A position that only lays out visual elements with no regard to behavior would be called an artist.


Kind of a pedantic comment given the article's content...

Do designers who design webpages/apps generally call themselves GUI designers?


Pedantic? You described the distinction between UI and UX, you just described it incorrectly.

To answer your question, no. Such people usually design interactions. Behavior. What happens when the user clicks or scrolls or drags. So they're designing UI and they're called UI designers. UX is a step removed, defining the overarching design language for the app and how it makes users feel.

In my experience there are no people who work only on the visual layout of a page and not on behavior (what you incorrectly called UI). Mockup artists maybe, but even they are usually thinking about behavior.


Regarding the sidenote,

I can visualize an engineer making that mistake way more than a UX person.

"I need something to enter dates, oh here's a calendar widget library. done."

The reason for product to own UX more is they should be more connected to the 'why'.


My point was more targeted at UI designers who co-opt the UX role. If there's a dedicated UX person, they're less likely to make that mistake, as they'll be actively thinking about the UX.

Whoever owns the UX should be putting the thinking effort into how that behaves. Things like this happen due to a) inexperience with whatever UI construct is being presented and/or b) not enough thought being put in to the usability of that component.

A developer is generally going to be more hands-on with the resulting UI (checking/testing etc.), whereas a designer has more than likely mocked it up in InVision/whatever and then done surface-level checks.

And product managers will be hands on with the product but, well, they shouldn't be trusted with UI/UX/anything but the description of the problem they want solved :-)


Honest question: what's wrong with using a calendar widget for DoB? (engineer here)


A calendar is good for a 'free choice'. It can provide context (i.e. day of week) and can be overlaid with useful information (price for a date, availability, etc.).

A calendar is not good for pre-determined dates where the date is fixed before hand - i.e. you don't get to choose what value you want the date to be.

For example, if you're entering in your driving license details for insurance quote, you don't get to choose when it expires - that's already decided for you. Ditto for DOB and other such things.

So, there are far more efficient options for date entry, ranging from a single text input (with appropriate parsing/validation) to the '3 dropdowns' option (D, M, Y), which is more effort but easier to avoid mistakes with.
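For the single-text-input option, the parsing/validation can be roughly this (a hypothetical sketch, asking for an unambiguous YYYY-MM-DD format):

    // Hypothetical sketch: DOB as a plain text field, validated on parse.
    // Asking for YYYY-MM-DD sidesteps the dd/mm vs mm/dd ambiguity entirely.
    function parseDateOfBirth(input: string): Date | null {
      const match = /^(\d{4})-(\d{2})-(\d{2})$/.exec(input.trim());
      if (!match) return null;

      const [, y, m, d] = match.map(Number);
      const date = new Date(Date.UTC(y, m - 1, d));

      // Reject dates like 1990-02-31 that JS would silently roll over,
      // plus anything in the future or implausibly far in the past.
      const valid =
        date.getUTCFullYear() === y &&
        date.getUTCMonth() === m - 1 &&
        date.getUTCDate() === d &&
        date.getTime() <= Date.now() &&
        y > 1900;

      return valid ? date : null;
    }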


You weren't born on a range of days.

You're unlikely to change your mind about when you were born.

You really only ever need to provide three well defined numbers.

So, why display a calendar?


Well, one issue is that there are different date formats used worldwide and there's a risk that the website will accidentally swap the month and date if I simply enter three numbers without thinking about them. mm-dd-yy and dd-mm-yy are both used occasionally.


There are also different calendar formats.

https://msdn.microsoft.com/en-us/library/cc194816.aspx


For starters, the default date is generally off by anywhere from 20 to 70 years.


Isn't the most common date entry field one where you can type directly into, but have the option to use the calendar widget?


> business domain experts (seemingly now called 'product managers)

I wish they were called product managers. It's been my experience that what starts as product management slides into project management, as the product managers are not interfacing with customers or crossing department lines for requirements gathering.


I totally agree with the article. I would like to add that, as a culture, we also stress features over bugs.

Everyone says that security, logic bugs, and invalid data are a problem, but I haven't worked for anyone that prioritizes security and bug fixing over features. As we put more and more time into a product, with deadlines that don't make sense, we basically end up with a whole pile of code that can't be tested in any amount of time we're given.


we stress features over bugs

I don't think we do. We stress the needs of the user over everything (and rightly so in my opinion). Often that means a new feature is more important than a lot of minor bug fixes, but equally a critical results-in-data-loss bug or an important security patch will take priority over any new features. It's just that those things are actually quite rare in production code, so it feels like features always take precedence.


> We stress the needs of the user over everything (and rightly so in my opinion).

We don't. We stress whatever is hot on forums. And sometimes we stress whatever the most overconfident person in the room pushes for - regardless of whether that person has empathy for the user or not.


> We stress the needs of the user over everything

Imagined needs of an imagined user perhaps. And more and more that imagined user is a drooling idiot best left in a padded cell it seems...


There's no need to imagine what users need. There are plenty of great products to capture data about what they do in your app, and you can always talk to them as well.

As for them being "drooling idiots", I find that if a user is getting something seriously wrong it's almost always because a developer or a designer built something that's failing at being easy to use rather than the user being at fault.


Features bring in revenue, bugs represent risk to lose that revenue.

If you're over-leveraged (as in debt) then you eventually hit an inflection point. However what I haven't seen is a good study of what that inflection point looks like in our industry.

I can imagine some measures around lost sales due to feature parity vs attrition due to problems in quality. Both of these are unfortunately trailing indicators that you have a problem.


There can be lost sales due to quality. Clients usually have a demo or a trial before they buy.


Very true. The risk applies at almost every stage.


I haven't worked for anyone that prioritizes security and bug fixing over features.

It’s a huge problem. Not only that, but adding new features comes with a certain prestige that fixing bugs doesn’t have.

Every 2 weeks the developers in my company have a meeting to share the new features they worked on and they applaud and congratulate each other.

There is no such meeting for what bugs are fixed. It’s seen as drudge work and no one wants to do it.


I stopped being concerned about this specifically a couple of years ago. Complex software does not need to be refactored; it needs to be rewritten. The important thing that the legacy software does is firm up the domain. With the concepts and language set, rewriting to encapsulate new semantics is relatively easy. It's when you have no idea what the domain is going to look like that it's hard, and that makes for complex, quickly-becoming-legacy code.


On the contrary, software should rarely ever be rewritten from scratch.

Rewriting existing software both is incredibly risky from a business perspective and potentially jeopardises the hard-won knowledge from years of working on and working with the software (also see Joel Spolsky's take on this: https://www.joelonsoftware.com/2000/04/06/things-you-should-... ).

As tedious as it may be sometimes, refactoring software and gradually replacing parts after having written the appropriate tests that make sure the software still works as expected almost always is the better long-term approach to reducing complexity and improving software quality.


I'd like to see that happen just once in my career.


Seen it happen more than once. The new guys throw away the old thing because it's a mess and it's too hard to understand. Two years later, the new new guys throw away the new thing because it's a mess and it's too hard to understand.

The company's website is ever new.


No, I meant the scenario where a company listens to the engineers and lets them put off business goals to spend weeks to months every so often refactoring, ending up with a modern, easily-maintained system instead of an arcane mess.

My solution to the problem when it was my problem to deal with was LITFA (leave it the fuck alone) until business priorities allow replacing it with a solution that's managed by other people. Obviously the reason it turned legacy in the first place was because insufficient resources didn't allow proper maintenance. Well if maintenance was too hard, rewriting with your maintenance staff certainly won't drive a better outcome.

The only way I see either refactoring or rewriting working is if LITFA introduces an existential risk to the company, and the company really, truly, actually does have enough resources on hand to do a proper job of it. But I don't believe anything short of a bet-the-company situation is going to really light a fire under people's asses.


I have seen refactorings, but they did not involve putting off business goals. They involved one part of the team refactoring a part of the codebase while the other part of the team produced new features on another part of the code, so that at least some business goals were met.

You also do it piece by piece, because a half-done refactoring is worse than none.

One potential disadvantage of LITFA is that you miss out on learning. Bad legacy code does not happen just because of a lack of maintenance, but more often because the original architecture is not suitable for all requirements and it requires hacks. When you (or the team) write the same thing a second time, you write it differently and better than the first time.

There is also a question of who should do the refactoring - and it should be someone who knows requirements and has good judgement (e.g. not a new junior you barely know).


>>> ending up with a modern, easily-maintained system instead of an arcane mess.

This ending never happened, ever.

I think I agree about the resource aspect you mention. A lot of the software I've seen was about as good as it could get with the resources that were allocated to it.


> the reason it turned legacy in the first place was because insufficient resources didn't allow proper maintenance.

I want to get that on a T shirt, a placard, an ad campaign, and a billboard.


> No, I meant the scenario where a company listens to the engineers and lets them put off business goals to spend weeks to months every so often refactoring, ending up with a modern, easily-maintained system instead of an arcane mess.

I've seen it happen, once. Almost two years of work, lots of architecture, design and all that. Built a beautiful codebase that implemented a product that no one actually wanted. It was a positive experience in some ways, but not something I want to see again.


>jeopardises the hard-won knowledge from years of working on and working with the software

We have issue trackers and robust source control these days; that's not really true unless someone wrote poor commit messages and didn't write anything on the issue. You can roll that into the new specs if you take the time to document. Knowledge preserved.

The only hard-won knowledge you should be fighting for is business knowledge, which can be dependent on your UI/UX decisions. If you rewrite and change that substantially, yes, you lose that knowledge because you've obviously changed the UI/UX. You now force all of your users to relearn your product. Plus, you're going to get a whole new category of bugs with that new UI/UX.

All of this was made more difficult back then by not having the modern language features in stuff like Rust and .NET and Go and Python that have made things easier and safer. Rewriting .NET -> .NET is much, much easier than C -> C. If you change languages, that's worse: you'd have to completely learn two languages and all of their painful/weird points AND translate between those two (if you thought JavaScript equality was bad...)


After all, who's ever heard of a code rewrite getting out of hand and taking way more time than anticipated or even never being finished at all?

It's not never the right answer but there are serious risks in doing a rewrite and you'll be stuck supporting two systems for a substantial length of time.


Refactoring is never the answer either. Just keeping the legacy system standing until capital becomes free to replace it is what usually happens in my experience.


I think there are a couple of different problems driving overly complex software.

1. Cargo-culting. The article talks about this, but one thing I've seen consistently over the years is that something new comes along, and people latch onto it like it's the solution for every problem, everywhere and you're stupid or outdated for not thinking the same. It's the whole, "RDBMS's are dead!" thing.

2. Shiny-new-thingism. This is slightly different than cargo-culting in that people don't necessarily think the new tech is the ONLY way to do things, it's just that people want to use something new because it's hot. Then people get deeper into it and realize that there are some unknown unknowns that bite them in the ass. This becomes more likely the more abstracted things are, and/or the more moving parts there are. It's not that old technology doesn't have that problem too, but there are more charred carcasses of other people's failures to learn from.

The other thing is, as a developer there is a strong urge to try to use new tech because 'hot tech' = 'higher pay'.

3. The business requirements were not well-understood when the project was started (and may still not be well understood). Sometimes you're told to start building X, but you find out over time you really should have been building Y, while the customer asks for Z (no wait, W!) and you never get the time to pay off that technical debt. This is probably the hardest problem to address and is a large reason agile caught on.

4. Bad software architecture. Maybe you inherit software that has a crapload of technical debt because of point #3, or maybe it's because the previous team made some really bad design decisions from the get-go due to incompetence/inexperience. So then your choice becomes: do you tear it all down and start over, or do you limp along and add to the skyscraper made of popsicle sticks and baling wire? Oh, and there are no requirements!

So you're left performing software archaeology like Indiana Jones, trying to figure what it was that a previous civilization was trying to do. Then you have the horror movie moment where you realize, "This...never worked, it was wrong all along..."


>The other thing is, as a developer there is a strong urge to try to use new tech because 'hot tech' = 'higher pay'.

That's a great point. We developers do often engage in "resume driven development" – often not intentionally, just subconsciously observing that some tech is hot and there's a lot of demand for it. So we look for excuses to use it, which is of course backwards.


The only way out of this box, IMHO, is proper layering of software and proper interfaces. Human brains have a limited capacity to manage complexity. There is only so much you can fit into your head at one time. (I dunno, maybe it's just me, but I've been around long enough to have hit this limit repeatedly, and to have started forgetting about code I have written in the past.)

To combat this, humans require abstraction. A proper abstraction fully encapsulates the implementation details and provides an interface that does not require knowledge of the details behind the abstraction.
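A toy illustration of the kind of boundary I mean (hypothetical names, obviously):

    // Toy illustration of a non-leaky boundary: callers see only the interface,
    // never the storage details behind it.
    interface User {
      id: string;
      name: string;
    }

    interface UserStore {
      get(id: string): Promise<User | null>;
      put(user: User): Promise<void>;
    }

    // One implementation; it could be swapped for Postgres, Redis, etc. without
    // touching any caller, because nothing about Map leaks through the interface.
    class InMemoryUserStore implements UserStore {
      private users = new Map<string, User>();

      async get(id: string): Promise<User | null> {
        return this.users.get(id) ?? null;
      }

      async put(user: User): Promise<void> {
        this.users.set(user.id, user);
      }
    }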

We have too many leaky abstractions. Bugs, performance issues, brittle software broken by updates. Language features that break in complex ways when users make mistakes (hello C/C++!). A leaky abstraction lets its details spill out through the back door. These cause cognitive load and make it impossible to layer software properly.

The only way to build a big system is proper abstractions.

Until Spectre and Meltdown, ISAs were good abstraction layers.

Today, still, my personal opinion is that the kernel system call layer is one of the strongest abstraction boundaries that we managed to enforce. Above this layer, it's all a mess. Below this layer, it's all a mess. (To a first approximation).

We aren't getting anywhere until we get the layers right.


I think the culprit is the programming languages: they offer all these features that allow complexity, and if you allow it, people use it. It sounds like an oversimplification of the problem, but I honestly think that's the entire problem summed up. The second layer is all the legacy layers that we build on top of. Nothing can be done quickly about that, but as soon as you make a better language, people seem to almost instinctively jump at fixing the legacy layers too. People write an entire OS in Rust, and Rust isn't even simple. We just need a simple, fast and safe replacement for C, and I believe all the messed-up parts of software would get fixed very fast.


I'd like to offer an empirical counter-point: Java.

Java, while a good quality language overall, is deliberately missing some important features like function passing (possible these days...). This is an effort to make it easy to learn and to encourage people to build simple software.

Its library ecosystem (with a few superb exceptions) is the epitome of over complex mediocrity. Java defines "Enterprise Bullshit".

Why has this happened? I think it's the missing features. You get a sort of Jenga tower where the foundations aren't quite right, so you build some other abstraction on top. But then that's not quite right either, so you build the next layer back in a different direction.

More powerful languages mean you need fewer layers. And it's easier to rebuild those layers if they turn out to be wrong.


> Why has this happened? I think it's the missing features.

I have a more, I guess, cynical explanation.

I think developers, even the ones who proclaim otherwise, love complexity. It suits their egos to think they are working on something really complicated that only they understand. They will still say it's simple, but it is only so after transcending some barrier of understanding that nobody but them has crossed.

So this is how things work: if a technology is created that actually is simple (e.g., Java), developers will immediately start adding layers of complexity. They will add more and more layers right up until the point where they themselves can no longer understand it. This is what I think of as the "complexity budget". Only when the complexity budget is fully consumed, and they are intellectually satisfied, will they stop and start to refactor, to make it "simpler".

If you want a second example, take a look at the React ecosystem. React fans will tell you how simple it is. How you can use React by only knowing Javascript and two or three simple rules. Then have a look at what any real world React project actually looks like and observe the 17 layers of complexity they added on top.


COBOL was even simpler than Java. So simple that it's almost unusable outside of mainframe time-sharing applications.


The java ecosystem is great. One of the best of all languages in existence.

Software development in big companies and outsourcing firms is usually terrible and it gave a bad reputation to everything they use, including that language.


Java has a pretty good and very rich ecosystem, including a huge open source community. And it is not that difficult; even the current "Enterprise" part of that ecosystem is not that difficult. As for the complexity of some frameworks, that was:

a.) A learning step with new tools available.

b.) The result of people attempting much more ambitious projects than before, with zero experience.

People always loved to hate Java. Nevertheless, it performs in environments where attempting to use anything else invariably fails.


> As for the complexity of some frameworks, that was:

> a.) A learning step with new tools available.

> b.) The result of people attempting much more ambitious projects than before, with zero experience.

Disagree. A lot of Java frameworks are genuinely complex because the language is too simplistic to implement them directly, so each framework ends up having to build what's effectively its own extension to the language. Even well-designed frameworks are like this. Jackson's module registry and annotation-mixin interfaces aren't there because Jackson's developers are inexperienced or overambitious; they're there because to implement a JSON serializer you need that kind of functionality, and there's no sane way to implement it in the language proper. Likewise Spring's magical autowiring, or its bytecode-manipulation-generated magical proxies that are used to implement things like @Transactional. Likewise Jersey's resolution from annotation classes to .service files with magical filenames to find out how to serialize a given entity. And so on. The frameworks are complex because the problems they're solving are complex; simplifying the language beyond the point where it can express the complex problem doesn't make the problem any less complex.
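
For anyone who hasn't fought with it, here's a rough sketch of what that mix-in mechanism looks like (class names are mine, purely illustrative): you declare Jackson annotations on a throwaway abstract class and register it against a type you don't own. It works, but it really is Jackson grafting its own extension mechanism onto the language.

    import com.fasterxml.jackson.annotation.JsonIgnore;
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class MixinSketch {
        // Stand-in for a class from a library we can't edit.
        public static class ThirdPartyUser {
            public String getName() { return "alice"; }
            public String getInternalToken() { return "secret"; }
        }

        // The mix-in: annotations declared here get applied to ThirdPartyUser
        // members with matching signatures.
        abstract static class ThirdPartyUserMixin {
            @JsonProperty("user_name") abstract String getName();
            @JsonIgnore abstract String getInternalToken();
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            mapper.addMixIn(ThirdPartyUser.class, ThirdPartyUserMixin.class);
            // Prints {"user_name":"alice"} -- field renamed, token dropped.
            System.out.println(mapper.writeValueAsString(new ThirdPartyUser()));
        }
    }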

> People always loved to hate Java. Nevertheless, it performs in environments where attempting to use anything else invariably fails.

I've had a great career replacing Java with Scala, where the language is powerful enough that you don't need any magical frameworks, only plain old libraries with plain old datatypes and functions. It's much nicer to work with.


Spring's auto-wiring isn't like that because of language limitations. You don't need a DI framework to write Java. It's like that because some people love writing reflective frameworks. It's easier than modifying compilers but harder than just doing it the "obvious" way, so it satisfies a lot of programmers' I-want-to-stretch-myself itch.
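
The "obvious" way, for what it's worth, looks roughly like this (class names are hypothetical, just a sketch): dependencies wired by hand in one composition root, no container and no reflection. Whether that scales to hundreds of beans is a separate argument, but the language itself doesn't force the autowiring on you.

    // Hand-wired dependencies: plain constructors called once at startup.
    class PaymentClient {
        private final String endpoint;
        PaymentClient(String endpoint) { this.endpoint = endpoint; }
        void charge(int cents) { System.out.println("charging " + cents + " via " + endpoint); }
    }

    class OrderService {
        private final PaymentClient payments;
        OrderService(PaymentClient payments) { this.payments = payments; }
        void placeOrder() { payments.charge(4999); }
    }

    public class Main {
        public static void main(String[] args) {
            // The entire "container": constructor calls in one place.
            PaymentClient payments = new PaymentClient("https://payments.example.com");
            OrderService orders = new OrderService(payments);
            orders.placeOrder();
        }
    }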


In my experience it's the opposite. The creation of more and more complicated code led to more complicated languages, not the other way around. Some stuff is just going to be complicated regardless of language, and if you use a less complicated language like C, it sometimes makes the resulting code more complicated, because you have to spend time essentially duplicating language features that already exist in the more complicated languages.


I didn't mean to imply that C has enough features; it definitely should have a few more. But that should be it; don't keep adding stuff just because it's 'nice' to have. The added cost of maintaining big codebases in a language that has many features compounds a lot.


It's been a long time since I coded in C but I can't imagine there aren't solid, battle-tested libraries for handling strings, collections, and all the other nice things that Python gives you. Am I wrong?


Sure.

Now let's say that you're using one library using String Library A, one library using String Library B... go forth and convert strings at every point in your application where you use one library or the other.

There are benefits to having things included in a language in an opinionated way -- everyone using that language does it that way.
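
Not strings in C, but the same glue-code tax is easy to show in Java terms, which might make it concrete (hypothetical wrapper functions; Joda-Time and java.time standing in for "String Library A" and "B"):

    import java.time.Instant;
    import org.joda.time.DateTime;

    public class GlueCode {
        // Pretend library A hands back Joda-Time while library B wants java.time.
        static DateTime fetchFromLibraryA() { return DateTime.now(); }
        static void handOffToLibraryB(Instant when) { System.out.println("got " + when); }

        public static void main(String[] args) {
            DateTime fromA = fetchFromLibraryA();
            // The conversion shim you end up repeating at every boundary.
            Instant forB = Instant.ofEpochMilli(fromA.getMillis());
            handOffToLibraryB(forB);
        }
    }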


I agree languages are the culprit. Too often we just develop in a new language for fun or just to fill a CV with new buzzwords. Sometimes it leads to innovation. In most cases it's pure bloat.

Recently overheard a conversation going this way:

A: "I've built a client for X in Scala." B: "We already have one in Java. What features did you miss that made you decide to write it again?" A: "It was in Java. Now I'm trying to write one in Haskell. It will be fun."

Even worse, oftentimes the only way to attract new talent is by allowing folks to write stuff in a language that other companies have not (yet) adopted. This leads to a vicious circle where new languages keep going to production while the developers stay at the same level of knowledge in those languages, since they never spend enough time to dive deep into them before the next "new kid in town" arrives.


There's some of that, but there are real reasons you might prefer Scala over Java besides it just being more "fun."


I completely disagree. I can't see why we'd expect software to increase in quality if we made it more tedious to do common things.


Lack of architecture is to blame, and poor design. I'm sorry to say, after having worked in software professionally for over 10 years, that most people just suck at any kind of reasonable design or architecture. You can attribute it to lack of experience, incompetence, or lack of care, whatever, but most people are just bad.

The only metric an engineer should be judged by is: do they make everyone around them better? How quickly can a new developer come up to speed in their codebase and make meaningful contributions? It's okay to start out slow, but in my experience the #1 sign of a bad engineer is someone who gets slower over time. If the costs to implement a feature are increasing, not decreasing, you need to take a hard look at your team and what kind of hot mess people you are really working with. If you're in a position of power, that might involve letting people go, and worse, if you're just an engineer, that might mean finding a new job yourself.


Similar experience. Although I think the biggest issue isn't the initial design, but all the requirements that slowly add cruft. You need a team that religiously follows the [Boy Scout rule](https://medium.com/@biratkirat/step-8-the-boy-scout-rule-rob...), and you'd have almost no issues even without huge refactorings or redoing everything (where you often make new mistakes).

I've been in teams like that, and in teams that don't care. The Boy Scouts performed consistently better. And before anyone says anything, Girl Scouts can do the same ;).


Shout out to my homies in RVA.

There are so many complications to this issue. I imagine a long time ago things were less complex and more manageable. As you build layer upon layer of new tooling on top of the past I feel things were just naturally going to get complex and unmanageable. It's almost like for us to have avoided this we would need some programming paradigm that stands the test of time and is flexible enough to meet today's needs. Which is probably impossible.

Unless we are in the Cambrian explosion of tooling and frameworks and at some point we will achieve a steady state. That would be really cool to see in my lifetime. Every time I think "ah yes, this should be good enough" I am blindsided by the new shiny.

That's also a separate topic. There's plenty of incentive to use the new shiny even when it's not the right tool for the job. Tech gets hyped by the usual engines, CTOs buy into it, and then that's our tech stack. You need to have these new frameworks on your resume to be employable, so when you start a new project you end up using one of them.

I don't know what the solution to this is other than the hope that there's some general building blocks or concepts of software development that will be easy to make work with other building blocks.


Software companies these days are ridiculously inefficient.

Unfortunately, this is because the logic that goes on in peoples' heads when it comes to selecting tech stacks often boils down to this:

1. x newer than y.

2. The company which created x has more money than the company which created y.

Therefore x is better than y.

Q.E.D. Now, let's immediately refactor all our company's code to use x.

Complexity and relevance to the task are not factors at all. That's because big software companies don't need to be efficient - they usually have a monopoly, so the money will keep coming no matter what... Complexity is a tool that middle managers use to create more work so that they can hire more people and thus get more responsibilities and bigger bonuses.

If all tech decisions were made by non-technical people using the 2-point criteria mentioned above, companies would be using exactly the same tech stacks as they are using today.

Developers today are geniuses when it comes to using stacks in ways that they were never intended but they are absolutely terrible when it comes to selecting the right stacks for the job to begin with.


One thing I have yet to see is a critical analysis of over-engineering. I constantly see articles like this that talk about how young developers are always looking for the newest and shiniest technology, and I think that's just a lazy answer to the causes of over-engineering. I, for instance, have tried my best to stay away from the newest, shiniest things like Kafka, webpack, and Mongo. But this is obviously a flawed strategy, as those technologies may actually help solve my problems. One might say careful cost/benefit analysis is the answer, but most of the time it's easy for me to point out how much more beneficial the tech I am introducing is. And it could be, but it would still be over-engineering, and it doesn't solve the real problem, which is adding more incidental complexity to the solution than needed. It would be interesting to see an article on the causes of over-engineering, with real examples, so one could extract potential solutions.


I have witnessed other people do it at multiple companies and even been guilty of it myself once or twice.

I sat at the end of a long board room table full of senior engineers and directors and was the only one who didn't bring a laundry list of techs he was waiting to introduce.

I have seen older developers treat design patterns the same way that younger developers treat JS frameworks. Everybody wants to believe they have a toolbox full of solutions to problems. Maybe for the last generation it was Gang of Four, and for this one it's the HN front page and blogs.


A large part of it is because for most programmers, it is the only way to demonstrate growth in their own skills.

Relatively few developers get to lead projects, design truly interesting systems and so on. Most developers, especially in bespoke business software, end up writing large amounts of fairly un-challenging code day in day out.

But then if you just churn out the same sort of business logic for years on end, with little access to problems that are inherently challenging, how do you feel any advancement in your craft? For many developers it is through over-engineering or pointless re-engineering.

Common signs: tasks that are actually trivial suddenly start using home-grown frameworks. Obsession with vague, rarely defined terms like "best practices", "clean code", "modularity", etc. An insistence that practices which were once considered an advance were actually a regression, like the recent trend of claiming that exceptions are bad because they make control flow "hard to understand", which justifies rewriting everything to use return codes; or that XML is bad because everyone says it is, but JSON with schemas is totally not the same thing and definitely how architectures are done Right™.

To get a perfectly simple design and implementation that is nonetheless powerful you really need a perfect match between the time/capabilities of the engineer and the inherent difficulty of the problem. That way the developer's mental energies will all be spent designing something that will work, and relatively little is left over for inventing new config file formats.


The funny thing is I actually faced this problem a lot last year. Since I read Paul Graham's essay on hackers, I have decided to focus that will for hard problems on my side projects (although this has shifted my focus from popular frameworks to specialized libraries).


Nerds are raised on sci-fi. I was. They keep waiting for the future to happen. Scouring for "new technologies" is sometimes called "scouting." It's really important for a company to appoint "scouts." Then everyone else knows "scouting" is not in their job descriptions. Then they start to focus. Many organizations never graduate from future fetishism.


Hopefully I'm not in the minority when I say there's another side to this too though.

Working now at my first job, I can't tell you how annoying it is to try to update ancient PHP with essentially no code reuse. Sure it's simple, but it's basically unmaintainable.

Also from the same job: I continue to work hard to convince others that using prepared statements is worth the slight extra effort. If I hadn't made it significantly easier, it would never get used at all and we'd see way more string appending.
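
To make the case concrete, the core idea is sketched below with Java's JDBC rather than PHP's PDO (same principle, different API; connection details are placeholders): the query text and the user-supplied value travel to the database separately, so string appending never gets a chance to turn into injection.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PreparedExample {
        public static void main(String[] args) throws Exception {
            String userInput = args.length > 0 ? args[0] : "someone@example.com";
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/app", "app", "secret")) {
                // The '?' placeholder keeps user input out of the SQL text entirely.
                String sql = "SELECT id, email FROM users WHERE email = ?";
                try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setString(1, userInput);
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("id") + " " + rs.getString("email"));
                        }
                    }
                }
            }
        }
    }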


Please don't complain about old code not being written to today's standards. There are reasons why code can be bad, and you don't know what they are. Accept the code for what it is, improve it where you can, and learn something about the crap a developer before you had to go through when they wrote it. Even the worst hacks often aren't there because the author was a bad developer.

You'll have your share of terrible projects that force you to write horrible code without decent requirements to unreasonable deadlines for a client that won't listen. You'll hate what you write. You'll always want to go back and rewrite it but you'll never have the budget. You might quit your job over it. It certainly won't be a reflection of your skill or passion for writing great code. The "ancient PHP code" you're working with now probably isn't a reflection of the developer who wrote it either.


You're definitely right. My point got a little sidetracked by my own partially unfounded frustration. I'm just trying to say some level of abstraction is worth it. Not always, and not in every case, but at least in some. Yes, the code wasn't written with abstraction largely because it's old, but it would have been better if it had been.


Simple and unmaintainable? These two don't go together. Maybe it's ancient and a mess... People have known how to write unmaintainable code for decades.


I feel like I used "simple" differently from how you must mean it. A PHP app with no libraries or abstraction is very simple in one way of thinking, but I don't think coding that way makes things easier in any way at all.


Post the URL of the site pages that aren't using prepared statements, and I'm sure someone will make the argument for you. ;-)


>Well, a few things. First is that experienced developers often hate these tools. Most Serious Developers™ like to write Real Software™ with Real Code™.

"experienced" is the key here. With most of these tools you develop 90% functionality very fast, and there is no sane way to develop the rest 10%. The hacky/ugly/umnaintaneable ways you go to add those 10% into the beautiful tower that the tool built for the 90% ... You become very "Serious Developers™" after doing it several times and noticing the "pattern". And it helps that with time you are also becoming much better with the Real Code™.


The common observation that experienced developers avoid (insert supposedly productivity-enhancing tool here) is usually dismissed as some form of machismo/old dog being unable to learn new tricks, but it ought to impress on somebody how universal this is... and ask, maybe, why it's so universal? I don't dislike visual basic, or GWT, or ruby on rails, or angular CLI because I'm an old codger who doesn't need them fancy new fangled young person fashionable toys, but because I tried them out and found that, although they do make a pretty GUI appear on the screen a lot quicker than I can with vi, they also fail in unexpected ways that are nearly impossible to diagnose. So now, rather than spending a lot of time coding and a little time debugging (and knowing exactly what I'm debugging and knowing that when I find what's wrong, I can change it), I spend a little time coding and a lot of time trying to guess what the hidden, internal model of the application is hung up on. All with a skeptical boss asking me why it isn't done yet. I always give the hip new thing a chance, and I always find that it trades off some up-front complexity with a lot of impenetrable mystery - along with the expectation that the "time saving" tool will actually save time rather than shuffle it from the beginning to the end.


I agree completely. I get this incredible sense of comfort when I know I can hammer out whatever I need at a low level, instead of having to cross my fingers and hope that I never need a way out of the walled garden.

It's like the difference between having file system access to a Wordpress install and not. When you have access, you can fix anything.


> A lot of developers right now seem to be so obsessed with the technical wizardry of it all that they can’t step back and ask themselves if any of this is really needed.

"Can we? Should we?" <--- something NOBODY in tech seems to ask any more

Developers need to learn to go with the flow. Very often a platform or toolset makes a specific technique or manner of working easier than the others, but that option might be discarded by the implementer because it doesn't strike their fancy. Fuck your fancy, go with the god damn flow!


Which one?


This can really cut both ways, and moderation is key.

I've seen software that's extremely overengineered. Something like ~15 microservices with data (for the same model) split across both Mongo and Redis, spun up via Docker configs hosted on Google Kubernetes Engine. Run by 1 person for a product with low (<1k DAU) traffic.

On the other hand, I've also seen software that's extremely _underengineered_. And those two sets aren't necessarily disjoint. The same stack I mentioned above also implemented an in-house search, task/message queue, deploy system, URL shortener, and mobile notification service - all with a more restricted and buggier API than the more popular commercial / open-source offerings.

It's highly important to choose your areas of competence. Which systems actually need to be better than the off-the-shelf alternatives, and at what? And can you afford to dedicate the time and money (through development and maintenance) to support them?

===

I'm going to state a likely unpopular opinion now, and advocate that most typical SAAS / CRUD projects start with Rails, and build from there. I say that because there are many, many well-maintained and battle-tested libraries that have already been vetted in that ecosystem, and the community's mantra is to optimize for engineering productivity (GSD and communication with other engineers) and happiness. There are libraries to set up an ingest pipeline [1], or a pooled background task system [2], or whatever other complex systems you need to build. But when you're just getting started, you want to set up an API with some validations against business logic, and ActiveRecord and Rails(-api) gets you pretty far very quickly for just that.

Another related unpopular opinion I hold is to avoid Node, unless you have a specific reason to use it. Yes, it's new and flashy, but the cost of that is that the ecosystem hasn't matured and tooling you'd expect doesn't yet exist. That's not to say that every new technology should be avoided. I think of it as investing in a sense - you need to place your bets correctly based on research like community, maturity, functionality, and possibly most importantly: how it compares to existing tools (and their trajectories).

[1] http://shrinerb.com/rdoc/files/README_md.html#label-Direct+u...

[2] https://github.com/mperham/sidekiq


> I'm going to state a likely unpopular opinion now, and advocate that most typical SAAS / CRUD projects start with Rails, and build from there.

<3

Rails can get you very, very far. While its sexy appeal has faded, its real beauty has shone. I can hop into most Rails codebases and be productive in a few hours. I can come back to my own projects and still understand what's going on. I think that's huge. Sure, there are newer things out there, but nothing beats Rails for productivity, maintainability, and overall developer happiness - tenets I hold in high regard.

Exploring new tech is great - doing it in production is asking for trouble of all sorts. If you want to build something that lasts for a long time, build it as boring as possible.


This article is about the complexity of software development.

The complexity of software itself (and hardware, as we have seen recently) is a much bigger problem.

More and more machines are playing a larger role in our lives and decisions, but we don't come close to understanding them. In about a decade, things will just "happen" with no clear cause (the actual cause being some emergent behavior of our complex systems), and we will revert to pre-Enlightenment mystics and superstitions.


Step 1: Solve the problem with the most mature technology that is commonly used for that problem

Step 2: Fight the urge to use something newer


Better to just evaluate the tools in the context of the problem; that way you're always open to the best tool for the job, regardless of how old it is.


Nope. New tools have a lower expected lifespan (due to survivor bias) and less proven value. I love new tech, but a LAMP website will last forever.


> New tools have a lower expected lifespan

That doesn't have anything to do with whether or not that tool is the best one for the job. Further, this logic is clearly fallacious because I could just as easily say PHP is too new and you should use C and CGI. Clearly, there are qualities that make one tool better than another for a certain type of problem, so making an evaluation of the available tools per problem is the best way to go.

> I love new tech, but a LAMP website will last forever.

But LAMP is not always the best solution to the problem. For example, if one of your application requirements demanded streaming live data to the browser via websockets, PHP would be a bad choice compared to something like Go or Node. Another example might be a web service that takes some input and has to do computation-heavy work like image processing or rendering; it'd be a big mistake to implement something like that in PHP rather than, say, a JVM-based language.


> if one of your application requirements demanded streaming live data to the browser via websockets, PHP would be a bad choice compared to something like Go or Node. Another example might be a web service that takes some input and has to do computation-heavy work like image processing or rendering; it'd be a big mistake to implement something like that in PHP rather than, say, a JVM-based language.

A nontrivial problem with this line of thinking is that it expects you'll be proficient/efficient in all of these languages and toolings, when in reality you may just be a really good PHP dev.

It can be beneficial, for sure, to have an idea of where something would do well; doing calculations in SQL vs in server code, for example. But jumping right to it can be foolish if there is a cheaper / "faster" way to do it first.

Part of tool evaluation needs to evaluate your own proficiencies, and the ability to delay optimizations. Maybe PHP is crap for websockets, but maybe you can spin up lots of machines to handle the load and that's good enough; or maybe (in some ways more likely) the project goes nowhere, and that optimization that could have been, was never needed.


> in reality you may just be a really good PHP dev.

If your only tool is a hammer...

> Maybe PHP is crap for websockets, but maybe you can spin up lots of machines to handle the load and that's good enough

Well sure, you could treat every problem like a nail by going to whatever lengths necessary to solve the problem with a hammer, but that is a waste of time and effort since a more appropriate tool could magnify your productivity instead of frustrate it.


You can do a lot with a hammer.

It is my view that it's better to bang out what you can with the tools you've got, and reach for better tooling later, when the need or pain point truly arises, instead of trying to optimize from the start with tooling you don't know for perceived potential future productivity. And I consider this not to be a waste of time and energy, but an optimization of available resources.

Often, building with what you know is more important than hacking together with something you don't. Especially if you're trying to build a product on it.

Of course, building something like a web app in Excel would be entirely the wrong approach; your tools have to at least be in the ballpark of applicability.


> Much of this reduction has been accomplished by making programming languages more expressive. Languages such as Python, Ruby, or JavaScript can take as little as one third as much code as C in order to implement similar functionality. C gave us similar advantages over writing in assembler. Looking forward to the future, it is unlikely that language design will give us the same kinds of improvements we have seen over the last few decades.

I have lots of hope for language improvements (specifically type system improvements) to help manage software complexity. Generics marked a significant improvement over C or Java 1.0 styles of development. First-class sum types means I don't have to think about how to implement them as a pattern and weigh the tradeoffs or make sure all of my conditionals cover every scenario. Rust's memory model uses types to give strong correctness guarantees, and is currently probably the best way to write software when a GC simply won't do. TypeScript and MyPy employ gradual typing to help developers get the best of both dynamic and static typing. I suspect that there are a lot of ideas coming out of type theory that just need a good path from academia and into real-world practice.
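
To make the sum-types point concrete in Java terms (sealed interfaces and records arrived in Java 17, pattern-matching switch in Java 21; the type names below are made up): the compiler rejects a switch that misses a case, which is exactly the "make sure all of my conditionals cover every scenario" burden being lifted.

    // A sum type expressed with a sealed interface + records.
    sealed interface PaymentResult permits Approved, Declined, Errored {}
    record Approved(String transactionId) implements PaymentResult {}
    record Declined(String reason) implements PaymentResult {}
    record Errored(Exception cause) implements PaymentResult {}

    public class Checkout {
        // Exhaustiveness is checked: drop a case and this stops compiling.
        static String describe(PaymentResult result) {
            return switch (result) {
                case Approved a -> "approved: " + a.transactionId();
                case Declined d -> "declined: " + d.reason();
                case Errored e -> "error: " + e.cause().getMessage();
            };
        }

        public static void main(String[] args) {
            System.out.println(describe(new Declined("insufficient funds")));
        }
    }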


The main problem is the environments we have to code in are garbage; the simplest solutions to complexity are often standardized at the language/environment level but implementations are years late.

There is more money in building products and coming up with workarounds than there is in improving the environments we all have to work with. There are also too few smart people in the world available to keep up with demand for both activities.

Instead of 10 people keeping IE up to date with modern JS language features, we had millions of people trying to come up with and use genius solutions (Babel, anyone?) to transpile and polyfill their code backwards. Instead of 4 people adding support for ES Modules to Node.js years ago, we have many thousands of people trying to work out WTF to do on their own.

The solution is not to freeze technology in its currently deficient state and to be content, that attitude on behalf of environment maintainers is how we ended up here in the first place!

JS will be "mature" and infinitely more productive once basics like modules are implemented in all servers and clients. It will get better eventually.


> JS will be "mature" and infinitely more productive once basics like modules are implemented in all servers and clients. It will get better eventually.

JS is in a really weird and unique place as the only programming language available for browsers (assuming you want it to work on all browsers). Think about that: when's the last time you've been constrained to a single language outside of the web? Since everyone has to use it, changes to the language itself have to be extremely slow; otherwise they break existing stuff.

Any discussion of JS is inherently phenomenological because there's nothing to compare it to. I personally would love it if we simply started from scratch and slowly phased JS out. I'm sure attempts have been made :/


There's so much bias in these articles. In Javaland, JSP, Spring, OSGi, and servlet containers all introduce massive amounts of complexity, yet are ignored by "real engineers" in favor of (inexplicably) bashing Kafka and Redux.

Stop labeling something "complex" just because you're unfamiliar or uncomfortable with it.


The amount of boilerplate and the number of different technologies you need to tie together to have a polished, presentable project with any possibility of end-user traction is insane.

You're going to want two different mobile apps (or a cross-platform solution). You're going to need a whole bunch of widget and formatting libraries.

You're going to want a bunch of tie ins with authentication providers.

You're going to want some form of social media integration.

You're going to want a back-end served on a cloud.

If you're pushing it you'll still want a website. Which of course comes with its own myriad of CSS and JS frameworks.

Compare this to the simpler days of the web, where all you needed was a mostly static HTML page with some PHP. You could plop it on a server via FTP and voila, you were in business.


Agreeing with other posts that this is mostly bullshit. Software is complex because every project is constrained by limited resources. There is no perfect language to develop in. There is no perfect framework or platform to create your app on. Yes, a lot of projects might go too far down the architecture rabbit hole, but just as many bend the other way, avoiding proven tools in an attempt to "keep it simple". For every project that is stuck using a 2-ton framework, there is another that wrote a custom subset of said framework with 5x as many bugs and no one but themselves to support it. Usually there is no correct choice, only compromises. The sum of these compromises is a complex software project.


After having worked with a few really really good developers, who had both amazing productivity and almost zero defects, simplicity is now my goal.


> simplicity is now my goal.

Simplicity is a great tenet; I often find myself shooting for clarity as well.


Software IS "the business". Software has always been complex. To make things easy in the application experience the complexity is pushed down. More to the point it is not so much that software is any more complex but the volume of software knowledge and awareness is overwhelming even to those coding the stuff. Platitudes about simplicity are a dime a dozen. Rather than despair take comfort in knowing you are not alone in feeling swept by a tide of perpetual software churn.


“C gave us similar advantages over writing in assembler.”

This is a profound misunderstanding of both C and assembler: C gave us portability, not reusability; writing shared object libraries with reusable code in assembler has been the bread and butter of programmers for decades, and it takes no longer to do in assembler than it does in C. A stereotypical example of this is the Commodore Amiga, where most of the shared libraries are written in assembler.


> So if we have to develop the interfaces, workflow, and logic that make up our applications, then it sounds like we are stuck, right? To a certain extent, yes, but we have a few options.

> To most developers, software equals code, but that isn’t reality. There are many ways to build software, and one of those ways is through using visual tools. Before the web, visual development and RAD tools had a much bigger place in the market. Tools like PowerBuilder, Visual Foxpro, Delphi, VB, and Access all had visual design capabilities that allowed developers to create interfaces without typing out any code.

[...]

> And many companies are jumping all over these platforms. Vendors like Salesforce (App Cloud), Outsystems, Mendix, or Kony are promising the ability to create applications many times faster than “traditional” application development. While many of their claims are probably hyperbole, there likely is a bit of truth to them as well. For all of the downsides of depending on platforms like these, they probably do result in certain types of applications being built faster than traditional enterprise projects using .NET or Java.

Huh? I challenge this guy to start up a WinForms project in Visual Studio and tell me what's substantially different from the Access forms designer. But this comes with its own limitations (it gets messy fast if you want to do something "non-native," collaboration is a challenge, it encourages sloppy mistakes like misaligned controls, poorly named controls, and form logic interleaved with business logic) and, more importantly, doesn't actually do the hard part. Creating buttons and boxes can be tedious if you don't have some kind of tool to do some of the work for you, but even in an environment where it's totally done by hand it's not where the bulk of effort in a project goes.

