Ask HN: How to avoid over-engineering software design for future use cases?
243 points by h43k3r on June 24, 2020 | 250 comments
I have worked at Microsoft and Google in multiple different teams.

One thing I realized is that engineers sometimes go to extremes designing things/code for future cases that are not yet known. Many times these features never see any future use case and just keep making the system more complex.

At what point should we stop designing for future use cases? How far should we go in making things generic? Are there good resources on this?




> engineers go extreme in designing things/code for future cases which are not yet known

They're afraid.

Fear: If I don't plan for all these use cases, they will be impossible! I will look foolish for not anticipating them. So let's give in to that fear and over-architect just to be safe. A bit of the 'condom' argument applies: better to have it and not need it than to need it and not have it.

But the reality is that if your design doesn't match the future needs really well, you're going to have to refactor anyway. Hint: there will always be a future need you didn't anticipate! Software is a living organism that we shape and evolve over time. Shopify was a snowboard store, Youtube was a dating website, and Slack was a video game.

So my answer: relentlessly cut design features you don't need. Then relentlessly refactor your code when you discover you do need them. And don't be afraid of doing either of those things because it turns out they're both fun challenges. The best you can do is to try to ensure your design doesn't make it really hard to do anything you know or suspect you'll need in the future. Just don't start building what no one has asked for yet.


>> engineers go extreme in designing things/code for future cases which are not yet known

>They're afraid.

In many companies (think FAANG), engineers, especially senior engineers, are incentivized/forced to show fancy design docs as part of the annual appraisal process. The more complicated the design, the more 'foresight', the better.

If it sounds kind of ridiculous (TPS reports from Office Space, anyone?), it is. But taking a slightly less cynical view, the more massive a company gets, the louder the voices demanding objectivity in all these promotion/bonus-multiplier processes become. So a kind of obsession with such weird 'measurable' metrics gradually builds up in the name of objectivity.

And as soon as there are metrics, you can bet everyone in the system will do their best to game them (it makes perfect sense to do so).

And thus you end up with over-engineered systems all over the place.

A lot of the 'not-invented-here, let's reinvent it' style culture also develops similarly - you have too many smart people in a room where the work is just not that demanding. Even if you were to get over your personal existential crisis (why am I writing yet another CRUD app?!), if you're the type that wants to see a promotion every other year, you're forced to invent work this way.


I think it's a bit more than a desire to show "fancy design docs" or metric gaming. I've seen a lot of engineers prioritize fixing as yet hypothetical future problems over problems that are burning them right now even when there is no need for design docs and no metrics to game.

I think some problems are just seen as sexier than others.


For sure.

I just wanted to shed light on some unfortunate external pressures faced by a subset of engineers, which contribute to this problem. Didn't mean to come across as cynical.


People who do this get promoted because overengineering results in the manager getting more headcount.

My solution was to work someplace that can’t afford to waste money on bullshit. Has worked well so far.


Fetishization of objectivity

It's a great concept, and is often what irritates people. I work at a national lab, and senior management is absolutely obsessed with metrics, often to the detriment of the purpose of the organization itself.

There needs to be some humility too, especially in our modern data-driven world. Not all that is important can be measured, and not all that can be measured is important.


> In many companies (think FAANG)
> The more complicated the design, the more 'foresight', the better.

Certainly not at Amazon. Sure, there can be exceptions, but in general the company has a culture of simplifying stuff wherever possible.


This is a huge problem at AWS.

IMHO the largest contributor to over-engineering in this company is people suggesting flaws in others' designs simply to have something to contribute during a meeting. I can't remember anyone, ever, telling me to remove something from a design doc (3.5 years).

I have added unnecessary complexity to my own designs as a response to comments. Not customer driven, not data driven, but somebody at a meeting got focused on something and it ended up getting added to the design.


>I have added unnecessary complexity to my own designs as a response to comments. Not customer driven, not data driven, but somebody at a meeting got focused on something and it ended up getting added to the design.

I've had unnecessary complexity introduced to my code during code reviews as a response to comments, too. Most often it's to make things "more testable" so as to reach an arbitrary code coverage target.


I've seen promos denied because the problem wasn't sufficiently complex. I'd say it was, but the design had been simplified as much as possible, leading to the 'wrong' impression.


>simplifying stuff wherever possible

https://raw.githubusercontent.com/aws-samples/aws-refarch-wo...

Ironically, that's how you are supposed to run Wordpress on AWS.


Unironically, this is one of the simpler multi-node WP setups I have seen, and I have set up prod WP many times. Anyone with associate-level knowledge of AWS can do this. There is a reason there are entire companies (Pantheon) dedicated to hosting Wordpress for you: doing it with speed, resiliency and redundancy is hard.


Meta: I've vouched for this comment. You appear to be shadowbanned.


They don't like conservative opinions here, so I am not surprised.


You seem to be confusing Amazon's internal infrastructure and its development model to how AWS is used by customers.


I would have posted internal slides from app architects or even service(-prototyping) dev teams within AWS with similar vibes to this discussion, but for obvious reasons that's not a good idea. But whom am I telling that ... That "reference architecture" for a Wordpress installation, aimed at customers using their infrastructure, gets the general idea across, though.


That's the result though, right? Radical internal simplicity forces incidental external complexity.


No, I'm talking about internal services that aren't directly exposed through the AWS API. These represent the large majority of the internal codebase.


Would it be any simpler when running on-prem or in another cloud?


You must not have spent long at Amazon.

My original comment was 100% based on my experience as a developer at Amazon.


I did, across different roles in two well-known teams. As I said, there are exceptions, and the hiring bar has been dropping a lot in recent years.


Well, they haven’t hired me yet, so either I’m terrible, or the hiring bar is just completely arbitrary.


In all large companies the people involved in an interview are a tiny fraction of the whole workforce.

Most of the time it's people from the team that is hiring and one or two "guests" from other teams (but usually working in the same building).

Managers also have plenty of power to influence the decision, therefore keeping a very uniform hiring bar is really difficult. (But no, it's not "completely arbitrary")

Also, I wrote that the bar dropped a lot, to the point of shifting all employees' levels up by one, but this does not mean that the company hires 90% of the candidates.

If a team was hiring 1 candidate every 1000 screened resumes and now it's 1 in 100 it's a whopping 10x change... but that doesn't make you a terrible engineer!


Sorry, I was just trying to comment on how interviewing itself is an arbitrary crapshoot.

Say one slightly dubious thing and you torpedo your chances of getting the job. Write one slightly dubious thing on your CV, and it will be rejected without a second look.

While I’m confident that most of the people rejected for those reasons would be at least adequate at their job.


Let's face it, hiring is just a shambles in this industry. After 20 years in the industry I know how to write a reliable system that is easy to maintain, but I'm getting kind of crap at the tests they give in interviews.


Also: have a framework in place that supports worry-free refactoring.

Comprehensive Unit/Integration tests, a robust type-system, pick whatever suits your style.

It's a lot easier to refactor stuff when you don't have to worry about breaking something hard to debug with a big code change.


Unit tests are absolutely fantastic during refactoring. I once had to rewrite a piece of code where nobody really knew what it did or what it had to do, and the original author just made some guesses about the intention.

I started out by writing unit tests for everything, which became my handhold and documentation for what the system originally did. Then I started reorganising the code into a more structured and more readable form, without changing the functionality, as proven by my unit tests. Then I started asking domain experts what exactly it should do, unit tests in hand, asking if these answers were correct. If they weren't, I changed the unit test, and then changed the code to match.

Surprisingly painless for something that by all reasonable standards was a terrible mess.
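
That approach of pinning down the current behaviour with tests before touching the code is often called characterization testing. A minimal sketch in Python of what those first tests look like (the function and its rules are invented here purely for illustration):

    import unittest

    # Hypothetical legacy function whose intent nobody remembers.
    def legacy_discount(price, customer_type):
        if customer_type == "gold":
            return price * 0.8
        if price > 100:
            return price - 5
        return price

    class CharacterizationTests(unittest.TestCase):
        # Pin down what the code does today, not what it "should" do.

        def test_gold_customers_get_20_percent_off(self):
            self.assertEqual(legacy_discount(100, "gold"), 80)

        def test_large_orders_get_flat_discount(self):
            self.assertEqual(legacy_discount(150, "regular"), 145)

        def test_small_orders_are_unchanged(self):
            self.assertEqual(legacy_discount(50, "regular"), 50)

    if __name__ == "__main__":
        unittest.main()

Once a domain expert says one of the recorded behaviours is actually wrong, you change the test first and then the code, exactly as described above.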


You invented the same refactoring technique Michael Feathers suggests in his book[1]. You write tests to document the current state of the legacy software and then start slowly changing it.

Great book BTW, should be on a top10 must read list for software developers. (#1 will always be Peopleware[2])

[1] https://www.goodreads.com/book/show/44919.Working_Effectivel... [2] https://www.goodreads.com/book/show/67825.Peopleware


Peopleware is great but, seeing that it was written more than 30 years ago, also apparently completely pointless: all the advice that book gives is ignored by literally every leader I've ever met.

Our CIO was regaling us today with the story of how they were going to change our office: make it more open and fancy like Google or Facebook, free desks. But when I asked if they'd considered dealing with the noise issues we were having, no, no they hadn't considered that.

It just blows my mind. I just really wanted to ask them what the hell they thought they were doing modifying the office layout without asking the people who need to work there.


I'm intrigued what level you write these tests at, because one of the most painful things to me is having to do large refactorings in a codebase with lots of little niggly unit tests but few integration or functional tests. During a refactoring you're constantly breaking implementation details, and many people feel compelled to have test coverage of those details. My favourite systems are ones where use cases are explicitly modeled, or at least where you have a well-defined (and tested) service layer. This is often something you'll introduce first on top of a legacy system. In systems built this way, it's far easier to modify the guts, knowing you're not breaking anything with business value, but without the overhead of small unit tests that are highly coupled to implementation details.


I would argue that having a robust type system and some (not too many) end-to-end tests for your software makes unit testing almost completely useless overhead.


End-to-end tests are more overhead than unit tests. End-to-end tests are also not great for testing the many edge cases that you are likely to mess up if you refactor carelessly.

I also don't see how a robust type system really helps there. It might even be part of the thing that needs to be refactored. Besides, many languages don't have a very robust type system.

End-to-end tests and type systems certainly have their uses, but for refactoring messy code, I don't think there's a good substitute for thorough unit tests.

Although there are different kinds of refactoring, of course. The case I'm referring to was about one very messy module. That makes it very easy to unit test. If instead it's the entire architecture of your application that needs to be refactored, then you're looking at a very different case, and end-to-end tests become more important than edge cases.


Unit tests are too. What the comment you're replying to is referencing is integration tests, the type that, instead of checking "this class returns this", checks "component A calls component B to do something; are the two still speaking the same language and meeting the same expectations?"


There are three levels of testing: unit, integration, and end-to-end.

End-to-end (e2e) tests the entire stack: deploy the site with database and all, run a script that visits a page, clicks and stuff, and checks if the right things become visible.

Unit tests are the other extreme: you take a single unit of code and test whether it does what it's supposed to, while mocking all communication with other units of code.

Integration tests are in between those two extremes, and probably the least understood as a result (at least by me). They look like unit tests, and don't generally test the UI, but they don't mock the other components, and ideally also set up a database to test against.

I think there's a bit of overlap between unit and integration tests; a sloppy unit test where you don't mock (some) other components but treat them as part of the unit you're testing starts to look like an integration test at some point. If you want a clear demarcation, I think you might consider a database connection essential for something to count as an integration test.
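
To make the distinction concrete, here is a minimal sketch in Python (the service, repository and schema are invented for illustration): the unit test mocks the collaborator away, while the integration test wires in a real, in-memory SQLite database but still involves no UI.

    import sqlite3
    import unittest
    from unittest.mock import Mock

    class UserService:
        def __init__(self, repo):
            self.repo = repo

        def greeting(self, user_id):
            return f"Hello, {self.repo.get_name(user_id)}!"

    class SqliteUserRepo:
        def __init__(self, conn):
            self.conn = conn

        def get_name(self, user_id):
            row = self.conn.execute(
                "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
            return row[0]

    class UserServiceUnitTest(unittest.TestCase):
        def test_greeting_uses_repo(self):
            # Unit test: the repository is mocked, only UserService is exercised.
            repo = Mock()
            repo.get_name.return_value = "Ada"
            self.assertEqual(UserService(repo).greeting(1), "Hello, Ada!")
            repo.get_name.assert_called_once_with(1)

    class UserServiceIntegrationTest(unittest.TestCase):
        def test_greeting_against_real_database(self):
            # Integration test: real repository, real (in-memory) database, no mocks.
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
            conn.execute("INSERT INTO users VALUES (1, 'Ada')")
            self.assertEqual(
                UserService(SqliteUserRepo(conn)).greeting(1), "Hello, Ada!")

    if __name__ == "__main__":
        unittest.main()

An end-to-end test of the same behaviour would instead deploy the whole application and drive it through the UI or public API.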


Conventional type systems don't really help you when your code is pretty much just taking in vectors/matrices of floats and returning vectors/matrices of floats.


You still have the option of introducing new types, even there.

In Ada, the programmer is discouraged from using the Ada equivalent of int directly, and is encouraged to instead introduce a subtype that reflects the specific use of int (including automatic range checking).

This isn't as natural in C++ but is still possible. Boost offers a BOOST_STRONG_TYPEDEF [0] to deliberately introduce an incompatible type. (I do recall having trouble getting it to behave, but it's been a while.)

Whether this makes sense in most mathematical code, I'm not sure, but it seems like it's an option.

[0] https://www.boost.org/doc/libs/1_73_0/boost/serialization/st...
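
For what it's worth, the same trick exists outside C++ and Ada. Here's a minimal sketch in Python using typing.NewType (the names are invented for illustration): the wrappers are essentially plain floats at runtime, but a static checker such as mypy will flag swapped or mixed-up arguments. It doesn't give you Ada-style range checking, just incompatibility between otherwise identical numeric types.

    from typing import NewType

    # Distinct "strong" aliases over float; hypothetical names for illustration.
    Meters = NewType("Meters", float)
    Seconds = NewType("Seconds", float)
    MetersPerSecond = NewType("MetersPerSecond", float)

    def average_speed(distance: Meters, duration: Seconds) -> MetersPerSecond:
        return MetersPerSecond(distance / duration)

    d = Meters(100.0)
    t = Seconds(9.58)

    speed = average_speed(d, t)      # fine
    # speed = average_speed(t, d)    # mypy: incompatible argument types

Whether this pays off for heavy numeric code is debatable, as the parent says, but it is an option.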


If you are working with physical quantities in C++ there are also

[0] https://github.com/nholthaus/units

[1] https://github.com/mpusz/units


You are correct; however, that's a niche use case that would warrant using a non-conventional type system. Conventional type systems are mostly for just dumping a bunch of strings and integers to and from a database and formatting them neatly.


The problem is that you want tests for your edge cases (e.g. a text field whose contents get stored to the database with specific validation will have a lot of different cases to test for), and an end-to-end test will take a LOT longer than a unit test. Unit tests are for rapid feedback on a small section of your application.
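
For instance, with a hypothetical validation rule (invented here just to illustrate the point), the edge cases of a single field are cheap to enumerate as fast unit tests, whereas pushing each case through a full end-to-end run would multiply the cost many times over:

    import re
    import unittest

    # Hypothetical validation rule for a text field.
    def valid_username(name):
        return bool(re.fullmatch(r"[a-z0-9_]{3,20}", name))

    class UsernameValidationTests(unittest.TestCase):
        def test_edge_cases(self):
            cases = {
                "bob": True,         # minimum length
                "bo": False,         # too short
                "a" * 20: True,      # maximum length
                "a" * 21: False,     # too long
                "bob smith": False,  # whitespace not allowed
                "BOB": False,        # uppercase not allowed
                "": False,           # empty
            }
            for name, expected in cases.items():
                self.assertEqual(valid_username(name), expected, name)

    if __name__ == "__main__":
        unittest.main()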


Perhaps in some world where you never modify a type (e.g. concatenate/truncate strings, multiply numbers, etc). Type systems check types, they don’t help verify correctness of the logic performed on the types.

For an example, dig into some crypto libraries. They operate on bytes all over the place performing XORs, etc. A type system isn’t really gonna help you ensure you got the correct number of AES rounds and stitched the blocks in the right order.

IMO the only systems where this “type system eliminates most tests” philosophy seriously works are the ones that don’t do anything other than pass data between components without doing anything beyond calling some serialization methods.


> IMO the only systems where this “type system eliminates most tests” philosophy seriously works are the ones that don’t do anything other than pass data between components without doing anything beyond calling some serialization methods.

Which is what 90% of programmers on this website are essentially doing. And for the remaining 10% there is likely a better-suited, different programming language or type system available. Even if there isn't, unit tests would still be very niche, and the general case would be that by default you shouldn't be unit testing.


If the code is a mess the type system can’t help, it is part of the mess.

End-to-end tests are really slow, but if you can get them into the 300-1600 tests per second range then I have no beef. I value tests but I seriously grudge waiting for tests.


If the code is that bad, unit tests aren't going to help you. Messy spaghetti code with massive structural issues will break every unit test every time you change anything in the code, in ways that make no sense and provide you with little useful information. You will fix things faster if you just let the tests fail, fix the code and then rewrite the tests. Which makes the unit tests useless.

Also, have you ever seen a project where the code is a mess but the unit tests are perfect? Even if somehow you could write unit tests that would cover for super bad code (which you can't), it is extremely unlikely that your unit tests would be that amazing.


What do you mean with "perfect" unit tests? For this case, I wrote the unit tests to document the current functionality. That's basically what unit tests do: they document functionality and enable you to preserve that functionality of that piece of code. Of course once you realise that the functionality is wrong, you should change it and the unit test. And you can't fix the code if nobody knows what it's meant to do.

There's quite a lot you can refactor without breaking unit tests. If you've got a single 200-line function full of nested loops with cryptic variable names, modern IDEs make it really easy to extract those loops to their own functions. Figure out what they're meant to do, give them a descriptive name, and you already improved the code a bit without breaking any unit tests. If your IDE does this well, you could even do this without unit tests, but you really will need those unit tests once you start reorganising the code making use of the excessive number of parameters those extracted functions invariably end up with.

Of course you can write unit tests for super bad code. If it's a function that returns something, it's trivial to unit test, no matter how badly written that function is. If it calls other code, you have to mock those calls and test that those mocks get called under the same conditions. If they mess with global variables, that's terrible, but even that can be mocked.

If the code uses gotos to code outside the module, somebody needs to get shot, and I guess you need a unit testing framework that can mock those gotos. I've never seen one, because nobody uses gotos anymore.


Of course they can help; you don't have to use the existing tests, you can add your own as you learn the system and what it's supposed to do.


This means you test your code manually? I can't imagine not doing TDD, except during extremely early prototyping, before knowing if the code will be useful at all.


TDD and unit tests aren't exactly the same thing. Also, not doing TDD doesn't mean that there's no automated testing. I prefer to write my code and then do a few end-to-end automated tests for the most important parts of the code to serve as a backup in case some change in the code causes massive failures. But TDD is overall tedious for (usually) little benefit when compared to a few well selected end-to-end tests. And unit testing is even less benefit for even more work, unless you are doing something very very specific.


Interesting; in my experience TDD is easy (it's just that a specific mental process needs to happen, besides learning an xUnit API or something, and experience tells you which tests to write and which not to write, so that maintaining the tests doesn't become a burden), and it always provides a better ROI in the long run.

With end-to-end tests, such as when driving a browser, it's not really easy to get things like tracebacks into the console output, for example.


Exactly, in the time it would take me to write a proper TDD suite, I've written a skeleton of a product from end to end and can start iterating over it.

If you're working on a very specific box that has well defined, well known inputs and outputs then TDD is an excellent tool.

But for anything with a non-specific "We'd like to do X and display the result on Y" it just gets in the way.


I'd TDD that "doing X results in Y" and then add an end-to-end test that verifies that "Y is displayed", unless the code that displays Y is so trivial that an end-to-end test isn't even necessary.


Test-driven development is a useful tool, but it doesn't remove the need for manual testing completely, especially on frontend projects.

I've worked on mobile apps before with a small team, and inevitably, we'll find bugs that show up in the user interface when the user rotates their phone. It's hard to unit test for rotation changes, and it's also hard to code a rotation change into an end-to-end test on mobile. Animations are also something that's difficult to test in an automated fashion, and all the testing in the world won't be as good as showing the animation in front of a designer. So some level of manual testing is needed on mobile.

I've worked on web frontends where there would be bugs with scrolling jumping back and forth. An end-to-end test using Selenium may not catch the issue, but for a user, it can be painfully obvious. Similarly, animations are also hard to unit test on the web. So some level of manual testing is needed on the web.

The only place where I could see manual testing NOT being needed is for backend development, since the input and output to a backend system is much more controlled. You could write an end-to-end test for any scenario a user could throw at your system.

In summary, don't underestimate the value of manual QA!


"Fantastic" or "The only way" ? has anyone seen a case of refactoring without test end well ? I haven't.

Seems like "refactoring untested code" would be a well-established recipe for disaster.


> "Fantastic" or "The only way"?

Not sure if it counts as a 'way', but a static type system helps too.


What I have realized over the past few years is that organising code for the future is largely a function of the engineer's personality.

I have noticed that people who are extra organized in real life, who keep every single file in the right directory after downloading it, tend to have an inclination toward premature code refactoring for future use.

If these guys become the code architect, then I end up doing so many unnecessary things. The fundamental assumptions behind the refactoring change very fast, and the code needs to be rewritten in the majority of cases.

At the company level I find the instructions are clear: mostly hold the same contract between services for as long as possible, and make fewer schema changes.

It's the engineer with a subjective idea of perfect code / supporting future work who makes it even more complex.


I've seen this tendency in other areas. There are travellers who plan everything, afraid of encountering a situation that they haven't planned for. And there are travellers who trust in their ability to cope with any situation.

As the GP says, this is about fear. Fear that if you don't plan for it now, you won't be able to deal with it if it happens. Or in architecture terms, fear that your system will be faced with a requirement that it can't cover.

Trying to plan for every eventuality is usually wasted effort. Better to build robustly, and trust in your ability to adapt.


This is actually one of the lessons I had to learn the hard way. Solving for the future makes things unnecessarily complicated, and even if that future arrives, the code will have become debt, unmaintained or poorly maintained, because literally nobody cares if it works or not.

Don't do this; solve only for the problems you have now or are about to start on in the next 2 sprints.

edit: and make refactoring acceptable and part of your engineering culture so you do it often


Something I've recently realised after having listened to Kevlin Henney talk about software engineering is how much of the existing knowledge we ignore. The early software engineers in the '60s and '70s were discovering pattern after pattern of useful design activities to make software more reliable and modular. Some of this work is really rigorous and well-reasoned.

This is knowledge most engineers I've met completely ignore in favour of the superstitions, personal opinions, and catchy slogans that came out of the '90s and '00s. It's common to dismiss the early software engineering approaches with "waterfall does not work" – as if the people in the '60s didn't already know that?! Rest assured, the published software engineers of the '60s were as strong proponents of agile as anyone is today.

Read this early stuff.

Read the reports on the NATO software engineering conferences.

Read the papers by David Parnas on software modularity and designing for extension and contraction.

Read more written by Ward Cunningham, Alan Perlis, Edsger Dijkstra, Douglas McIlroy, Brian Randell, Peter Naur.

To some extent, we already know how to write software well. There's just nobody teaching this knowledge – you have to seek it yourself.


Agreed. But something I'd like to add:

You can read all the papers you want. At the end of the day, you actually need to practise writing code! Every piece of software involves a different domain, different requirements, etc.

I feel like the majority of developers these days just copy and paste stuff from the internet and glue some npm/nuget/maven packages together, brush their hands together and feel like gods. That is not how you become a good developer!

How do you become a good developer? Write code _yourself_. Then rewrite it again and again. Keep thinking "how can this be done better, cleaner, more elegantly? how could this be more readable, maintainable?" .. "Maybe others have found a better solution for this, let's do some research and investigate all options, weigh the pros and cons, and select the solution that fits best for this particular problem". Don't forget to factor in the cost of complexity. Is the code you are writing too difficult to understand for the next developer? Is the abstraction too rigid and complex? There is a cost to that.

Rinse and repeat. Sooner or later, once you've written enough code and thought critically about every line, you will acquire a "feel" for what is right and what is wrong. Then suddenly you are an industry expert. Suddenly you are training and mentoring others. Because you put in the goddamn effort.


The thing that improved my coding was actually maintaining my own codebase for a few years (4.5 years in the same job working on a system that I wrote from scratch). That way you see which of your decisions worked and which ones were crap. When you need to revisit your own code 6 months later and can't understand it, that's no one else's fault but your own.

Even then I think you have to be able to look at your own code critically and want to make it better. The current tech lead in my company churns out a lot of code quickly, but doesn't take constructive criticism of it at all well. It's not easy code for other people to jump into and there are no docs or tests. He thinks it's close to perfect. I don't.


This is an issue I've seen with programmers that hop around companies every year or two - they rarely get to see how their software choices play out in the long term and so never really internalize what works and what doesn't. Same issue to some extent with people who keep hopping around languages/frameworks - if you only have experience writing projects from scratch you'll never really understand what works and what doesn't from a long term maintenance perspective.


This is also why I have doubts about "experienced consultants". It usually means "job-hopping far more frequently than every year."


Part of thinking about it is not making excuses to yourself. If I have to rewrite something, I don't just dismiss it as refactoring; I also ask myself if I could have got it right the first time. And I try to estimate how long it will take, because you will improve with practice, and it is part of doing a professional job, even if it is thoroughly broken and abused at your place of work.


Agreed. I think maintaining your own code for some length of time is at least as important as writing new code.


> feel like gods

I've been disappointed with the industry for the opposite reason: I don't want to be doing this, but this is what is being asked of me.


Couldn't agree with this more.

A current favorite of mine is "The Emperor's Old Clothes", the Turing Award lecture given by C.A.R. Hoare. In particular his line "The price of reliability is the pursuit of the utmost simplicity"; it applies equally to maintainability and extensibility (I think these are part of what it means for a system to be reliable).

An idea I try to keep in mind while working is not to plan or build for future features but simply to leave room for them (meaning don't actively prevent their eventual existence through complexity). It has taken some practice, but it helps guide me to a simpler implementation.


> maintainability and extensibility (I think these are part of what it means for a system to be reliable).

Frequently forgotten is the duality of extensibility: subsettability, or contraction. Being able to remove or disable code without rewriting large parts of the application is just as important as being able to extend it!


yes!


This. So much this!

Microsoft especially has a tendency to throw out frameworks and helpers that just don't provide the needed flexibility, and once that has been addressed, what's left is a heap of extensibility points that wrap three lines of useful logic in hundreds of lines of framework code.

So if I could ask one thing of platform developers, it would be to leave the framework design to the user and just provide useful primitives. The two hours spent writing the extra glue code are easily saved against the days spent trying to learn the framework.


That's a gem: "is there room for features?" is going on my personal code review checklist, right after "you aren't going to need it".


And security!


One more unsung hero: Barry Boehm, who, in "A Spiral Model of Software Development and Enhancement" (1986), expounded on an iterative process in which each cycle is driven by an assessment of the biggest risks threatening the satisfaction of the stakeholders' 'win conditions'. Does that sound familiar?

In answering the original question, one would hope that focusing on the risks to successful completion should de-prioritize the fixing of things that are unlikely to become problems in practice.

https://dl.acm.org/doi/10.1145/12944.12948


Well, in some domains there are big changes due to hardware working differently. When doing number crunching nowadays, cache locality is king (cache is 100 times faster than RAM). So one should start from the memory layout of the data (data-oriented design?).

It even led to Stroustrup saying in a talk that one shouldn't use linked lists at all, but vectors instead, because even where linked lists should shine, they don't anymore.


Yes exactly. I recommend reading the book Design Patterns [1] for some ideas on how early "modular" software was conceptualized. This book was published in 1994 and still has a lot of relevance today.

[1]: https://en.wikipedia.org/wiki/Design_Patterns


One thing I notice is that software is becoming more brute force and less about "design" or "architecture" quality.

Good abstractions at the correct level and the correct data modelling can make the code around them very simple. Instead of nice designs that are easy to reason about, we have lots of unit tests to "prove" that it works. Instead of making a system reliable and trapping errors properly, we use Kubernetes to launch another pod as soon as one goes down. Maybe I am just too old, but I feel we are losing something going down this route.

(OK, there is code whose functionality means it inherently isn't simple, but I see far more CRUD apps that are built in an overcomplicated manner than code that is actually doing something algorithmically complex.)


This is about cost optimization to some extent.

Writing perfect bug-free code is nearly impossible and expensive. Hardware is going to fail, network links can get flakey, packets can get lost. Accepting that failure is going to happen no matter what and building for resiliency to it is a far better approach.

Unit tests are along the same lines: it's been measured that fixing a bug in production is 10x more expensive than finding it in development. Unit tests help ensure that more bugs are found in development, so there is less need for slow, expensive, and error-prone manual QA processes that hold up release frequency.

Unit tests also help document the code to some extent. As teams turnover and new developers who may not fully understand the nuances of some code come in, Unit tests create guard rails that help catch edge cases that may not be obvious to new members of the team and when the test fails makes you go back and give more thought to those areas and why they behave the way they do.

Rereading this, you claim to be on the older side, which surprises me; this seems to me like an opinion someone relatively new to the industry might have. I am continually amazed at the new and creative ways things can fail. Building for resilience is a 100x better approach that also reduces stress tremendously: instead of "All hands on deck, no one is going home tonight, we just took the system down with this last release and will need to patch it ASAP", the conversation becomes "OK, we rolled out the release to 1% of our users and we see error rates spiking. Let's roll it back and look at the logs to see what happened."

Similarly with unit tests- it used to be monthly or weekly release cycles, now we are down to multiple times a day because I don't need a programmer who washed out to get around to clicking on the right buttons to give it a green light.

These are tremendous improvements in the state of the art.


I am not arguing that unit tests are not valuable for the reasons that you have given, just that they seem to be used in place of decent abstractions these days. I have worked on well abstracted code without unit test and badly abstracted code with unit tests. The former is far easier to work with and bugs take a lot less effort to fix - that translates to costs in developer time.

> This is about cost optimization to some extent.

Is all the complexity and developer time that is required with Kubernetes really cost-effective? We added it at my work and it took around 6 months of developer time from what I could tell (it wasn't me who set it up). Now we have to debug it and none of our team is an expert, so there's lots of guesswork and googling. Using a number of small pods has given us a lot more problems than one big machine. If you are running out of memory because of a large upload, restarting a pod when it crashes doesn't help you.

Kubernetes has lowered our cloud costs and given us high availability, but it has increased development time in a number of ways. Maybe as we get more experience it will get better, but so far I would not say costs have been optimized as a result.


This is certainly an interesting thought. Ivan Zhao of Notion had some similar thoughts on design: that the early pioneers of computing were able to develop a large number of amazing insights not just due to the greenfield nature of the field but also because they were less focused on the business aspects.


I don't think they were any less focused on business aspects -- other than reliability, most papers and articles seem to focus on bringing costs down.

If anything, I think the Greenfield nature may have more to do with it. Today, we take it for granted that software is really expensive to make, to the point where we don't always question the basic patterns that make it so expensive. In those early days, it was far from obvious that software would be expensive.


We read some of these papers in school as assignments and there are many cited in software engineering texts like Sommerville and Pressman, so I am not sure no one is teaching it.


This might vary between schools, then. My education was heavily focused on writing algorithmically efficient code that was going to be thrown away weeks later. I don't think we ever deliberately practised modularisation, code reuse, changing requirements, simplicity of implementation, message passing for concurrency, and so on.


The junior developer sees a pattern and thinks why not make this a bit more generic so it handles similar cases. Oh but these cases fit a larger pattern.. and eventually you'll have written yourself an entire framework. This is very valuable, for the experience. Not for the framework, which will never be used. But you should go through it.

The mid-career developer knows this from experience and sticks to the minimum necessary to meet the requirements. Fast and efficient in the short term.

The senior developer mostly agrees with that but also knows that there is nothing new under the sun and all software ideas repeat. So from experience they can selectively pick the additional implementation work to be done ahead of time because it'll save a lot later even if it's not needed yet.

As an aside, this is one of the many reasons why I interview based on discussing past projects and don't care for algorithm puzzles. Unless I specifically need an entry-level developer, I'd prefer to have the person who has written a few silly (in hindsight) complex frameworks and has painted themselves into a corner a few times with overly simplistic initial implementations. That's the person I know I can leave alone and they'll make sane decisions without any supervision. The algorithm puzzle jockey, not so much.


Personally this is not my experience. Plenty of seniors spitballing not-so-useful solutions, premature optimization of sorts, etc.

The mid-career approach is perfectly fine, given you are also good at refactoring if needed.

Also, another thing that I'm not seeing stressed enough on this topic is: measure.


One thing I've found throughout my career is having Senior in your title is sometimes totally unrelated to your ability to write code or design solutions.


Most people never make it to your definition of senior engineer


Wow I can clearly see my transition from junior to mid-level based on this.


It really depends on what you're doing.

If you're writing code for a device that's going to hang out in the forest for 10 years with no updates, write just enough code to solve the problem and make it easy to test. Then test the fsck out of it.

If you're writing code for a CRUD web service that you know will get rewritten in 10 months, write just enough code to solve the problem and make it easy to test. Then, test the fsck out of it.

If you're writing an Enterprise app that will be expanded over the next half decade, write just enough code to solve the problem and make it easy to test. Then, test the fsck out of it. You simply cannot know how your code will have to change so you cannot design an architecture to make any arbitrary change easy. Accept that. The best you can do is write clean, testable code. Anyone... anyone who thinks they can design a system that can handle any change is flat wrong. And yet... time and again, Architecture Astronauts try it.


> You simply cannot know how your code will have to change

Nonsense.

Of course you can make reasoned predictions about how your software will change or not in the future.

Some will be wrong, but if you have any sense many will be right enough to have been worthwhile to address up front, and the balance will be clearly positive.


What are your thoughts on the habit of wrapping virtually every recurring piece of code in its own method? In case the implementation has to be overridden?

E.g.:

    def current_time
      Time.now
    end

vs. just repeating Time.now inline within all of the calling methods?


> In case...

When it happens, deal with it - odds are it won't. Expend the energy making the code base testable not pretending to be an insurance company ("In case sh..").


I don't really want to disagree with anything in here, this all looks good.

But I think it's silly to imagine that there's nothing you can do to anticipate changes and make your code more amenable to change.


The problem is the changes you anticipate often are not the changes that happen. Those anticipated-but-not-happening changes waste your time, and also clutter up your code, getting in the way of the changes that do need to be made.


> ... make your code more amenable to change.

This is where "make it easy to test" comes in.

And, sure, there are some things that are easy to anticipate. If you're processing CSV, you may get asked to process arbitrary delimited files next. But the point is that 3-5 years out, you'll get asked for something the current architecture cannot support and a major effort will be required. You cannot anticipate those and thus should never try to.


I want that on a poster.


I see what you did here.


When I was young and fresh, I wanted to cater for all sorts of future possibilities at every turn. But what if things change this way? What if things change that way? I came to realise that when things change, unless the design change is trivial (allow use of database Y as well as Z, etc) then you probably haven't anticipated the way in which it is going to change anyway. Better to have clean, straightforward code than layers and layers of abstraction and indirection, to the point where it's hard to tell what's actually going on. You'll probably find that when change does come, your abstractions are an annoyance and a straight-jacket.

A junior engineer recently presented me with some completed work. There was a data schema change on one interface and an API version change on another. He had created a version of the microservice he was working on which could be configured to work with either the old or new version of each of these, with feature flags, switching functionality, and both the old and new model hierarchies maintained. He viewed this as an achievement. While technically it was, the result was code bloat and unnecessary complexity. The API upgrade and schema change were happening at the same time and would never be rolled back, so his customisability was a net negative.

Code is a liability. Be sparse. Build with some flexibility but overall just build what is required and when the future comes, change with it. Don't build to requirements you can imagine, because the ones you don't will kick your ass.


Less is more. The simpler your code is, the easier it usually is to change to meet new requirements.

If you need to push every variable through 15 layers of interfaces and factories and generators, you'll have a bad time.


> The API upgrade and schema change were happening at the same time

Atomically? What if your DDL times out?

What you describe is, in my experience, extremely standard process for rolling out breaking changes (you do of course remove the switch and old version support after everything is rolled out).


> Atomically? What if your DDL times out?

Effectively yes, during a defined outage period (don't ask, this is finance), on any error anywhere in the platform during rollout, the entire platform would be reset to its prior state.

> What you describe is, in my experience, extremely standard process for rolling out breaking changes

But entirely unnecessary here. There was no requirement for a single version of the microservice to be able to address multiple versions of either dependency. It was a net negative to have that support; it added significantly to the software complexity and the raw LoC.


But abstractions, done right, can really help with refactoring.

But this is hard to do. I once made a major feature change - where I feared the consequences and the bug-tracking afterwards - but it worked like a charm, because the complex abstractions I had made in the beginning really helped, so it was done in a couple of days and not weeks or months like I had thought. I was really proud of my past self that day.

Only, like you said, when you have layer upon layer of abstractions, the problems arise. When you really have to dig in to understand what is going on. Or when the abstractions are simply not helpful for the new change, which happens way too often, too.

"Code is a liability. Be sparse. Build with some flexibility but overall just build what is required and when the future comes, change with it"

So yes, I very much agree to that.


I definitely agree with that, and there's an art to getting abstractions right :)


The short answer is: don't design for future use-cases.

Period.

Instead, only build what you need to solve the problems you have in-hand, but do so in such a way that your software and your systems are as easy to change as possible in the future.

Because you very, very rarely know what your future use-cases really are. Markets change over time, and your understanding of the market will also shift, both because some of your assumptions were wrong, and because you can never really have a total understanding of such a complex phenomenon.

Your business needs will change as well; your top priority from 2019 is likely very different than it is today.

That is why you build to embrace change, rather than barricade against it.

As to how, I'd start with Sandi Metz's 99 Bottles of OOP: https://www.sandimetz.com/99bottles

Learning to write readable code is also pretty important; Clean Code is a good starting point (https://amzn.to/3168z3A), but I'd be keen to know of any shorter books that cover the same materials.

Growing Object-Oriented Software Guided By Tests (GOOSGT) is a good read as well: https://amzn.to/3du1sEL


Reading Sandi Metz is like walking into a beautifully tidy room. I can’t explain it any better than that. +1 on that book and POODR.


This.

The new requirements you will need to implement are probably unknown right now.

If your code has no extra fat, it will be easier to adapt.


I disagree on Clean Code. It puts too much importance on superficial "5 easy steps" points that put too much priority on the wrong things. Reading quality open source code is a better way of getting that understanding IMO.


This #2

If you learn anything from having super complex software that tried to anticipate future needs, it's that you rarely get it right.

Write the code for the problem you have today. Pay attention that, by the choices you make, you don’t walk yourself into a corner.


32 comments so far and no mention of the word budget. There's a great analogy between software engineering and construction. Does your organization build skyscrapers and gorge-spanning bridges? Or does it build driveways and swimming pools? Commercially developed software consumes capital to get something in return. Are the people spending capital budgeting for a driveway or for a skyscraper? Must it be done this month or in two years? Sure, those are false dichotomies, but they illustrate the point: it is desirable for the bosses to clearly define engineering spend.

Over-engineering can be avoided by carefully sticking to a budget.

For a lot of developers, there's a trade-off between rationality (in the sense of ROI) and feeling good. It doesn't feel good to make every engineering decision against a budget. My dopamine levels [0] skyrocket when I visualize making some component generic, or future proof. I come crashing back to earth when I realize the budget is for a driveway. Budgeting earns the bosses / customer / capital a better return. Engineering for future use cases or making things generic (or the mere anticipation thereof) is an easy way to get a massive hit of neurotransmitter that makes you feel good.

[0] Not a neuroscientist. But I think it's useful to label that spike of feel good and motivation. It may have nothing to do with dopamine.


> My dopamine levels [0] skyrocket when I visualize making some component generic, or future proof.

I think I've successfully rewired that impulse in myself now, at least partially. I do get that feeling from imagining great systems, but I also get that dopamine hit from deleting old stuff and simplifying things as much as possible. We can ditch compatibility for API v8? Awesome, I just made my code smaller and we can scrap that entire abstraction layer! The service just lost 20% of its LoC :)


Honestly I think this may be a component: besides code shame (my code is bad and I don't want other people to see it) a lot of programmers also feel protective of their code. They want it to last forever, a paean to their skills. Writing simple code to throw away and refactor feels like a failure -- it should be perfect and eternal.


I totally understand the sentiment of writing elegant code that lasts forever, but I think developers need to manage their expectations a little better.

If you're writing foundational code for an operating system, then yeah, whatever you write is going to stick around for a while. (Which means you need to be a LOT more careful about what you write.)

If you're writing code for a mobile app or website, expect your code to get thrown out in a few months due to shifting requirements. I'm sure the Facebook app has been completely rewritten more than once, since they switched to React Native at one point.

If you're writing a JavaScript library, expect your code to be replaced next week. :D


I've reprogrammed myself to only get the dopamine hit when I delete old or unneeded code :-)


Deleting code is one of the most satisfying things in the world. Puff and gone are all the worries, all the bugs, all the maintenance hassle, all the limitations that code imposed on you.


You'd be able to sum it up as different stages of the reward process:

As you learn to code, it becomes increasingly satisfying to solve problems, so you write more code, which provides more satisfaction, but it gets unwieldy quickly and you start to get lost in the spaghetti.

As you learn to abstract, it becomes increasingly satisfying to create abstractions and manipulate them afterwards, allowing you to not get lost thanks to the now beautiful mental map of your code. But at some point in time it leads to excessive abstractions and issues start to pile up, and you can't quite make sense of the map anymore because everything is a little bit too generic and doesn't carry any more meaning, and such abstractions invariably leak through the lasagna. You usually start by writing more abstractions, but this only feeds the loop.

And finally, as you learn to delete code, you get back to simplicity, and the disappearance of the mental and causality weight associated with that deleted code becomes increasingly satisfying. You start inlining code, you start repeating code, only for the repeating patterns to emerge by themselves, you start thinking about names for those, and the names come naturally. Instead of powering through, there is a state of continuous, unstoppable flow based on a loop of implementation and emergence, where a giant Rubik's cube effortlessly solves itself by throwing it in the air and falling into place.


Very often projects are overbudgeted though. Not just cost and time, but quality of hires, salaries, promises to investors, and a competitor's feature list.

The scope of the project then expands to fill the budget.


Promises to investors and a competitor's feature list define your scope, not your budget. Salaries and quality of hires are correlated (but not causal), and are directly tied to cost as a component of it, along with time. Feature cost estimation can be as simplistic as:

Running Costs + (Weekly cost of Engineers * Weeks Slated) + (Hourly Cost of Managers * Hours in Meetings/Discussions) = Projected cost / required budget.

Increasing scope causes an increase in time spent both in meetings arguing about it and engineers working to build it, which increases the time side. Increasing salaries / quality of hires increases the cost side. Promises to investors or contractual obligations to partners/customers are non-negotiable increases in scope, so directly drive how much time something takes.


This is a good response. One of the things that has helped me temper myself is to explicitly consider the ROI of my work. If I do this, will I save more time in the long term than I expend doing it? Depressingly often, the answer is "no".

Software engineering has the potential to earn a lot of money with relatively little effort.

This is both a blessing and a curse: if I just do anything that I feel like, maybe only 5% of what I do would actually carry a profit – but that profit would be big enough to compensate for the other 95% of the time when I only do things that make me feel good without getting any real return from it.

If your goal is to produce maximum value, you have to think very long and hard about what projects you take on. It is worth taking the time to consider projects and research their viability, because very many of them won't pay off in the long run. You'll have to get used to saying "no" to anything that doesn't have an obvious route to profit.

On the other hand, if your goal is to have a little fun along the way and not only to produce maximum value, you can use this fact to your advantage! If you know you've rejected a bunch of low-value projects recently, you can afford to take on some low-value (but fun!) projects and still sit firm in the knowledge that you're still producing net value in total. Because the 40% of projects you've done that do produce a lot of value will easily pay for the other 60% that aren't obviously profitable. Or whatever ratio you settle at.


Don't design for future use cases unless they are known in detail (in which case they're not really future use cases anymore). Things you don't know, will change. Your extra effort may actually hurt you in the future.

Better is to design for change. Keep everything modular. Keep your concerns separated. When something needs to change, you can just change that thing. When the whole basis of the system needs to change, you can still keep all the components that don't need to change. The only future use case you should design for is change, because that's the only thing you can be certain of.

So don't make things more generic than you need today, but make sure it can be made more generic in the future. And in that future, don't just add new features all over the place, but reassess your design, and look if there's a more generic way to do the thing you're adding.

I do this sort of stuff all the time on my current project, and it works quite well. There's no part of the system we didn't rewrite at some point, but all of them were easy to rewrite.


Additionally, even if you know the use cases, I often find it more productive to build the code of the application one use case at a time. This allows greater focus on the task at hand. People cannot effectively multi-task, so splitting focus across different use cases at the same time is slower and causes more mistakes than just coding one use case at a time.


The chapter "The Second-System Effect" from The Mythical Man-Month (Frederick P. Brooks, Jr.) talks about this problem:

“An architect's first work is apt to be spare and clean. He knows he doesn't know what he's doing, so he does it carefully and with great restraint.

As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used "next time." Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.

This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.”

The overall advice is to practice self-discipline:

“How does the architect avoid the second-system effect? Well, obviously he can't skip his second system. But he can be conscious of the peculiar hazards of that system, and exert extra self-discipline to avoid functional ornamentation and to avoid extrapolation of functions that are obviated by changes in assumptions and purposes.

A discipline that will open an architect's eyes is to assign each little function a value: capability x is worth not more than m bytes of memory and n microseconds per invocation. These values will guide initial decisions and serve during implementation as a guide and warning to all.

How does the project manager avoid the second-system effect? By insisting on a senior architect who has at least two systems under his belt. Too, by staying aware of the special temptations, he can ask the right questions to ensure that the philosophical concepts and objectives are fully reflected in the detailed design.”


Yet 'spare and clean' can mean breaking the problem down into primitives. And primitives can be reusable.

I agree, contriving baroque features and APIs is usually wasted time. But planning simple, basic operations for others to build upon is good sense. When possible.


Yeah we call them "building blocks." Instead of building specific individual features we create building blocks that we can use to implement features. The key difference is that by exposing the building blocks to customers they can use the building blocks in ways that we didn't imagine.

The most important reason for us to do this is so that our customers can always accomplish what they need, even if it's somewhat manual or painful. By exposing our building blocks we never have an emergency where we need to implement the feature "copy foo to bar so that customer X can accomplish important thing Y." Instead we can say "you're able to do that manually and we'll talk about a feature that will help you automate if the manual process is too painful."

Sometimes you just need to write some straight business logic, don't get me wrong, and it takes experience to know when to create a new concept and when to just bang out some code. But exposing primitives or building blocks reduces the amount of "critical" development substantially.
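
A toy Python sketch of the difference (every name here is invented): instead of shipping the one-off feature, ship the primitives and let the feature - and the customer - be a composition of them:

    # Primitives customers can compose themselves; names are illustrative.
    def export_records(source):
        # read any iterable of mappings into plain dicts
        return [dict(item) for item in source]

    def import_records(target, records):
        # append plain dicts into any list-like target
        for record in records:
            target.append(record)

    # The one-off feature "copy foo to bar for customer X" becomes a
    # composition of primitives the customer could also run manually.
    def copy(source, target):
        import_records(target, export_records(source))

    foo = [{"id": 1}, {"id": 2}]
    bar = []
    copy(foo, bar)
    assert bar == foo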


I found out about primitives/building blocks after 2 years of dealing with a foreign discipline,

but I have a problem sharing it with people because I just don't have the credentials.

Is there a well-known book/resource that I can share with people about this?


Note that the Mythical Man Month was written a long time ago, and in the book the architect was more like what you would call a product manager today: they were in charge of the conceptual integrity of the system from the point of view of the user.

So while you can still draw parallels, just realize that this wasn't written about what we call architects today.


I am a firm believer that the most important superpower a software engineer and their team should have is refactoring.

By refactoring what I mean is continually revisiting the architecture of your code, identifying common functionality, better organisation of code, better abstractions. The right decisions for your codebase change as it evolves and so you need to keep reexamining these (implicit) decisions.

Continuously, incrementally refactoring your code enables so many things to work better. Your example here is a great one - don't implement something you don't have a direct need for. Don't design for it. If you have a culture of refactoring then you can be confident that in the future, if that need crops up, you can implement it with appropriate refactoring so that the result will look at least as good as if you had designed it in up front. If you don't refactor as a team, then you have to design it in now, because putting it in later will result in a mess.


Which brings up the topic of treating code like cattle, not like art.

Code should be treated as expendable and transient; your clever implementation might be considered deprecated in a month due to changes in requirements/design.

Unfortunately it's not easy to convey this idea as a team lead. People get "attached" to their code and the idea of refactoring is viewed as mostly unnecessary and dangerous (I wrote the code, I'm not gonna rewrite it)


This. I did not decide to do this, but over time I refactor things. As I gained experience doing that, I realized I had started to write "refactorable code" (you heard it here first). In JavaScript land (I'm a fan by the way) this is inevitable. jQuery, React, hooks, context, promises, async, ES6, on and on.

The churn in JavaScript is painful, as many people note, but the code I started with is now smaller, clearer, and easier to refactor. So yes to refactoring.


I was lucky to work with a great and experienced SW engineer early in my career. Among other things we were working on a Bluetooth stack of approximately 1 million lines of C.

I had assumed in my naivety that as I got more experienced I would get better at doing things right 'first time' and wouldn't need to keep going back and fiddling with code I'd already written because it now wasn't quite right. What I learned instead was that he spent at least 50% of his time re-working existing code.

There are two important things. The right design changes as you add more code. You learn more about what the right design should be as you work on the code.

You're never going to get it right first time. So instead concentrate on continuously making it better.


Data is the only thing that I personally care about "making future proof". Migrating data is a pain. Most everything else can easily be feature flagged or coded around.

Even then, I don't dig too deeply into things. I look for two things:

* What a migration path to realistically expected features _might_ look like. If I'm debating between two column types or table setups, go with the more flexible option.

* What scenarios will be extremely hard to get out of. Avoid those when reasonable.

----

In other words, instead of proactively planning for some feature to evolve into the unknown in some way, I'm looking to make sure I'm keeping flexibility and upgradability.


Really agree with your two bullet points. When I'm doing design reviews with people those are always the things I bring up. We're not going to actually design or build for requirements that we don't have yet but we can take a guess at what future high-level requirements might look like and use that as an input when talking about the current design.

An example of this conversation might look something like "This design is nice and simple but here are the possible scaling bottlenecks. If we need to scale X then what would be the path for that?" or "There's no reason to make this configurable now but if we needed to in the future how would we do that?"

Those sorts of questions shouldn't drive the design but if you're trying to decide between two designs that feel roughly as good as each other then it's a good way to tip the scale. I also find it a good way to introduce junior devs to thinking through those kinds of ideas.

One thing I had to learn to be careful with was to make sure it's really clear we're talking about future hypotheticals. I've had cases where the conversation about future design ideas ended with "Ok I'll go get started on that" and then we have to talk about how we weren't discussing real implementation changes, just hypotheticals about how the design might work in the future.


Remember that design is about making choices that actually limit the possibilities in design space, rather than extending them.

Building something “generic” to anticipate “future use cases” is not design; it’s the postponement of design.

If you anticipate some unpredictable future feature, then either you do not understand the design space yet, or you are following the directive of a business which does not understand it.

Startups (prior to product/market fit) have a legit reason for not understanding their design space; they’re still exploring it. That makes design pretty hard. Instead of creating The One Generic System To Rule Them All, I’d recommend small, less-generic, low-risk prototypes, that can be easily replaced or refactored. Keep them uncoupled. Meanwhile, use those prototypes to build up your understanding, so that you will be able to make informed design choices at a more mature stage.

I’m speaking in broad strokes; reality may apply.


> How far should we go in making things generic?

My rule of thumb for this is to repeat yourself at least 3 times before trying to build an abstraction to DRY them up.

3 use cases is obviously not always sufficient to inform the design of a good abstraction, but IME:

- abstracting at 2 use cases is very often premature and results in leaky/brittle abstractions where the harm from added coupling between fundamentally incompatible usages far outweighs whatever little harm can be introduced by keeping the 2 usages as repeated code, and

- with any more than 3 use cases, the maintenance cost of keeping the usages in sync starts to scale non-linearly, so at the very least thinking about a design for a potential abstraction at that point becomes a worthwhile exercise, even if we end up deciding it's not yet appropriate (hence the "_at least_ 3 times").
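
A toy Python sketch of how that plays out (the notification example is invented; "send" is just a stand-in):

    def send(template, **fields):
        print(template, fields)

    # Two near-duplicates: tolerable, and the shared shape isn't obvious yet.
    def notify_signup(email, name):
        send("signup", to=email, name=name)

    def notify_password_reset(email, name):
        send("password_reset", to=email, name=name)

    # The third one is what makes the abstraction suggest itself:
    def notify_invoice(email, name, invoice_id):
        send("invoice", to=email, name=name, invoice_id=invoice_id)

    def notify(template, email, name, **extra):
        send(template, to=email, name=name, **extra)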


>My rule of thumb for this is to repeat yourself at least 3 times before trying to build an abstraction to DRY them up.

Also known as WET (Write Everything Twice).


This is an interesting question. Intuitively, there should be some sweet spot, some golden middle between going 100% ad-hoc and writing for the future with no real use cases. I'm not sure anyone in this world knows where this sweet spot is, though.

In 2017 I started https://wordsandbuttons.online/ as an experiment in unchitecture. There are no generic things there whatsoever. There are no dependencies, no libraries, no shared code. Every page is self-sustaining, it contains its own JS and CSS, and when I want to reuse some interactive widget or some piece of visualization somewhere else, I copy-paste it or rewrite it from scratch. In rare cases when I want multiple changes, I write a script that does repetitive work for me.

This feels wrong, and I'm still waiting for it to blow up. But so far it hasn't.

Sure, it's a relatively small project; it has about 100K lines along with the drafts and experiments. It is, though, inherently modular, since it consists of independent pages. But still, this kind of design (or un-design) brings more benefits than I could ever hope for.

Since I only write the code I want on a page, every one of my pages is smaller than 64 KB. Even with animations and interactive widgets.

Since all my pages are small, I have no trouble renovating my old code. Since there is no consistent design, I forget my own code all the time, but 64 KB is small enough to reread anew.

And since there are no dependencies, even within the project, I feel very confident experimenting with visuals. Worst case scenario - I'm breaking a page I'm currently working on. Happens all the time, takes minutes to fix.

I still believe in the golden middle, of course; I'll never choose the same approach in my contract work. But my faith is slowly drifting away from "design for the future" in general and "making things generic" specifically. So far it seems that keeping things simple and adaptable for the future is slightly more effective than designing them future-ready from the start.


Thanks for sharing wordsandbuttons, it is full of very interesting content, which is, after all, the only thing that humans consume from it. It is a very good answer to OP. Furthermore, the site is very satisfying to navigate, especially with the dev console's network tab open. I'd just love it if the whole web was like this again. Keep up the good work!


I think a common conflation is seeing "making something future proof" as "making it more generic".

IMO, good future-proof design is about putting in place good components and system boundaries.

Those components and boundaries can be highly specialised and have as few options as possible - it's much easier to make a system boundary more complex than to make it simpler. So start as simple as possible!

Now, with those boundaries, you can easily write tests, and iterate on the different parts of the system. Bad code in one component doesn't "infect" code in another part of the system.

Most "balls of mess" systems that I have seen came down to not having clear boundaries between components of the system, rather than being too generic or not generic enough.


David Parnas makes the distinction between general software and flexible software. General software runs without modification in a variety of environments. Flexible software can cheaply be modified to run in a variety of environments.

When you cannot predict the future, flexible is often more efficient than general.


1. Embrace refactoring/rewriting as inevitable;

2. Understand it's very often easier, faster and better to write something trivially but twice than writing it well once;

3. Realize no matter how hard you try, unless this is something you've already done once (which brings you back to #2 above), then your information of the problem is incomplete and therefore any all-encompassing solution you may have will essentially be partial and based on extrapolation of your knowledge and guesswork.

The above also deals with a significant cause of procrastination - an internal subconscious feeling we can't do what we're about to do well, therefore we'd rather not even start it. Just sit down and tell yourself, "today I'm going to write a bad module X", and with enough time you'll be able to sit down again and write a good X.

The one thing you should spend a bit more time on is interfaces and decoupling, but as for internal implementations of things, or even entire system blocks that you can swap later on - don't bother too much. It'll be OK at the end, and you'll get there faster. Even if you need to rewrite the whole god damn thing.


^ This

Also:

1. cut features and enjoy saying no.

2. set deadlines and ship (if not enough time, see 1 above)

3. don't skip tests; makes refactoring/rewriting a breeze


Two things in combination help me fairly well in that regard.

1. Keep It Super Simple (KISS).

Implement the easiest solution that works first. Copy-pasting code is ok at this point. So if an "if" will do it, use an "if" (don't start with a BaseClass with an empty default method and a specialized class which overrides it). There's a small sketch of this at the end of this comment.

Once you have something working (hopefully with some automated tests?) you are allowed to refactor and abstract. But see next point.

2. The Power of Three!

You are only allowed to abstract code once you have copy-pasted it in at least three places. If you only have two, you must resist the urge and move on. Maybe add a comment saying "this is very similar to this other part, consider abstracting if a new case appears".

After abstracting stuff, run tests again (even if they are manual) to make sure you have not broken anything.

Be warned that this method is not guaranteed to produce the "best possible code for future you". If you keep doing this long enough, you might get stuck at "local maxima" in design. Future you might need to do big refactors. Or not. That is the nature of programming IMHO.
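
The sketch mentioned under point 1, in Python (the shipping example is invented):

    # Start with the "if" while only two real cases exist.
    def shipping_cost(order):
        if order["country"] == "US":
            return 5
        return 15

    # Resist starting here until several real variants actually exist:
    #
    # class ShippingPolicy:              # empty base + one subclass is
    #     def cost(self, order): ...     # indirection with nothing to vary
    #
    # class DefaultShipping(ShippingPolicy):
    #     def cost(self, order): ...

    assert shipping_cost({"country": "US"}) == 5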


Knowing the point where you pass good design and start over-engineering is an art developed through wisdom and experience. Sort of like the art of making good LOE estimates given incomplete requirements and an unknown team.

I would look to senior people who have boots-on-the-ground experience delivering and maintaining projects/products, ask them if they think you're overdoing it.

edit: I want to add, get input from non-engineers too. Ask them "in your experience, how far have projects diverged from the initial requirements/purpose? I'm trying to plan for the future but not over-complicate things"


The stupidest over engineering pattern I've seen is having an interface for every single class in a Java project. This may make sense if you are building an API within a framework or library that people other than you are allowed to implement but when the interface is only used within the project it was created in then it is absolutely meaningless to add the interface before you need it. You can always add a necessary interface later by editing the code. Doing a search for "ClassName" and replacing it by "InterfaceName" isn't exactly difficult.


Some of this is the experience to know what parts of a system are likely matter in the long run and some of it is the discipline to hold yourself and your team accountable.

If I had to boil the first part down, I'd say you need to focus your engineering on the interface points - network protocols, schemas, API sets, and persistence formats immediately come to mind. It's a truism of the profession that those are the most expensive aspects of a system to change, and therefore they're the least likely to be mutable down the road.

That's really the easy part... the harder part is maintaining the team and self discipline to keep things simple. For better or for worse, engineering job placement (and oftentimes personal satisfaction) is highly resume-keyword driven. Organizations and people all tend to chase the next resume keyword on the assumption that it'll help them deliver more efficiently or get the next job placement or write the next blog post or whatever else. The net of this is that there's a very strong built in tendency for projects to veer in the direction of adding components, even before considering whether or not they're appropriate for use. So keeping your eye on actual, real project requirements over all else is both important, and can require convincing and political work throughout a team and organization.


Disclaimer: I have never worked in large teams but this problem arises everywhere with the caveat that in smaller teams and solo development the coordination requirements are much simpler.

That said, I think there are two steps in designing abstractions:

The first is to split up and isolate both code and data into small, simple pieces. These can easily be non-DRY (structurally) and often are. You'll have cases where you often (always?) pass things or do things in sequence in the same way.

The second is to merge the pieces with parametrization (or DI in OO), defining common interfaces, polymorphism etc.

The first part is very often beneficial and makes code more malleable, which makes future features/changes less cumbersome.

The second part however is the dangerous but also powerful one. It can lead to code that is much more declarative, high-level and adding new features becomes faster. But the danger is that a project isn't at the stage where these abstractions are understood well enough, so you end up trying to break them up too often.

I try to default to writing dumb, simple small pieces and deferring abstractions until they become provably apparent. Fighting the urge to refactor into DRY code.

Now there is also another issue with writing "future proof" code, which doesn't involve abstractions but rather defensive programming, which is an entirely different issue.


I would agree with the parent here: try not to go into abstractions until they are apparent. But there are some pretty solid patterns that have worked through the years and still hold up today.

The first is pub-sub. This can be an event system or an observer pattern, but the point is loose coupling: when something happens, the component can announce it out into the ether and not care whether anyone is listening.

The second is sole responsibility of change. This does not have to be something as formal as Redux or some other library if you are using a back-end language, but there should be one function that changes one segment of data, and that is the one function that owns the writing of that data.

The third is more optional, but in certain places it really shines and has become kind of a lost art, and that is plug-in style interfaces. Say you have a portion of your app that parses text files and stores them in a common data structure, and you know that there will be future text file formats but you do not know what they will be. A plugin architecture is a natural fit here, where you define the interface and the data storage side, then you only need to implement a plugin for the parser side for each new file format. I always write these as true plugins where you can drop a new one in a plugin folder and the application picks it up: no recompiling, no configuration, no restarting, just a black box that is self contained.
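
A minimal Python sketch of that kind of drop-in loading (the plugins/ folder and the parse()/FORMAT convention are assumptions for illustration, not a prescription):

    import importlib.util
    import pathlib

    def load_parsers(plugin_dir="plugins"):
        # each plugins/*.py file defines FORMAT = "csv" (etc.) and parse(text)
        parsers = {}
        for path in pathlib.Path(plugin_dir).glob("*.py"):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            parsers[module.FORMAT] = module.parse
        return parsers

    # The host application only knows the interface, never the formats:
    # parsers = load_parsers()
    # records = parsers["csv"](open("data.csv").read())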


I fully agree with the first. You usually need some designed way of handling communication and/or IO in a sufficiently large application. Whether these are event queues, channels, commands, pipelines or w/e depends. But thinking about this up-front is the key here.

I would describe the second as a constraint rather than an abstraction. And I fully agree with this. The most obvious and probably most talked about benefit of FP would be avoiding state management complexities.

The third one is neat. But I would say we're already in danger territory here. Just writing a function and naming it properly is the minimal step required to make refactoring into a strategy or plugin pattern later fairly straightforward.


One heuristic I've adopted: when faced with a design decision, I ask the question "what's the simplest possible way to do this (that meets the requirements)?" So, do you need a fully event driven Kafka based system, or would a cron job running a python script be sufficient?

The followup is, if we have to change/replace said system, how difficult would that be? If scaling the simplest solution would be difficult/painful, then one can start to look at higher-complexity solutions. The idea isn't necessarily to always choose the simplest solution, but having it in mind can be helpful. It crystallizes one of the endpoints of the simple/complex spectrum and helps weigh the pros/cons of various approaches.

As a side note, it's sort of amusing that a lot of design focused interview questions are around things like "design a twitter/instagram like system that'll scale to billions of requests per day." I've never had to do anything like that IRL, but no interviewer has ever asked me to design, say, an invoicing system that gets called rarely. So perhaps one of the reasons the arc of software engineering bends towards complexity is that we're continuously rewarding a "build massive scalable systems" mindset?


If you ask an ice skater how not to fall, they will likely tell you that they have fallen tens of thousands of times and continue to fall all the time. But they do know how not to fall, because they are intimately aware of what it feels like to fall and know how to avoid it as a result.

I don't know that there is any readable wisdom that will teach you how not to overengineer or underengineer, such that with that knowledge, you automatically know how to achieve the right balance. It is likely a necessary part of the process to build out software projects that are in fact overengineered or underengineered and to intimately learn from that process as well as the aftermath how to tune one's own process to strike an artful balance between the two extremes.

put another way, the MS and Google teams you worked with were screwing up, but screwing up in a way that is necessary for people to learn, if they do in fact learn (they might not learn).

all of that said, starting out by intentionally underengineering, if you know that you are one to start overbuilding, might be a good strategy. but you might have to do some really huge refactorings subsequent to that when you learn your architecture is insufficient.


This is by far the main reason to have people with decades of experience in engineering teams.


Well said! This is why older engineers should be prized and actively recruited. When I see engineering teams staffed solely with "Young'uns" I know there are hard times ahead.

Enthusiasm and Energy need to be controlled and directed by Experience and Good sense.


Make your code modular. Use dependency injection. Write short, clear functions.

When everything is DRY and functions follow SRP, it's hard for your code not to be "future-proof."

Tests only when they make sense, not when you are mocking half the app.
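
For what it's worth, dependency injection doesn't have to mean a framework; a rough Python sketch (names invented):

    class Mailer:
        def send(self, to, body):
            print(f"mail to {to}: {body}")

    def welcome_user(email, mailer):       # the dependency is passed in
        mailer.send(email, "Welcome!")

    # In tests, a fake with the same .send() is enough - no mocking half the app.
    class FakeMailer:
        def __init__(self):
            self.sent = []
        def send(self, to, body):
            self.sent.append((to, body))

    fake = FakeMailer()
    welcome_user("a@example.com", fake)
    assert fake.sent == [("a@example.com", "Welcome!")]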


> At what point we should stop designing for future use cases?

Guessing the future and engineering to cope with it is a risky, error-prone business. Good engineering practice should always seek to minimise reliance on unreliable (prediction) data to create future-proof designs.

So I'd go with stopping as soon as it meets current use cases AND then shift gears to make it easy for someone else to pick-up in the future, e.g. write whatever tests are necessary and document thinking.

If it's easy to extend now, it will still be so in the future. Plus, there will be whatever learning has occurred since to make an even better job of it.

> How far should we go in making things generic?

Only if: 0) It's not only about guessing future needs. 1) The maintainers are over 90% certain it will make it easier for them to maintain and test now.

> Are there good resources for this?

There's a mountain of media on "Agile" software development and its different flavours. It's not particularly Software Engineering focused, but I enjoyed "The Lean Startup" by Eric Ries.

Good luck and have fun.


This tendency has been noted since at least the Manhattan Project, described by none other than Richard Feynman. The prescription for oneself is pretty simple - just say no! It is relatively easy to catch yourself whenever you think about some improvement that's irrelevant to delivering so long as you have convinced yourself that you need to wholeheartedly focus on delivering.

It is more difficult to persuade others not to over-engineer. I have tried and failed many times to do so. In fact, if you try too hard you may just make them hate their job. Folks get into software development for a whole host of reasons and only one of them is shipping. They may not be satisfied with a job where they funnel requests from a PM directly into code with little creative input. I'm not sure I can give good advice on this front, other than to look out for certain red flags during hiring.


> The prescription for oneself is pretty simple - just say no! It is relatively easy to catch yourself whenever you think about some improvement that's irrelevant to delivering so long as you have convinced yourself that you need to wholeheartedly focus on delivering.

That's fairly true if you intend a monomaniacal focus on delivery and you are dealing with improvements that are truly irrelevant.

If other concerns like code hygiene have nonzero value, and/or if improvements have some potential relevance to delivery but are not unmistakably essential, then things get more interesting. Doing the least work possible to get by with a work item may be good for initial velocity but comes at a long-term cost.

It's probably easier in an environment where things like code hygiene expectations and review standards are well-known from a combination of express standards and team experience, but not all environments are like that.


Two points here. First, I think the primary trap is that developers think they can judge what code is hygienic. This is the evil twin of the notion that developers can judge what part of a system is slow without profiling.

Developers regularly hold religious wars on basic matters of code style. While you may profit from 'hygiene' there be dragons - developers often consider as hygienically critical such tasks as refactoring React class components into functional components with hooks. The industry is not full of Edsger Dijkstras. In practice the sub-optimal greedy algorithm gets the job done since it encourages a straightforward approach to the problem at hand.

The secondary trap is that developers think they can amortize up-front development costs over a long period of time. Projects get the axe if they do not deliver. The longer you spend writing a feature, the longer you wait to test your hypothesis that it is valuable. In many cases the odds that the feature is valuable run below 50%. You may be laying a solid foundation when all that was asked of you in the first place was a shantytown.

Everyone thinks they are the reasonable person with discerning taste who only refactors code as necessary to optimize total cost. In practice that person is as mythical as a unicorn. It is very difficult to convince developers this is true, but true it remains.


> First, I think the primary trap is that developers think they can judge what code is hygienic.

If software engineering is engineering, they can. It may be a skill that is weaker or stronger in different engineers, varying particularly with relevant experience, and it may require complex and context-sensitive (including the present development team) evaluations, but it is a real skill that exists.

> This is the evil twin of the notion that developers can judge what part of a system is slow without profiling.

Well, it's not, because there is no analog to the “without profiling” part. It's true that there is an analogous tendency of prejudging problem code, but certainly, at a minimum, hygiene issues can be discovered by experience of problems of development/maintenance stemming from, e.g., code duplication for shared functionality instead of use of shared abstractions.

> The secondary trap is that developers think they can amortize up-front development costs over a long period of time.

Note that when I said long term I don't necessarily mean a “long period of time” but “some period longer than the minimum development time of the present item”...

> Projects get the axe if they do not deliver.

Yes, they do, but plenty of real projects aren't, especially at initiation, delivering actual value every iteration, just progressively refined demonstrations (this is particularly common on replacements that, for whatever reasons, can't go the strangler/ship-of-Theseus approach), and on those projects the team often has an idea where things need to be for real delivery. It's quite possible for code that makes subsequent tasks more costly, even though it saves time this iteration, to delay real delivery.

> Everyone thinks they are the reasonable person with discerning taste who only refactors code as necessary to optimize total cost.

No, in my experience that's not even approximately true. Most developers I've encountered think that, in their current team environment, they individually have a natural tendency either to excessively favor direct solutions that produce ugly code with outsized downstream cost or to go a few steps too far with overengineering abstraction (and most of them also recognize that that overall tendency is only on average, and that they also miss on the other side sometimes.)

My point is that when you get beyond the simplistic rejection of things which have no relevance to the task at hand (which isn’t a scenario that occurs all that much), evaluating what is the right balance is nontrivial.


> If software engineering is engineering, they can.

Therein lies the question :)

> Well, it's not, because there is no analog to the “without profiling” part. It's true that there is an analogous tendency of prejudging problem code, but certainly, at a minimum, hygiene issues can be discovered by experience of problems of development/maintenance stemming from, e.g., code duplication for shared functionality instead of use of shared abstractions.

Even in this case we are limited. I've seen many cases where someone correctly identifies a piece of code that is painful to maintain and then dives in only to multiply the problem. We are even better off in performance land because we can test our code immediately after we write it to verify we haven't left things worse.

> No, in my experience that's not even approximately true. Most developers I've encountered think that, in their current team environment, they individually have a natural tendency either to excessively favor direct solutions that produce ugly code with outsized downstream cost or to go a few steps too far with overengineering abstraction (and most of them also recognize that that overall tendency is only on average, and that they also miss on the other side sometimes.)

I don't want to wade into the thicket on this one except to note that it's possible for someone to articulate something they may even believe on some superficial level without truly internalizing that belief, and IMO I observe this behavior a lot.

I don't want to invalidate your point outright because I think it's valuable and I even agree that it's rare to encounter a code base that would not benefit at all from 'code hygiene.' All models are wrong but I think an aggressive focus on delivering functionality is a useful one in most contexts. Put another way, a good but imperfect rule of thumb is to refactor only in order to gain something tangible for end-users.


Summed up as YAGNI

You aren’t gonna need it.

As with anything tho, ruthlessly cutting stuff or rounding corners is in tension with supporting future capabilities; sometimes, it turns out, you do need it.


Over-engineering is the curse of smart guys without enough experience.

I believe this is well documented today, but maybe not that much in academia.

What they need to avoid this trap is to talk with people who got burned earlier.

As a rule of thumb, don't design for future use case, ever.

Future-proofing is a fool's errand.


It depends on the impact of the change to add flexibility in the future. A schema change to a table with many rows that could involve re-indexing usually makes me think harder about the initial data model and schema design. Refactoring code usually involves less cost and risk, especially with modern CI/CD practices, so I'm less likely to add a layer of abstraction if it's not required.

I've also found that designing for unknowns can sometimes be resolved by asking questions about the unknowns until they're known. Sometimes, business stakeholders have an answer, they just weren't asked.


Designing for the future rarely works out very well. What does work, however, is encapsulation.

For example, in the 80's the linked list was my go-to data structure. It worked very well. But with the advent of cached memory, arrays were better. But my linked list abstractions were "leaky", meaning the fact that they were linked lists leaked into every use of them.

To upgrade to arrays meant changing a great deal of code. Now I encapsulate/abstract the interface to my data structures, so they can be redesigned without affecting the caller.
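
A tiny Python sketch of that kind of encapsulation (illustrative only): callers depend on the interface, so the internals can change from a linked list to an array without touching them:

    class Sequence:
        def __init__(self):
            self._items = []              # once a linked list, now an array

        def push(self, value):
            self._items.append(value)

        def pop_front(self):
            return self._items.pop(0)

        def items(self):
            return list(self._items)

    s = Sequence()
    s.push(1)
    s.push(2)
    assert s.pop_front() == 1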


In my mind it's surprisingly simple: you VERY honestly ask yourself whether you know exactly what you are building. If you do, then don't be afraid to plan ahead. If you don't, then ship the smallest thing that works :)

If you are doing a rewrite of an existing system or have many years of experience with a similar product then thinking ahead can save time. Otherwise you are probably better off not trying to be too clever.

The difficult part for most people is actually being honest in the process :)

Also, Kent Beck's timeless advice is good to keep in mind:

1) make it work

2) make it right

3) make it fast

In that order. You might not need all three :)


I have been in this trap myself. I tried writing only perfect code. I ended up thinking more about the design than actually progressing on the issues. The solution I found was to write explicitly hacky solutions first, then improve them as you go forward. Either hacky by being slow, or by not having all the features you want it to have eventually. Not hacky by having huge bugs - don't do that, as bugs cost more the later you find them! Put TODOs about the aspects that need improvement; this allows you to grep for them. Also, as you write the first implementation, you'll gain more knowledge about the problem domain than any non-coding research could give you. Maybe you don't need that one feature after all, or maybe your approach was totally wrong and you should do a different one. It's better to find that out when you have only invested time into a prototype instead of a full generic solution!

And this is not an obligation. If you are really sure about the design of some component, you can also do it right the first time. But in most places, you usually don't know the design well enough.

Also, the imperfect solution can help as a validator. Have a problem that has a unique solution? Make a slow algorithm for it which you are sure is correct, then build a faster version and use the slow algo to compare for correctness. You can also use the slow algorithm to tell you about non-output values that both algorithms compute implicitly or explicitly.
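
A toy Python example of that validator idea (the two-sum problem here is just a stand-in):

    import random

    def two_sum_slow(nums, target):        # O(n^2), easy to trust
        return any(nums[i] + nums[j] == target
                   for i in range(len(nums))
                   for j in range(i + 1, len(nums)))

    def two_sum_fast(nums, target):        # O(n), easier to get subtly wrong
        seen = set()
        for n in nums:
            if target - n in seen:
                return True
            seen.add(n)
        return False

    for _ in range(1000):
        nums = [random.randint(-10, 10) for _ in range(8)]
        target = random.randint(-20, 20)
        assert two_sum_slow(nums, target) == two_sum_fast(nums, target)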


It's a difficult battle in many teams where some people will just go overboard for multiple reasons, usually in this order: fun, fear of being seen as short-sighted, and/or promotions. It mainly happens in big orgs and your examples definitely fit the bill, although I haven't worked there specifically. I'm not that old (very late 20s) but I've seen overengineered code (or infra) end up in the trash so many times... many people could see it coming and warned about it, yet it still happened.

I usually try to support the opposite idea by bringing agile to the table. Agile doesn't say "please don't overengineer" but "this can change a lot in a couple sprints, requirements may be different, there are more (or none anymore) use cases now", and hence asks why spend so many resources on something that not only doesn't cover all potential unknown use cases but also adds overhead when reading the code, for no benefit. For some teams this has clicked very well. Appropriate agile training can help a lot.

However it doesn't always work and if the person(s) doing this have higher ups backing it up (even if unintentionally) you're fighting the wrong battle. If you can't change it, move teams or to a company where money is a bit tighter and/or where these behaviours aren't tolerated - in other words, where things needs to get done and there's no time for experimenting with things that most likely won't yield returns.


Solve the problem directly in front of you.

Then understand a new problem.

Repeat.

If you then start doing things like externalizing inputs and writing tests, then when your past solutions become future problems you will have created the guide rails for solving both the past problem and the future problem together. This is the future-proofing that you should be aiming for. Make sure you write at least a few unit tests that mock out the important logic parts, and write up a short one-page document about the intent of the program, maybe with a picture, and drop that in the README.


One of the most challenging things in growing a software product is managing complexity. If you are designing a product for future use cases, you add complexity upfront. To keep a healthy balance, I try to follow simple guidelines:

* Focus on the current assignment. Implement it using clean code principles, don't overthink the problem.

* Rather than spending time on design decisions, allocate time to handling edge cases. These will save you from PagerDuty alerts.

* Plan excessively for future use cases only around data models that insert into or read from the database. Data migration is super painful. A more generic design around your database is almost always preferable.

* Have a feature toggling[1] service in your codebase. It will give you a better sense of how to implement new features alongside the existing codebase in the same branch (see the minimal sketch below). Releasing features from long-running separate branches is almost always a wrong decision.

* Always keep in mind time constraints and how urgent the requested functionality is. Don't let perfection be the enemy of productivity.

* Have a process in place that allows for the technical debt tasks to be tackled independently. It helps fix some bad design decisions, which become apparent in light of new requirements.

[1] https://martinfowler.com/articles/feature-toggles.html
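
A minimal sketch of what such a toggle can start out as, in Python (the names and the env-var convention are invented; real toggle services, as in the Fowler article, add per-user targeting, runtime updates, auditing and so on):

    import os

    FLAGS = {
        "new_checkout": os.getenv("FF_NEW_CHECKOUT", "off") == "on",
    }

    def is_enabled(flag):
        return FLAGS.get(flag, False)

    def legacy_checkout(cart):
        return sum(cart)

    def new_checkout(cart):
        return round(sum(cart), 2)         # new code path ships dark

    def checkout(cart):
        if is_enabled("new_checkout"):
            return new_checkout(cart)      # only runs once the flag flips
        return legacy_checkout(cart)

    print(checkout([9.99, 5.00]))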


You should immediately stop designing for future use cases.

Use TDD and write only just enough code for the currently known required functionality. When you get to know more required functionality, the tests protect you from breaking existing functionality and you can extend your code to support the new use cases as well. At this point you should make your code just generic enough to support all known use cases without code duplication or too much boilerplate code. If you can support functionality in more than one way, you can decide which way to choose based on what you expect in the future. But choosing the simplest solution trumps attempting to future-proof your code. It turns out that predicting the future is quite hard, there will be new feature requests that nobody had foreseen, and code that has been made as generic as possible will not handle this well.

Spiderman says that with great power comes great responsibility. The converse also holds true. With great responsibility comes great power. You cannot just pick the easy part of TDD, be irresponsible and expect to have any power. The less easy parts of writing a test for every use case and of refactoring all the time make the practice of not attempting to guess the future possible. If you leave out the prerequisites the end result will not be so very pleasant.
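
A miniature example of "only just enough code for the currently known functionality", using Python's unittest (the pricing rule is invented):

    import unittest

    def price_with_tax(amount):
        return round(amount * 1.2, 2)      # just enough for the known case

    class TestPricing(unittest.TestCase):
        def test_flat_rate(self):
            self.assertEqual(price_with_tax(10), 12.0)

    # When a new use case arrives (say, per-country rates), this test keeps
    # the existing behaviour safe while price_with_tax is generalised.
    if __name__ == "__main__":
        unittest.main()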


TDD has costs. It's expensive and only works on certain types of systems. It also makes exploration of the problem domain extremely costly and makes refactoring a nightmare. I dislike TDD full stop but there are some domains where it's unambiguously a bad idea.

It implies:

- Behaviour can and should be compartmentalised

- Certain types of efficiency are ignorable

- Data structures are better off being relatively simple

- Behaviour is able to be understood before the system emerges

- The system doesn't fundamentally need a lot of mutable state

- Dispersing functionality across small, atomic functions (that obscure sequential flow and state mutation) is good for the code

- It is easy to extract the functionality of this system into pure functions

- A high-level view of the system is unnecessary (!)

- In this system, most bugs will come from small units, not interactions between units

- Behaviour of units is likely to be relatively unchanging

- Refactoring primarily happens between interfaces, not to them

- Test rigging is cheap and easy at every level of abstraction, or at least that it's better to contort your system into a structure where that's true

None of these things are a given.


The only thing that somewhat makes sense is that during exploration TDD may not always be the most practical. It immediately starts to be extremely practical once the exploratory phase is over.

I kind of feel you live in some kind of alternate universe. None of these sound true to me, at all. The most important misunderstanding that seems to be going on here is the assumption that in TDD it is a given that one is testing single classes and/or methods. This is not the case. In fact, in most cases it is much more beneficial to test a set of classes/methods at the same time in a way that is representative of something that the customer values.

Honestly, I have seen TDD work so well in so many different circumstances that I have started to consider people who do not write tests for most possible scenarios as being on the not-so-very-professional side of things.



There are a few first principles from which we can design code.

The first is: Requirements change.

For many reasons. User needs change. Business processes change. Technology evolves.

If you're going to keep up, you must design code that is easy to change.

Code that is easy to change tends to be code that can work with lots of different code.

Code that can work with lots of different code tends to lean towards the general end of the spectrum, rather than being too specific.

Designing code that is modular does not mean that you need to over-engineer, and it doesn't even mean that it needs to be used more than once.

It should mean that the code has locality: The ability to understand the full effect of the code without also understanding the full context of the code around it or the full history and future life of every external variable it uses.

It may sound to new developers like I'm talking about something complicated, but it's the opposite:

Code that can be easily adapted to future use-cases tends to be simpler. It tends to know the least about its environment. It tends to do only one thing, but do it so well as to be perhaps the optimal solution.

What follows from the first principle is perhaps the most important principle in software development. Remember this and you'll find yourself needing to do a small fraction of the work you once did to produce the same value:

A small change in requirements should lead to only a small change in implementation.


I personally like the simplicity of designing for change. Simple rules like TDD help you to think about the design up front.

I was talking to a new guy that just joined my team yesterday on this subject. You really cannot predict the future.

You could also look at it from one other angle. If you are only building the bare minimum to satisfy the requirements, that is a lot less code you are writing. If you need to replace the system, that is a lot less work to go back and rework.


When you have good refactoring tools, a lot of this is significantly less of an issue. So, design for easy refactorability, lack of repetition, and readability.

I don't like the "use the simplest solution possible" advice, because the people I've seen claim to do that in real life tended to produce an unmaintainable spaghetti mess. It was "simple" in the sense of not having abstractions or nested function calls, but hard to read and understand as a big picture. Sometimes generic is a way to untangle such a previous spaghetti mess. As in, it is the second step on a road that requires two steps until it is really good.

Understand politics. A lot of those "future cases" are things that analysis or management required initially or indirectly. They are also often the result of trying to hit vague requirements - you don't know what the customer really needs in enough detail, so you make it configurable in the hope of hitting the right spot. They are also situations in which people got burned in the past, or special cases of special cases.

Put otherwise, the complicated thing is often a requirement, initially. Someone had a reason to ask for it, or thought they did. It gets forgotten and ignored after a while, in which case it is ok to cut it off.


Invest in very good refactoring tools.

Invest in very good testing, especially higher level integration / system testing.

Invest in a good dev/staging setup for your production environment, and also try to make rollouts and rollbacks automated and as painless as possible.

There will always be the need to change stuff, so get the pieces in place to make changes easier to code, easier to test, easier to deploy, and easier to back the fuck out of when you inevitably cause something to burn.


I find the best designs are simple ones that reflect the underlying concepts. Most designs that lead to complexity are based on the wrong abstraction. I have found this to be true in nearly all cases.

Of course, the issue you run into is that the problem the software is trying to solve changes over time, and then the original abstractions, which were correct, become wrong. At that point you should advocate refactoring to the new abstractions if possible.


By being aware of the true costs: It's not the cost of making the code more generic (typically relatively cheap) -- but refactoring costs when it turns out that the code actually needs to be more generic, but in a different way. It's easy to refactor something simple into something more complex/generic, but it's hard to refactor something complex into something that's still complex, but in a different way.


> At what point we should stop designing for future use cases?

Immediately. Never design for a future use case until it's a present use case and you're implementing it right then.

> How far should we go in making things generic ?

It depends on what it is. By not designing a thing to be generic up front, you have to figure out what n=2 looks like. Is that a function? A class? Copy a little bit of code? Then n=3. Once n=10, I feel like I have a good idea of the problem and how to make it generic, and it's rarely what I would have thought at the beginning.

Sometimes n never reaches 2. Then you've saved a lot of time. Also, you realize when you have to change things - maybe it's once a release, or very frequently. Things that are touched frequently probably need a refactor.

My rule of thumb is: never make tomorrow's possible problem today's complexity. If you design for future use cases, not only will it take you longer, but your code will inevitably be more complex than it needs to be, and therefore have more bugs at the very least due to that complexity.


If there is an actual customer, try to get the scale of users/data the system is expecting. A system handling 1/1k/10M things every day will be quite different and need different solutions.

Problems arise when people reach for the Cool Tools and start building Webscale things that can handle 100M operations every second. ...but the customer only needs the system to handle 4 users who type in all the crap by hand. But hey, it has a clustered database and Kafka and Kubernetes and looks REALLY good on your Resumé.

When the scale is determined, I personally like the mantra of "Make it work first, then make it pretty". First build an MVP (or an ugly Viable Product), that proves your plan actually works, then you can iterate over it to make the implementation cleaner or faster or have better UX.

If you get stuck making everything super-generic and able to handle all possible cases, you'll spend time bikeshedding and never get anything deployed. Just Repeat Yourself with wild abandon and copy/paste stuff all over until your Gizmo actually works. You can spend time figuring out which parts are actually possible to make generic later.


How much time do you spend "making it pretty" after you've got it functionally working? Interested to hear your experiences on that.

For a long time we would "pretty it up at the end", in one of my coworkers' words, which led to a ton of horrible UX decisions early on that required major legwork later. We switched to doing UX-driven development from the start and it's saved us a ton of time and saved us from poor decisions haha.


I make it pretty until I hit a deadline or the customer runs out of money :D

It depends on the customer and project which parts I focus on making "pretty". If it's a data-intensive thing, I might spend time optimising the protocol, compression and making the data pipeline robust. For a web app I'll spend more time making it more usable by streamlining the most relevant flows (which I know of since the client has been testing the (M)VP already).


Disclaimer: the following method may not work if your library is public facing, as breaking changes to public API is usually a terrible idea.

Ask yourself the following questions:

* Are you an experienced developer in this particular problem domain?

* Do you have a good understanding of the future use cases, and of the data structure once the new feature is added?

* Will the new feature make business or technological sense?

Unless you have a concrete "yes" to all of the questions above, your design should go minimalistic. Without a good understanding of the new feature, the code base will likely need refactoring when the feature is implemented. Your effort in going the extra mile (which definitely should be applauded!) is better spent on making the code easier to read and refactor - better tests, better documentation, code review to propagate knowledge and catch bugs.

Another thing, extensibility is sometimes in conflict with readability! Here’s an hilarious example: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


This is a typical case I have been struggling with throughout my whole career in software. It is very hard to fight against because it is counter-intuitive. Saying "let's put aside all those designs, do a minimalistic version to satisfy current needs, and then move from there" is always less attractive than "let's do a great architecture which can support our 'exponential growth'".

In my experience following categories of people tend to fall into the trap easily (and it is really hard to convince them):

- Corporate people.

- Business school people.

The following can amplify it:

- Fund raising... you were successful in raising money so that means there is a market and people love your product ... well guess what? you are wrong.

If you are in such a situation, debating is not useful. Changing this mindset is only possible when you hit the wall at least once in your life. BUT ... you can accelerate the process of realizing it (before it is too late). Try to find a way to make people realize that all those bells and whistles and designs were not necessary, because nobody actually asked for them, and we are changing it all anyway now.


There can be no definitive answer to your question but of course some "thumb rules" are applicable.

* The nature of your "system" defines how much attention you should pay towards futureproofing. If it is an end-user app/feature only do what is required and nothing more. You are solving one instance of a (maybe) small class of problems. The future will dictate what needs to be added/modified/refactored when you reach that point and not before i.e. no predictions unless you know the domain well.

* If you are building an "architecture" component like OS/Framework/Library etc. then you need to pay attention to generality and extensibility and design with futureproofing in mind. Use standard best practices like data/api versioning, narrow module interfaces, information hiding etc.

* Always focus first on Readability and Maintainability to the exclusion of everything else. Only when you hit a wall with respect to aspects like Performance etc. do you go back and redo/refactor the code for the newly prioritized requirement.


Don't design for future use cases, unless it's a library that may be extended. Instead, write concise code.

Modify the code when the use cases change. Most of the time, the open-closed principle is a trap.

However, over-engineering is not only a technical problem. It's more of a project / people / finance problem. Here are some examples, but it's definitely not limited to these:

1. The margin is big and the team is big, so we need to keep these people busy.

2. The current stakeholder will cover the budget for these iterations until we finish these features. After that, the maintenance costs may partially fall on us, or there will be no budget at all. The fewer problems in the future, the better.

3. The current budget is for functionality A, and someone pays for it. We plan to implement another, similar feature B, but that budget comes out of our own pocket. Better to make the solution generic so we can reuse it almost for free.

4. The list goes on...

Better fix those root problems first.


Demo-driven software development, oddly enough.

* A demo is a thing you can show. The first demo is usually

   - the program will compile, run and print "hello world"

* A demo is a contract between stakeholders

* Demos happen frequently. For a developer each day, for a team each week, etc.

Why does this help? A demo, actually showing something, is the only enforceable token that commits both parties to the contract. In that way it is almost a currency. No stakeholder, whether manager, sales or developer, can argue - either the demo was met, or it wasn't, or it wasn't clear.

This is much like sprints or test-driven development, but a demo has the contract aspect.

Demo-driven reduces many of the causes/motives for over-engineering:

* Is the over-engineering bit in the contract? Why not?

* A new function requires a new contract, so no more last-minute changes. (Been there, done that.)

* Stakeholders gain experience designing demos.

* Demos are adaptive. They provide tactile feedback.

My 2 cents


Historically, this made much more sense. RAM was incredibly scarce as were CPU cycles and the hardware was often tied intrinsically with the software so a modification later down the line was a really big deal.

With modern higher-level languages and scalable cheap hardware, this motivation should have gone away and we should be writing code that is relatively easy to re-factor.

If I don't know that we will need this thing in the future, it doesn't go in. Simple. If 6 months down the line, we now need to add the new feature, I like to think that my code is largely maintainable enough to refactor it to add the feature.

The only exception I can think of is where something is designed to be extensible, e.g. Instagram filters, where you might have 10 when you launch but you know you will have more in the future, so you write your code to allow additional filters to be plugged in relatively easily.
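To make that concrete, here's a minimal sketch in Python (the registry and filter names are made up, not how Instagram actually does it): the extension point is just a registry you add new filters to.

    # Hypothetical filter registry: a filter is just a function from pixels to pixels.
    # An "image" here is a flat list of (r, g, b) tuples, purely for illustration.
    FILTERS = {}

    def register_filter(name):
        def wrap(fn):
            FILTERS[name] = fn
            return fn
        return wrap

    @register_filter("grayscale")
    def grayscale(pixels):
        return [((r + g + b) // 3,) * 3 for r, g, b in pixels]

    def apply_filter(name, pixels):
        return FILTERS[name](pixels)

    # Adding filter #11 later is just another @register_filter function;
    # nothing else in the codebase has to change.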


I think most of the time this is just Parkinson's Law playing out: "work expands so as to fill the time available for its completion"

Things get over-engineered because too much time is allowed for the task.

The forcing function that's helped me is simply giving me and my teams small but meaningful time boxes to get a feature done. Sort of like "sprints", but with more teeth. E.g. an important feature is going to get announced in a newsletter every 2 weeks. Sprints too often don't seem to focus on the shipping-to-customers part.

So we focus on shipping the smallest thing that could possibly work in an arbitrary time box. You know users need X. So you make X or X' or X'' - some version or whittled-down version of X that'll relieve user pain. The time box works wonders at keeping over-complexifying from getting out of control.


By having a time limit. When I was a new and overexcited young guy, I started with the assumption that we'd be working on it for a really long time, but it rarely proved true.

Truth is, most projects will be shelved before they are even complete.

You'll be taken off some projects because of financial or political BS.

So start with the assumption that there is limited time to deliver - now ask yourself: how can I maximize the effectiveness of that time? How can I do what truly matters without going down the rabbit hole of optimizations and using the best thing possible everywhere?

And once you've done that, you can always come back and improve the thing if the project is still around - but most likely it will not be. Either you'll have switched companies, or the company will have switched you, or the project will have switched companies.


We overengineer when we overestimate how hard it is to modify details. As juniors, we wrongly learn that changing code is hard and must be avoided at all costs. As mid-level and seniors, these refactors are much simpler, but the painful memories stick.

Architectural boundaries are hard to change later. Drawing the dependency graph and isolating nodes that can change independently is where the majority of our effort should go (imo). Even still, simple is less risky than complicated. Anytime there are more moving pieces than necessary, there is a risk of an unexpected requirement blowing a hole in a design. So identifying those dangling pieces and spending a lot of thought and energy in removing them is where I've found it to be rewarding to "overengineer".


I like to build a forward-looking document for a team's/group's software which extends out to cover the company's goals + N months/years. The goal of the document is to help define the core components, their roles, and integration points such that everyone can reason about future-looking tradeoffs. Teams can then use the document to see how their specific roadmap fits into the broader picture. This also helps frame use cases in terms of current needs, goals, and dreams. If you're building for a use case beyond the team's dream features, then you're probably over-engineering the solution.

In practice goals start to move after horizon/2. And a new document needs to be created to capture where the business is doubling down and where it is trimming.


Break the problem you're trying to solve up into simple steps. Each step should be nothing more than a single task.

Ex. if you have a problem to solve that goes like: Must create store with products that are blue only.

Then you'd break it up like:

- Create store

- Filter products (blue only)

- Create products in store

Then when you start coding you solve nothing more than what you put down as each task.

Ex. you don't do anything more than what's required. You might think you need to create a store, and stores are businesses, so you need to create some business wrapper etc. - but nope. You don't need that until a client comes one day and requests it. Right now all you need is a store with blue products, and that is all you're going to solve.

Often when you over-engineer something then you never need the extra abstractions you created.
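Roughly, in Python (Store/Product are made-up names, just to show the scope staying small):

    # Solve exactly the stated tasks: a store, products in it, blue-only filtering.
    # No Business wrapper, no plugin system, no speculative abstractions.
    from dataclasses import dataclass, field

    @dataclass
    class Product:
        name: str
        color: str

    @dataclass
    class Store:
        products: list = field(default_factory=list)

        def add(self, product: Product):
            self.products.append(product)

        def blue_products(self):
            return [p for p in self.products if p.color == "blue"]

    store = Store()
    store.add(Product("mug", "blue"))
    store.add(Product("lamp", "red"))
    assert [p.name for p in store.blue_products()] == ["mug"]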


Though this "implement one box of the flowchart at a time" approach can lead to inflexible software due to insufficient modularization.


It's a balancing act.

You need to always ask yourself "What are the reasonable future use cases? How much will it cost to add infrastructure now to enable each of them, and how much will completing the feature in the future cost? How much will it cost to refactor my code later to add it if I don't make preparations for it now?" and only prepare now for things where not preparing would cost a lot more.

I'm finding the game of Go (Weiqi, Baduk) to be a great way to train in this skill, because it's all about seeing potential moves and deciding not to play them yet, and judging if a move should be played sooner or later and how much of a shape should be built now to enable it to be built later without wasting resources on building all of it now.


I wrote about this a few years ago.[0] The gist of avoiding premature over-architecting is to stick with behavior that is as static/hardcoded as possible while meeting the requirements. Just make sure it's well organized and cleanly written in your language of choice. Then, over time, add configurability to that behavior as specs change. Architecture is easy to change not when it correctly predicts future change (almost impossible), but when it is straightforward enough to follow and reshape.

In my experience, keeping behavior static/hardcoded is the architectural equivalent of avoiding premature optimization.

[0]: https://max.engineer/cms-trap
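A toy Python example of what I mean (the shipping rule is invented): start with the literal requirement, and only pull it out into configuration when a real second case shows up.

    # Version 1: the requirement is "free shipping over $50", so say exactly that.
    def shipping_cost(order_total):
        return 0 if order_total >= 50 else 5

    # Later, when a real second market arrives, *then* make it configurable:
    # def shipping_cost(order_total, threshold, fee):
    #     return 0 if order_total >= threshold else fee

    assert shipping_cost(60) == 0
    assert shipping_cost(20) == 5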


Imo it is an organizational thing. Big is good, powerful and important. The more subordinates, the more powerful a manager is. Allocated resources (e.g. dev hours) must be spent on something, and more often than not must deliver just a little bit short in order to get more allocated the next time. Over-engineering is rewarded; it creates more tickets to work on and keeps people busy. Then on the other end there are the resume-driven folks, who are mutually interested in piling up complexity. Still, invoking Hanlon, it cannot be ruled out that some people just do not know better. Sometimes I think KISS is directly at odds with enterprise development.


Intelligence signaling. Programmers love to show off to their colleagues how smart they are, so they try to anticipate all possible upgrade paths. "What if the user doesn't have an email address, betcha didn't think of that. That's why you hired me, the tech master, with over 9000 confirmed code commits". This leads to code that is overly generic, but still a big ball of mud. What you want, instead, is for people to be humble and accept that future changes won't be something anyone can anticipate now, and instead adhere to principles that, in general, lead to modular codebases which can respond to changes flexibly.


Management overvalues complex systems and undervalues more elegant approaches. Architects are often more than willing to oblige mostly to live out their fantasies.

Thus, we have every company it seems these days running around trying to implement microservices because “our architecture is like Netflix’s”. No, that LOB CRUD application isn’t like Netflix and you don’t need polyglot storage, Kubernetes, and microservices to capture input from a web form. However, one of the managers between the architect and the CEO (see: Peter Principle) is extremely impressed by this idea and can use it to advance their personal career.


I usually take YAGNI further and avoid using a database (SQL or NoSQL) at all (at least in the beginning).

My typical approach is to log all the commands to a file (or files), and keep the system state in memory. When the server restarts, it simply replays most of the commands (skipping those intended to cause side effects).

This is like simplified event sourcing, or command sourcing.

I'm not relying on any framework to build the backend; a simple library handling the REST and WebSocket APIs is enough.

I'll open source it soon, but it's mostly as simple as it sounds, with some optimizations to reduce disk space consumption after it becomes a problem in practice.
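A very stripped-down Python sketch of the idea (not the actual library I mentioned, and the command shapes are made up):

    # Simplified command sourcing: append every command to a log file,
    # keep state in memory, and rebuild the state by replaying the log on startup.
    import json

    LOG = "commands.log"
    state = {}

    def handle(cmd, replay=False):
        if cmd["type"] == "set":
            state[cmd["key"]] = cmd["value"]
        elif cmd["type"] == "delete":
            state.pop(cmd["key"], None)
        # Side-effecting commands (emails, payments, ...) would be skipped when replay=True.

    def execute(cmd):
        with open(LOG, "a") as f:
            f.write(json.dumps(cmd) + "\n")  # durable record first
        handle(cmd)

    def recover():
        try:
            with open(LOG) as f:
                for line in f:
                    handle(json.loads(line), replay=True)
        except FileNotFoundError:
            pass  # fresh start, empty state

    recover()
    execute({"type": "set", "key": "user:1", "value": "alice"})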


On a module or service scale I recommend using TDD [0]. It takes some time before writing the actual code, so I can use that time to think about a clean and effective architecture. After that, I have less time to write the actual code, so most of my effort goes into just turning red tests green, not over-engineering or thinking about phantom future use cases. After all, I end up with code AND tests, and that's always nice to have.

[0] https://en.m.wikipedia.org/wiki/Test-driven_development


No one has added my favourite quote yet, so here goes: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry


Make code easy to read and extend without breaking it. By now, many of us are aware of the importance of writing tests. However, there is a problem in making them readable.

Here's one of Kevlin Henney's lectures [1] that crystallizes it. It took me a long time to find it, so you're welcome.

Once you start naming things like this, adding future use cases becomes far less risky and thus you don't need to waste time on them.

[1] https://www.youtube.com/watch?v=tWn8RA_DEic


My observation is that as I'm getting older as an SDE, the number of classes I create in a project goes down! Like a lot of people in this thread point out: almost all of your code has to be changed anyway in a refactor, no matter how clever and abstract the design is, so what's the point in over-engineering and thinking too far ahead of success? I would rather focus on my data structures and their flows. I know that if I screw up the foundations of the data, I could be in a lot more trouble than with just bad code organization.


Yeah, the organization of data (and the data itself) is really the only thing that matters. Code itself is cheap and disposable. Just write the minimum number of lines of code required to get the job done, but spend time on the data model. And because you're not anticipating any future use cases, your code is often so short it can just be thrown away in the future.

A larger codebase that solves the same problem as a shorter codebase is almost always worse, has more bugs, and is harder to maintain. Code is the enemy: the only good line of code is the line that doesn't exist.


There's a fine balance between over-engineering and under-engineering. Right now I'm working on an old piece of software that was designed without any significant engineering process. While things remain easy to understand and fix, there's a deep sense among us that simple, dumb, repetitive code has brought duplication, inconsistency and bugs that are easy to fix in one place but impossible to fix everywhere. I think most software outside of Silicon Valley may well be under-engineered.


Simple: don't. Do the simplest possible thing that can work. Only code for today's requirements. You can't predict the future. Creating extension points without a past pattern of data on how the system is being extended is just guesswork, not engineering.

Once you have established a pattern of how the system is regularly extended, then you can use that to make predictions about the future. Keeping your codebase well-tested, small, and light will do far more to help you respond to change than guessing.


Sometimes the source of the problem is political: one trying to make a more general framework to deprecate another team, or to avoid being deprecated by generalizing in a slightly different direction.

How do people “defend” that in big Corp?

Bloat the technical roadmap and requirements with fantasy future features, so they can say "well, that framework doesn't support X, and our framework will support X". It doesn't matter whether X is useful in practice or not, since the managers don't always have the power to make that call.


1: Simple code is easier to refactor than complicated code. Try to keep your design as simple as possible.

2: It's easier to refactor code with good automated tests, like unit tests, because you can push a button and know that it works.

3: Make sure that your startup order is well-defined. It's easier to debug a well-defined startup order than a startup order where everything is implicit.

4: Know the difference between a design pattern and a framework. Frameworks don't replace knowing how to use a design pattern correctly.


"2: It's easier to refactor code with good automated tests, like unit tests, because you can push a button and know that it works. "

One thing to keep in mind is that the tests should also be kept relatively simple. I have seen codebases where there were so many, and often complex, tests that modifying the actual code was easy but changing the tests would have been prohibitively expensive.


Don't design your software for future features you might need. Design it to be easy to change. This means it has to be decoupled and simple. The trick is to make the software decoupled without making it complex. Adding layers and interfaces and abstractions is the easy way to achieve loose coupling, but it also adds complexity. Making software that's simple while still being easy to change is much harder than making some layer lasagna.


> The trick is to make the software decoupled without making it complex

Any insights into how to do exactly this? Any rules of thumb or guidelines? Also what is not considered complex to some adds a cognitive burden to others.


No, I can’t think of any simple guidelines. It’s a tingling sense you get after 10-20 years of doing it that says “I should probably make this simpler and direct” or “I should probably make this more general/layered”.

I think the sense is mostly developed not by successful designs but by mistakes and the subsequent refactorings.

The only “simple” advice I have is that in FP the simple and decoupled seems to happen without added work, while in OO it’s quite a cognitive overhead to avoid tangling things up. So my only insight is perhaps “make everything simple functions and shy away from state whenever possible”.
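A trivial Python illustration of what I mean (made-up example): the same logic as a plain function instead of object state.

    # Stateful version: the discount logic is tangled up with an object's state.
    class Cart:
        def __init__(self):
            self.items = []
            self.discount = 0.0

        def total(self):
            return sum(self.items) * (1 - self.discount)

    # Function version: plain data in, plain data out; trivially testable and reusable.
    def total(items, discount=0.0):
        return sum(items) * (1 - discount)

    assert total([10, 20]) == 30
    assert total([100], discount=0.5) == 50.0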


Yes for FP. It seems that when I join complex OOP projects/systems it is much harder to grok the codebase if the original creator isn't around to tell me how it works and explain the intricacies. FP, on the other hand, makes that part a whole lot easier to digest.


Best practice is to follow the example of the subway system and build short spurs at the end of each line in the directions you might want to continue expanding. That way you're not incurring any real additional cost upfront, but you're saving a ton of money in the future if you decide to keep expanding.

Don't do work before you need it, but also don't give up optionality unless you're getting commensurate benefits from doing so.


There’s a lot of folks in here shouting “don’t develop until people ask for it!”

But there is a certain joy that comes when people ask “but what about XYZ” and you can respond “Yup, we thought of that!”

Granted, I work mostly on developer facing tools and services, which makes it much easier to anticipate the needs of my customer since I too am a developer.

And even with that caveat it’s not always a slam dunk... but it certainly is possible to anticipate requirements in a useful way.


It is different if you are working on a team or as an individual. On a team it's important to build ways for people to work autonomously because once that's done the team as a whole will have more throughput. For an individual those same separations might be a drag on productivity. Regardless, careful consideration for what is _needed_ at every step of the way is important. Extreme Programming is worth a look.


Although software development tends to come across as deterministic with rules on what to do when, a lot of it is up to developing good judgement.

The more time you spend thinking about the business and how what you're building will support it the better judgement you'll develop. You might not be able to articulate or argue it but you'll have an instinct on when an abstraction is going to be useful vs brittle.


See https://www.codesimplicity.com/post/the-accuracy-of-future-p... for some thoughts on the accuracy of future predictions. I've found Max Kanat-Alexander's writings on software to be very workable; I've had many successes applying them.


Never design for future use cases. Design only for the use case in front of you by developing a point solution. Accept the tech debt and move on.


Understand and implement hexagonal architecture. https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...

If you build systems with these principles in mind, you can create systems that are extensible, without creating technical debt and YAGNIs.
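A bare-bones Python sketch of the ports-and-adapters idea (the names are illustrative, not from any particular framework): the core logic depends only on a port, and adapters plug in at the edges.

    # Port: an interface the core depends on; adapters implement it at the edges.
    from typing import Protocol

    class OrderRepository(Protocol):
        def save(self, order: dict) -> None: ...

    # Core/domain logic knows nothing about databases or HTTP.
    def place_order(items: list, repo: OrderRepository) -> dict:
        order = {"items": items, "total": sum(items)}
        repo.save(order)
        return order

    # Adapter: swap freely (in-memory for tests, SQL/HTTP in production).
    class InMemoryRepo:
        def __init__(self):
            self.orders = []
        def save(self, order):
            self.orders.append(order)

    repo = InMemoryRepo()
    assert place_order([10, 5], repo)["total"] == 15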


I read this post a month ago about future-welcome architecture and found it quite practical (with some philosophical musings though), hope it helps - https://www.twowaystolearn.com/posts/future-welcome-architec...


Just another vote that the right answer is do NOT add any code/plans/schema for “future” functionality.

It’s actually not that hard to refactor things when you do need a new feature... and who doesn’t love refactoring? It’s fun!

It’s also easier to refactor when your code is smaller and simpler because it’s only been coded to do the things you actually wanted it to do now.


Unless mandated, don't design for unknown future use cases. Keep the code as simple and dumb as possible. Cater only to those use cases that are known. Strong-type everything. Lots of unit tests.

If new requirements come in, it's going to be a lot of typing anyway. You might as well spend the effort only when the full situation is known.


Big companies like to brag about operating in 'agile' teams like startups. And in startups you can't afford to over-engineer. It kills lots of startups. So, pretend you're running a startup that has a solid amount of cash? Not sure if there's an analogy for that with what you're doing...


I don't mean to be controversial, but I think planning like that shows a lack of experience and, more than likely, a problem with how work gets done. The developers might feel they have no chance to develop the software further after the first release. That means you have to capture all scenarios straight away.


Always complete each use case from scratch in the simplest and most efficient non-abstract form. Then try to merge use cases into an abstract framework.

Accept the abstractions if:

- the stack trace depth is never greater than double

- the total code size is decreased

- the performance/memory hit is less than 10%

Thank you for bringing this up.


As Kent Beck says [0]: "for each desired change, make the change easy (warning: this may be hard), then make the easy change"

[0] https://twitter.com/KentBeck/status/250733358307500032


Seeing the forest here, if you learn to ask better questions, you will get a better idea of what the business side is trying to accomplish.

It is really hard to know what other people want or what they mean. If you can really understand what someone wants, perhaps you can avoid writing 50% of the system you initially imagined.


If possible I try to apply the rule "it's not a problem if it's not a problem". It's much easier to write new code than to get rid of old code. It really helps if you have a strong feedback cycle that challenges the amount of time you want to spend on something (budget or time scope).


Trying to design for many unknown futures is expensive.

Changing in the future costs something.

Designing for future changes up front makes sense if cost(future change) > cost(future proofing)

SAAS? Do virtually no future proofing.

IOT, do some.

Space probe? Do lots.

Also, if you haven't built quite a few relatively similar systems, don't do future proofing without talking to people who have.


You should never design for the future use case. You don't know what the future will be. You should only design for the current use case you have today and deploy. When the future comes, you return to the code that you wrote and update it to fit the new needs.


It’s hard to flip your brain, but abstractions are bad. Copying code for different business purposes is good. Simple patterns are vastly better than complex frameworks, even if you think it improves unit or integration testing.


> abstractions are bad

They're as good as you make them. Abstractions aren't bad by themselves.

> Copying code for different business purposes is good.

Only when it makes sense, ie when you can't make a good abstraction.


Nope. Different domains shouldn’t share code or data. One domain may have a customer record with address info and another domain may have a customer record with past purchases.

Abstractions end up wiring things together that blur business rules.
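Concretely, a made-up Python sketch: two domains, two customer records, no shared base class trying to serve both.

    from dataclasses import dataclass, field

    # The shipping domain cares about where the customer lives.
    @dataclass
    class ShippingCustomer:
        customer_id: int
        address: str

    # The marketing domain cares about what the customer bought.
    @dataclass
    class MarketingCustomer:
        customer_id: int
        past_purchases: list = field(default_factory=list)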


1 - Define an executable specification of the next smallest thing you can do to move towards your goal;

2 - Write the simplest code to fulfill this specification;

3 - Improve on what you wrote so it will express better exactly the specification you have so far.
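One turn of that loop might look like this in Python (hypothetical cart example):

    # 1. Executable specification of the next smallest thing:
    def test_empty_cart_total_is_zero():
        assert cart_total([]) == 0

    # 2. The simplest code that fulfills it:
    def cart_total(items):
        return sum(items)

    # 3. Improve how the code expresses the spec so far (naming, duplication), then repeat.
    test_empty_cart_total_is_zero()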


IMO if designing for future use cases is hard, you probably don't understand the problem well enough to be designing for future use cases. Any design you make will be wrong. Write what works and budget in the rewrite.


Sometimes designing for hypothetical cases makes software simpler. The general case is often easier to understand.

Other thought: good engineers can guess future cases more accurately, that's why they're good engineers.



I have been learning Haskell and Gluon. 100% worth the time investment. In functional programming, the over engineering is one or two lines of code compared to hundreds.


I lean towards building it to the exact known specification and don't worry about how it's going to change. If you write tests you can refactor anything.


Best way, IMO?

A well-maintained, well-pruned test suite.

Writing tests forces you to clarify -- to yourself and to others -- precisely what this piece of software is and is not going to handle.


I was brought into to help at an early stage startup a few years ago. The company was building an e-commerce platform and the product owner had this idea of 'attributes' that could be attached to any kind of entity in the system (e.g. product, category, order, customer). If they needed a new attribute they would be able to simply set it up through the admin UI without any developer intervention (because developers are expensive!).

When I joined, the attribute system had been built with a beautiful UI, and the backend was mostly working for managing attributes, but that was pretty much it. The first feature I was working on was showing products on the store, and for this the idea of attributes made sense. If you are selling a product in the "OLED TV" category you probably want a "Screen Size" attribute, and to be able to use it to compare different products in that category. Through the platform we had maybe 500 product-level attributes, with more being added all the time, so having them hard-coded wouldn't have been manageable. That was pretty much as well as it worked, though.

Sellers needed to be able to manage their stock through the system, so on the warehouse entity there were attributes describing the number of products in stock, lead time, how often they restock, etc. The attributes didn't really have validations, but they had types which described what UI element should be displayed when they are entered. However, all of the validation around that was at the whim of the front-end, and in some cases it would send what you would think should be a numerical type as a string (and then if you tried to change it, something would break because it expected a string), so doing any kind of calculation or logic on the attributes was basically impossible. In the end I just added db-level fields to the stock entities, with validations in the backend to make sure these were as expected. The backend was a Rails app, so this took 10 minutes vs. days of trying to coerce the attribute system into doing what I needed.

As it was a Rails app we couldn't actually name the model Attribute, so we had to give it another name, and whenever someone new joined (it was an early-stage startup, so it had high turnover) we had a 30-minute discussion explaining this. I never got an explanation of how the product owner expected logic to be attached to these attributes without a developer doing any work, but I'm sure they had an 'ingenious plan' for that too.

Needless to say, the startup burned through all its funding without even launching, then managed to convince the investors to give them a little bit more, launched a half-working product, and it turned out nobody wanted it.


Minimize LOC; that points true north. Everything else is fake and results in absurdities like AbstractProxyFactorySingleton.


Are they doing something besides making software composable and modular? Because if so, the point is that those patterns expand the possibility space of what you can do down the line by making the software easy to change. As long as the software follows principles of composability, it is not over-engineered; it is just good design.


If you have too many engineers on a problem, they will overengineer stuff. This is a management problem.


> How far should we go in making things generic ?

Never make things generic a priori.


my rule of thumb is:

requirements -> tests -> specific code

if I reach the same code more than once, refactor and bother to generalize...


Can you explain this a little more in your own words? I've tried to read through TDD and talked with co-workers but never actually seen this in the wild.

How do you go about planning your tests / separation of concerns. As in do you only write tests for your service layer? I find I'd be wasting time to write it at the controller level/route level.

What about the DML schema?

Personally I always start at the database layer and work up because then I know at least what data and models I'll be working with


Here's how I look at it: when writing code, you need to run it to try it.

Often people refresh the browser or re-run their CLI program until their feature is finished.

But if you think about it, every "refresh to check if it works" is just a manual test. TDD is just making that manual test automated.

1. Write that test (that you'd anyway have to run manually)

2. Write code until test pass

3. Repeat 1 until done.

Code that isn't designed with a test-first mentality is often really hard to test and requires complicated tools or needs to mock the whole world.

For the examples you've mentioned:

- I'd unit test the db service layer (i.e. functions that fetch from the db, making sure the schema is valid)

- I'd unit test the various API queries (i.e. filtering, pagination, auth)

- At the controller level, I'd just unit test the business logic and data fetching part.

- Then I'd add a few E2E tests for the UI and user interactivity.

But if you think about it, any of these tests would have had to be run manually anyway. I.e. you'd probably have queried your API with various options and refreshed the page(s) a few times to make sure data was fetched correctly.
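For instance, a service-layer unit test might look something like this sketch in Python (get_user and FakeDb are made up for illustration):

    # Sketch of a service-layer unit test: fake out the db, assert on the shape of the data.
    def get_user(db, user_id):
        row = db.fetch_one("SELECT id, email FROM users WHERE id = ?", user_id)
        return {"id": row[0], "email": row[1]} if row else None

    class FakeDb:
        def fetch_one(self, query, user_id):
            return (user_id, "a@example.com") if user_id == 1 else None

    def test_get_user_returns_expected_shape():
        assert get_user(FakeDb(), 1) == {"id": 1, "email": "a@example.com"}

    def test_get_user_missing_returns_none():
        assert get_user(FakeDb(), 2) is None

    test_get_user_returns_expected_shape()
    test_get_user_missing_returns_none()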


I think I would also add: try to find a way to make your tests run fast. If you have a huge system and it takes 3-4 minutes to run all the tests, that is really slow. Your feedback loop gets limited.


The service layer can be the most practical place to write tests, because you aren't mocking a lot of random services (like you would with a controller) and you can focus on testing methods that do a single thing, if your services are well written.

If your test is more than 5-10 lines long, you are probably trying to do useless things like mocking half the app; you should have methods that return simple data and services that don't take 1000 other services as dependencies.

TDD is only a good choice when you have a clearly defined problem and know how you solve it.


Today I think it has become a disease. Now simplicity is disliked; people see complexity as more useful.


Preface:

Once upon a time, I worked for a company who rents movies on DVD via kiosks. When I joined the team, pricing was hard coded everywhere as a one (1), because YAGNI. The code was not well factored, because iterations and velocity. The UI was poorly constructed via WinForms and the driver code for the custom robotics were housed inside of a black box with a Visual Basic 6 COM component fronting it. It was a TDD shop, and the tests had ossified the code base to the extent that even simple changes were slow and painful.

As always happens, the business wanted more. Different price points! (OMG, you mean it won't always be a one (1)!!?) New products! (OMG, you mean it won't always just be movies on DVD??!) And there were field operational challenges. The folks who stocked and maintained the machines sometimes had to wait for the hardware if it was performing certain kinds of maintenance tasks (customers too). Ideally, the machine would be able to switch between tasks at a hardware level "on the fly". Oh, and they wanted everything produced faster.

I managed to transform this mess. Technically, I would say it was (mostly) a success. Culturally and politically it was a nightmare. I suffered severe burnout afterwards. The lesson I learned is that doing things "right" often has an extremely high price to be paid, which is why it almost never happens.

On "over-engineering":

I find this trend fascinating, because I do not believe it to be an inherent issue. Rather, what has happened is that "engineering" has moved ever closer to "the business", to the point of being embedded within it. What I mean by "embedding" here is structural and cultural. [Aa]gile was the spark that started this madness.

Why does this matter? Engineering culture is distinct and there are lessons learned within we ought not ignore. However, when a group of engineers is subsumed into a business unit, their ability to operate as engineers with an engineering culture becomes vastly more difficult.

The primary lesson I feel we're losing in this madness is the distinction between capability enablement and the application of said abilities.

Think about hardware engineering: I do not necessarily know all of the ways you -- as the software engineer -- will apply the abilities I expose via my hardware. Look at the amazing things people have discovered about the Commodore 64 years after the hardware ceased production. Now, as Bob Ross would say, "Those are Happy Accidents." However, if I'm designing an IC, I need to think in terms of the abilities I expose as fundamental building blocks for the next layer up. Some of those abilities may never be used or rarely used, but it would be short sighted to not include them at all. I'm going to miss things, that's a given. My goal is to cover enough of the operational space of my component so it has a meaningful lifespan; not just one week. (N.B. This in no way implies I believe hardware engineers always produce good components. However, the mindset in play is the important take away.)

Obviously, the velocity of change of an IC is low because physics and economics. This leads everyone to assume that all software should be the opposite, but that's a flawed understanding. What happens today is we take C#, Java, Python, Ruby, etc. and start implementing business functionality at that level. To stretch my above hardware analogy, this is like we're taking a stock CPU/MCU off the shelf and writing the business functionality in assembly -- each and every time. Wait! What happened to all that stuff you learned in your CS undergrad!? Why not apply it?

The first thing to notice is that the "business requirements" are extremely volatile. Therefore, there must be a part of the system designed around the nature of that change delta. That part of the system will be at the highest, most abstract, level. Between, say the Java code, and that highest level, will be the "enablement layers" in service of that high velocity layer.

Next, notice how a hardware vendor doesn't care what you've built on top of their IC component? Your code, your problem. Those high-delta business requirements should be decoupled from software engineers. Give the business the tools they need to solve their own problems. This is going to be different for each business problem, but the pattern is always the same. The outcome of this design is that the Java/C#/whatever code now has a much lower change velocity and the requirements of it are future enablement in service of the tools and abstraction layer you've built for the business. Now they can have one week death march iterations all they want: changing colors, A/B testing, moving UI components around for no reason...whatever.

There are real-life examples of this pattern: Unity, Unreal Engine Blueprints, SAP, Salesforce. The point here isn't about the specifics of any one of these. Yes, a system like Blueprints has limits, but it's still impressive. We can argue that Unity is a crappy tool (poor implementation) but that doesn't invalidate the pattern. SAP suffers from age but the pattern is solid. The realization here is that the tool(s) for your business can be tailored and optimized for their specific use case.

Final thoughts

Never underestimate that the C3 project (where Extreme Programming was born) was written in Smalltalk, with a Gemstone database (persistent Smalltalk). One of the amazing traits of Smalltalk is that the entire environment itself is written in Smalltalk. Producing a system like I describe above, in Smalltalk, is so trivial one would not notice it. Unfortunately, most business applications are not written in environments nearly as flexible so the pattern is obscured. I've held the opinion for a long time that XP "worked" because of the skills of the individual team members and the unique development environment in use.

As I stated at the beginning, this path is fraught with heartache and dragons for human reasons.



